{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:58.097534Z"
},
"title": "Transformer-based Screenplay Summarization Using Augmented Learning Representation with Dialogue Information",
"authors": [
{
"first": "Myungji",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Hongseok",
"middle": [],
"last": "Kwon",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jaehun",
"middle": [],
"last": "Shin",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Wonkee",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Baikjin",
"middle": [],
"last": "Jung",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Screenplay summarization is the task of extracting informative scenes from a screenplay. The screenplay contains turning point (TP) events that change the story direction and thus define the story structure decisively. Accordingly, this task can be defined as the TP identification task. We suggest using dialogue information, one attribute of screenplays, motivated by previous work that discovered that TPs have a relation with dialogues appearing in screenplays. To teach a model this characteristic, we add a dialogue feature to the input embedding. Moreover, in an attempt to improve the model architecture of previous studies, we replace LSTM with Transformer. We observed that the model can better identify TPs in a screenplay by using dialogue information and that a model adopting Transformer outperforms LSTM-based models.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Screenplay summarization is the task of extracting informative scenes from a screenplay. The screenplay contains turning point (TP) events that change the story direction and thus define the story structure decisively. Accordingly, this task can be defined as the TP identification task. We suggest using dialogue information, one attribute of screenplays, motivated by previous work that discovered that TPs have a relation with dialogues appearing in screenplays. To teach a model this characteristic, we add a dialogue feature to the input embedding. Moreover, in an attempt to improve the model architecture of previous studies, we replace LSTM with Transformer. We observed that the model can better identify TPs in a screenplay by using dialogue information and that a model adopting Transformer outperforms LSTM-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text summarization is one major task in NLP that seeks to produce concise texts containing only the essential information in the original texts. Although most researches have been focusing on summarizing news articles (Narayan et al., 2018; See et al., 2017) , as various contents with different structures increase these days, there has been growing interests in applying text summarization to various domains, including social media (Sharifi et al., 2010; Kim and Monroy-Hernandez, 2016) , dialogue (Goo and Chen, 2018) , scientific articles (Cohan and Goharian, 2017; Yasunaga et al., 2019) , books (Mihalcea and Ceylan, 2007) , screenplays (or scripts) (Gorinski and Lapata, 2015; Papalampidi et al., 2020a) . Among them, this paper focuses on screenplay summarization.",
"cite_spans": [
{
"start": 218,
"end": 240,
"text": "(Narayan et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 241,
"end": 258,
"text": "See et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 435,
"end": 457,
"text": "(Sharifi et al., 2010;",
"ref_id": "BIBREF15"
},
{
"start": 458,
"end": 489,
"text": "Kim and Monroy-Hernandez, 2016)",
"ref_id": "BIBREF7"
},
{
"start": 501,
"end": 521,
"text": "(Goo and Chen, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 544,
"end": 570,
"text": "(Cohan and Goharian, 2017;",
"ref_id": "BIBREF1"
},
{
"start": 571,
"end": 593,
"text": "Yasunaga et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 602,
"end": 629,
"text": "(Mihalcea and Ceylan, 2007)",
"ref_id": "BIBREF9"
},
{
"start": 657,
"end": 684,
"text": "(Gorinski and Lapata, 2015;",
"ref_id": "BIBREF5"
},
{
"start": 685,
"end": 711,
"text": "Papalampidi et al., 2020a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A screenplay is a type of literary text, which typically contains around 120 pages and has a strictly structured format (Figure 1 ). It usually contains various storytelling elements, such as a story, dialogues, characters' actions, and what the camera",
"cite_spans": [],
"ref_spans": [
{
"start": 120,
"end": 129,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We're gonna get outta here, Buzz -Buzz?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INT. SID'S ROOM WOODY",
"sec_num": null
},
{
"text": "Buzz is not there. Woody looks down at the floor. Buzz is sitting on the floor, playing \"bombs away\" with his broken arm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INT. SID'S ROOM WOODY",
"sec_num": null
},
{
"text": "The rest of Andy's toys gather around the window to see Woody. Figure 1: An excerpt from \"Toy Story.\" A screenplay consists of scenes. A scene is an event that takes place at the same time or place. Every scene starts with a scene heading (starts with \"INT.\" or \"EXT.\") and is followed by action descriptions and dialogues. 'Scene heading' denotes when and where actions take place. 'Action description' explains who and what are in the scene. 'Character' is the speaker. 'Dialogue' is a spoken utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXT. ANDY'S BEDROOM WINDOW/SID'S WINDOW",
"sec_num": null
},
{
"text": "sees, thereby elaborating a complex story. In a real-life situation, filmmakers and directors hire script readers to select a script that seems to be a popular movie among numerous candidate scripts. They create a coverage per script, a report of about four pages containing a logline (the indicative summary), a synopsis (the informative summary), recommendations, ratings, and comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REX",
"sec_num": null
},
{
"text": "The goal of screenplay summarization is to help speeding up script browsing; to provide an overview of the script's contents and storyline; and to reduce the reading time (Gorinski and Lapata, 2015). As shown in Figure 2 , to make this long narrative-text summarization feasible, early work in screenplay summarization (Gorinski and Lapata, 2015; Papalampidi et al., 2020a) defined the task as extracting a sequence of scenes that represents informative summary (i.e., scene-level extractive summarization).",
"cite_spans": [
{
"start": 319,
"end": 346,
"text": "(Gorinski and Lapata, 2015;",
"ref_id": "BIBREF5"
},
{
"start": 347,
"end": 373,
"text": "Papalampidi et al., 2020a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 212,
"end": 220,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "REX",
"sec_num": null
},
{
"text": "To this end, Papalampidi et al. (2019 Papalampidi et al. ( , 2020b Figure 3 : A well-structured story consists of six stages. TPs divide a story into multiple sections and define the screenplay's structure. There are five TPs in a story (Cutting, 2016; Hauge, 2017; Papalampidi et al., 2019) .",
"cite_spans": [
{
"start": 13,
"end": 37,
"text": "Papalampidi et al. (2019",
"ref_id": "BIBREF12"
},
{
"start": 38,
"end": 66,
"text": "Papalampidi et al. ( , 2020b",
"ref_id": "BIBREF13"
},
{
"start": 237,
"end": 252,
"text": "(Cutting, 2016;",
"ref_id": "BIBREF2"
},
{
"start": 253,
"end": 265,
"text": "Hauge, 2017;",
"ref_id": "BIBREF6"
},
{
"start": 266,
"end": 291,
"text": "Papalampidi et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "REX",
"sec_num": null
},
{
"text": "assumed that such scenes compose a set of events, called turning points (TPs), which change the story's direction and thus determine the progression of the story (Figure 3 ). The definition of each TP is shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 171,
"text": "(Figure 3",
"ref_id": null
},
{
"start": 213,
"end": 220,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "REX",
"sec_num": null
},
{
"text": "Following their assumption, we propose two methods to identify TPs better: 1) we suggest using dialogue information included in screenplays ( Figure 1 ) as a training feature, considering one previous study revealed that there is a relation between TPs and the frequency of conversations (Cutting, 2016) in a screenplay; 2) we attempt to use Transformer (Vaswani et al., 2017) instead of LSTM, which have been dominantly used in previous studies (Papalampidi et al., 2019 (Papalampidi et al., , 2020b , because Transformer has generally shown to be beneficial in capturing long-term dependencies; we can expect that Transformer will summarize long and complex screenplays better.",
"cite_spans": [
{
"start": 288,
"end": 303,
"text": "(Cutting, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 354,
"end": 376,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 446,
"end": 471,
"text": "(Papalampidi et al., 2019",
"ref_id": "BIBREF12"
},
{
"start": 472,
"end": 500,
"text": "(Papalampidi et al., , 2020b",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 142,
"end": 150,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "REX",
"sec_num": null
},
{
"text": "Introductory event that occurs after presentation of setting and background of main characters TP2: Change of Plans Main goal of story is defined; action begins to increase TP3: Point of No Return Event that pushes the main characters to fully commit to their goal TP4: Major Setback Event where everything falls apart, temporarily or permanently TP5: Climax Final event of the main story, moment of resolution and \"biggest spoiler\" Table 1 : Definition of TPs (Papalampidi et al., 2019) .",
"cite_spans": [
{
"start": 461,
"end": 487,
"text": "(Papalampidi et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 433,
"end": 440,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "TP1: Opportunity",
"sec_num": null
},
{
"text": "Topic-Aware Model (TAM) (Papalampidi et al., 2019) is one screenplay summarization model that identifies TPs to use them for an informative summary. The key feature of this model is that it takes sentence-level inputs and uses Bi-LSTM to generate their latent representations; it produces scene representations by applying self-attention to the sentence representations belonging to each scene and applying a context-interaction layer to capture the similarity among scenes. At last, TPs are selected among all scene representations. Our proposed model is also inspired by this work, and our work aims to improve this study.",
"cite_spans": [
{
"start": 24,
"end": 50,
"text": "(Papalampidi et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work 2.1 Topic-Aware Model",
"sec_num": "2"
},
{
"text": "Another TP identification model is GraphTP (Papalampidi et al., 2020b), which uses Bi-LSTM and Graph Convolution Network (GCN) (Duvenaud et al., 2015) to encode direct interactions among scenes, thereby better capturing long-term dependencies. Specifically, they represent a screenplay as a sparse graph, and then the GCN produces scene representations that reflect information of neighboring scenes. It shows comparable performance with TAM. In our experiments, we adopt TAM and GraphTP as baselines.",
"cite_spans": [
{
"start": 127,
"end": 150,
"text": "(Duvenaud et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GraphTP",
"sec_num": "2.2"
},
{
"text": "Recall that screenplay summarization can be defined as identifying TPs, where the story stage's transition occurs. Therefore, we suggest using dialogue information related to the story stage's transition to identify TPs better. The motivation for this method is that a previous study (Cutting, 2016) that analyzed movies found that there is a pattern in which the frequency of conversations changes according to the story stage ( Figure 3) ; there are few conversations until the end of the setup; then the frequency of conversations stay constant for the progress and complication; and finally, it decreases during the beginning of the final push but increases again in the aftermath. This study implies that dialogue information can be a good hint to capture screenplays' story stage transition. However, to our knowledge, there has been no previous work that attempts to utilize such information for screenplay summarization, that is, most previous studies (Papalampidi et al., 2019 (Papalampidi et al., , 2020b do not consider employing various elements included in a screenplay.",
"cite_spans": [
{
"start": 284,
"end": 299,
"text": "(Cutting, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 960,
"end": 985,
"text": "(Papalampidi et al., 2019",
"ref_id": "BIBREF12"
},
{
"start": 986,
"end": 1014,
"text": "(Papalampidi et al., , 2020b",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 430,
"end": 439,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Input Augmentation",
"sec_num": "3.1"
},
{
"text": "We expect that adding dialogue information as an additional training feature will help a model better predict TP scenes in screenplays. Therefore, we first extract a binary label d_i from the screenplay by inspecting whether a given sentence is notated as a dialogue. We then concatenate the sentence embedding x_i with the binary label d_i to form the augmented input [x_i; d_i].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Augmentation",
"sec_num": "3.1"
},
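{
"text": "A minimal sketch of this augmentation step, assuming PyTorch and 512-dimensional USE sentence embeddings; the function name and toy inputs are illustrative, not taken from the authors' released code.\n\nimport torch\n\ndef augment_inputs(sentence_embeddings, dialogue_flags):\n    # sentence_embeddings: (num_sentences, 512) USE vectors x_i\n    # dialogue_flags: one 0/1 label d_i per sentence, 1 if it is notated as a dialogue\n    d = torch.tensor(dialogue_flags, dtype=torch.float32).unsqueeze(1)  # (num_sentences, 1)\n    return torch.cat([sentence_embeddings, d], dim=1)  # [x_i; d_i] -> (num_sentences, 513)\n\nx = torch.randn(4, 512)  # four sentences from one scene (toy example)\nprint(augment_inputs(x, [0, 1, 1, 0]).shape)  # torch.Size([4, 513])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Augmentation",
"sec_num": "3.1"
},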
{
"text": "It has been generally known that RNN-based architectures, which were used also in aforementioned previous studies (Papalampidi et al., 2019 (Papalampidi et al., , 2020b , do not capture long-range dependencies well due to the vanishing gradient problem. Also in the case of screenplay summarization, because screenplays are normally long and complex, we speculate that there is a limit to generating a summary by using LSTM. Therefore, we propose a screenplay-summarization model to which Transformer (Vaswani et al., 2017) is applied, which is widely used for various NLP tasks and well known for having less computational complexity and better capturing long-term dependencies.",
"cite_spans": [
{
"start": 114,
"end": 139,
"text": "(Papalampidi et al., 2019",
"ref_id": "BIBREF12"
},
{
"start": 140,
"end": 168,
"text": "(Papalampidi et al., , 2020b",
"ref_id": "BIBREF13"
},
{
"start": 501,
"end": 523,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "3.2"
},
{
"text": "In detail, we propose a hierarchical screenplay encoder using Transformer (Figure 4 ). First, it receives a sentence-level input; we use Universal Sentence Encoder (USE) (Cer et al., 2018) as in TAM. After the sentence representations become contextualized by the first Transformer encoder, all the sentence representations belonging to the scene are added up to form the scene representation that is fed into the second Transformer encoder. The second Transformer encoder produces the final scene vectors and inputs them into five different linear layers, one classifier per TP, each of which projects the vectors to a scalar value. Lastly, a softmax layer produces five probability distributions over all scenes that indicate how relevant each scene is to the TPs. We then select one scene with the highest probability per TP; each selected scene joins together with its neighbors into three consecutive scenes, which compose the final summary.",
"cite_spans": [
{
"start": 170,
"end": 188,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 74,
"end": 83,
"text": "(Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "3.2"
},
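{
"text": "A minimal PyTorch sketch of the hierarchical encoder described above, using the hyperparameters reported in Section 4.2 (L = 1 layer, H = 128, A = 4 heads, P_drop = 0.0). The sum-pooling and classifier heads follow our reading of the text; positional encodings and the projection of the 513-dimensional augmented inputs down to the model dimension are omitted, and all names are illustrative rather than the authors' code.\n\nimport torch\nimport torch.nn as nn\n\nclass HierarchicalTPModel(nn.Module):\n    def __init__(self, d_model=128, n_heads=4, n_layers=1, n_tps=5):\n        super().__init__()\n        make = lambda: nn.TransformerEncoder(\n            nn.TransformerEncoderLayer(d_model, n_heads, dropout=0.0, batch_first=True),\n            n_layers)\n        self.sent_encoder = make()   # contextualizes sentence representations\n        self.scene_encoder = make()  # contextualizes scene representations\n        # five linear layers, one classifier per TP\n        self.tp_heads = nn.ModuleList(nn.Linear(d_model, 1) for _ in range(n_tps))\n\n    def forward(self, sents, scene_ids):\n        # sents: (1, num_sentences, d_model); scene_ids: (num_sentences,) scene index per sentence\n        h = self.sent_encoder(sents).squeeze(0)\n        # sum the contextualized sentence vectors belonging to each scene\n        scenes = torch.zeros(int(scene_ids.max()) + 1, h.size(1)).index_add(0, scene_ids, h)\n        s = self.scene_encoder(scenes.unsqueeze(0)).squeeze(0)\n        # one probability distribution over all scenes per TP\n        return torch.stack([head(s).squeeze(-1).softmax(-1) for head in self.tp_heads])\n\nmodel = HierarchicalTPModel()\nsents = torch.randn(1, 12, 128)  # 12 sentence vectors (toy)\nids = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])  # 4 scenes of 3 sentences each\nprint(model(sents, ids).shape)  # torch.Size([5, 4]): 5 TPs x 4 scenes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "3.2"
},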
{
"text": "Sentence Representation Scene Representation Positional Encoding x 1 x 2 x 3 x k-1 x k \u2026 s 1 s 2 s 3 s N-1 s N \u2026 x 1 x 2 x 3 x k-1 x k \u2026",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "3.2"
},
{
"text": "For both training and evaluating our model, we use TRIPOD (Papalampidi et al., 2019) dataset. This dataset contains screenplays and their TPs; the TPs in the test set are manually annotated by human experts whereas those in the training set are pseudo-TPs. Statistics of the dataset are presented in Table 2 . .9 (26.9) sentence tokens 7.8 (6.0) 7.6 (6.4) ",
"cite_spans": [
{
"start": 58,
"end": 84,
"text": "(Papalampidi et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 300,
"end": 307,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "For our experiments, we adapted source codes in two repositories 1 2 Papalampidi et al. 2020b; Liu and Lapata (2019) to implement our model. We set the training hyperparameters as follows: L = 1, H = 128, A = 4, and P drop = 0.0, where L is the number of layers, H is the hidden size, A is the number of heads, and P drop is the dropout rate. We consider two previous methods that receive raw sentence representations as inputs as the baseline systems: TAM (Papalampidi et al., 2019) and GraphTP (Papalampidi et al., 2020b) . During training, because TRIPOD does not contain a validation set, we conducted n-fold cross-validation with n = 5 to extract the validation set from the existing test set. Finally, we averaged out the test results of the five models to obtain the final test results.",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "Liu and Lapata (2019)",
"ref_id": "BIBREF8"
},
{
"start": 457,
"end": 483,
"text": "(Papalampidi et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 496,
"end": 523,
"text": "(Papalampidi et al., 2020b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.2"
},
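{
"text": "A runnable sketch of the evaluation protocol above; train_model and evaluate are stubs standing in for the real training and scoring routines, and treating each held-out fifth of the test set as the validation split is our assumption about the fold layout.\n\nimport numpy as np\nfrom sklearn.model_selection import KFold\n\n# hyperparameters reported above: L = 1, H = 128, A = 4, P_drop = 0.0\nconfig = {\"num_layers\": 1, \"hidden_size\": 128, \"num_heads\": 4, \"dropout\": 0.0}\n\ndef train_model(config, val_ids):  # stub: train with model selection on val_ids\n    return config\n\ndef evaluate(model, ids):  # stub: compute (TA, PA, D) on the screenplays in ids\n    return np.zeros(3)\n\ntest_ids = np.arange(38)  # placeholder test-set size\nscores = [evaluate(train_model(config, test_ids[val]), test_ids[rest])\n          for rest, val in KFold(n_splits=5).split(test_ids)]\nprint(np.mean(scores, axis=0))  # final scores: average over the five folds",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4.2"
},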
{
"text": "To evaluate our model, we used the TP identification evaluation metrics proposed by Papalampidi et al. (2019) : Total Agreement (T A), Partial Agreement (P A), and Distance (D). Those Metrics are defined as follows. T A is the ratio of TP scenes that are correctly identified (Eq. 1). In the equation, S i is a set of scenes that is predicted as a certain TP in a screenplay, G i is the ground-truth set of scenes corresponding to that TP event, T is the number of TPs, in our case T = 5, and L is the number of Table 3 : Total Agreement (TA), Partial Agreement (PA), and mean distance (D). The first two rows are the baselines. A boldface score is the best score in its column.",
"cite_spans": [
{
"start": 84,
"end": 109,
"text": "Papalampidi et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 512,
"end": 519,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "# of parameters Training time (ratio) TAM 40.1k 1.12 GraphTP 41.6k 1.45 Transformer 46.3k 1.00 Table 4 : The number of parameters and training time of models. Numbers in 'Training time' are ratios to the training time of our proposed model set at 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "T A = 1 T \u2022 L T \u2022L i=1 |S i \u2229 G i | |S i \u222a G i |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "(1) P A is the ratio of TP events about which more than one ground-truth TP scenes are identified (Eq. 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P A = 1 T \u2022 L T \u2022L i=1 [|S i \u2229 G i | = \u03c6]",
"eq_num": "(2)"
}
],
"section": "Model",
"sec_num": null
},
{
"text": "D is the average distance between all pairs of predicted TP scenes (S i ) and ground-truth TP scenes (G i ) (Eq. 3, 4) , where N is the number of scenes in a screenplay.",
"cite_spans": [
{
"start": 108,
"end": 115,
"text": "(Eq. 3,",
"ref_id": null
},
{
"start": 116,
"end": 118,
"text": "4)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d [S i , G i ] = 1 N min s\u2208S i ,g\u2208G i |s \u2212 g| (3) D = 1 T \u2022 L T \u2022L i=1 d [S i , G i ]",
"eq_num": "(4)"
}
],
"section": "Model",
"sec_num": null
},
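{
"text": "The three metrics can be computed directly from Equations 1-4; the following is a minimal sketch (function and argument names are ours), where pred and gold hold one set of scene indices per TP event.\n\ndef tp_metrics(pred, gold, n_scenes):\n    # pred, gold: lists over the T*L TP events of scene-index sets S_i, G_i\n    # n_scenes: per event, the number of scenes N in its screenplay\n    ta = pa = dist = 0.0\n    for S, G, N in zip(pred, gold, n_scenes):\n        ta += len(S & G) / len(S | G)                      # Eq. 1 (Jaccard agreement)\n        pa += float(len(S & G) > 0)                        # Eq. 2 (at least one match)\n        dist += min(abs(s - g) for s in S for g in G) / N  # Eq. 3 (normalized distance)\n    n = len(pred)\n    return ta / n, pa / n, dist / n                        # TA, PA, D (Eq. 4)\n\nprint(tp_metrics([{3, 4}], [{4, 5}], [20]))  # (0.333..., 1.0, 0.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},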
{
"text": "T A and P A indicate how correctly a model predicts TPs, and D indicates how well the model has learned TP positions. It can be seen that T A and P A represent the model's prediction bias, and D represents variance, so we can suppose that there is a trade-off between D and T A or P A. Also, when the TA and PA scores are similar, it means that the model has a high accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Input Augmentation It is revealed that the models trained with augmented inputs outperform those trained only with raw inputs by the TA and PA scores (Table 3) . This result supports our assumption that dialogue information will be helpful in finding TPs because the TA and PA scores, which indicate whether TPs are correctly identified, have improved. As aforementioned in Section 4.3, the D score has an inverse relationship with TA in that it represents the variance of model predictions. On the other hand, TAM shows a relatively poor TA score; it seems that dialogue information hardly improves the performance of a model that does not capture long-term dependencies well. One possible reason is that dialogue information provides the model with information that the model already knows even though it does not capture long-term dependencies well. For more accurate explanation, further analyses are required.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 159,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result Analysis",
"sec_num": "4.4"
},
{
"text": "Architecture In the case of raw sentence inputs, our proposed architecture based on Transformer outperforms the two baseline systems consistently. The result implies that the model that captures longterm dependencies well can improve the performance of summarizing long and complex texts, as we have expected. Because the model's performance has improved over the baseline by all metrics, our proposed architecture can be considered as an adequate model for TP identification, compared to the baselines. Also, even though our model contains a few more parameters than the two baselines, it has faster training speed, especially compared to GraphTP, showing a difference of almost 40% or more (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 692,
"end": 701,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result Analysis",
"sec_num": "4.4"
},
{
"text": "When we fed dialogue-augmented inputs into the model, the TA and PA scores have improved. Although, when we used dialogue-augmented inputs, GraphTP recorded better performance by TA and PA, for D, our model shows much better results. This result means that the model predicts whether a given scene is a TP or not becomes more accurately whereas it does not predict well across all TPs (i.e., TP1 to TP5), but for a given scene, the model predicts certain TPs very well and some other TPs very bad. Therefore, the dialogue feature provides helpful information for TP identification that GraphTP lacks even though it is helpful for some TPs but redundant and even disturbing for some other TPs. This suggests that there is high possibility that not all TPs (i.e., TP1 to TP5) are included in the output summary. In this regard, we can conclude that our proposed model makes more confident predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result Analysis",
"sec_num": "4.4"
},
{
"text": "In this paper, we suggest using dialogue information as an additional training feature and propose a Transformer-based architecture for TP identification. Our experimental results present that dialogue information has a positive effect on the prediction accuracy on whether the scene is TP or not. However, the opposite was the case for the sequence-based model; further analyses are needed. In addition, the results indicate that using Transformer instead of LSTM significantly improves the overall performance in identifying TP scenes by encoding long-term dependencies among scenes better. We believe that using unique attributes in screenplays, such as dialogues, can help improving the model performance and when summarizing texts that have complex structures including screenplays, Transformer , which handles long histories robustly, is effective. In the future, we plan to go through the human evaluating process to see how dialogue information affects the output summary's informativeness, especially which one is identified better than another, and how the trade-off among automatic evaluation metrics affects the summary output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We appreciate all of the reviewers giving their invaluable comments on this paper. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Universal sentence encoder",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Sheng Yi Kong",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scientific document summarization via citation contextualization and scientific discourse",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal on Digital Libraries",
"volume": "19",
"issue": "",
"pages": "287--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arman Cohan and Nazli Goharian. 2017. Scientific document summarization via citation contextualiza- tion and scientific discourse. International Journal on Digital Libraries, 19:287-303.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Narrative theory and the dynamics of popular movies",
"authors": [
{
"first": "James",
"middle": [
"E"
],
"last": "Cutting",
"suffix": ""
}
],
"year": 2016,
"venue": "Psychonomic Bulletin Review",
"volume": "23",
"issue": "",
"pages": "1713--1743",
"other_ids": {
"DOI": [
"10.3758/s13423-016-1051-4"
]
},
"num": null,
"urls": [],
"raw_text": "James E. Cutting. 2016. Narrative theory and the dy- namics of popular movies. Psychonomic Bulletin Review, 23:1713--1743.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convolutional networks on graphs for learning molecular fingerprints",
"authors": [
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Dougal",
"middle": [],
"last": "Maclaurin",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Aguilera-Iparraguirre",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "G\u00f3mez-Bombarelli",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Hirzel",
"suffix": ""
},
{
"first": "Al\u00e1n",
"middle": [],
"last": "Aspuru-Guzik",
"suffix": ""
},
{
"first": "Ryan",
"middle": [
"P"
],
"last": "Adams",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Duvenaud, Dougal Maclaurin, Jorge Aguilera- Iparraguirre, Rafael G\u00f3mez-Bombarelli, Timothy Hirzel, Al\u00e1n Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional networks on graphs for learn- ing molecular fingerprints.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Abstractive dialogue summarization with sentence-gated modeling optimized by dialogue acts",
"authors": [
{
"first": "C",
"middle": [],
"last": "Goo",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "735--742",
"other_ids": {
"DOI": [
"10.1109/SLT.2018.8639531"
]
},
"num": null,
"urls": [],
"raw_text": "C. Goo and Y. Chen. 2018. Abstractive dialogue sum- marization with sentence-gated modeling optimized by dialogue acts. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 735-742.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Movie script summarization as graph-based scene extraction",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "John Gorinski",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1066--1076",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1113"
]
},
"num": null,
"urls": [],
"raw_text": "Philip John Gorinski and Mirella Lapata. 2015. Movie script summarization as graph-based scene extrac- tion. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 1066-1076, Denver, Colorado. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Storytelling Made Easy: Persuade and Transform Your Audiences, Buyers, and Clients -Simply, Quickly, and Profitably",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Hauge",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Hauge. 2017. Storytelling Made Easy: Per- suade and Transform Your Audiences, Buyers, and Clients -Simply, Quickly, and Profitably. Indie Books International.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Storia: Summarizing social media content based on narrative theory using crowdsourcing",
"authors": [
{
"first": "Joy",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Andres",
"middle": [],
"last": "Monroy-Hernandez",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work amp; Social Computing, CSCW '16",
"volume": "",
"issue": "",
"pages": "1018--1027",
"other_ids": {
"DOI": [
"10.1145/2818048.2820072"
]
},
"num": null,
"urls": [],
"raw_text": "Joy Kim and Andres Monroy-Hernandez. 2016. Storia: Summarizing social media content based on narra- tive theory using crowdsourcing. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work amp; Social Computing, CSCW '16, page 1018-1027, New York, NY, USA. Associ- ation for Computing Machinery.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text summarization with pretrained encoders. CoRR",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Mirella Lapata. 2019. Text sum- marization with pretrained encoders. CoRR, abs/1908.08345.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Explorations in automatic book summarization",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ceylan",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mihalcea and H. Ceylan. 2007. Explorations in au- tomatic book summarization. In EMNLP-CoNLL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. CoRR, abs/1808.08745.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Lea Frermann, and Mirella Lapata. 2020a. Screenplay summarization using latent narrative structure",
"authors": [
{
"first": "Pinelopi",
"middle": [],
"last": "Papalampidi",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinelopi Papalampidi, Frank Keller, Lea Frermann, and Mirella Lapata. 2020a. Screenplay summarization using latent narrative structure.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Movie plot analysis via turning point identification",
"authors": [
{
"first": "Pinelopi",
"middle": [],
"last": "Papalampidi",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinelopi Papalampidi, Frank Keller, and Mirella Lap- ata. 2019. Movie plot analysis via turning point iden- tification.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Movie summarization via sparse graph construction",
"authors": [
{
"first": "Pinelopi",
"middle": [],
"last": "Papalampidi",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinelopi Papalampidi, Frank Keller, and Mirella Lap- ata. 2020b. Movie summarization via sparse graph construction.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. CoRR, abs/1704.04368.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Experiments in microblog summarization",
"authors": [
{
"first": "B",
"middle": [],
"last": "Sharifi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hutton",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Second International Conference on Social Computing",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Sharifi, M. Hutton, and J. Kalita. 2010. Experiments in microblog summarization. 2010 IEEE Second In- ternational Conference on Social Computing, pages 49-56.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks",
"authors": [
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michihiro Yasunaga, Jungo Kasai, Rui Zhang, A. R. Fabbri, Irene Li, D. Friedman, and Dragomir R. Radev. 2019. Scisummnet: A large annotated cor- pus and content-impact models for scientific paper summarization with citation networks. In AAAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Proposed architecture using Transformer encoders.",
"num": null
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td>: Statistics of TRIPOD (Papalampidi et al.,</td></tr><tr><td>2019).</td></tr></table>"
}
}
}
}