{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:49:25.543235Z"
},
"title": "Unsupervised Anomaly Detection in Parole Hearings using Language Models",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Todd",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Catalin",
"middle": [],
"last": "Voss",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jenny",
"middle": [],
"last": "Hong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Each year, thousands of roughly 150-page parole hearing transcripts in California go unread because legal experts lack the time to review them. Yet, reviewing transcripts is the only means of public oversight in the parole process. To assist reviewers, we present a simple unsupervised technique for using language models (LMs) to identify procedural anomalies in long-form legal text. Our technique highlights unusual passages that suggest further review could be necessary. We utilize a contrastive perplexity score to identify passages, defined as the scaled difference between its perplexities from two LMs, one fine-tuned on the target (parole) domain, and another pre-trained on out-of-domain text to normalize for grammatical or syntactic anomalies. We present quantitative analysis of the results and note that our method has identified some important cases for review. We are also excited about potential applications in unsupervised anomaly detection, and present a brief analysis of results for detecting fake TripAdvisor reviews.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Each year, thousands of roughly 150-page parole hearing transcripts in California go unread because legal experts lack the time to review them. Yet, reviewing transcripts is the only means of public oversight in the parole process. To assist reviewers, we present a simple unsupervised technique for using language models (LMs) to identify procedural anomalies in long-form legal text. Our technique highlights unusual passages that suggest further review could be necessary. We utilize a contrastive perplexity score to identify passages, defined as the scaled difference between its perplexities from two LMs, one fine-tuned on the target (parole) domain, and another pre-trained on out-of-domain text to normalize for grammatical or syntactic anomalies. We present quantitative analysis of the results and note that our method has identified some important cases for review. We are also excited about potential applications in unsupervised anomaly detection, and present a brief analysis of results for detecting fake TripAdvisor reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "California houses America's largest \"lifer\" population, with 25% of its 115,000 prisoners serving life sentences. Each year, the Board of Parole Hearings (BPH) conducts thousands of parole hearings to decide whether to grant prisoners early release. As California has enacted legislation to reduce its prison population, the number of hearings is scheduled to double this year and continue to rise for the foreseeable future. While each hearing is transcribed into about 150 pages of dialogue and sent to the BPH and governor's office for review, capacity constraints mean that, in practice, only grants of parole are reviewed. Legal scholars who painstakingly analyzed small subsets of transcripts have found that parole decisions are sometimes made in an arbitrary and capricious manner (Bell, 2019) , but they lack the resources for ongoing review.",
"cite_spans": [
{
"start": 789,
"end": 801,
"text": "(Bell, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To help alleviate these capacity constraints and allow for greater review of parole denials, we propose an automatic anomaly detection system that allows reviewers to focus their attention on the most anomalous portions of text in each hearing. 1 The lack of gold anomaly labels precludes the use of many supervised anomaly detection techniques, so instead we propose using language models trained on the parole transcripts to perform unsupervised anomaly detection.",
"cite_spans": [
{
"start": 245,
"end": 246,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Defining an \"anomaly\" in this context is challenging. There are many ways in which a piece of text might be unusual without constituting grounds for additional review. We distinguish primarily between non-semantic, semantic, and procedural anomalies. We define a non-semantic anomaly as an irregularity in the linguistic structure of a piece of text (for instance, a sentence fragment). A semantic anomaly, by contrast, is one caused by the meaning of the text. In the context of a parole hearing, a conversation that deviates substantially from the typical topics of discussion would constitute a semantic anomaly. Finally, a procedural anomaly is an irregularity that indicates the hearing differed substantively from the prescribed guidelines. Often, a procedural anomaly will also be a semantic anomaly. Figure 1 represents such a case, as it both includes language atypical for a parole hearing and, more generally, indicates a breakdown in communication between the commissioner and the parole candidate. We note that there are also, of course, legal anomalies that do not manifest as atypical language.",
"cite_spans": [],
"ref_spans": [
{
"start": 808,
"end": 816,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A language model (LM) provides an organic way to identify unusual text through its perplexity score. We hypothesize that many procedural anomalies can be identified by examining statistical anomalies in the texts of transcripts, which would seemingly allow for their detection by an LM. However, most instances of unusual text found by a naive LM are non-semantic, consisting of typos, ungrammatical sentences, etc. To solve this problem, we instead use a pair of language models. We define our anomaly metric, the contrastive perplexity score, as the scaled difference between the perplexity of one LM, which has been fine-tuned on the target domain, and the perplexity of another LM, which has only been pre-trained on out-of-domain text. Non-semantic anomalies will have high perplexity under both LMs (and thus low contrastive perplexity), so the second LM acts as a \"normalizer\" for non-semantic content. We present our results on a human-annotated subset of the parole data. Our method recalls 71% of human-labeled procedural anomalies while only asking experts to review 50% of the text of each transcript. We also show that our method can be extended to other domains where a large labeled corpus of anomalous text is unavailable, namely the task of opinion spam detection in TripAdvisor reviews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Anomaly detection (AD) techniques cover a range of problem settings. Sch\u00f6lkopf et al. (1999) Text is a challenging regime for AD because of the importance of domain-dependence: what is shocking in one case might be mundane in another. Few, if any, universal features for AD exist. General approaches for text AD include non-negative matrix factorization (Kannan et al., 2017) and the use of \"selectional preferences\" (Dasigi and Hovy, 2014) . One notable approach, studied in the dis-course coherence literature, is to focus on local abnormalities in topics. Li and Jurafsky (2017) and Lin et al. (2011) present deep models for identifying incoherent passages of text, but discourse coherence studies much shorter text than parole hearings. To address longer text, our approach, like that of Guthrie et al. (2008) , splits each document into segments ranked by anomaly score.",
"cite_spans": [
{
"start": 69,
"end": 92,
"text": "Sch\u00f6lkopf et al. (1999)",
"ref_id": "BIBREF20"
},
{
"start": 354,
"end": 375,
"text": "(Kannan et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 417,
"end": 440,
"text": "(Dasigi and Hovy, 2014)",
"ref_id": "BIBREF5"
},
{
"start": 559,
"end": 581,
"text": "Li and Jurafsky (2017)",
"ref_id": "BIBREF9"
},
{
"start": 586,
"end": 603,
"text": "Lin et al. (2011)",
"ref_id": "BIBREF10"
},
{
"start": 792,
"end": 813,
"text": "Guthrie et al. (2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our strategy of using LMs for AD has precedents, but primarily much simpler LMs, and for AD contexts that require more supervision than is available in the parole hearing setting. Laskov (2006, 2007) and Aktolga et al. (2011) use n-gram LMs to identify anomalous sections and documents in a corpus of American bills presented before Congress. Axelrod et al. (2011) and Xu et al. (2019) also explore using a \"baseline\" LM for translation and discourse coherence, respectively.",
"cite_spans": [
{
"start": 180,
"end": 199,
"text": "Laskov (2006, 2007)",
"ref_id": null
},
{
"start": 204,
"end": 225,
"text": "Aktolga et al. (2011)",
"ref_id": "BIBREF0"
},
{
"start": 343,
"end": 364,
"text": "Axelrod et al. (2011)",
"ref_id": "BIBREF1"
},
{
"start": 369,
"end": 385,
"text": "Xu et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our model uses GPT-2, a transformer-based LM pre-trained on WebText, a corpus scraped from the internet (Radford et al., 2019; Vaswani et al., 2017) . The following three observations motivate our approach to identifying anomalous text: (1) The perplexity of a fine-tuned LM on a target domain yields a score that measures both genre-specific semantic anomalies and general language anomalies (e.g. ungrammatical inputs, misspellings, incoherence).",
"cite_spans": [
{
"start": 104,
"end": 126,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 127,
"end": 148,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "(2) The perplexity of an LM only pre-trained on many domains represents solely general language anomalies. (3) Putting the two together, the difference in perplexity between a fine-tuned language model and a pre-trained language model gives a \"semantic anomaly score\" of a piece of text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "We define the contrastive perplexity LM anomaly score to be the scaled difference in perplexities observed from two models. One model, the fine-tuned LM, is fit to a target corpus of text, without any supervision on which passages are anomalies. The other model, the normalizer LM, is the out-of-the-box GPT-2 model (Radford et al., 2019; Vaswani et al., 2017) . For a mundane piece of text, both pplx fine-tuned and pplx norm. are low. For a non-semantic anomaly, both are high. In both cases, contrastive perplexity is low. However, for a semantic anomaly, we expect pplx fine-tuned to be high, because of its sensitivity to the text's context domain, and pplx norm. to be low, because the text may not otherwise be unusual in general English, leading to high contrastive perplexity.",
"cite_spans": [
{
"start": 316,
"end": 338,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 339,
"end": 360,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
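{
"text": "To make the computation above concrete, here is a minimal Python sketch of the contrastive perplexity score (our illustration, not code released with the paper), using the Hugging Face transformers API; the fine-tuned checkpoint path is hypothetical:\n\nimport torch\nfrom transformers import GPT2LMHeadModel, GPT2TokenizerFast\n\ntokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")\n# Normalizer: out-of-the-box GPT-2, pre-trained on WebText only.\nnormalizer = GPT2LMHeadModel.from_pretrained(\"gpt2\").eval()\n# Hypothetical checkpoint fine-tuned on the target (parole) corpus.\nfine_tuned = GPT2LMHeadModel.from_pretrained(\"path/to/parole-gpt2\").eval()\n\n@torch.no_grad()\ndef perplexity(model, text):\n    ids = tokenizer(text, return_tensors=\"pt\").input_ids\n    loss = model(ids, labels=ids).loss  # mean token-level cross-entropy\n    return torch.exp(loss).item()\n\ndef contrastive_perplexity(text, beta):\n    # LM_anom = pplx_fine-tuned - beta * pplx_normalizer\n    return perplexity(fine_tuned, text) - beta * perplexity(normalizer, text)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": null
},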
{
"text": "Because the fine-tuned LM achieves a lower perplexity, we use \u03b2 to re-scale the perplexity output of the normalizer and ensure the models operate at the same scale. While \u03b2 can be tuned as a hyperparameter, a reasonable and balanced choice is the ratio between the mean perplexities achieved by the fine-tuned model and the normalizer model on a validation dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "\u03b2 = \u03a3_{x \u2208 val} pplx_fine-tuned(x) / \u03a3_{x \u2208 val} pplx_normalizer(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
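{
"text": "A hedged sketch of the \u03b2 estimate (reusing the hypothetical perplexity helper from the sketch above; chunking and batching are omitted for brevity):\n\ndef estimate_beta(val_chunks):\n    # Ratio of summed (equivalently, mean) validation perplexities.\n    ft = sum(perplexity(fine_tuned, c) for c in val_chunks)\n    norm = sum(perplexity(normalizer, c) for c in val_chunks)\n    return ft / norm  # the paper reports beta = 0.40 on the parole validation set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": null
},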
{
"text": "We can use our LM anomaly score to identify the top k chunks of anomalous text for a given set of documents directly. In a completely unsupervised setting, with no labels as to which documents (or chunks) are anomalies, there is no way to associate the absolute contrastive perplexity scores with the predictive target. However, if given a clean dataset (i.e. a validation set that is labeled and known not to contain anomalies) we can instead anchor the scores to the clean dataset and detect anomalies by performing an out-of-distribution test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anomaly Aggregation",
"sec_num": "3.1"
},
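{
"text": "As an illustration of the fully unsupervised top-k aggregation described above (our sketch, reusing the hypothetical contrastive_perplexity helper):\n\ndef top_k_anomalies(chunks, beta, k=20):\n    # Rank a document's chunks by contrastive perplexity; flag the top k for review.\n    scored = [(contrastive_perplexity(c, beta), c) for c in chunks]\n    scored.sort(key=lambda pair: pair[0], reverse=True)\n    return scored[:k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anomaly Aggregation",
"sec_num": null
},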
{
"text": "We compare our model to a number of unsupervised baseline models. Within AD, most existing algorithms are unsuitable for our task (e.g. due to the need for supervision, incompatibility with long-form text). The most straightforward baseline is simply the fine-tuned GPT-2 model alone. We also compare our work to an unsupervised topic-modeling baseline that should also be agnostic to non-semantic anomalies, like Misra et al. (2008) . We fit a latent Dirichlet allocation (LDA) model (Blei et al., 2003) to our train-corpus, then compute the mean representation and covariance matrix over topics, over a held-out portion of data. At prediction time, we compute the LDA representation for some text f (x) and use its Mahalanobis distance from the mean representation as our anomaly score:",
"cite_spans": [
{
"start": 414,
"end": 433,
"text": "Misra et al. (2008)",
"ref_id": "BIBREF11"
},
{
"start": 485,
"end": 504,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.1"
},
{
"text": "(f (x) \u2212 \u00b5 T ) T \u03a3 \u22121 T (f (x) \u2212 \u00b5 T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.1"
},
{
"text": ", where \u00b5 T and \u03a3 T are the sample mean and covariance over the topic mixture embeddings, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.1"
},
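{
"text": "A self-contained sketch of this baseline (we assume the gensim LDA API for illustration; the paper does not specify the implementation):\n\nimport numpy as np\nfrom gensim import corpora, models\n\ndef make_lda_scorer(train_texts, held_out_texts, num_topics=50):\n    docs = [t.split() for t in train_texts]\n    vocab = corpora.Dictionary(docs)\n    lda = models.LdaModel([vocab.doc2bow(d) for d in docs], num_topics=num_topics, id2word=vocab)\n\n    def embed(text):\n        # Dense topic-mixture representation f(x).\n        dense = np.zeros(num_topics)\n        for topic, weight in lda.get_document_topics(vocab.doc2bow(text.split()), minimum_probability=0.0):\n            dense[topic] = weight\n        return dense\n\n    held = np.stack([embed(t) for t in held_out_texts])\n    mu = held.mean(axis=0)\n    cov_inv = np.linalg.pinv(np.cov(held.T))  # pseudo-inverse for numerical stability\n\n    def score(text):\n        # Mahalanobis distance of f(x) from the held-out mean topic mixture.\n        d = embed(text) - mu\n        return float(d @ cov_inv @ d)\n\n    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": null
},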
{
"text": "Our analysis is performed over the complete 2 set of parole hearing transcripts in California between January 2007 and July 2018, which totals 30,734 transcripts. Each document is a transcript of an hours-long conversation between the parole board and a candidate (other parties are occasionally also present), which ends in a decision from the parole board. Transcribed, each hearing is roughly 27,000 tokens long. We train our model on a train corpus of 27,577 transcripts, each split into non-overlapping chunks of 1024 tokens. We fit \u03b2 on a validation corpus of 1,963 transcripts, with chunksize 256. The training chunksize was selected to maximize efficiency of the underlying GPT-2 model, while the smaller validation chunksize better matches the scale at which we expect to observe linguistic anomalies. We collected a held-out test corpus of anomalies over 315 transcripts by asking undergraduate and law students to label instances of anomalous language. Out of 82,959 chunks, students found 179 anomalies. An experienced parole attorney checked the anomalies and confirmed 68. Student reviewers were asked to identify semantic anomalies and the expert was asked to determine which of those were also procedural anomalies. While we believe that this offers a viable estimate of the true set of procedural anomalies, this leaves out anomalies that are not manifested by irregular language. To evaluate our model's recall, we investigate the tradeoff between the share of the expert's \"true anomalies\" we recover, and the number of chunks human reviewers must read. We asked the parole attorney to review our model's predictions at a fixed threshold. We compute the mean reciprocal rank (MRR) (?), rather than precision, because a single anomaly suffices to flag a whole transcript for review: only the rank of the highest scoring anomaly affects reviewer time. Details are given in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parole Hearings",
"sec_num": "4.2"
},
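{
"text": "For concreteness, sketches of the chunking and MRR computations described above (our illustration; names are hypothetical):\n\ndef chunk_tokens(token_ids, chunk_size=256):\n    # Non-overlapping fixed-size chunks (1024 tokens for training, 256 for validation/test).\n    return [token_ids[i:i + chunk_size] for i in range(0, len(token_ids), chunk_size)]\n\ndef mean_reciprocal_rank(per_document_flags):\n    # per_document_flags: for each transcript, 0/1 anomaly labels for its chunks in model-ranked order.\n    reciprocal_ranks = []\n    for flags in per_document_flags:\n        rank = next((i + 1 for i, f in enumerate(flags) if f), None)\n        reciprocal_ranks.append(1.0 / rank if rank else 0.0)\n    return sum(reciprocal_ranks) / len(reciprocal_ranks)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parole Hearings",
"sec_num": null
},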
{
"text": "Our second experiment is performed over the Deceptive Opinion Spam dataset (Ott et al., 2011 (Ott et al., , 2013 . The dataset consists of 1,600 short humangenerated reviews of 20 hotels in the Chicago area. 800 of these reviews were scraped from TripAdvisor and are marked \"authentic\"; the remaining 800 reviews are marked \"anomalous\" and were gen-Model k=20 k=50 LDA Baseline 0.103 0.426 Fine-tuned LM 0.235 0.573 Contrastive Perplex. 0.279 0.676 Table 1 : True anomaly recall achieved by reviewing the top-k chunks for each document. The average document has 105 chunks in this sample. erated by Mechanical Turk workers. In order to fine-tune GPT-2, we use a collection of TripAdvisor reviews collected by (Wang et al., 2010) . 3 We only include the 171,016 reviews that were shorter than 1024 tokens and longer than 30 tokens. Additionally, we hold out 10,000 reviews to fit \u00b5 and \u03a3 for the LDA baseline.",
"cite_spans": [
{
"start": 75,
"end": 92,
"text": "(Ott et al., 2011",
"ref_id": "BIBREF13"
},
{
"start": 93,
"end": 112,
"text": "(Ott et al., , 2013",
"ref_id": "BIBREF12"
},
{
"start": 709,
"end": 728,
"text": "(Wang et al., 2010)",
"ref_id": "BIBREF22"
},
{
"start": 731,
"end": 732,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 449,
"end": 456,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hotel Reviews",
"sec_num": "4.3"
},
{
"text": "We use the GPT-2 base model for all of our experiments, trained for 48 hours using the Adam optimizer with an initial learning rate of 10 \u22125 and linear decay.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model & Training",
"sec_num": "4.4"
},
{
"text": "Our fine-tuned and normalizer model achieve mean perplexities of 9.22 and 22.99 (\u03b2 = 0.40), respectively, on the validation set with fixed chunksize 256. Figure 2 describes the tradeoff between recall and the percentage the transcript human reviewers must read for our model and baselines as we vary the model. Contrastive perplexity outperforms all baselines, but overall recall is low. We also observe that the LM anomaly score produced by our model is not well-conserved across documents. Rather than using a global threshold for our model, we can instead ask reviewers to always use top k predictions for each document. Table 1 shows recall for different values of k.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 162,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 624,
"end": 631,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parole Hearings",
"sec_num": "5.1"
},
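{
"text": "A sketch of how the recall-versus-reading tradeoff in Figure 2 can be computed from chunk-level scores (our illustration; array names are hypothetical):\n\nimport numpy as np\n\ndef recall_vs_reading(scores, labels, threshold):\n    # scores, labels: parallel chunk-level arrays over all transcripts (label 1 = true anomaly).\n    flagged = scores >= threshold\n    recall = (flagged & (labels == 1)).sum() / (labels == 1).sum()\n    fraction_read = flagged.mean()  # share of the corpus a reviewer must read\n    return recall, fraction_read",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parole Hearings",
"sec_num": null
},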
{
"text": "We evaluate our model's precision at the threshold that yields an average of 55 chunks per document (corresponding to about 52% of average transcript length) and recall of 0.68, marked on the plot. At this threshold, our model achieves an MRR of 0.227. Student annotators achieve 0.264 precision (note that, because the ratings from the students were not ranked, it is not possible to compute their MRR). The low human precision underscores the 3 We ensured that there is no overlap in between the reviews used for fine-tuning and the Deceptive Opinion Spam dataset. ",
"cite_spans": [
{
"start": 445,
"end": 446,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parole Hearings",
"sec_num": "5.1"
},
{
"text": "Our fine-tuned and normalizer model achieve mean perplexities of 22.62 and 53.60 (\u03b2 = 0.42) on the validation set of \"real\" TripAdvisor reviews. Figure 3 shows the ROC curve of our model compared to baselines, using our unsupervised LM anomaly measure as a \"fake review classifier\" on the Deceptive Opinion Spam dataset. Our model achieves an F1 of 0.537 at the optimal threshold. With manually tuned \u03b2 = 1.0, we achieve 0.679. While below the 0.898 F1 achieved by the best fully supervised models (Ott et al., 2011) , this indicates that our model is a promising unsupervised predictor.",
"cite_spans": [
{
"start": 498,
"end": 516,
"text": "(Ott et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 145,
"end": 151,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hotel Reviews",
"sec_num": "5.2"
},
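{
"text": "A brief sketch of using the contrastive perplexity score as an unsupervised fake-review classifier (our illustration; scikit-learn is assumed for the metrics):\n\nimport numpy as np\nfrom sklearn.metrics import f1_score, roc_auc_score\n\ndef evaluate_scores(scores, labels):\n    # labels: 1 = deceptive (\"anomalous\") review, 0 = authentic.\n    auc = roc_auc_score(labels, scores)\n    best_f1 = max(f1_score(labels, scores >= t) for t in np.unique(scores))\n    return auc, best_f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hotel Reviews",
"sec_num": null
},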
{
"text": "We present a novel contrastive perplexity-based approach for unsupervised anomaly detection. We define semantic and non-semantic anomalies, and present evidence that our model can distinguish between them better than other unsupervised baselines. Detecting procedural anomalies in legal cases is easier with structured data, but that data is often not readily available. Our approach seeks to support legal decision makers in identifying anomalous cases for review when structured records are unavailable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Conclusion",
"sec_num": "6"
},
{
"text": "Our experiments on an unexplored dataset of 30,734 parole hearing transcripts have identified troubling cases for review. However, our quantitative evaluations also show the difficulty of defining a semantic anomaly consistently. Our results on detecting fake hotel reviews indicate that our approach becomes more powerful when anomalyfree documents are available to perform an out-ofdistribution test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Conclusion",
"sec_num": "6"
},
{
"text": "In future work, we seek to use conditional LMs to bridge the gap between our unsupervised method and settings in which some structured data is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Conclusion",
"sec_num": "6"
},
{
"text": "Our project raises ethical questions about the use of technology in criminal justice review procedures. We provide a statement about the ethical implications of our work in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Dept. of Corrections withheld a a few hundred transcripts from that period, citing \"confidential information.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We ensured that there is no overlap between the reviews used for fine-tuning and the Deceptive Opinion Spam dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Detecting outlier sections in us congressional legislation",
"authors": [
{
"first": "Elif",
"middle": [],
"last": "Aktolga",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Ros",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Assogba",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '11",
"volume": "",
"issue": "",
"pages": "235--244",
"other_ids": {
"DOI": [
"10.1145/2009916.2009951"
]
},
"num": null,
"urls": [],
"raw_text": "Elif Aktolga, Irene Ros, and Yannick Assogba. 2011. Detecting outlier sections in us congressional legisla- tion. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '11, pages 235-244, New York, NY, USA. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Domain adaptation via pseudo in-domain data selection",
"authors": [
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "355--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, EMNLP '11, pages 355-362, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A stone of hope: Legal and empirical analysis of california juvenile lifer parole decisions",
"authors": [
{
"first": "Kristen",
"middle": [],
"last": "Bell",
"suffix": ""
}
],
"year": 2019,
"venue": "Harv. CR-CLL Rev",
"volume": "54",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristen Bell. 2019. A stone of hope: Legal and empir- ical analysis of california juvenile lifer parole deci- sions. Harv. CR-CLL Rev., 54:455.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Anomaly detection: A survey",
"authors": [
{
"first": "Varun",
"middle": [],
"last": "Chandola",
"suffix": ""
},
{
"first": "Arindam",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Vipin",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Comput. Surv",
"volume": "41",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1541880.1541882"
]
},
"num": null,
"urls": [],
"raw_text": "Varun Chandola, Arindam Banerjee, and Vipin Kumar. 2009. Anomaly detection: A survey. ACM Comput. Surv., 41(3):15:1-15:58.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Modeling newswire events using neural networks for anomaly detection",
"authors": [
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1414--1422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradeep Dasigi and Eduard Hovy. 2014. Modeling newswire events using neural networks for anomaly detection. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1414-1422, Dublin, Ireland. Dublin City University and Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An unsupervised probabilistic approach for the detection of outliers in corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Louise",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 2008,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Guthrie, Louise Guthrie, and Yorick Wilks. 2008. An unsupervised probabilistic approach for the de- tection of outliers in corpora. In LREC 2008.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A survey of outlier detection methodologies",
"authors": [
{
"first": "Victoria",
"middle": [],
"last": "Hodge",
"suffix": ""
},
{
"first": "Jim",
"middle": [],
"last": "Austin",
"suffix": ""
}
],
"year": 2004,
"venue": "Artificial Intelligence Review",
"volume": "22",
"issue": "2",
"pages": "85--126",
"other_ids": {
"DOI": [
"10.1023/B:AIRE.0000045502.10941.a9"
]
},
"num": null,
"urls": [],
"raw_text": "Victoria Hodge and Jim Austin. 2004. A survey of out- lier detection methodologies. Artificial Intelligence Review, 22(2):85-126.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Outlier detection for text data : An extended version",
"authors": [
{
"first": "Ramakrishnan",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Hyenkyun",
"middle": [],
"last": "Woo",
"suffix": ""
},
{
"first": "Charu",
"middle": [
"C."
],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "Haesun",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishnan Kannan, Hyenkyun Woo, Charu C. Ag- garwal, and Haesun Park. 2017. Outlier detection for text data : An extended version.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural net models of open-domain discourse coherence",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "198--209",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1019"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Dan Jurafsky. 2017. Neural net models of open-domain discourse coherence. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 198-209, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatically evaluating text coherence using discourse relations",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "997--1006",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using dis- course relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies -Volume 1, HLT '11, pages 997-1006, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using LDA to detect semantically incoherent documents",
"authors": [
{
"first": "Hemant",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Capp\u00e9",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2008,
"venue": "CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hemant Misra, Olivier Capp\u00e9, and Fran\u00e7ois Yvon. 2008. Using LDA to detect semantically incoher- ent documents. In CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Lan- guage Learning, pages 41-48, Manchester, England. Coling 2008 Organizing Committee.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Negative deceptive opinion spam",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"T"
],
"last": "Hancock",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "497--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Claire Cardie, and Jeffrey T. Hancock. 2013. Negative deceptive opinion spam. In Proceedings of the 2013 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 497-501, At- lanta, Georgia. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Finding deceptive opinion spam by any stretch of the imagination",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"T"
],
"last": "Hancock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "309--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T. Han- cock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 309-319, Portland, Oregon, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Detecting unknown network attacks using language models",
"authors": [
{
"first": "Konrad",
"middle": [],
"last": "Rieck",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Laskov",
"suffix": ""
}
],
"year": 2006,
"venue": "Detection of Intrusions and Malware & Vulnerability Assessment",
"volume": "",
"issue": "",
"pages": "74--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konrad Rieck and Pavel Laskov. 2006. Detecting un- known network attacks using language models. In Detection of Intrusions and Malware & Vulnera- bility Assessment, pages 74-90, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Language models for detection of unknown attacks in network traffic",
"authors": [
{
"first": "Konrad",
"middle": [],
"last": "Rieck",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Laskov",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal in Computer Virology",
"volume": "2",
"issue": "4",
"pages": "243--256",
"other_ids": {
"DOI": [
"10.1007/s11416-006-0030-0"
]
},
"num": null,
"urls": [],
"raw_text": "Konrad Rieck and Pavel Laskov. 2007. Language mod- els for detection of unknown attacks in network traf- fic. Journal in Computer Virology, 2(4):243-256.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep one-class classification",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Ruff",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Vandermeulen",
"suffix": ""
},
{
"first": "Nico",
"middle": [],
"last": "Goernitz",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Deecke",
"suffix": ""
},
{
"first": "Shoaib Ahmed",
"middle": [],
"last": "Siddiqui",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Binder",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Kloft",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "4393--4402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lu- cas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel M\u00fcller, and Marius Kloft. 2018. Deep one-class classification. In Proceedings of the 35th International Conference on Machine Learn- ing, volume 80 of Proceedings of Machine Learn- ing Research, pages 4393-4402, Stockholmsm\u00e4ssan, Stockholm Sweden. PMLR.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Anomaly detection using autoencoders with nonlinear dimensionality reduction",
"authors": [
{
"first": "Mayu",
"middle": [],
"last": "Sakurada",
"suffix": ""
},
{
"first": "Takehisa",
"middle": [],
"last": "Yairi",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the MLSDA 2014 2Nd Workshop on Machine Learning for Sensory Data Analysis, MLSDA'14",
"volume": "4",
"issue": "",
"pages": "4--4",
"other_ids": {
"DOI": [
"10.1145/2689746.2689747"
]
},
"num": null,
"urls": [],
"raw_text": "Mayu Sakurada and Takehisa Yairi. 2014. Anomaly detection using autoencoders with nonlinear dimen- sionality reduction. In Proceedings of the MLSDA 2014 2Nd Workshop on Machine Learning for Sen- sory Data Analysis, MLSDA'14, pages 4:4-4:11, New York, NY, USA. ACM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Schlegl",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Seeb\u00f6ck",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [
"M"
],
"last": "Waldstein",
"suffix": ""
},
{
"first": "Ursula",
"middle": [],
"last": "Schmidt-Erfurth",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Langs",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Schlegl, Philipp Seeb\u00f6ck, Sebastian M. Wald- stein, Ursula Schmidt-Erfurth, and Georg Langs. 2017. Unsupervised anomaly detection with genera- tive adversarial networks to guide marker discovery.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Support vector method for novelty detection",
"authors": [
{
"first": "Bernhard",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Williamson",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Platt",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS'99",
"volume": "",
"issue": "",
"pages": "582--588",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernhard Sch\u00f6lkopf, Robert Williamson, Alex Smola, John Shawe-Taylor, and John Platt. 1999. Support vector method for novelty detection. In Proceedings of the 12th International Conference on Neural In- formation Processing Systems, NIPS'99, pages 582- 588, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Latent aspect rating analysis on review text data: A rating regression approach",
"authors": [
{
"first": "Hongning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10",
"volume": "",
"issue": "",
"pages": "783--792",
"other_ids": {
"DOI": [
"10.1145/1835804.1835903"
]
},
"num": null,
"urls": [],
"raw_text": "Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: A rating regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, pages 783-792, New York, NY, USA. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Let me ask you a question, Mr. [REDACT]. Are you angry? INMATE [REDACT]: No. PRESIDING COMMISSIONER: You seem kind of like you're a smart ass. I don't mean to say that rudely, but are you a smart ass? Example of a semantic anomaly"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "; Hodge and Austin (2004); Chandola et al. (2009); Sakurada and Yairi (2014); Ruff et al. (2018); Schlegl et al. (2017) present general techniques for outof-sample anomaly detection, with an increasing interest in deep unsupervised AD."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "LM anom. = pplx fine-tuned \u2212\u03b2\u2022pplx normalizer ."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Recall on true anomalies vs. the amount of reading required of the reviewer; (a) by varying the threshold, (b) by fixing k chunks per document."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "ROC curve for unsupervised fake review prediction on TripAdvisor dataset. The un-tuned \u03b2 = 0.42 is outperformed by \u03b2 = 1. intrinsic difficulty of the task and the level of disagreement between human annotators over what constitutes an anomaly."
}
}
}
}