{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:46:23.114637Z"
},
"title": "MaintNet: A Collaborative Open-Source Library for Predictive Maintenance Language Resources",
"authors": [
{
"first": "Farhad",
"middle": [],
"last": "Akhbardeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {
"country": "United States"
}
},
"email": ""
},
{
"first": "Travis",
"middle": [],
"last": "Desell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {
"country": "United States"
}
},
"email": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rochester Institute of Technology",
"location": {
"country": "United States"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Maintenance record logbooks are an emerging text type in NLP. An important part of them typically consist of free text with many domain specific technical terms, abbreviations, and nonstandard spelling and grammar. This poses difficulties for NLP pipelines trained on standard corpora. Analyzing and annotating such documents is of particular importance in the development of predictive maintenance systems, which aim to improve operational efficiency, reduce costs, prevent accidents, and save lives. In order to facilitate and encourage research in this area, we have developed MaintNet, a collaborative open-source library of technical and domain-specific language resources. MaintNet provides novel logbook data from the aviation, automotive, and facility maintenance domains along with tools to aid in their (pre-)processing and clustering. Furthermore, it provides a way to encourage discussion on and sharing of new datasets and tools for logbook data analysis.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Maintenance record logbooks are an emerging text type in NLP. An important part of them typically consist of free text with many domain specific technical terms, abbreviations, and nonstandard spelling and grammar. This poses difficulties for NLP pipelines trained on standard corpora. Analyzing and annotating such documents is of particular importance in the development of predictive maintenance systems, which aim to improve operational efficiency, reduce costs, prevent accidents, and save lives. In order to facilitate and encourage research in this area, we have developed MaintNet, a collaborative open-source library of technical and domain-specific language resources. MaintNet provides novel logbook data from the aviation, automotive, and facility maintenance domains along with tools to aid in their (pre-)processing and clustering. Furthermore, it provides a way to encourage discussion on and sharing of new datasets and tools for logbook data analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the rapid development of information technologies, engineering systems are generating increasing amounts of data that are used by various industries to improve their products. Maintenance records are one such type of data. They typically consist of event logbooks which are collected in many domains such as aviation, transportation, and healthcare (Tanguy et al., 2016; Altuncu et al., 2018) . The analysis of maintenance records is particularly important in the development of predictive maintenance systems, which can be used to prevent accidents and reduce maintenance costs (Jarry et al., 2018) .",
"cite_spans": [
{
"start": 354,
"end": 375,
"text": "(Tanguy et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 376,
"end": 397,
"text": "Altuncu et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 584,
"end": 604,
"text": "(Jarry et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Maintenance record datasets generally contain free text fields describing issues (or problems) written in non-standard language with many abbreviations and domain specific terms, as in the instances presented in Table 1 area, we present MaintNet 1 , a collaborative, open-source library for technical language resources with a special focus on predictive maintenance data.",
"cite_spans": [],
"ref_spans": [
{
"start": 212,
"end": 219,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. The development of MaintNet, a user-friendly web-based platform that serves as a repository hosting a variety of resources and tools developed to process predictive maintenance and technical logbook data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. The creation of several important language resources for technical language and predictive maintenance such as abbreviation lists, morphosyntactic information lists, and termbanks for the aviation, automotive, and facility maintenance domains. All these resources as well as raw data from these domains are made freely available to the research community via MaintNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. The development of several novel Python packages for (pre-)processing technical language which we make available to the research community. This includes stop word removal, stemmers, lemmatizers, POS tagging, document clustering, and more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. A collaborative environment in which the community can contribute with data and resources and interact with developers and other members of the community via forums.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 MaintNet Features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, there are no freely available tools and libraries developed to process such data, which makes MaintNet a unique resource. MaintNet currently features datasets from the aviation, automotive, and facilities domains (see Table 2 ), and it will be expanded with the collaboration of the interested members of the NLP community working on similar topics. Predictive maintenance datasets are hard to obtain due to the sensitive information they contain. Therefore, we work closely with the data providers to ensure that any confidential and sensitive information in the dataset remains anonymous. In addition to the datasets, MaintNet further provides the user with domain specific abbreviation dictionaries, morphosyntactic annotation, and term banks. The abbreviation dictionaries contains abbreviated validated by domain experts. The morphosyntactic annotation contains the part of speech (POS) tag, compound, lemma, and word stems. Finally, the domain term banks contain the collected list of terms that are used in each domain along with a sample of usage in the corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 255,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Language Resources",
"sec_num": "2.1"
},
{
"text": "Grouping maintenance issues by time is an important step in the analysis of logbook data. Most of the predictive maintenance datasets available, however, do not feature the reason for maintenance or the category of the issues making it impossible to train classification systems on such systems. To address this problem, we implemented several (pre-)processing steps to clean and extract information from logbooks aiming at document clustering and classification. The complete processing pipeline is shown in Figure 1 . The pre-processing steps start with text normalization, lowercasing, stop word and punctuation removal. Then we treat special characters with NLTK's (Bird et al., 2009 ) regular expression library, followed by stemming (Snowball Stemmer), lemmatization (WordNet (Miller, 1992) ), and tokenization (NLTK tokenizer). POS annotation is carried out using the NLTK POS tagger. Finally, Term frequencyinverse document frequency (TF-IDF) is obtained using the gensim tfidf model (Rehurek and Sojka, 2010) . To address misspellings and abbreviations which are abundant in predictive maintenance datasets, we explored various state-of-the-art spellcheckers including Enchant 2 , Pyspellchecker 3 , Symspellpy 4 , and Autocorrect 5 . We also developed our own spell checker using Levenshtein distance (Aggarwal and Zhai, 2012) where a dictionary of domain specific words is used to map the misspelling candidates to words in the dictionary. The Levenshtein algorithm was chosen over other distance metrics (e.g., Euclidian, Cosine) as it allows us to control the minimum number of string edits. The performance of our method compared to other spellcheckers in a sub set of the aviation dataset is presented in Table 3 . In MaintNet we also developed document clustering systems customized to logbook data and we make the scripts available to the community. 
As previously stated, logbook datasets are often not annotated with issue categories requiring a domain expert to group instances into categories. Here we use clustering methods to help grouping documents together. We first convert tokens into a numerical representation using tfidfvectorizer (ElSahar et al., 2017) and we obtain a large matrix of document terms (DT). We used truncated singular value decomposition (SVD) (ElSahar et al., 2017) known as latent semantic analysis (LSA), to perform dimensionality reduction. We then experimented with four clustering techniques: k-means (Jain, 2010), Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester et al., 1996) , Latent Dirichlet Analysis (LDA) (Vorontsov et al., 2015) , and hierarchical clustering (Aggarwal and Zhai, 2012) . DBSCAN and hierarchical clustering do not require a predetermined number of clusters. For k-means, silhouette and inertia (Fraley and Raftery, 1998) were used to determine the number of clusters while perplexity (Fraley and Raftery, 1998) and coherence (Vorontsov et al., 2015) scores were used for LDA.",
"cite_spans": [
{
"start": 669,
"end": 687,
"text": "(Bird et al., 2009",
"ref_id": "BIBREF2"
},
{
"start": 992,
"end": 1017,
"text": "(Rehurek and Sojka, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 2535,
"end": 2555,
"text": "(Ester et al., 1996)",
"ref_id": "BIBREF4"
},
{
"start": 2590,
"end": 2614,
"text": "(Vorontsov et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 2645,
"end": 2670,
"text": "(Aggarwal and Zhai, 2012)",
"ref_id": "BIBREF0"
},
{
"start": 2795,
"end": 2821,
"text": "(Fraley and Raftery, 1998)",
"ref_id": "BIBREF5"
},
{
"start": 2885,
"end": 2911,
"text": "(Fraley and Raftery, 1998)",
"ref_id": "BIBREF5"
},
{
"start": 2926,
"end": 2950,
"text": "(Vorontsov et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 509,
"end": 517,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 782,
"end": 796,
"text": "(Miller, 1992)",
"ref_id": "FIGREF0"
},
{
"start": 1720,
"end": 1727,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Pre-processing and Tools",
"sec_num": "2.2"
},
{
"text": "Finally, we use three different similarity algorithms: Levenshtein, Jaro, and cosine (Fraley and Raftery, 1998) to calculate intra-and inter-cluster similarity. Cosine similarity is commonly used and is independent of the length of document, while Jaro is more flexible by providing a rating of matching strings. We collected human annotated instances by a domain expert to serve as our gold standard, and these are provided on MaintNet to encourage research into improving unsupervised clustering of maintenance logbooks.",
"cite_spans": [
{
"start": 85,
"end": 111,
"text": "(Fraley and Raftery, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing and Tools",
"sec_num": "2.2"
},
{
"text": "MaintNet provides various webpages for users to communicate with each other and the project developers; as well as upload data to share with the community (see Figure 2 ). We hope this will help further facilitate discussion and research in this important and under explored area. ",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Community Participation",
"sec_num": "2.3"
},
{
"text": "In this paper we presented MaintNet, a collaborative open-source library for predictive maintenance language resources. MaintNet provides raw technical logbook data as well as several language resources such as abbreviation lists, morphosyntactic information lists, and termbanks from the aviation, automotive and facilities domains. Tools developed in Python are also made available for pre-processing, such as spell checking, POS tagging, and document clustering. In addition to these tools, the collaborative aspects of MaintNet should be emphasized. We welcome the community to contribute with new datasets that can be processed using the tools available at MaintNet, or share new and improved tools developed with MaintNet's open source data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "3"
},
{
"text": "MaintNet is also expanding as current work involves processing data from additional domains such as healthcare and power systems (e.g., wind turbines). These datasets will be made available on MaintNet in upcoming months. We also aim to collect and release datasets and tools for languages other than English in the near future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "3"
},
{
"text": "Available at: https://people.rit.edu/fa3019/MaintNet/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.abisource.com/projects/enchant/ 3 https://github.com/barrust/pyspellchecker 4 https://github.com/wolfgarbe/SymSpell 5 https://github.com/fsondej/autocorrect",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Rachael Thormann for the voiceover video. We further thank the University of North Dakota aviation program for the aviation maintenance records dataset and Zechariah Morgain for evaluating the results of the pre-processing and clustering algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A survey of text clustering algorithms",
"authors": [
{
"first": "C",
"middle": [],
"last": "Charu",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2012,
"venue": "Mining Text Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charu C. Aggarwal and ChengXiang Zhai. 2012. A survey of text clustering algorithms. In Mining Text Data.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "From text to topics in healthcare records: An unsupervised graph partitioning methodology",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tarik Altuncu",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Mayer",
"suffix": ""
},
{
"first": "Sophia",
"middle": [
"N"
],
"last": "Yaliraki",
"suffix": ""
},
{
"first": "Mauricio",
"middle": [],
"last": "Barahona",
"suffix": ""
}
],
"year": 2018,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Tarik Altuncu, Erik Mayer, Sophia N. Yaliraki, and Mauricio Barahona. 2018. From text to topics in healthcare records: An unsupervised graph partitioning methodology. ArXiv, abs/1807.02599.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised open relation extraction",
"authors": [
{
"first": "Hady",
"middle": [],
"last": "Elsahar",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Demidova",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Gottschalk",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Gravier",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9rique",
"middle": [],
"last": "Laforest",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hady ElSahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Fr\u00e9d\u00e9rique Laforest. 2017. Unsuper- vised open relation extraction. ArXiv, abs/1801.07174.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A density-based algorithm for discovering clusters in large spatial databases with noise",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Ester",
"suffix": ""
},
{
"first": "Hans-Peter",
"middle": [],
"last": "Kriegel",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Sander",
"suffix": ""
},
{
"first": "Xiaowei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 1996,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Ester, Hans-Peter Kriegel, J\u00f6rg Sander, and Xiaowei Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "How many clusters? which clustering method? answers via modelbased cluster analysis",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Fraley",
"suffix": ""
},
{
"first": "Adrian",
"middle": [
"E"
],
"last": "Raftery",
"suffix": ""
}
],
"year": 1998,
"venue": "Comput. J",
"volume": "41",
"issue": "",
"pages": "578--588",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Fraley and Adrian E. Raftery. 1998. How many clusters? which clustering method? answers via model- based cluster analysis. Comput. J., 41:578-588.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Data clustering: 50 years beyond k-means",
"authors": [
{
"first": "Anil",
"middle": [],
"last": "Kumar Jain",
"suffix": ""
}
],
"year": 2010,
"venue": "Pattern Recognition Letters",
"volume": "31",
"issue": "",
"pages": "651--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anil Kumar Jain. 2010. Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31:651-666.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Aircraft atypical approach detection using functional principal component analysis",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Jarry",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Delahaye",
"suffix": ""
},
{
"first": "Florence",
"middle": [],
"last": "Nicol",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Feron",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Jarry, Daniel Delahaye, Florence Nicol, and Eric Feron. 2018. Aircraft atypical approach detection using functional principal component analysis. In SID.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Wordnet: A lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1992,
"venue": "Commun. ACM",
"volume": "38",
"issue": "",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1992. Wordnet: A lexical database for english. Commun. ACM, 38:39-41.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Software framework for topic modelling with large corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "Rehurek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim Rehurek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In LREC.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural language processing for aviation safety reports: From classification to interactive analysis",
"authors": [
{
"first": "Ludovic",
"middle": [],
"last": "Tanguy",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Tulechki",
"suffix": ""
},
{
"first": "Assaf",
"middle": [],
"last": "Urieli",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Hermann",
"suffix": ""
},
{
"first": "C\u00e9line",
"middle": [],
"last": "Raynal",
"suffix": ""
}
],
"year": 2016,
"venue": "Computers in Industry",
"volume": "78",
"issue": "",
"pages": "80--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ludovic Tanguy, Nikola Tulechki, Assaf Urieli, Eric Hermann, and C\u00e9line Raynal. 2016. Natural language processing for aviation safety reports: From classification to interactive analysis. Computers in Industry, 78:80- 95.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bigartm: Open source library for regularized multimodal topic modeling of large collections",
"authors": [
{
"first": "Konstantin",
"middle": [],
"last": "Vorontsov",
"suffix": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Frei",
"suffix": ""
},
{
"first": "Murat",
"middle": [],
"last": "Apishev",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Romov",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Dudarenko",
"suffix": ""
}
],
"year": 2015,
"venue": "AIST",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Konstantin Vorontsov, Oleksandr Frei, Murat Apishev, Peter Romov, and Marina Dudarenko. 2015. Bigartm: Open source library for regularized multimodal topic modeling of large collections. In AIST.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "A pipeline of pre-processing and information extraction of maintenance dataset in MaintNet.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "A screenshot of MaintNet's discussion webpages.",
"num": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": ".",
"html": null,
"content": "<table><tr><td>ID</td><td colspan=\"3\">Job Code Report Date Problem</td></tr><tr><td>211052</td><td>7130</td><td>8/28/2012</td><td>DOING MX PERFORM T/O @ 3200, MP WON'T GO ANY HIGHER</td></tr><tr><td/><td/><td/><td>THAN 24</td></tr><tr><td>221313</td><td>7200</td><td>4/7/2015</td><td>FRONT R/H BAFFLE WORN THROUGH FROM MUFFLER</td></tr><tr><td/><td/><td/><td>SHROUD NEED NEW BFFLE.</td></tr><tr><td>211585</td><td>550</td><td>4/10/2015</td><td>LACING CORD LOOSE ON SCAT TUBING + IGN LEAD TO</td></tr><tr><td/><td/><td/><td>FRAME, R/H SI, NEED @ ENG #2.</td></tr><tr><td>221958</td><td>7250</td><td>4/11/2016</td><td>ROUGH RUNNING ENG ON START. ENGINE RAN SMOOTHER</td></tr><tr><td/><td/><td/><td>AS IT WAR</td></tr><tr><td>221646</td><td>7230</td><td>4/20/2016</td><td>DURING IDLE CHECK ON RUN UP, ENGINE QUIT. RESTART EN-</td></tr><tr><td/><td/><td/><td>GINE &amp; Q</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Five sample of Maintnet's aviation dataset.",
"html": null,
"content": "<table><tr><td>This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:</td></tr><tr><td>//creativecommons.org/licenses/by/4.0/.</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "The number of instances and tokens in each dataset/domain.",
"html": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "Results of the spelling correction and abbreviation expansion methods in terms of success rate.",
"html": null,
"content": "<table/>"
}
}
}
}