|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:23:28.091755Z" |
|
}, |
|
"title": "Not All Titles are Created Equal: Financial Document Structure Extraction Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "Anubhav", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Abi", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hugues", |
|
"middle": [], |
|
"last": "De Mazancourt", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents a multi-modal approach to FinTOC-2021 Shared Task. With help of a finetuned Faster-RCNN our solution achieved a Precision score comparatively better than other participants.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents a multi-modal approach to FinTOC-2021 Shared Task. With help of a finetuned Faster-RCNN our solution achieved a Precision score comparatively better than other participants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Heading or title is a phrase that either represents an oeuvre or demarcates a text into chapters, sections etc. It serves as a milestone that helps readers find their way through a long text. Its appearance is governed by style guides. For example, AER 1 mandates that the title of a section begins with roman numerals and that of a subsection with capital letters. APA 2 and MLA limit the number of levels of titles to 5. Their guide ensures that each level is visually distinct from another. Titles at level 4 or below are \"run-on heads 3 \" / \"run-in 4 \" / in-line headings i.e. they appear along with the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Unfortunately no such guide is available for the prospectuses provided as part of FinTOC 2021 Financial Document Structure Extraction Shared Task (El Maarouf et al., 2021) . In other words the documents do not follow the same style guide, assuming they are respecting one. As a result, the task of identifying a heading and its correct level is daunting. This is proved by the fact that the best score for the previous year's shared task (Bentabet et al., 2020) was 0.37.", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 171, |
|
"text": "(El Maarouf et al., 2021)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1 https://www.aeaweb.org/journals/ aer/submissions/accepted-articles/ styleguide#IVA 2 https://apastyle.apa.org/ style-grammar-guidelines/paper-format/ headings 3 https://projects.iq.harvard.edu/ crea-lit/headings-and-subheadings 4 http://www.creativeglossary.com/ graphic-design/run-in-heading.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The training set consisted of 47 prospectuses in PDF format along with the annotations in json format. The annotations had depth (level), page number, file name and raw text of each title. The test set had 10 prospectuses for each language and the task is to generate a json file as described before for each of the files.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The first challenge was to find these texts in the PDFs in order to extract more metadata viz. position, font, size etc. This process is detailed in Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The second challenge was that the number of levels that the titles can have was not defined. Also, there were quite a number of documents having multiple depth one title i.e. main heading! A cursory glance reveals that the title level 1 is always present in the first page and is either the name of the fund or the key phrase Prospectus for English and Informations cl\u00e9s pour l'investisseur for French. If both the fund's name and the key phrase exist it is generally the one that appears first irrespective of the style of the sentence. There were certain documents where this wasn't the case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We argue that Prospectus or Informations cl\u00e9s pour l'investisseur should always be the first level title if present in the first page since it describes the document and is consistent with other main titles of documents such as Status, Reglement, Key Information Document, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "English 0.858 0.670 0.728 French 0.911 0.510 0.639 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language P R F1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used pdfminer.six 5 to parse the files. We extracted LTTextLine and matched it against the annotations. If", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 it is an exact match, we extract features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 LTTextLine is a subtext of an annotation, we find all such subtexts, then merge them and finally, extract features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 a annotation is a subtext of LTTextLine, it is ignored.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In short we ignored quite a few inline titles. This might have led the models to treat them as normal text and may have been the cause of low Recall score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "From each LTTextLine we collected:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 coordinates (normalized: divided by page dimensions)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 percentage of characters in bold", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 percentage of characters in italics", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 mode of character sizes (min-max normalized)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 height (min-max normalized)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 page number (min-max normalized)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 inverse length \u2022 normalized text (only alphabets without accents) in lowercase to compute tf-idf", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
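The text normalization for the last feature can be sketched as below; the exact cleaning steps are an assumption, since the paper only states that accents are removed and only letters are kept, in lowercase:

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Lowercase, strip accents, and keep letters and spaces only."""
    # Decompose accented characters, then drop the combining marks.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    # Keep only ASCII letters; collapse everything else to single spaces.
    text = re.sub(r"[^a-zA-Z]+", " ", text).strip()
    return text.lower()

print(normalize("Informations clés pour l'investisseur"))
# → "informations cles pour l investisseur"
```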
|
{ |
|
"text": "The scikit-learn (Pedregosa et al., 2011 ) was used to get TFIDF with the following arguments: a n a l y z e r = ' c h a r ' n g r a m _ r a n g e = ( 3 , 3 ) max_df = 0 . 9 3 m a x _ f e a t u r e s = 3000 s u b l i n e a r _ t f = T r u e As mentioned above, some LTTextLines were needed to be combined to match a title in the annotations. This was done as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 40, |
|
"text": "(Pedregosa et al., 2011", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
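In scikit-learn, these settings correspond to a character-trigram TfidfVectorizer; a minimal sketch (the sample lines are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Character-trigram TF-IDF with the arguments reported in the paper.
vectorizer = TfidfVectorizer(
    analyzer="char",      # character n-grams instead of words
    ngram_range=(3, 3),   # trigrams only
    max_df=0.93,          # drop n-grams appearing in >93% of lines
    max_features=3000,    # keep at most the 3000 most frequent n-grams
    sublinear_tf=True,    # use 1 + log(tf) instead of raw counts
)

lines = ["prospectus", "informations cles pour l investisseur"]
features = vectorizer.fit_transform(lines)  # sparse matrix, one row per line
```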
|
{ |
|
"text": "\u2022 find LTTextLine that matches the beginning of the annotation", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 if this subtext along with previously matched LTTextLines has the least area then keep it", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 update the annotation by removing the prefix that matched LTTextLine", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 if annotation is an empty string then stop else repeat", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Once we identified the titles along with features in the PDFs we converted the documents into images with the help of pdf2image 7 . The coordinates of the titles were multiplied by 4 to get the bounding boxes and then saved in COCO format 8 . This was used to fine-tune the PubLayNet (Zhong et al., 2019 ) Faster-RCNN model as explained on their github repository 9 with the hyperparameters of Table 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 303, |
|
"text": "(Zhong et al., 2019", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 394, |
|
"end": 401, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
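A sketch of the coordinate conversion, assuming pdfminer's bottom-left origin and that the pages were rendered at four times the PDF's native resolution (the exact DPI is not stated in the paper):

```python
def to_coco_bbox(x0, y0, x1, y1, page_height, scale=4.0):
    """Convert a pdfminer bounding box (origin at bottom-left, in points)
    to a COCO bbox [x, y, width, height] in image pixels (origin top-left)."""
    # Flip the y-axis: PDF y grows upwards, image y grows downwards.
    top = page_height - y1
    return [x0 * scale, top * scale, (x1 - x0) * scale, (y1 - y0) * scale]

# A title spanning (72, 700)-(300, 720) on an 842 pt tall A4 page.
print(to_coco_bbox(72, 700, 300, 720, page_height=842))
# → [288.0, 488.0, 912.0, 80.0]
```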
|
{ |
|
"text": "The fine-tuned model was used to obtain IoU and probability of being title for each LTTextLine.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "French BASE_LR 0.005 0.001 MAX_ITER 36000 50000 STEPS [0, 24000 [0, 30000 , 32000] , 40000] These two values along with the features listed above was fed to a Gradient Boosting Classifier with parameters: We trained one model for each language. At test time, the fine-tuned PubLayNet was used to merge LTTextLines and then extract features for classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 82, |
|
"text": "[0, 24000 [0, 30000 , 32000]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperparameters English", |
|
"sec_num": null |
|
}, |
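A minimal sketch of the classifier setup in scikit-learn, with synthetic stand-in features (the real inputs are the layout, TF-IDF and detector features described above):

```python
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

# Gradient Boosting Classifier with the hyperparameters reported in the paper.
clf = GradientBoostingClassifier(
    n_estimators=200,
    learning_rate=0.2,
    max_leaf_nodes=10,
    min_samples_leaf=15,
    max_depth=20,
    random_state=10,
)

# Toy data standing in for the layout + TF-IDF + detector features.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = (X[:, 0] > 0.5).astype(int)  # synthetic "is title" labels
clf.fit(X, y)
print(clf.predict(X[:2]).shape)
```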
|
|
{ |
|
"text": "After classification the titles were sorted by their size. The largest titles were attributed a depth of 1. The next in order were given 2 as depth and so forth.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperparameters English", |
|
"sec_num": null |
|
}, |
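The size-based depth assignment can be sketched as:

```python
def assign_depths(title_sizes: list[float]) -> list[int]:
    """Map each title's size to a depth: the largest size gets depth 1,
    the next distinct size depth 2, and so on."""
    distinct = sorted(set(title_sizes), reverse=True)
    rank = {size: depth for depth, size in enumerate(distinct, start=1)}
    return [rank[s] for s in title_sizes]

print(assign_depths([18.0, 14.0, 18.0, 11.0]))  # → [1, 2, 1, 3]
```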
|
{ |
|
"text": "Our model, despite having the highest precision for French and second-highest precision for English, came second in the title detection task (see Table 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 153, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In case of TOC extraction, the English model had the highest Inex08 Precision and Inex08 Title Accuracy among the competing methods. We could not achieve the same performance for French due to less time allotted for fine-tuning the PubLayNet model. Since the models scored low on Inex08 Level Accuracy, we were nowhere near the top performing team that achieved the Harmonic Mean greater than 0.5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We can further improve the Inex08 scores related to title detection for French by better fine-tuning the PubLayNet model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We would also like to compare this model with LayoutLM 10,11 (Xu et al., 2020) , also based on Faster-RCNN. However, a model that can correctly identify the title levels and be ported to other domains remains elusive.", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 78, |
|
"text": "(Xu et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We feel that lack of an annotation guide makes it difficult to analyse the errors related to title levels and as a result improve the results. The use of a vision-based model improves the title detection and can be generalized to other domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://dirty-cat.github.io/stable/ generated/dirty_cat.MinHashEncoder.html# dirty_cat.MinHashEncoder 7 https://github.com/Belval/pdf2image 8 https://cocodataset.org/#format-data 9 https://github.com/ibm-aur-nlp/ PubLayNet/tree/master/pre-trained-models", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/microsoft/unilm/tree/master/layoutlm 11 https://huggingface.co/transformers/model_doc/layoutlm.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Fortia Financial Solutions for organizing this task and thus contributing to the advancement of document structure analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The Financial Document Structure Extraction Shared Task (FinToc 2020)", |
|
"authors": [ |
|
{

"first": "Najah-Imane",

"middle": [],

"last": "Bentabet",

"suffix": ""

},

{

"first": "Remi",

"middle": [],

"last": "Juge",

"suffix": ""

},

{

"first": "Ismail",

"middle": [

"El"

],

"last": "Maarouf",

"suffix": ""

},

{

"first": "Virginie",

"middle": [],

"last": "Mouilleron",

"suffix": ""

},

{

"first": "Dialekti",

"middle": [],

"last": "Valsamou-Stanislawski",

"suffix": ""

},

{

"first": "Mahmoud",

"middle": [],

"last": "El-Haj",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Najah-Imane Bentabet, Remi Juge, Ismail El Maarouf, Virginie Mouilleron, Dialekti Valsamou- Stanislawski, and Mahmoud El-Haj. 2020. The Financial Document Structure Extraction Shared Task (FinToc 2020). In The 1st Joint Workshop on Financial Narrative Processing and MultiL- ing Financial Summarisation (FNP-FNS 2020), Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The Financial Document Structure Extraction Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "Ismail", |
|
"middle": [ |
|
"El" |
|
], |
|
"last": "Maarouf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juyeon", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abderrahim", |
|
"middle": [], |
|
"last": "Aitazzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Bellato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mei", |
|
"middle": [], |
|
"last": "Gan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mahmoud", |
|
"middle": [], |
|
"last": "El-Haj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "The Third Financial Narrative Processing Workshop (FNP 2021)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ismail El Maarouf, Juyeon Kang, Abderrahim Aitazzi, Sandra Bellato, Mei Gan, and Mahmoud El-Haj. 2021. The Financial Document Structure Extraction Shared Task (FinToc 2021). In The Third Financial Narrative Processing Workshop (FNP 2021), Lan- caster, UK.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Scikit-learn: Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vanderplas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cournapeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brucher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Perrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "LayoutLM: Pretraining of Text and Layout for Document Image Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Yiheng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minghao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Cui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaohan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. LayoutLM: Pre- training of Text and Layout for Document Image Understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Publaynet: largest dataset ever for document layout analysis", |
|
"authors": [ |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianbin", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio Jimeno", |
|
"middle": [], |
|
"last": "Yepes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "2019 International Conference on Document Analysis and Recognition (ICDAR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1015--1022", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICDAR.2019.00166" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. 2019. Publaynet: largest dataset ever for document layout analysis. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 1015-1022. IEEE.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "_ e s t i m a t o r s = 200 l e a r n i n g _ r a t e = 0 . 2 m a x _ l e a f _ n o d e s = 10 m i n _ s a m p l e s _ l e a f = 15 max_depth = 20 r a n d o m _ s t a t e = 10", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>Language</td><td/><td/><td>Inex08</td><td/><td>Harmonic</td></tr><tr><td/><td>P</td><td>R</td><td colspan=\"3\">F1 Title Acc Level Acc</td><td>Mean</td></tr><tr><td>French</td><td colspan=\"3\">46.8 28.1 34.4</td><td>47.3</td><td>16.6</td><td>22.4</td></tr><tr><td>English</td><td colspan=\"3\">61.1 50.3 53.4</td><td>68.2</td><td>12.4</td><td>20.1</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Title detection results." |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "TOC extraction results." |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Hyperparameters to finetune PubLayNet." |
|
} |
|
} |
|
} |
|
} |