|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:22:41.053706Z" |
|
}, |
|
"title": "LaTeX-Numeric: Language-agnostic Text attribute eXtraction for E-commerce Numeric Attributes", |
|
"authors": [ |
|
{ |
|
"first": "Kartik", |
|
"middle": [], |
|
"last": "Mehta", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ioana", |
|
"middle": [], |
|
"last": "Oprea", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Rasiwasia", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we present LaTeX-Numeric, a high-precision, fully-automated, scalable framework for extracting E-commerce numeric attributes from product text such as product descriptions. Most of the past work on attribute extraction is not scalable, as it relies on manually curated training data, with or without the use of active learning. We rely on distant supervision for training data generation, removing the dependency on manual labels. One issue with distant supervision is that it leads to incomplete training annotations due to attribute values missing during matching. We propose a multi-task learning architecture to deal with missing labels in the training data, leading to an F1 improvement of 9.2% for numeric attributes over a single-task architecture. While the multi-task architecture benefits both numeric and non-numeric attributes, we also present automated techniques to further improve numeric attribute extraction models. Numeric attributes require a list of units (or aliases) for better matching with distant supervision. We propose an automated algorithm for alias creation using product text and attribute values, leading to a 20.2% F1 improvement. Extensive experiments on a real-world dataset of 20 numeric attributes across 5 product categories and 3 English marketplaces show that LaTeX-Numeric achieves a high F1-score without any manual intervention, making it suitable for practical applications. Finally, we show that the improvements are language-agnostic: LaTeX-Numeric achieves a 13.9% F1 improvement for 3 Romance languages (https://www.britannica.com/topic/Romance-languages).",
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we present LaTeX-Numeric, a high-precision, fully-automated, scalable framework for extracting E-commerce numeric attributes from product text such as product descriptions. Most of the past work on attribute extraction is not scalable, as it relies on manually curated training data, with or without the use of active learning. We rely on distant supervision for training data generation, removing the dependency on manual labels. One issue with distant supervision is that it leads to incomplete training annotations due to attribute values missing during matching. We propose a multi-task learning architecture to deal with missing labels in the training data, leading to an F1 improvement of 9.2% for numeric attributes over a single-task architecture. While the multi-task architecture benefits both numeric and non-numeric attributes, we also present automated techniques to further improve numeric attribute extraction models. Numeric attributes require a list of units (or aliases) for better matching with distant supervision. We propose an automated algorithm for alias creation using product text and attribute values, leading to a 20.2% F1 improvement. Extensive experiments on a real-world dataset of 20 numeric attributes across 5 product categories and 3 English marketplaces show that LaTeX-Numeric achieves a high F1-score without any manual intervention, making it suitable for practical applications. Finally, we show that the improvements are language-agnostic: LaTeX-Numeric achieves a 13.9% F1 improvement for 3 Romance languages (https://www.britannica.com/topic/Romance-languages).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "1 Introduction E-commerce websites often sell billions of products. These websites provide information in the form of product images, product text (such as title and product description) and structured information, henceforth termed product attributes. These attributes often act as a concise summary of product information and are useful in product discovery, comparison and purchase decisions. They are usually provided by selling partners at the time of product listing and can be missing or invalid, even though they might be present in product text sources. Extracting attribute values from these product text sources can populate the missing attribute values and is the focus of this work.",
|
"cite_spans": [],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Attribute extraction from free-form text can be posed as a Named Entity Recognition (NER) problem (Zheng et al., 2018). Recently, deep learning models (Lample et al., 2016; Ma and Hovy, 2016; Huang et al., 2015) have shown remarkable performance on NER tasks, eliminating the need for manually curated features. However, these approaches still require a large amount of labelled data. While active learning can be used to efficiently curate training data (Zheng et al., 2018), gathering data for hundreds of product categories and attributes is a resource-intensive task. One solution is to use distant supervision to create training data. Distant supervision has been extensively used to curate training sets without manual effort for relation extraction (Mintz et al., 2009). In the context of attribute extraction for E-commerce, we can curate training data by matching attribute values with tokens in product text. However, if attribute values are missing, distant supervision leads to missing annotations, a phenomenon not studied in the literature.",
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 118, |
|
"text": "(Zheng et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 173, |
|
"text": "(Lample et al., 2016;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 192, |
|
"text": "Ma and Hovy, 2016;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 212, |
|
"text": "Huang et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 473, |
|
"text": "(Zheng et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 782, |
|
"text": "(Mintz et al., 2009)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this work, we present an automated framework for building high-precision attribute extraction models for numeric attributes using distant supervision. Multiple works in the literature (Madaan et al., 2016; Ibrahim et al., 2016) argue that distant supervision for numeric attributes poses unique challenges and have given separate treatment to numeric attributes. Highlighted below are some interesting challenges that distant supervision poses for numeric attribute extraction models. Partial Annotations: Distant supervision leads to incorrect annotations when the attribute is present in the text field but the structured attribute value is missing. Diverse surface forms: There are multiple ways that attributes are mentioned in product text (e.g. a resolution of '2' can be mentioned as '2 mp', '2 mpix' or '2 megapixels'). We term these different surface forms aliases.",
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 204, |
|
"text": "(Madaan et al., 2016;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 226, |
|
"text": "Ibrahim et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Confusing attributes: Many attributes have common units and may have confusing mentions in the text (e.g. '16 GB memory' refers to RAM while '128 GB memory storage' refers to 'Hard Disk'). Use of different units: Sellers may use diverse units for numeric attributes (e.g. '1.5 kg' as the attribute value and '3.3 pounds' in product text). Addressing these challenges in an automated manner is the primary focus of this work. Our paper makes the following contributions: (1) We propose a multi-task architecture to deal with partial annotations introduced due to missing attributes. This multi-task architecture leads to an F1 improvement of 9.2% for numeric and 7.4% for non-numeric attributes over a single-task architecture. (2) We propose a fully automated algorithm for alias creation using product text and attribute values. These aliases improve the quality of training annotations in distant supervision, leading to models with a 20.2% F1 improvement for numeric attributes. We demonstrate the effectiveness of our proposed approach using a real-world dataset of 20 numeric attributes across 5 categories and 3 English marketplaces. Models trained using our proposed framework achieve a high F1-score without any manual intervention, making them suitable for practical applications. We show that our proposed approach is language-agnostic: experiments applying our framework to 3 Romance languages show a 13.9% F1 improvement. To the best of our knowledge, this is the first successful attempt at building automated attribute extraction for numeric attributes at E-commerce scale. The rest of the paper is organized as follows. We review related work in Section 2 and describe our proposed framework and its components in Section 3, covering the 'Multi Task' architecture in Section 3.1 and the 'automated alias creation' component in Section 3.2. We describe datasets, experimental setup and results in Section 4. Lastly, we summarize our work in Section 5.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Early works on information extraction focused on extracting facts from generic web pages (Oren et al., 2005; Yates et al., 2007; Etzioni et al., 2008). With the rise of E-commerce, multiple works focused on extracting attributes from product pages. Ghani et al. (2006) proposed the use of supervised learning to extract attributes from E-commerce product descriptions. Putthividhya and Hu (2011) formulated attribute extraction from short titles as an NER problem, using multiple base classifiers and a CRF layer. The training data was created by matching entries from a seed dictionary. More (2016) proposed the use of distant supervision for attribute extraction. They used token-wise string matching (henceforth referred to as exact match) based on attribute values to annotate title tokens and train an NER model with manually defined features. They used manual intervention to improve the training annotations, e.g. dealing with spelling mistakes and different surface forms of brands. Majumder et al. (2018) extended this work with the use of recurrent neural networks, excluding the use of manually defined features. Zheng et al. (2018) proposed OpenTag, using a bidirectional LSTM, Conditional Random Fields (CRF) and an attention mechanism. However, the training data for OpenTag is manually created with the use of active learning, making it challenging to use at E-commerce scale.",
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 108, |
|
"text": "(Oren et al., 2005;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 128, |
|
"text": "Yates et al., 2007;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 129, |
|
"end": 150, |
|
"text": "Etzioni et al., 2008)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 265, |
|
"text": "Ghani et al. (2006)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1094, |
|
"end": 1113, |
|
"text": "Zheng et al. (2018)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attribute Extraction for E-commerce", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Getting manual training data has always been a resource-intensive and expensive task, and distant supervision has been explored as an alternative. Distant supervision for numeric attributes has been used for relation extraction (Hoffmann et al., 2010; Madaan et al., 2016), question answering (Davidov and Rappoport, 2010) and entity linking (Ibrahim et al., 2016). Madaan et al. (2016) argued that distant supervision for numerical attributes presents peculiar challenges not found for non-numeric attributes, such as high noise due to matching out of context, low recall due to different rounding levels, and the importance of units. Ibrahim et al. (2016) constructed a KB from freebase.com, keeping a list of units and conversion rules for numeric quantities. While these works have established the importance of units for distant supervision of numeric attributes, the list of units is manually curated.",
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 250, |
|
"text": "(Hoffmann et al., 2010;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 271, |
|
"text": "Madaan et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 361, |
|
"text": "(Ibrahim et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 364, |
|
"end": 384, |
|
"text": "Madaan et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 629, |
|
"end": 650, |
|
"text": "Ibrahim et al. (2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distant supervision of numeric attributes", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Distant supervision may lead to noisy training data due to partial annotations. Tsuboi et al. (2008) argued that partial annotations may happen due to ambiguous annotation and proposed CRF-PA to alleviate the issue. Yang et al. (2018) studied partial annotations introduced due to an incomplete dictionary and extended CRF-PA to NER models. Jie et al. (2019) proposed learning the probability distribution of all possible label sequences compatible with a given incomplete annotation, and using this probability to clean the training annotations. For E-commerce attribute extraction, partial annotations may happen due to missing attribute values. Our paper is the first work to establish this phenomenon for attribute extraction and provide a systematic way to alleviate the problem. We compare our proposed approach with Jie et al. (2019) in Section 4.2.",
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 100, |
|
"text": "Tsuboi et al. (2008)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 378, |
|
"text": "Jie et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NER with partial annotation", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We pose the attribute extraction problem (refer Figure 1) as an NER problem, where product attributes are treated as named entities. Formally, we are given a text X with a particular tokenization (x_1, x_2, ..., x_m) and a set of attributes A:",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 58, |
|
"text": "Figure 1 )", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "LaTeX-Numeric Framework", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(\u03b1_1, \u03b1_2, ..., \u03b1_n).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LaTeX-Numeric Framework", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The task is to extract v_i = \u03b1_k for i \u2208 [1, m], where k \u2208 [0, n] and \u03b1_0 represents the 'Other' entity. Figure 2 gives an overview of our proposed LaTeX-Numeric framework. We are given a list of pre-defined numeric attributes and a dump of products consisting of product text and existing attribute values. The attribute values are decimals (e.g. 16) and have an underlying unit (e.g. GB). We term this underlying unit the canonical unit. For creating distant-supervision-based training annotations, we use these canonical units and combine them with attribute values for matching with product text. This serves as the 'canonical aliasing' baseline for our comparisons. We use the BIO tagging scheme for our experiments, as it is a popular format. For training, we use the recently proposed BiLSTM-CNN-CRF model (Ma and Hovy, 2016). This model consists of a CNN architecture to encode character information, an LSTM-based encoder to model contextual information of each token, and a CRF-based tag decoder, which exploits the labels of neighboring tokens for improved classification. Unlike OpenTag (Zheng et al., 2018), we do not use attention, as we did not observe any improvements with attention in our initial experiments.",
|
"cite_spans": [ |
|
{ |
|
"start": 802, |
|
"end": 821, |
|
"text": "(Ma and Hovy, 2016)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1084, |
|
"end": 1104, |
|
"text": "(Zheng et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 108, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "LaTeX-Numeric Framework", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Creating training annotations using distant supervision may lead to partial annotations due to missing attribute values. In Section 3.1, we describe our proposed multi-task learning architecture to deal with such partial annotations. Additionally, we have observed that sellers use multiple surface forms to mention attributes in product text (e.g. '3mp', '3mpix', '3 megapixels' for resolution) and hence, distant supervision with just canonical units (e.g. 'mp') may lead to suboptimal training annotations. Curating a list of these diverse surface forms will help improve the quality of training annotations. We describe an automated approach for curating such diverse units and improving training annotations of numeric attributes in Section 3.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LaTeX-Numeric Framework", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To jointly extract multiple attributes, the tagging strategy can be modified to use an output label with tags for all attributes. With 'BIO' tagging, each attribute has its own ('B' and 'I') tags with 'O' common to all attributes, leading to a total of 2K + 1 tags for K attributes. Based on this modified tagging, a single NER model can be trained for multi-attribute extraction. We term this setting of training a 'Multi Attribute Single Task' model MAST-NER (refer Figure 3). MAST-NER is the commonly used strategy for attribute extraction (Zheng et al., 2018; Sawant et al., 2017; Shen et al., 2017; Joshi et al., 2015).",
|
"cite_spans": [ |
|
{ |
|
"start": 548, |
|
"end": 568, |
|
"text": "(Zheng et al., 2018;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 589, |
|
"text": "Sawant et al., 2017;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 590, |
|
"end": 608, |
|
"text": "Shen et al., 2017;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 609, |
|
"end": 628, |
|
"text": "Joshi et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 471, |
|
"end": 480, |
|
"text": "Figure 3)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi Attribute Joint Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Under distant supervision, the attribute value is used to find matches among product text tokens. However, if the attribute value is missing, no match will be found even when the attribute value is mentioned in the text and hence, the corresponding tokens are incorrectly tagged as 'O'. We term this partial annotation due to missing attributes the Missing-PA problem. Table 1 shows an illustration of this problem. Missing-PA is generic to distant supervision for multiple attributes and exists for non-numeric attributes as well. To the best of our knowledge, this problem has not received attention in the literature. To alleviate the Missing-PA problem, one can train separate models for each attribute, excluding samples where the corresponding attribute value is missing. However, such an approach requires training and managing a large number of models and separate computation for each attribute at evaluation time. Due to these practical challenges, this strategy is not suitable for practical applications. Another way to alleviate Missing-PA is to use the MAST-NER setting and exclude all samples where at least one attribute has a missing value. However, this approach may significantly reduce the size of the training data, as some attributes may have a high missing rate, leading to a suboptimal model. To alleviate this problem, we propose a multi-task learning architecture with a separate output layer for each attribute as a separate task. We term this architecture of training a 'Multi Attribute Multi Task' model MAMT-NER (refer Figure 3). MAMT-NER consists of shared character encoder, word encoder and BiLSTM layers. For each training sample, the loss is deactivated (using masking) for tasks where the corresponding attribute value is missing and activated only for the remaining tasks where the corresponding attribute values are non-missing. Losses for all activated tasks are weighted uniformly and the weights of those tasks (including shared weights) are updated for the given sample. Note that the proposed MAMT-NER architecture is generic and can be used for non-numeric attributes as well as with any underlying NER architecture, including the recently proposed BERT (Devlin et al., 2019).",
|
"cite_spans": [ |
|
{ |
|
"start": 2132, |
|
"end": 2153, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 359, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1513, |
|
"end": 1522, |
|
"text": "Figure 3)", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi Attribute Joint Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As argued earlier, the canonical unit is often not sufficient to capture the diverse surface forms that sellers use to mention attributes in product text. E.g. '13 inch', '13 inches' and '13 in' are multiple ways to mention display_size. One can analyze the mention of attribute values in product text and leverage that to create a list of commonly used surface forms. While such an algorithm will detect common surface forms, it will miss out on units which require a multiplicative factor (e.g. 'pounds', 'lbs' and 'ounces' for weight where attribute values are in 'kg'). To detect such units, we can analyze all numeric mentions in product texts (in isolation from attribute values) and filter out noisy candidates using similarity with canonical units in embedding space. Additionally, we have observed that some numeric attributes have units which are specific to those attributes (e.g. 'mah' for battery_power and 'hertz' for refresh_rate). One can detect such attributes and use this information while creating training annotations using distant supervision. Based on these learnings, we propose an approach for generating a more exhaustive list of aliases in an automated fashion (Figure 4).",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1186, |
|
"end": 1195, |
|
"text": "(Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Automated Alias Creation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We create attribute-specific alias_dw using product text and attribute values. We use a regex matching function, M, to find candidate aliases which are preceded by the numeric attribute value in the text. Though this matching function may lead to instances of collision (e.g. 5 for the RAM attribute may match with 5 ghz in the text), we ignore cases where more than one match is found in the text, to prevent the impact of collisions. Alias_dw captures common surface forms of attribute units (e.g. 'in', 'inches', 'inch' for display_size and 'gb', 'gigabyte' for hard_disk).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creation of alias_dw", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "We create a single alias_bp (common across attributes) using product text data. We use a regex function, F, to find candidate aliases, which are tokens preceded by any numeric mention in product text. Alias_bp may contain noisy candidates which are not units for any attribute. We use word embeddings to match alias_bp candidates with attributes and exclude noisy candidates.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creation of alias_bp", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "To remove noisy candidates and match alias_bp candidates to attributes, we leverage canonical units and GloVe embeddings. For each attribute, we calculate the similarity of each alias_bp candidate with its canonical unit in embedding space and keep only those candidates whose similarity is greater than a pre-determined threshold. Thus, we obtain alias_bp_filter, which is attribute-specific. E.g. we filter out 'inches' and select 'pounds' and 'lbs' for the weight attribute, which has 'kg' as its canonical unit.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding based filtering", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "Alias_dw and alias_bp_filter complement each other. Alias_bp_filter misses out on units which have low similarity in the embedding space (e.g. 'in' for display_size, as 'in' has a low similarity with 'inch'). Conversely, alias_dw misses out on cases where the unit mentioned in product text requires a multiplicative factor (e.g. alias_dw for item_weight misses out on 'pounds' and 'lbs'). We concatenate alias_dw and alias_bp_filter to obtain alias_combined for each attribute (shown for four attributes in Table 2).",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 505, |
|
"end": 512, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embedding based filtering", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "With a small manual effort, one can obtain the multiplicative factors for converting values in canonical units to units in alias_combined and vice versa, which could further improve training annotations. As the focus of the current work is to build a fully automated attribute extraction system, we leave this as future work.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Embedding based filtering", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "We use a small manually labelled dev set (created for hyper-parameter tuning) to create a flag indicating which attributes have an exclusive alias. We evaluate the precision of extracting any mention of an alias for a given attribute and, if this precision is above a threshold, we consider that attribute to have an exclusive alias. For attributes having an exclusive alias, we use regex-based matching for training annotations, tagging any numeric value followed by the corresponding unit, irrespective of the attribute value.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exclusive Alias Flag", |
|
"sec_num": "3.2.4" |
|
}, |
|
{ |
|
"text": "We henceforth refer to our proposed approach of using 'alias_combined' and the 'exclusive alias flag' for creating training data for numeric attributes as 'auto-aliasing'. We discuss experiments using 'auto-aliasing' as compared to other distant supervision techniques for numeric attributes in Section 4.1.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exclusive Alias Flag", |
|
"sec_num": "3.2.4" |
|
}, |
|
{ |
|
"text": "We picked five product categories and their 20 numeric attributes for three English marketplaces (IN, US and UK). We extracted product data (product descriptions and attribute values) for these categories and split this data into two parts (80% train and 20% test). The train part is used for automated alias creation and creation of training annotations with distant supervision. From the test part, we randomly picked products for each category and labelled the mentions of category-specific numeric attributes in the text. Out of the total labelled attribute-product pairs, we observed mentions of 6.9K attributes in product text. We term the training data for English 'Train-EN' and the audited test dataset 'Test-EN' (details in Table 3). To evaluate the applicability of our proposed LaTeX-Numeric framework for non-English languages, we did a similar analysis with one product category for three Romance languages: French (FR), Spanish (SP) and Italian (IT). We term this training data 'Train-Romance' and the audited test dataset 'Test-Romance'. Similar to (Zheng et al., 2018), we use F1-score for evaluation. Predictions are given full credit if the correct value is extracted, but extracting more values than actual is considered incorrect (e.g. for a mobile phone with '4 gb' RAM, extracting either '4' or '4 gb' is considered a correct prediction, but extracting two values, '4 gb' and '16 gb', is considered an incorrect prediction).",
|
"cite_spans": [ |
|
{ |
|
"start": 1054, |
|
"end": 1074, |
|
"text": "(Zheng et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 722, |
|
"end": 729, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup and Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this section, we study improvements with our proposed alias creation. For comparison, we use two baselines for creating training annotations: a) lexical matching of numeric attribute values and product text ('exact match'), and b) matching based on canonical units ('canonical aliasing'). For each strategy, we use CNN-BiLSTM-CRF with the MAST-NER architecture. Table 4 shows the F1 score for the different matching techniques. The 'canonical aliasing' approach shows a better F1 score than 'exact match', but it still suffers from low recall due to missing the different surface forms mentioned in product text. With our proposed auto-aliasing, we address this limitation of 'canonical aliasing' and observe an average F1 improvement of 20.2%, establishing 'auto-aliasing' as the best technique for distant supervision of numeric attributes. We use the training data created using 'auto-aliasing' for all subsequent experiments (unless otherwise specified).",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 364, |
|
"end": 371, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Matching Techniques", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In this section, we perform a quantitative evaluation of our proposed MAMT architecture. Table 5 shows the results of using the MAST and MAMT architectures with CNN-BiLSTM-CRF. Compared to the MAST-NER architecture, we observe a 9.2% F1 improvement with our proposed MAMT architecture. Additionally, we observe that Jie et al. (2019) shows a better F1 score than MAST due to higher recall. However, Jie et al. (2019) leads to a drop in precision due to confusion between close attributes (e.g. front-camera and back-camera for mobile). Our proposed MAMT shows an 8.7% better F1 score than Jie et al. (2019) for IN. We use the implementation of https://github.com/allanj/ner_incomplete_annotation and show results only for IN, as we get memory errors when training on the US and UK datasets, which have a larger training size. Further, we compare the MAST and MAMT architectures with BERT as the underlying model and observe a 3.5% F1 improvement with MAMT, demonstrating its applicability to multiple underlying models. To establish the effectiveness of the MAMT architecture for non-numeric attributes, we curated a test dataset of 600 samples per attribute for 8 textual attributes across 4 product categories. As shown in Table 6, we observe a 7.4% F1 improvement on this dataset with our proposed MAMT-NER architecture, showing its effectiveness on textual attributes as well. Table 7: Comparison of auto-aliasing and multi-task architecture on Romance languages (numbers are relative to using canonical aliasing).",
|
"cite_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 324, |
|
"text": "Jie et al. (2019) 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 405, |
|
"text": "Jie et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 476, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 791, |
|
"text": "Jie et al. (2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 97, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 1197, |
|
"end": 1204, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 1352, |
|
"end": 1359, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of MAMT Architecture", |
|
"sec_num": "4.2" |
|
}, |
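The MAMT idea of a shared encoder with one output head per attribute can be sketched in a few lines. This is a minimal illustration with numpy, not the paper's CNN-BiLSTM-CRF: a single linear layer stands in for the shared encoder, and each attribute gets its own scoring head, so a missing label for one attribute need not penalize the representation shared by the others.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB, HID, TAGS = 8, 16, 3   # BIO tagging: 3 tags per attribute (B, I, O)

# shared parameters (stand-in for shared char/word embeddings + BiLSTM)
W_shared = rng.normal(size=(EMB, HID))

# one task-specific head per attribute, as in MAMT
heads = {attr: rng.normal(size=(HID, TAGS)) for attr in ["ram", "battery_life"]}

def predict(token_embs, attribute):
    """Encode tokens with the shared layer, then score with the
    attribute-specific head and return per-token tag ids."""
    h = np.tanh(token_embs @ W_shared)   # shared encoding
    logits = h @ heads[attribute]        # task-specific scoring
    return logits.argmax(axis=-1)

x = rng.normal(size=(5, EMB))            # embeddings for 5 tokens
tags_ram = predict(x, "ram")
```

During training, each mini-batch would update `W_shared` together with only the head of the attribute being trained, which is how the multi-task setup sidesteps the Missing-PA penalty.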
|
{ |
|
"text": "In this section, we study the applicability of LaTeX-Numeric for three Romance languages. We train a separate (Category-A) model for each Romance language replacing English word embeddings with language specific fastText (Lample et al., 2018 ) embeddings. Table 7 shows results on Test-Romance dataset. We observe 6.0% F1 improvement with our proposed auto-aliasing and additional 7.9% improvement with use of MAMT-NER architecture, showing effectiveness of our proposed approaches across languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 241, |
|
"text": "(Lample et al., 2018", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 263, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation on non-English Languages", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this paper, we described 'LaTeX-Numeric', a high-precision fully-automated framework for training attribute extraction models for Ecommerce numeric attributes. We characterized the problem of Missing-PA that arises with distant supervision due to missing attribute values and proposed a multi-task learning architecture to alleviate the Missing-PA problem, leading to 9.2% F1 improvement for numeric attributes. We established the applicability of our proposed multi-task architecture for textual attributes and BERT as underlying model as well. Additionally, we proposed an automated algorithm for alias creation, to deal with variations of numeric attribute mentions, leading to models with 20.2% F1 improvement. Our evaluation on three Romance languages establishes that these improvements are applicable across non-English languages as well. Models trained using our proposed LaTeX-Numeric framework achieve high F1 score, making them suitable for practical applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "E.g. RAM, weight and front_camera are some of the product attributes for mobile phone. We use the terminologies 'product attributes' and 'attributes' interchangeably in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "M = re.f indall(\" \" + <value> + r\"[ \u2212] * [a\u2212zA\u2212 Z] + , <text>), where <value> is attribute value (e.g. 8 for RAM) and <text> is product text. 4 F = re.f indall(\" \" + r\"[\\d\\.] * [\\d][ \u2212] * [a \u2212 zA \u2212 Z] + , <text>), where <text> is product text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
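The footnote's two patterns can be written as runnable Python; the sample `text` and `value` below are illustrative stand-ins for the footnote's <text> and <value> placeholders.

```python
import re

text = "Features 16 GB RAM and a battery that lasts 10 hours on 8gb variants"
value = "8"

# M: occurrences of the known attribute value followed by an alphabetic
# unit (the footnote's first pattern); for value "8" this matches " 8gb"
# but not " 16 GB".
M = re.findall(" " + value + r"[ -]*[a-zA-Z]+", text)

# F: any number followed by an alphabetic unit (the footnote's second
# pattern), used to harvest candidate number-unit mentions from text.
F = re.findall(r" [\d\.]*\d[ -]*[a-zA-Z]+", text)
```

Comparing the unit suffixes in F against those in M is the core signal the auto-aliasing algorithm builds on: frequent suffixes that co-occur with known values become alias candidates.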
|
{ |
|
"text": "We use implementation of https://github.com/namisan/mtdnn, which uses bert-base and softmax as output layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Extraction and approximation of numerical attributes from the web", |
|
"authors": [ |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Davidov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Rappoport", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "48th Annual Meeting of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1308--1317", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitry Davidov and Ari Rappoport. 2010. Extraction and approximation of numerical attributes from the web. In 48th Annual Meeting of ACL, pages 1308- 1317.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT (1).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Open information extraction from the web", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Communications of the ACM", |
|
"volume": "51", |
|
"issue": "12", |
|
"pages": "68--74", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extrac- tion from the web. Communications of the ACM, 51(12):68-74.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Text mining for product attribute extraction", |
|
"authors": [ |
|
{ |
|
"first": "Rayid", |
|
"middle": [], |
|
"last": "Ghani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Probst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marko", |
|
"middle": [], |
|
"last": "Krema", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Fano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ACM SIGKDD Explorations Newsletter", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "41--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rayid Ghani, Katharina Probst, Yan Liu, Marko Krema, and Andrew Fano. 2006. Text mining for product attribute extraction. ACM SIGKDD Explorations Newsletter, 8(1):41-48.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning 5000 relational extractors", |
|
"authors": [ |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Congle", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "48th Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "286--295", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raphael Hoffmann, Congle Zhang, and Daniel S Weld. 2010. Learning 5000 relational extractors. In 48th Annual Meeting of the ACL, pages 286-295.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bidirectional lstm-crf models for sequence tagging", |
|
"authors": [ |
|
{ |
|
"first": "Zhiheng", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Making sense of entities and quantities in web tables", |
|
"authors": [ |
|
{ |
|
"first": "Yusra", |
|
"middle": [], |
|
"last": "Ibrahim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirek", |
|
"middle": [], |
|
"last": "Riedewald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "25th CIKM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1703--1712", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yusra Ibrahim, Mirek Riedewald, and Gerhard Weikum. 2016. Making sense of entities and quanti- ties in web tables. In 25th CIKM, pages 1703-1712. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Better modeling of incomplete annotations for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Zhanming", |
|
"middle": [], |
|
"last": "Jie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengjun", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruixue", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linlin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "729--734", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1079" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhanming Jie, Pengjun Xie, Wei Lu, Ruixue Ding, and Linlin Li. 2019. Better modeling of incomplete an- notations for named entity recognition. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 729-734, Minneapo- lis, Minnesota. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Distributed word representations improve ner for e-commerce", |
|
"authors": [ |
|
{ |
|
"first": "Mahesh", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Hart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirko", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-David", |
|
"middle": [], |
|
"last": "Ruvini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "1st Workshop on Vector Space Modeling for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mahesh Joshi, Ethan Hart, Mirko Vogel, and Jean- David Ruvini. 2015. Distributed word representa- tions improve ner for e-commerce. In 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 160-167.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Neural architectures for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuya", |
|
"middle": [], |
|
"last": "Kawakami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "260--270", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1030" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the NAACL-HLT, pages 260-270, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Word translation without parallel data", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In Interna- tional Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Multi-task deep neural networks for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pengcheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weizhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4487--4496", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", |
|
"authors": [ |
|
{ |
|
"first": "Xuezhe", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1064--1074", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1101" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1064-1074, Berlin, Ger- many. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Numerical relation extraction with minimal supervision", |
|
"authors": [ |
|
{ |
|
"first": "Aman", |
|
"middle": [], |
|
"last": "Madaan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Ramakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunita", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Thirtienth AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aman Madaan, Ashish Mittal, Ganesh Ramakrishnan, Sunita Sarawagi, et al. 2016. Numerical relation extraction with minimal supervision. In Thirtienth AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Deep recurrent neural networks for product attribute extraction in ecommerce", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Bodhisattwa Prasad Majumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhinandan", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shreyansh", |
|
"middle": [], |
|
"last": "Krishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ajinkya", |
|
"middle": [], |
|
"last": "Gandhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "More", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bodhisattwa Prasad Majumder, Aditya Subramanian, Abhinandan Krishnan, Shreyansh Gandhi, and Ajinkya More. 2018. Deep recurrent neural net- works for product attribute extraction in ecommerce. arXiv.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Distant supervision for relation extraction without labeled data", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Mintz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bills", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1003--1011", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1003-1011.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Attribute extraction from product titles in ecommerce", |
|
"authors": [ |
|
{ |
|
"first": "Ajinkya", |
|
"middle": [], |
|
"last": "More", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ajinkya More. 2016. Attribute extraction from product titles in ecommerce. CoRR, abs/1608.04670.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Unsupervised named-entity extraction from web: An experimental study", |
|
"authors": [ |
|
{ |
|
"first": "Etzioni", |
|
"middle": [], |
|
"last": "Oren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cafarella", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Downey", |
|
"middle": [], |
|
"last": "Doug", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Popescu", |
|
"middle": [], |
|
"last": "Ana-Maria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaked", |
|
"middle": [], |
|
"last": "Tal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soderland", |
|
"middle": [], |
|
"last": "Stephen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weld", |
|
"middle": [], |
|
"last": "Daniel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yates", |
|
"middle": [], |
|
"last": "Alexander", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Artificial intelligence", |
|
"volume": "165", |
|
"issue": "1", |
|
"pages": "91--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Etzioni Oren, Cafarella Michael, Downey Doug, Popescu Ana-Maria, Shaked Tal, Soderland Stephen, Weld Daniel S, and Yates Alexander. 2005. Unsuper- vised named-entity extraction from web: An experi- mental study. Artificial intelligence, 165(1):91-134.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Bootstrapped named entity recognition for product attribute extraction", |
|
"authors": [ |
|
{ |
|
"first": "Junling", |
|
"middle": [], |
|
"last": "Duangmanee Pew Putthividhya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1557--1567", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duangmanee Pew Putthividhya and Junling Hu. 2011. Bootstrapped named entity recognition for product attribute extraction. In EMNLP, pages 1557-1567. ACl.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Optimal hyperparameters for deep lstm-networks for sequence labeling tasks", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Optimal hy- perparameters for deep lstm-networks for sequence labeling tasks. arXiv.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "E-fashion product discovery via deep text parsing", |
|
"authors": [ |
|
{ |
|
"first": "Uma", |
|
"middle": [], |
|
"last": "Sawant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vijay", |
|
"middle": [], |
|
"last": "Gabale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anand", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "26th International Conference on WWW", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "837--838", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Uma Sawant, Vijay Gabale, and Anand Subramanian. 2017. E-fashion product discovery via deep text parsing. In 26th International Conference on WWW, pages 837-838.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Deep active learning for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Yanyao", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyokun", |
|
"middle": [], |
|
"last": "Yun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Lipton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yakov", |
|
"middle": [], |
|
"last": "Kronrod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Animashree", |
|
"middle": [], |
|
"last": "Anandkumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "252--256", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-2630" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the 2nd Workshop on Representa- tion Learning for NLP, pages 252-256, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Deep multitask learning with low level tasks supervised at lower layers", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "54th Annual Meeting of the ACL", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "231--235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi- task learning with low level tasks supervised at lower layers. In 54th Annual Meeting of the ACL (Volume 2: Short Papers), pages 231-235.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Training conditional random fields using incomplete annotations", |
|
"authors": [ |
|
{ |
|
"first": "Yuta", |
|
"middle": [], |
|
"last": "Tsuboi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hisashi", |
|
"middle": [], |
|
"last": "Kashima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroki", |
|
"middle": [], |
|
"last": "Oda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shinsuke", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuji", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "22nd COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "897--904", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuta Tsuboi, Hisashi Kashima, Hiroki Oda, Shinsuke Mori, and Yuji Matsumoto. 2008. Training condi- tional random fields using incomplete annotations. In 22nd COLING, pages 897-904. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Distantly supervised ner with partial annotation learning and reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Yaosheng", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenliang", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenghua", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengqiu", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "27th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly su- pervised ner with partial annotation learning and re- inforcement learning. In 27th COLING.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Textrunner: open information extraction on the web", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Yates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Cafarella", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Broadhead", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Human Language Technologies: The Annual Conference of the NAACL: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. Textrunner: open information ex- traction on the web. In Human Language Technolo- gies: The Annual Conference of the NAACL: Demon- strations, pages 25-26. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Opentag: Open attribute value extraction from product profiles", |
|
"authors": [ |
|
{ |
|
"first": "Guineng", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Subhabrata", |
|
"middle": [], |
|
"last": "Mukherjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [ |
|
"Luna" |
|
], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feifei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "24th ACM SIGKDD", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1049--1058", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In 24th ACM SIGKDD, pages 1049-1058. ACM.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Illustration of E-commerce attribute extraction problem.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "LaTeX-Numeric framework for extraction of E-commerce numeric attributes.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Figure showingdifferent architectures for Multi Attributes Extraction models. We assume BIO-tagging of attributes with only 3 tags possible -B, I and O. For MAST-NER, we have two possible tags for each attribute and one others tag. For MAMT-NER, each attribute extraction is considered a separate task with weights shared for character embeddings, word embeddings and BiLSTM layer", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Flow-diagram for Automated alias creation.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"4\">Display RAM Weight BatteryLife</td></tr><tr><td>Attribute Value</td><td>12.3</td><td colspan=\"2\">16 missing</td><td>10</td></tr><tr><td>Canonical Unit</td><td>inches</td><td>gb</td><td>kg</td><td>hours</td></tr><tr><td colspan=\"5\">The high performance Chromebook. Features 7th Gen Intel</td></tr><tr><td colspan=\"2\">Core i7 processor,</td><td/><td/><td/></tr></table>", |
|
"html": null, |
|
"text": "16 GB RAM and 512 GB for storage. The long lasting battery provides up to 10 hours of use and its fast charging so you can get 2 hours of use in 15 minutes ## Pixelbook's super thin and lightweight design measures 10.3 mm and weighs 1.2 kg Features a 12.3 inches 360 touchscreen display" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Alias values shown for few attributes" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"5\">: Stats for training and test data. Number of</td></tr><tr><td colspan=\"5\">training products are shown with unit 'K' (K=1000)</td></tr><tr><td colspan=\"5\">and number of labelled attributes mention in test data</td></tr><tr><td colspan=\"5\">is shown in adjacent parenthesis. Number of attributes</td></tr><tr><td colspan=\"5\">is shown in parenthesis adjacent to each category.</td></tr><tr><td>Matching Technique</td><td>IN</td><td>US</td><td>UK</td><td>Avg</td></tr><tr><td>exact match</td><td>78.0</td><td>86.5</td><td>93.3</td><td>85.5</td></tr><tr><td>canonical aliasing</td><td colspan=\"4\">100.0 100.0 100.0 100.0</td></tr><tr><td>auto aliasing (our)</td><td colspan=\"4\">113.1 120.3 128.5 120.2</td></tr></table>", |
|
"html": null, |
|
"text": "" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Comparison of various matching techniques for training data generation using distant supervision (all numbers are relative to using canonical units)." |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Model</td><td>Archi-tecture</td><td colspan=\"2\">Precision Recall</td><td>F1</td></tr><tr><td>BiLSTM</td><td>MAST</td><td>100.0</td><td colspan=\"2\">100.0 100.0</td></tr><tr><td>BiLSTM</td><td>MAMT</td><td>93.6</td><td colspan=\"2\">117.8 107.4</td></tr></table>", |
|
"html": null, |
|
"text": "Study of multi-task architecture for numeric attributes. BERT uses softmax as output layer, while, BiLSTM refers to CNN-BiLSTM model with crf as output layer. All numbers are relative to using canonical units inTable 4." |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Study of multi-task architectures for textual attributes (all numbers are relative). BiLSTM refers to CNN-BiLSTM model with crf as output layer." |
|
} |
|
} |
|
} |
|
} |