{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:15:02.637209Z"
},
"title": "Hierarchy-aware Learning of Sequential Tool Usage via Semi-automatically Constructed Taxonomies",
"authors": [
{
"first": "Nima",
"middle": [],
"last": "Nabizadeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ruhr University",
"location": {
"settlement": "Bochum",
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Martin",
"middle": [],
"last": "Heckmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Honda Research Institute Europe GmbH",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Dorothea",
"middle": [],
"last": "Kolossa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ruhr University",
"location": {
"settlement": "Bochum",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "When repairing a device, humans employ a series of tools that corresponds to the arrangement of the device components. Such sequences of tool usage can be learned from repair manuals, so that at each step, having observed the previously applied tools, a sequential model can predict the next required tool. In this paper, we improve the tool prediction performance of such methods by additionally taking the hierarchical relationships among the tools into account. To this aim, we build a taxonomy of tools with hyponymy and hypernymy relations from the data by decomposing all multi-word expressions of tool names. We then develop a sequential model that performs a binary prediction for each node in the taxonomy. The evaluation of the method on a dataset of repair manuals shows that encoding the tools with the constructed taxonomy and using a topdown beam search for decoding increases the prediction accuracy and yields an interpretable taxonomy as a potentially valuable byproduct.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "When repairing a device, humans employ a series of tools that corresponds to the arrangement of the device components. Such sequences of tool usage can be learned from repair manuals, so that at each step, having observed the previously applied tools, a sequential model can predict the next required tool. In this paper, we improve the tool prediction performance of such methods by additionally taking the hierarchical relationships among the tools into account. To this aim, we build a taxonomy of tools with hyponymy and hypernymy relations from the data by decomposing all multi-word expressions of tool names. We then develop a sequential model that performs a binary prediction for each node in the taxonomy. The evaluation of the method on a dataset of repair manuals shows that encoding the tools with the constructed taxonomy and using a topdown beam search for decoding increases the prediction accuracy and yields an interpretable taxonomy as a potentially valuable byproduct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Humans perform various tasks that have an inherent sequential nature comprising several steps; repairing a device is one of them. An AI agent serving as a cooperative assistant in such a task should be provided with contextual knowledge about the pertinent sequence of steps. The importance of such knowledge in cooperative situations has been shown in (Salas et al., 1995; Marwell and Schmitt, 2013) . An example of using sequential context knowledge in a cooperative situation is found in Whitney et al. (2016) . Here, it can be seen that learning the dependencies among the ingredients in cooking recipes helps with resolving the user's request for ingredients, since the system can anticipate what may be needed next.",
"cite_spans": [
{
"start": 353,
"end": 373,
"text": "(Salas et al., 1995;",
"ref_id": "BIBREF8"
},
{
"start": 374,
"end": 400,
"text": "Marwell and Schmitt, 2013)",
"ref_id": "BIBREF5"
},
{
"start": 491,
"end": 512,
"text": "Whitney et al. (2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been numerous efforts to acquire task knowledge from available sources of instructional data, for instance from the web. Related work on extracting the workflow from instructional text, such as (Maeta et al., 2015; Yamakata et al., 2016) , have not built a sequential model for generalizing the obtained knowledge to unseen tasks. Working on data collected from the wikiHow website as the resource, Chu et al. 2017and Zhou et al. (2019) developed models that learn the temporal order of steps but only take one previous step into account and ignore the higher-order dependencies.",
"cite_spans": [
{
"start": 205,
"end": 225,
"text": "(Maeta et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 226,
"end": 248,
"text": "Yamakata et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 429,
"end": 447,
"text": "Zhou et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nabizadeh et al. (2020a) compared sequence learning methods for modeling long-term dependencies among the used tools in various steps of repair tasks, showing the advantage of Recurrent Neural Network (RNN) models for this purpose. Their results revealed that the similarity among the sequence of used tools in different repair manuals makes it possible to predict the next required tool on unseen repair tasks. In their approach, each tool is represented as a distinct class, while the relationships among the related types of the tools are not considered. As a result, the input does not provide the model with any information about different types associated with a class of tool. For instance, the model has no clue that different types of screwdrivers, such as Phillips and Torx, are all screwdrivers, and that different sizes of Phillips screwdrivers belong to the same category of Phillips screwdrivers. Such information is also missing in calculating the cross-entropy loss of the RNN. However, we posit that for instance the penalty for predicting a 4mm Nut Driver instead of a 5mm Nut Driver should be less than the penalty of predicting a hammer, instead. This paper, therefore, extends their work by taking such missing information into account. Specifically, we develop a sequential model for predicting the tool usage in unseen repair tasks, where we encode the tools using a semi-automatically constructed taxonomy. Previous work has shown the advantage of hierarchy-aware neural networks for different tasks, such as audio event detection (Jati et al., 2019) and entity type classification (Xu and Barbosa, 2018) , which both benefit from predefined hierarchies.",
"cite_spans": [
{
"start": 1555,
"end": 1574,
"text": "(Jati et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 1606,
"end": 1628,
"text": "(Xu and Barbosa, 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The tool names in repair tasks are often compound nouns, containing information about the main class of the tool and its detailed attributes. We use the arrangement of words in the tool names for building a tool taxonomy, with different types and sizes of a parent node tool arrayed as its child nodes. Applying a binary classifier for predicting each node in the taxonomy, i.e., predicting the main class of the tool and its details separately, we show the advantages of hierarchy-aware prediction model over the flat one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section introduces the proposed approach for producing the tool taxonomy from data and modeling the dependencies among the used tools in the steps of the repair tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "The head-modifier principle (Sp\u00e4rck Jones, 1983) inspired our approach for constructing a taxonomy, stating that the linear arrangement of the elements in a compound reveals the kind of information it conveys. The head of a compound, which is usually the right-most word for compound nouns in the English language, serves as the general (semantic) category to which the entire compound belongs. Other elements, i.e., modifier, distinguish this member from other members of the same category. The automatic process of constructing the taxonomy contains two main stages: 1-branching and 2-merging. In the branching stage, we split the head and the modifier of the multi-word tool names and arrange the modifiers as the child nodes of the node corresponding to the head of the compound noun. In the merging step, a constructed node with only one child is merged with its child node, making a single node for both. E.g., the node \"1.5mm Hex\" in Figure 1 , is produced by merging the parent node \"Hex\" and its child node \"1.5mm\". The inherent structure of the tool names in the repair tasks allows us to perform the above process for more than one level. The multi-word tool names usually follow the pattern <size, type, main class>, as is the case, for example, in \"t6 Torx screwdriver.\" However, we still needed to apply several handcrafted rules, e.g. via regular expressions, to standardize the tool names that were entered with a different pattern. For instance, \"Phillips #00 screwdriver\" was changed to the equivalent and normalized name \"Ph00 Phillips screwdriver\" to follow a unified pattern of left-branching compounds. It is worth mentioning that on the iFixit website, the instructors usually link the tools to the iFixit tool store; therefore, in most cases, different manuals use a unique name for a specific tool. In the process of constructing the taxonomy, the single-word tool names, and the heads of the multi-word tool names are grouped under the root node with the compound modifiers as the child nodes. This process is repeated for creating a taxonomy with up to three levels. Figure 1 shows the instances of the produced taxonomy, where the leaves, i.e., terminal nodes, are marked in gray. The constructed taxonomy is a non-ultrametric tree, i.e., the distance between the leaves and the root node is not the same for all leaves. ",
"cite_spans": [
{
"start": 28,
"end": 48,
"text": "(Sp\u00e4rck Jones, 1983)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 941,
"end": 949,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 2094,
"end": 2102,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Semi-automatic Construction of Tool Taxonomy",
"sec_num": "2.1"
},
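{
"text": "To make the branching and merging stages concrete, the following minimal Python sketch shows one possible implementation. It is our illustration, not the authors' released code: the function names build_taxonomy and merge_single_children are hypothetical, and the tool names are assumed to be already normalized to the <size, type, main class> pattern.\n\ndef build_taxonomy(tool_names):\n    # Branching: split each compound name into head (right-most word)\n    # and modifiers, nesting each modifier under the node of its head.\n    root = {}\n    for name in tool_names:\n        node = root\n        for word in reversed(name.lower().split()):\n            node = node.setdefault(word, {})\n    return merge_single_children(root)\n\ndef merge_single_children(node):\n    # Merging: a node with exactly one child is fused with that child,\n    # e.g. 'hex' with its only child '1.5mm' becomes '1.5mm hex'.\n    merged = {}\n    for label, child in node.items():\n        child = merge_single_children(child)\n        if len(child) == 1:\n            child_label, grandchild = next(iter(child.items()))\n            merged[child_label + ' ' + label] = grandchild\n        else:\n            merged[label] = child\n    return merged\n\ntaxonomy = build_taxonomy(['spudger', 't6 torx screwdriver', 't8 torx screwdriver', 'ph00 phillips screwdriver', '1.5mm hex screwdriver'])\n\nOn this toy input, 'screwdriver' keeps the multi-child branch 'torx' (with leaves 't6' and 't8'), while the single-size branches collapse into the merged leaves 'ph00 phillips' and '1.5mm hex', mirroring the merging example of Figure 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-automatic Construction of Tool Taxonomy",
"sec_num": "2.1"
},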
{
"text": "A repair manual can be understood as a list of steps; each step might require a different tool. Moreover, each tool is a node in the constructed taxonomy, with one or more parent nodes representing the more general categories of the tool. Let O denote the set of all nodes in the taxonomy except for the root node. The model is trained to predict the probability of observing each node from O in the following step, based on the sequence of prior, observed tools, i.e., based on the sequence of the taxonomy nodes seen in the preceding steps. We represent each node in the set O with a one-hot vector. A tool is then encoded by the sum of its active node vectors. The resulting multi-hot vector is later used as the input of the sequential model at each timestep. Our sequential model consists of a Long-Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) layer with state size 256, followed by a Fully Connected (FC) hidden layer of the same size. The LSTM layer takes the encoded tools from the input and generates a representation of the used tools at each step. The LSTM output is fed to a fully-connected hidden layer with a hyperbolic tangent activation function while its parameters are shared among all the timesteps. The output layer has |O| neurons and a sigmoid activation function that estimates the probability of observing each node in O. The model takes the multi-hot vector of the next tool as the ground truth during training. Its parameters are learned by minimizing the binary cross-entropy loss in Equation 1using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001.",
"cite_spans": [
{
"start": 829,
"end": 863,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Models of Tool Usage",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(y,\u0177) = \u2212 1 |O| i\u2208O y i \u2022 log(\u0177 i ) + (1 \u2212 y i ) \u2022 log(1 \u2212\u0177 i )",
"eq_num": "(1)"
}
],
"section": "Sequential Models of Tool Usage",
"sec_num": "2.2"
},
{
"text": "Here,\u0177 i denotes the probability of i-th node in the model output and y i is the corresponding target value. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Models of Tool Usage",
"sec_num": "2.2"
},
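{
"text": "As a concrete illustration of the multi-hot encoding, the architecture, and the loss of Equation 1, the following PyTorch sketch reconstructs the described model. It is our reconstruction from the text, not the authors' code; apart from the stated layer size of 256 and learning rate of 0.001, all names, shapes, and the dummy data are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass ToolUsageModel(nn.Module):\n    # LSTM over multi-hot encoded tools, a shared tanh FC layer, and a\n    # sigmoid output giving one binary prediction per taxonomy node in O.\n    def __init__(self, num_nodes, hidden_size=256):\n        super().__init__()\n        self.lstm = nn.LSTM(num_nodes, hidden_size, batch_first=True)\n        self.fc = nn.Linear(hidden_size, hidden_size)\n        self.out = nn.Linear(hidden_size, num_nodes)\n\n    def forward(self, x):  # x: (batch, steps, num_nodes), multi-hot per step\n        h, _ = self.lstm(x)\n        return torch.sigmoid(self.out(torch.tanh(self.fc(h))))\n\nnum_nodes = 40  # |O|: all taxonomy nodes except the root (dummy value)\nmodel = ToolUsageModel(num_nodes)\nloss_fn = nn.BCELoss()  # the binary cross-entropy of Equation 1\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n\n# A tool is encoded as the sum of the one-hot vectors of its active nodes,\n# e.g. the nodes for 'screwdriver', 'torx', and 't6' for a 't6 torx screwdriver'.\nx = torch.zeros(1, 5, num_nodes)  # five observed steps (dummy input)\ny = torch.zeros(1, 5, num_nodes)  # multi-hot targets: the next tool at each step\nloss = loss_fn(model(x), y)\nloss.backward()\noptimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Models of Tool Usage",
"sec_num": "2.2"
},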
{
"text": "To infer the required tool from the model's predicted distribution of taxonomy nodes, we used beam search, a search algorithm that generates the sequence of nodes one-by-one while keeping a fixed number of active candidates, the beam size, denoted by m. For each example in the test set, starting from the root node, we take m child nodes of the root node with the highest probability scores. For each node candidate, we expand it if it is not a leaf node and take its m child nodes with the highest probability. This process continues until we have expanded all the non-leaf-node candidates. Finally, the tool with the highest probability is returned as the prediction for the next step. The probability associated with a tool prediction is the average of the probabilities of its corresponding nodes in the taxonomy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "2.3"
},
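{
"text": "A minimal sketch of this top-down beam search, under our reading of the text: children (a parent-to-children map over taxonomy nodes) and probs (the model's per-node probabilities) are hypothetical data structures, and beam_search_tool is not the authors' function name.\n\ndef beam_search_tool(probs, children, root='ROOT', m=3):\n    # Expand the taxonomy top-down, keeping the m most probable children\n    # of every candidate until all candidate paths end in a leaf (a tool).\n    beams = [([], 0.0)]  # (path of taxonomy nodes, summed node probability)\n    while any(children.get(path[-1] if path else root) for path, _ in beams):\n        expanded = []\n        for path, score in beams:\n            kids = children.get(path[-1] if path else root, [])\n            if not kids:  # already a complete tool; carry it over unchanged\n                expanded.append((path, score))\n                continue\n            for kid in sorted(kids, key=probs.get, reverse=True)[:m]:\n                expanded.append((path + [kid], score + probs[kid]))\n        beams = expanded\n    # A tool's score is the average probability of its taxonomy nodes.\n    best_path, _ = max(beams, key=lambda b: b[1] / len(b[0]))\n    return best_path[-1]  # the leaf node, i.e., the predicted tool",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "2.3"
},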
{
"text": "MyFixit is a collection of repair manuals collected by Nabizadeh et al. (2020b) from the iFixit website. The manuals are divided into several steps by the instructors, where at each step, the user should typically detach a specific component of the device under repair. Each step of the manuals in the \"Mac Laptop\" category is manually annotated with the required tools of the steps. In total, 1,497 manuals with 36,973 steps are annotated with the required tools. The authors also proposed an unsupervised method for the automatic annotation of tools from each step description. Their method utilizes the Jaccard similarity",
"cite_spans": [
{
"start": 55,
"end": 79,
"text": "Nabizadeh et al. (2020b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3.1"
},
{
"text": "Annotation Taxonomy Levels 1 2 3 Total Flat Manual 50 .56 \u00b1 2.0 83.69 \u00b1 1.1 78.90 \u00b1 1.3 76.30 \u00b1 0.8 1 Automatic 29.31 \u00b1 1.6 72.08 \u00b1 0.9 65.68 \u00b1 1.9 62.39 \u00b1 0.9 Hierarchy-aware Manual 49.97 \u00b1 1.0 86.29 \u00b1 0.6 88.33 \u00b1 0.5 81.78 \u00b1 0.5 Automatic 30.68 \u00b1 0.9 72.87 \u00b1 0.9 80.97 \u00b1 1.0 70.42 \u00b1 0.8 Table 1 : Average accuracy of the tool prediction (%) with standard deviation, using the flat and hierarchyaware model trained on automatically and manually annotated data.",
"cite_spans": [],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "between the bags of n-grams of the text description of steps and each tool name to return the tool with the highest similarity as the required tool of the step. The automatically annotated tools were reported to be correct in 94% of the steps. In addition to the sequential model trained on human-annotated data, we also evaluate the models trained on automatically annotated tools but tested on human annotations. This allows us to investigate the effect of hierarchy-aware prediction in the presence of annotation errors. Among the total steps of the annotated data, 51.8% of the used tools have three levels, 38.1% have two levels, and 10.1% have only one level in the constructed taxonomy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
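{
"text": "The automatic annotation method can be sketched as follows. This is our paraphrase of the approach described above, not the code of Nabizadeh et al. (2020b); the helper names are hypothetical, word-level n-grams up to n=2 are assumed, and sets stand in for bags for brevity.\n\ndef ngrams(text, n_max=2):\n    # Set of word-level 1..n_max-grams of a text.\n    words = text.lower().split()\n    return {tuple(words[i:i + n]) for n in range(1, n_max + 1) for i in range(len(words) - n + 1)}\n\ndef jaccard(a, b):\n    return len(a & b) / len(a | b) if a | b else 0.0\n\ndef annotate_step(step_text, tool_names):\n    # Return the tool whose name is most similar to the step description.\n    grams = ngrams(step_text)\n    return max(tool_names, key=lambda tool: jaccard(grams, ngrams(tool)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "3.1"
},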
{
"text": "We compare the result of our proposed hierarchy-aware prediction to the baseline flat prediction of (Nabizadeh et al., 2020a) . In this model, each tool is independently encoded with a one-hot vector, and the model is trained to reduce the cross-entropy loss between the predicted and ground-truth distribution.",
"cite_spans": [
{
"start": 100,
"end": 125,
"text": "(Nabizadeh et al., 2020a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "3.2"
},
{
"text": "To evaluate the proposed methodology in Section 2 we used ten folds of cross-validation; in each fold, we randomly split the data into 70% training, 20% test, and 10% development set. The development set is used for an early stopping mechanism. In our experiments, the best result is achieved with beam size 3. For the evaluation metric, we report the per-level leaves' accuracy, standard deviation, and total accuracy of the leaf nodes. The accuracy of each level is the number of correct predictions of leave nodes in a taxonomy level, divided by the total number of tools having leaf nodes at that level in the test set. Perlevel leaves' accuracy can be calculated similarly for the flat predictor. The total accuracy is the count of all correct predictions divided by the size of the test set. Table 1 shows the result of our evaluation. It can be seen that the hierarchy-aware model improves the total accuracy by 5.48% for the manually annotated tools and 8.03% for the automatically extracted ones. The accuracy improvement achieved for predicting the tools with three levels in taxonomy is considerably higher than for the tools with a lower number of levels. Moreover, the hierarchy-aware model has a lower average standard deviation, and using this model helps the most for the prediction with automatically annotated data. This could be due to the fact that in the hierarchy-aware encoding of the tools, even if the annotation of the tool's detailed characteristics is wrong, the model can still be provided with correct information about the more general category of the tool. In 36.6% of the automatic annotation errors, the automatically and manually annotated tools have a common parent in the taxonomy.",
"cite_spans": [],
"ref_spans": [
{
"start": 798,
"end": 805,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.3"
},
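{
"text": "For clarity, the per-level and total accuracies described above can be computed as in this short sketch (our formulation; depth is a hypothetical map from each tool to the taxonomy level of its leaf node):\n\ndef accuracies(y_true, y_pred, depth):\n    # Per-level accuracy: correct predictions of tools whose leaf sits at a\n    # given level, divided by the number of such tools in the test set.\n    totals, correct = {}, {}\n    for true_tool, pred_tool in zip(y_true, y_pred):\n        level = depth[true_tool]\n        totals[level] = totals.get(level, 0) + 1\n        correct[level] = correct.get(level, 0) + int(true_tool == pred_tool)\n    per_level = {lvl: correct[lvl] / totals[lvl] for lvl in totals}\n    total = sum(correct.values()) / sum(totals.values())\n    return per_level, total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.3"
},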
{
"text": "In this paper, we utilize the head-modifier principle to decompose the multi-word expressions of tool names and build a taxonomy for the used tools in a dataset of repair manuals. We noted that utilizing the constructed taxonomy in sequential modeling of the used tools improves the tool prediction performance, especially when the data is annotated automatically and includes annotation errors. We imagine that hierarchy-aware modeling also helps when we have an imperfect observation of the used tools, e.g., when the model is uncertain about the size of used screwdrivers in the past. In the future, we plan to study the effect of such observation uncertainty on the prediction performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Compared to(Nabizadeh et al., 2020a), we achieved a slightly higher accuracy for the flat predictor, due to the standardization of the tool names that led to a lower number of unique tools.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Distilling task knowledge from how-to communities",
"authors": [
{
"first": "Cuong Xuan",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Niket",
"middle": [],
"last": "Tandon",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "805--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cuong Xuan Chu, Niket Tandon, and Gerhard Weikum. 2017. Distilling task knowledge from how-to communi- ties. In Proceedings of the 26th International Conference on World Wide Web, pages 805-814. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hierarchy-aware loss function on a tree structured label space for audio event detection",
"authors": [
{
"first": "Arindam",
"middle": [],
"last": "Jati",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ruxin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Panayiotis",
"middle": [],
"last": "Georgiou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arindam Jati, Naveen Kumar, Ruxin Chen, and Panayiotis Georgiou. 2019. Hierarchy-aware loss function on a tree structured label space for audio event detection. In In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6-10. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik P.",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations, ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A framework for procedural text understanding",
"authors": [
{
"first": "Hirokuni",
"middle": [],
"last": "Maeta",
"suffix": ""
},
{
"first": "Tetsuro",
"middle": [],
"last": "Sasada",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th International Conference on Parsing Technologies, IWPT",
"volume": "",
"issue": "",
"pages": "50--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirokuni Maeta, Tetsuro Sasada, and Shinsuke Mori. 2015. A framework for procedural text understanding. In Proceedings of the 14th International Conference on Parsing Technologies, IWPT, pages 50-60. ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cooperation: An experimental analysis",
"authors": [
{
"first": "Gerald",
"middle": [],
"last": "Marwell",
"suffix": ""
},
{
"first": "David R.",
"middle": [],
"last": "Schmitt",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerald Marwell and David R Schmitt. 2013. Cooperation: An experimental analysis. Academic Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Target-aware prediction of tool usage in sequential repair tasks",
"authors": [
{
"first": "Nima",
"middle": [],
"last": "Nabizadeh",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Heckmann",
"suffix": ""
},
{
"first": "Dorothea",
"middle": [],
"last": "Kolossa",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The Sixth International Conference on Machine Learning, Optimization, and Data Science",
"volume": "",
"issue": "",
"pages": "869--880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nima Nabizadeh, Martin Heckmann, and Dorothea Kolossa. 2020a. Target-aware prediction of tool usage in se- quential repair tasks. In Proceedings of The Sixth International Conference on Machine Learning, Optimization, and Data Science, pages 869-880. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Myfixit: An annotated dataset, annotation tool, and baseline methods for information extraction from repair manuals",
"authors": [
{
"first": "Nima",
"middle": [],
"last": "Nabizadeh",
"suffix": ""
},
{
"first": "Dorothea",
"middle": [],
"last": "Kolossa",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Heckmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Twelfth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "2120--2128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nima Nabizadeh, Dorothea Kolossa, and Martin Heckmann. 2020b. Myfixit: An annotated dataset, annotation tool, and baseline methods for information extraction from repair manuals. In Proceedings of Twelfth Interna- tional Conference on Language Resources and Evaluation, pages 2120-2128. European Language Resources Association.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Situation awareness in team performance: Implications for measurement and training",
"authors": [
{
"first": "Eduardo",
"middle": [],
"last": "Salas",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Prince",
"suffix": ""
},
{
"first": "David P.",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Shrestha",
"suffix": ""
}
],
"year": 1995,
"venue": "Human factors",
"volume": "37",
"issue": "1",
"pages": "123--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduardo Salas, Carolyn Prince, David P Baker, and Lisa Shrestha. 1995. Situation awareness in team performance: Implications for measurement and training. Human factors, 37(1):123-136.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Compound noun interpretation problems",
"authors": [
{
"first": "Karen Sp\u00e4rck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Sp\u00e4rck Jones. 1983. Compound noun interpretation problems. Technical report, University of Cambridge, Computer Laboratory.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Interpreting multimodal referring expressions in real time",
"authors": [
{
"first": "David",
"middle": [],
"last": "Whitney",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Eldon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Oberlin",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Tellex",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of International Conference on Robotics and Automation (ICRA)",
"volume": "",
"issue": "",
"pages": "3331--3338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Whitney, Miles Eldon, John Oberlin, and Stefanie Tellex. 2016. Interpreting multimodal referring ex- pressions in real time. In Proceedings of International Conference on Robotics and Automation (ICRA), pages 3331-3338. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural fine-grained entity type classification with hierarchy-aware loss",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Denilson",
"middle": [],
"last": "Barbosa",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.03378"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Xu and Denilson Barbosa. 2018. Neural fine-grained entity type classification with hierarchy-aware loss. arXiv preprint arXiv:1803.03378.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A method for extracting major workflow composed of ingredients, tools, and actions from cooking procedural text",
"authors": [
{
"first": "Yoko",
"middle": [],
"last": "Yamakata",
"suffix": ""
},
{
"first": "Shinji",
"middle": [],
"last": "Imahori",
"suffix": ""
},
{
"first": "Hirokuni",
"middle": [],
"last": "Maeta",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of International Conference on Multimedia & Expo Workshops (ICMEW)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoko Yamakata, Shinji Imahori, Hirokuni Maeta, and Shinsuke Mori. 2016. A method for extracting major work- flow composed of ingredients, tools, and actions from cooking procedural text. In Proceedings of International Conference on Multimedia & Expo Workshops (ICMEW), pages 1-6. IEEE.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning household task knowledge from wikihow descriptions",
"authors": [
{
"first": "Yilun",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Schockaert",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 5th Workshop on Semantic Deep Learning",
"volume": "",
"issue": "",
"pages": "50--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yilun Zhou, Julie Shah, and Steven Schockaert. 2019. Learning household task knowledge from wikihow descrip- tions. In Proceedings of the 5th Workshop on Semantic Deep Learning, pages 50-56. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Instances of the tool taxonomy constructed from the MyFixit dataset"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "illustrates the unrolled graph of the proposed sequential model. Proposed sequential model for learning the sequence of tools in repair tasks with an example of an encoded tool input."
}
}
}
}