{
"paper_id": "L16-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:05:05.604889Z"
},
"title": "Evaluating the Impact of Light Post-Editing on Usability",
"authors": [
{
"first": "Sheila",
"middle": [],
"last": "Castilho",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sharon",
"middle": [],
"last": "O'brien",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper discusses a methodology to measure the usability of machine translated content by end users, comparing lightly post-edited content with raw output and with the usability of source language content. The content selected consists of Online Help articles from a software company for a spreadsheet application, translated from English into German. Three groups of five users each used either the source textthe English version (EN)-, the raw MT version (DE_MT), or the light PE version (DE_PE), and were asked to carry out six tasks. Usability was measured using an eye tracker and cognitive, temporal and pragmatic measures of usability. Satisfaction was measured via a post-task questionnaire presented after the participants had completed the tasks.",
"pdf_parse": {
"paper_id": "L16-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper discusses a methodology to measure the usability of machine translated content by end users, comparing lightly post-edited content with raw output and with the usability of source language content. The content selected consists of Online Help articles from a software company for a spreadsheet application, translated from English into German. Three groups of five users each used either the source textthe English version (EN)-, the raw MT version (DE_MT), or the light PE version (DE_PE), and were asked to carry out six tasks. Usability was measured using an eye tracker and cognitive, temporal and pragmatic measures of usability. Satisfaction was measured via a post-task questionnaire presented after the participants had completed the tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advances in machine translation (MT) have enabled post-editing (PE) to become a more common practice in the translation industry, which has led to much research in the area (De Almeida and O'Brien, 2010; Depraetere, 2010; Plitt and Masselot, 2010; Sousa et al., 2011; Specia, 2011; Koponen, 2012; O'Brien et al., 2013; Guerberof, 2014; Moorkens et al., 2015) . However, we know little about how end users engage with raw machine-translated text or post-edited text, or how usable such texts are, in particular if users have to follow instructions and subsequently act on them. This paper reports on a methodology to measure usability of machine translation output. The main objectives of this study are: i) to investigate the extent to which light human post-editing of machine translation impacts on the usability of instructional, online help content and, ii) to compare this with usability levels of the source text. The paper is structured as follows: Section 2 discusses related research, Section 3 and Section 4 describe the content used and the participants of the experiment respectively, Section 5 discusses the methods deployed to measure usability, Section 6 provides the preliminarily results, while Section 7 presents conclusions and plans for future work.",
"cite_spans": [
{
"start": 180,
"end": 210,
"text": "(De Almeida and O'Brien, 2010;",
"ref_id": "BIBREF1"
},
{
"start": 211,
"end": 228,
"text": "Depraetere, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 229,
"end": 254,
"text": "Plitt and Masselot, 2010;",
"ref_id": null
},
{
"start": 255,
"end": 274,
"text": "Sousa et al., 2011;",
"ref_id": null
},
{
"start": 275,
"end": 288,
"text": "Specia, 2011;",
"ref_id": null
},
{
"start": 289,
"end": 303,
"text": "Koponen, 2012;",
"ref_id": null
},
{
"start": 304,
"end": 325,
"text": "O'Brien et al., 2013;",
"ref_id": null
},
{
"start": 326,
"end": 342,
"text": "Guerberof, 2014;",
"ref_id": null
},
{
"start": 343,
"end": 365,
"text": "Moorkens et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Existing work measuring the usability of machine translated content is still somewhat limited. Tomita et al. (1993) compare different MT systems by using reading comprehension tests from texts extracted from an English proficiency exam and translated into Japanese. They show that reading comprehension is a valid evaluation methodology for MT. Fuji et al. (2001) examine the \"usefulness\" of machine translated text from two commercial MT systems compared to the original English version. The experiment consists of participants reading the texts and answering comprehension questions.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "Tomita et al. (1993)",
"ref_id": "BIBREF7"
},
{
"start": 345,
"end": 363,
"text": "Fuji et al. (2001)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Afterwards, participants evaluate the MT outputs on a 5-point scale using comprehensibility and awkwardness as concepts. Results suggest that the MT output reduces the time to answer questions for the lower score group. The authors claim their evaluation approach delivers statistically significant results easily understood by the general public.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Jones et al. 2005present a usability test where participants answer questions from a machine translated version of an Arabic language test. Their results suggest that MT may enable an Interagency Language Roundtable (ILR) level 2 (limited working proficiency) but it is not suitable for level 3 (general professional proficiency). Stymne et al. (2012) present a preliminary study using eye tracking as a complement to MT error analysis and comprehension tasks to compare different MT systems. Human Translation (HT) was also factored into their experiment. Native speakers of Swedish were asked to read the translated texts and answer three multiple-choice questions. Participants were also asked to recall their confidence for those multiple-choice questions. Results show that the number of correct answers is higher for the system trained with a larger number of sentences; however, confidence scores are low. O'Brien (2012, 2014) is the first study to use eye-tracking techniques to measure the usability of texts via the end-user. They compare the usability of raw machine translated output for four target languages (Spanish, French, German and Japanese) against the usability of the source content (English). Twenty-nine participants were recruited (all native speakers in the target languages) and asked to read instructions and perform tasks while their eye movements were being recorded. Results show that, although the raw MT output scored lower for usability measurements when compared with the source language content, the raw MT output was deemed to be usable, especially for Spanish as a target language. Klerk et al. (2015) present an experimental eye-tracking usability test with text simplification and machine translation (for both the original and simplified versions) of logic puzzles.",
"cite_spans": [
{
"start": 331,
"end": 351,
"text": "Stymne et al. (2012)",
"ref_id": "BIBREF6"
},
{
"start": 913,
"end": 933,
"text": "O'Brien (2012, 2014)",
"ref_id": null
},
{
"start": 1620,
"end": 1639,
"text": "Klerk et al. (2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Twenty native speakers of Danish were presented with 80 different logic puzzles and asked to solve and judge the puzzles while having their eye movements recorded. The results demonstrated a greater number of fixations on the MT version of the original text (with no simplification). Regarding task efficiency, results show that participants were less efficient when using the MT version of the original puzzles; however, the simplified MT version seemed to ease task performance when compared to the original English version. The present paper builds on previous work by the authors (see Castilho et al., 2014) , which demonstrates that lightly post-edited instructions present a higher level of usability when compared to raw MT output for Brazilian Portuguese. In this instance, German was selected as the TL due to the fact that German is frequently reported as being a challenging target language for MT. As such, we expected that the post-edited instructions would have a higher level of usability and a greater level of satisfaction when compared with the unedited instructions. We also expected that the source language English instructions would have higher usability and satisfaction compared with the machine translated/post-edited instructions.",
"cite_spans": [
{
"start": 589,
"end": 611,
"text": "Castilho et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In collaboration with one industry partner, we selected Online Help Content articles for one specific software program, i.e. a spreadsheet application, as the corpus for the experiment. The articles describe features of the application as well as instructions on how to use such features. The articles are published on the company's website and the total number of words in the source content is 457. The articles were translated using Microsoft Translator 1 , with a custom domain for end-user content which was trained using the Microsoft Translator Hub 2 . It is the production system used for the company's standard raw-MT publishing. Post-editing was carried out by the company's translation providers and was only applied if terminology did not conform to the client-specific glossary and only if there were grammatical errors in the output. No edits were implemented for purely stylistic reasons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content",
"sec_num": "3."
},
{
"text": "Fifteen participants were recruited from the student and staff body of Dublin City University 3 for the experiment, five of whom were native speakers of English (EN) and ten of whom were native speakers of German. The latter were randomly assigned to one of two groups: the unedited MT group (DE_MT) or the light PE group (DE_PE). Participants were seated at the eye tracker (a Tobii T60XL) and were instructed not to reposition any of the windows relating to the software product or the instructions, so as to facilitate eye-tracking analysis. Each group was initially presented with a baseline text to read in order to measure their normal reading speed. The source language group was presented with a text in English 4 and both the DE_MT and DE_PE group read the same text in German (not machine translated, all related to the topic).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participants",
"sec_num": "4."
},
{
"text": "All users were asked to read the instructions and to carry out tasks using the spreadsheet application. Neither of the DE groups were told that the texts had been translated. While the users were carrying out the tasks, fixation data was collected via the eye tracker. This data was used to measure cognitive effort for each condition, as part of the usability measurement. The instructions were displayed on the left-hand side of the monitor and the application where tasks were carried out took up the centre and right-hand sides of the monitor (Figure 1 ). The tasks consisted of: changing colors, fonts and effects in the worksheet; changing font format for hyperlinks; formatting headers and footers; applying conditional formatting with color; inserting an 'exploding pie chart'; and inserting a 'bar of pie chart'. After each task, users were asked to specify whether they had completed the task. When all tasks were completed, users were asked to fill in a post-task questionnaire specifying their levels of satisfaction with the instructions.",
"cite_spans": [],
"ref_spans": [
{
"start": 547,
"end": 556,
"text": "(Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Participants",
"sec_num": "4."
},
{
"text": "For the measurement of usability, we adopt the ISO/TR 16982 definition: \"the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified content of use\" (ISO 2002) . Effectiveness is measured through task completion, that is, how successful the users were at accomplishing tasks documented in the instructions measured by observing the user interactions as recorded by an eye tracker.",
"cite_spans": [
{
"start": 241,
"end": 251,
"text": "(ISO 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Usability",
"sec_num": "5."
},
{
"text": "Efficiency is measured as the number of successful tasks completed (out of all possible tasks) when total task time is taken into account. A second measure of efficiency is cognitive effort, i.e. how much cognitive effort is evident when users are reading the instructions and trying to complete their tasks? Cognitive effort is measured using typical indicators recorded via the eye tracking apparatus, i.e. fixation duration, fixation count and visit duration. Fixation duration (FD) is the total length of fixations inside an area of interest (AOI). Fixation count (FC) is the total number of fixations within an AOI. Visit duration (VD) is the total time (in seconds) spent looking at an AOI, starting with a fixation within the AOI and ending with a fixation outside this AOI, that is, saccades (or rapid eye movements between fixations) are also counted. Such fixation data are well established as indicators of cognitive effort (Rayner 1998 , Radach et al. 2004 . For example, the more fixations there are on a set of instructions, the more probable it is that the reader is having difficulties in processing the instructions.",
"cite_spans": [
{
"start": 935,
"end": 947,
"text": "(Rayner 1998",
"ref_id": null
},
{
"start": 948,
"end": 968,
"text": ", Radach et al. 2004",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Usability",
"sec_num": "5."
},
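{
"text": "To make these definitions concrete, the following minimal sketch (not the authors' analysis code) shows how FD, FC and VD could be computed for one AOI from a chronologically ordered fixation log; the record layout ('aoi', 'start', 'end') is a hypothetical simplification, not the Tobii export format.

def aoi_metrics(fixations, aoi):
    # fixations: chronologically ordered records {'aoi': str, 'start': secs, 'end': secs}
    inside = [f for f in fixations if f['aoi'] == aoi]
    fd = sum(f['end'] - f['start'] for f in inside)  # fixation duration (FD)
    fc = len(inside)                                 # fixation count (FC)
    # Visit duration (VD): a visit opens with a fixation inside the AOI and
    # closes at the first fixation outside it, so saccade time is included.
    vd, visit_start = 0.0, None
    for f in fixations:
        if f['aoi'] == aoi and visit_start is None:
            visit_start = f['start']
        elif f['aoi'] != aoi and visit_start is not None:
            vd += f['start'] - visit_start
            visit_start = None
    if visit_start is not None:  # recording ended during a visit
        vd += fixations[-1]['end'] - visit_start
    return fd, fc, vd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Usability",
"sec_num": "5."
},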
{
"text": "Satisfaction is a measure of user satisfaction with the translated content and, by extension, the product itself. As satisfaction is a multi-faceted concept, we measure it using a questionnaire with a Likert scale ranging from 1 (Strongly Disagree) to 5 (Strongly Agree). In our questionnaire, \"satisfaction\" is addressed using a number of statements (see Section 6.6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring Usability",
"sec_num": "5."
},
{
"text": "We first present the fixation data as measures of cognitive load and then present the task time and questionnaire data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6."
},
{
"text": "We report the Mean Fixation Duration, which is the sum of the fixation lengths (for all participants) divided by the number of all fixations. It was measured for three AOIs: baseline reading task, instructions and user interface (UI). .19, DE_MT=0.18, DE_PE=0.20) . We can see that the groups present slightly different means, however the differences were not statistically significant F(2, 12) = 1.47, p = .268), which indicates that all participants read at a similar speed. Results for mean FD for the actual task itself also show no significant differences between groups for the Instructions (p = .355) and (p = .366) UI AOIs (EN=.19, DE_MT=.19 and DE_PE=.21).",
"cite_spans": [
{
"start": 235,
"end": 239,
"text": ".19,",
"ref_id": null
},
{
"start": 240,
"end": 251,
"text": "DE_MT=0.18,",
"ref_id": null
},
{
"start": 252,
"end": 263,
"text": "DE_PE=0.20)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fixation Duration",
"sec_num": "6.1"
},
{
"text": "For the FC, a one-way ANOVA found a significant difference between two groups for the instructions AOI, where F(2, 12)=6.81, p=.01 (see Figure 3) . Tukey post-hoc comparisons indicate that the mean score for the EN condition (M=198.9, SD=22.0) was significantly different to the DE_PE condition (M=305.8, SD=62.2). However, the DE_MT (M=255, SD=43.9) condition did not significantly differ from the EN and DE_PE conditions. There are no statistically significant differences for fixation count on the UI AOI. These results show that the DE_PE group has more fixations on the instructions AOI. ",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 145,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fixation Count",
"sec_num": "6.2"
},
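{
"text": "As a minimal sketch of the statistical procedure used throughout this section (not the authors' analysis script), a one-way ANOVA followed by Tukey HSD post-hoc comparisons can be run in Python with scipy and statsmodels; the per-participant fixation counts below are hypothetical placeholders chosen only so that the group means approximate the reported ones, not the study's raw data.

from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-participant fixation counts (5 per group), chosen so the
# group means are near the reported EN=198.9, DE_MT=255 and DE_PE=305.8.
en    = [175, 190, 200, 210, 219]
de_mt = [200, 230, 255, 280, 310]
de_pe = [240, 270, 300, 340, 379]

f_stat, p_val = stats.f_oneway(en, de_mt, de_pe)
print(f'F(2, 12) = {f_stat:.2f}, p = {p_val:.3f}')

scores = en + de_mt + de_pe
groups = ['EN'] * 5 + ['DE_MT'] * 5 + ['DE_PE'] * 5
print(pairwise_tukeyhsd(scores, groups))  # pairwise comparisons with adjusted p-values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fixation Count",
"sec_num": "6.2"
},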
{
"text": "We report the mean visit duration (in seconds), which is the sum of the visit length (for all participants) divided by the number of total visits. For the visit duration, a one-way ANOVA found a significant difference between two groups for the instructions AOI, where F(2, 12) = 3.7, p=.05 (see Figure 4) . Tukey post-hoc comparisons indicate that the mean score for the EN condition (M=2.1, SD=.38) was significantly different to the DE_PE condition (M=3.0, SD=.52). However, the DE_MT (M=2.7, SD=.62) condition did not significantly differ from EN and DE_PE conditions.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 305,
"text": "Figure 4)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Visit Duration",
"sec_num": "6.3"
},
{
"text": "There was a significant difference for the UI AOI, where F(2,12) = 5.0, p=.02. Tukey post-hoc comparisons indicate that the mean score for the DE_MT condition (M=3.6, SD=1.0) was significantly different to the DE_PE (M=2.4, SD=.52) and EN (M=2.3, SD=.32) conditions. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visit Duration",
"sec_num": "6.3"
},
{
"text": "For fixation count and visit duration, significant differences were found for the Instructions between the EN and PE groups. No significant differences were found for mean fixation duration. For visit duration, only the MT group had a significant difference for visits to the UI. The lack of difference between the MT and PE groups was surprising. However, we note that the MT group seems more reliant on the UI and less so on the instructions, which we speculate to be caused by the fact that the instructions were abandoned by the MT group in search of clarity on the UI, whereas the instructions were actually more \"usable\" for the PE group, which explains why they fixated on them more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": null
},
{
"text": "Goal Completion is the total number of successfully completed tasks; this was self-reported after each task via the question: \"Was the task completed?\" ('Yes', 'No' and 'Parts of it'). The validity of answers was verified by the researchers. Table 1 summarises the total number of completed tasks for all the participants. Note that DE_PE group presents a higher number of tasks successfully completed (76%), with 13% of tasks partially completed. Even though both the DE_MT and EN groups have the same percentage for the number of tasks completed, it is interesting to note that EN has 33% of tasks partially complete against 20% for the DE_MT group, and 10% for tasks not completed against 23% for the DE_MT group.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Effectiveness -Goal Completion",
"sec_num": "6.4"
},
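{
"text": "As a minimal sketch, the self-reported goal-completion answers can be tallied into percentages of the kind reported in Table 1; the answer counts below are hypothetical for one group of 5 participants completing 6 tasks each (30 attempts), not the study's data.

from collections import Counter

# Hypothetical answers for one group: 5 participants x 6 tasks = 30 attempts.
answers = ['Yes'] * 23 + ['Parts of it'] * 4 + ['No'] * 3
tally = Counter(answers)
for outcome in ('Yes', 'Parts of it', 'No'):
    print(f'{outcome}: {100 * tally[outcome] / len(answers):.0f}%')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness -Goal Completion",
"sec_num": "6.4"
},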
{
"text": "Another metric used to compute Effectiveness is the total task time. Table 2 summarises the total task time (in seconds) per group. A one way ANOVA found significant difference between groups for the total task time, where F(2,12) = 5.28, p=.02. Tukey post-hoc comparisons indicate that the mean score for the EN condition was significantly different to both DE_MT and DE_PE conditions. No significant difference was found between the conditions DE-MT and DE_PE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness -Goal Completion",
"sec_num": "6.4"
},
{
"text": "Efficiency is measured as the number of successful tasks completed divided by the total task time. Table 3 shows the results for Efficiency per group. Even though no statistically significant differences were found, these results suggest that the EN group was the most efficient, followed by the DE_PE group. Although having a higher total time, the DE_PE group completed more tasks than the DE_MT group, which might indicate that the latter 'gave up' on the tasks more easily. ",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Efficiency",
"sec_num": "6.5"
},
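{
"text": "As a minimal sketch, the efficiency measure defined here (completed tasks divided by total task time) can be reproduced directly from the totals reported in the task-time table:

# Totals taken from the task-time table (time in seconds, tasks completed).
totals = {
    'EN':    (3963.29, 18),
    'DE_MT': (5643.93, 17),
    'DE_PE': (5965.39, 23),
}
for group, (secs, done) in totals.items():
    print(f'{group}: {done / secs:.4f} completed tasks per second '
          f'({secs / done:.2f} seconds per completed task)')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficiency",
"sec_num": "6.5"
},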
{
"text": "Once tasks were finished, participants were presented with a 5-point scale questionnaire (1-strongly disagree -5-strongly agree) with the following statements:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satisfaction",
"sec_num": "6.6"
},
{
"text": "Q1-The instructions were usable Q2-The instructions were comprehensible Q3-The instructions allowed me to complete all of the necessary tasks Q4-I was satisfied with the instructions provided Q5-The instructions could be improved upon Q6-I would be happy to consult these instructions again in the future Q7-I would be able to use the software again in the future without re-reading the instructions Q8-I would rather have seen the original (English) version of the instructions 5 Q9-I would recommend the software to a friend/colleague For all statements, except numbers 5 and 8, the higher score (5) indicates higher satisfaction (the opposite is true for statements 5 and 8). Table 4 presents the results for each statement and each group, while Table 5 summarises the median scores. As can be seen, the EN and DE_PE group seem to be more satisfied with the instructions given, finding them more usable/comprehensible when compared to the DE_MT group. It is interesting to note that for Q3, the DE_PE group has a median of 4, which supports the Efficiency scores; that is, the DE_PE group had a higher number of complete tasks and, therefore, scored the 5 Note that statement 8 was not displayed for the EN group. Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 EN 4 3 2 3 4 4 2 xx 3 DE_MT 3 3 2 2 5 3 2 4 3 DE_PE 3 4 4 2 5 4 3 3 3 Score Median Table 5 : Post-task Questionnaire -Median Scores instructions as \"helpful\". Finally, all groups agreed that the instructions need to be improved upon (Q5).",
"cite_spans": [],
"ref_spans": [
{
"start": 679,
"end": 686,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 749,
"end": 756,
"text": "Table 5",
"ref_id": null
},
{
"start": 1217,
"end": 1351,
"text": "Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 EN 4 3 2 3 4 4 2 xx 3 DE_MT 3 3 2 2 5 3 2 4 3 DE_PE 3 4 4 2 5 4 3 3 3",
"ref_id": "TABREF1"
},
{
"start": 1365,
"end": 1372,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Satisfaction",
"sec_num": "6.6"
},
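{
"text": "A minimal sketch of how the median scores in Table 5 can be summarised per group, reverse-coding statements 5 and 8 (where a higher raw score means lower satisfaction) as 6 minus the score; the medians are taken from Table 5, and the single-number summary is our illustration, not a measure used in the paper.

# Medians from Table 5; Q8 was not displayed for the EN group.
medians = {
    'EN':    {'Q1': 4, 'Q2': 3, 'Q3': 2, 'Q4': 3, 'Q5': 4, 'Q6': 4, 'Q7': 2, 'Q9': 3},
    'DE_MT': {'Q1': 3, 'Q2': 3, 'Q3': 2, 'Q4': 2, 'Q5': 5, 'Q6': 3, 'Q7': 2, 'Q8': 4, 'Q9': 3},
    'DE_PE': {'Q1': 3, 'Q2': 4, 'Q3': 4, 'Q4': 2, 'Q5': 5, 'Q6': 4, 'Q7': 3, 'Q8': 3, 'Q9': 3},
}
REVERSED = {'Q5', 'Q8'}  # higher raw score = less satisfied

for group, scores in medians.items():
    adjusted = {q: (6 - s if q in REVERSED else s) for q, s in scores.items()}
    print(f'{group}: mean of adjusted medians = '
          f'{sum(adjusted.values()) / len(adjusted):.2f}')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Satisfaction",
"sec_num": "6.6"
},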
{
"text": "This paper describes an evaluation experiment designed to measure the usability of machine translated, light post-edited and source versions for Online Help Content. Our goal was to verify whether light-post editing would increase usability compared to the raw machine translated versions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7."
},
{
"text": "The results show no significant differences in cognitive effort between raw and post-edited instructions, but differences exist between the post-edited versions and the source language. The cognitive data should not be viewed in isolation, however, since task time measures show the PE group to be faster and more efficient, as well as more satisfied than the MT group. This highlights the importance of collecting qualitative data for measuring usability. The observations are somewhat limited due to the relatively small number of participants and also the fact that only one language pair is used for the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7."
},
{
"text": "For the next phase, we are collecting data from Japanese and Chinese native speakers (a further two challenging languages for MT) in order to learn if results from this paper are replicated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7."
},
{
"text": "https://www.microsoft.com/en-us/translator 2 https://hub.microsofttranslator.com/SignIn?returnURL=%2FH ome%2FIndex 3 Ethics approval was granted by the relevant university research ethics committee.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "With a total of 160 words in the English text and 150 in the German version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Proceedings of MT Summit VIII, Santiago de Compostela, Spain, pp. 103--108 Guerberof, A. (2014) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Does post-editing increase usability? A study with Brazilian Portuguese as target language",
"authors": [
{
"first": "S",
"middle": [],
"last": "Castilho",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "O'brien",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Alvez",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "O'brien",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Seventeenth Annual Conference of the European Association for Machine Translation. Dubrovnik, HR: EAMT",
"volume": "",
"issue": "",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Castilho, S., O'Brien, S., Alvez, F. and O'Brien, M. (2014). Does post-editing increase usability? A study with Brazilian Portuguese as target language. In Proceedings of the Seventeenth Annual Conference of the European Association for Machine Translation. Dubrovnik, HR: EAMT, pp. 183--190.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analysing Post-Editing Performance: Correlations with Years of Translation Experience",
"authors": [
{
"first": "G",
"middle": [],
"last": "De Almeida",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "O'brien",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 14th Annual Conference of the European Association for Machine Translation. St. Raphael, FR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "De Almeida, G. and O'Brien, S. (2010). Analysing Post-Editing Performance: Correlations with Years of Translation Experience. In Proceedings of the 14th Annual Conference of the European Association for Machine Translation. St. Raphael, FR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "What Counts as Useful Advice in a University Postediting Training Context? Report on a case study",
"authors": [
{
"first": "Ilse",
"middle": [],
"last": "Depraetere",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 14th Annual Conference of the European Association for Machine Translation. St. Raphael, FR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Depraetere, Ilse. (2010). What Counts as Useful Advice in a University Postediting Training Context? Report on a case study. In Proceedings of the 14th Annual Conference of the European Association for Machine Translation. St. Raphael, FR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A User-Based Usability Assessment of Raw Machine Translated Technical Instructions",
"authors": [
{
"first": "S",
"middle": [],
"last": "Doherty",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "O'brien",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Tenth Conference of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doherty, S. and O'Brien, S. (2012) A User-Based Usability Assessment of Raw Machine Translated Technical Instructions. In Proceedings of the Tenth Conference of the Association for Machine Translation in the Americas. San Diego, CA: AMTA, pp. 1--10.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Assessing the Usability of Raw Machine Translation Output: A User-Centered Study using Eye Tracking",
"authors": [
{
"first": "S",
"middle": [],
"last": "Doherty",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "O'brien",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Human-Computer Interaction",
"volume": "30",
"issue": "1",
"pages": "40--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doherty, S. and O'Brien, S. (2014). Assessing the Usability of Raw Machine Translation Output: A User-Centered Study using Eye Tracking. International Journal of Human-Computer Interaction, 30(1), pp. 40--51.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluation method for determining groups of users who find MT useful",
"authors": [
{
"first": "M",
"middle": [],
"last": "Fuji",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hatanaka",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ito",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kamai",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sukehiro",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Yoshimi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ishara",
"suffix": ""
}
],
"year": 2001,
"venue": "Leuven",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuji, M., Hatanaka, E., Ito, S., Kamai, H., Sukehiro, T., Yoshimi, T., & Ishara, H. (2001). Evaluation method for determining groups of users who find MT useful. In Leuven, Belgium: EAMT, pp. 73--80.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Eye Tracking as a Tool for Machine Translation Error Analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Danielsson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bremin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Karlsson",
"suffix": ""
},
{
"first": "A",
"middle": [
"P"
],
"last": "Lillkull",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wester",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1121--1126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stymne, S., Danielsson, H., Bremin, S., Hu, H., Karlsson, J., Lillkull, A.P., and Wester, M. (2012). Eye Tracking as a Tool for Machine Translation Error Analysis. In Proceedings of the Language Resources and Evaluation Conference. Istanbul, pp. 1121--1126.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Evaluation of MT Systems by TOEFL",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Shirai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsutsumi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Matsumura",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yoshikawa",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 5 th International Conference on Theoretical and Methodological Issues in Machine Translation. (TMI-93)",
"volume": "",
"issue": "",
"pages": "252--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomita, M., Shirai, M., Tsutsumi, J., Matsumura, M. and Yoshikawa, Y. (1993). Evaluation of MT Systems by TOEFL. In Proceedings of the 5 th International Conference on Theoretical and Methodological Issues in Machine Translation. (TMI-93), pp. 252-265.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "presents the baseline reading task mean fixation duration (in seconds) for each group (EN=0",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Task design",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Mean Fixation Duration (secs) Figure 3: Total Fixation Count",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Mean Visit Duration (secs)",
"uris": null
},
"TABREF1": {
"num": null,
"html": null,
"text": "Total Number of Completed Tasks",
"content": "<table><tr><td/><td>Total Time (secs)</td><td>Total number of</td><td>TOTAL (secs per task)</td></tr><tr><td/><td/><td>complete tasks</td><td/></tr><tr><td>EN</td><td>3963.29</td><td>18</td><td>220.18</td></tr><tr><td>DE_MT</td><td>5643.93</td><td>17</td><td>332.00</td></tr><tr><td>DE_PE</td><td>5965.39</td><td>23</td><td>259.36</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>: Efficiency</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"text": "Total Task Time (Seconds)",
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"text": "Post-task Questionnaire Scores",
"content": "<table/>",
"type_str": "table"
}
}
}
}