|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:22:16.175539Z" |
|
}, |
|
"title": "Approaches to the Anonymisation of Sign Language Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Isard", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Hamburg", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we survey the state of the art for the anonymisation of sign language corpora. We begin by exploring the motivations behind anonymisation and the close connection with the issue of ethics and informed consent for corpus participants. We detail how the names which should be anonymised can be identified. We then describe the processes which can be used to anonymise both the video and the annotations belonging to a corpus, and the variety of ways in which these can be carried out. We provide examples for all of these processes from three sign language corpora in which anonymisation of the data has been performed.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we survey the state of the art for the anonymisation of sign language corpora. We begin by exploring the motivations behind anonymisation and the close connection with the issue of ethics and informed consent for corpus participants. We detail how the names which should be anonymised can be identified. We then describe the processes which can be used to anonymise both the video and the annotations belonging to a corpus, and the variety of ways in which these can be carried out. We provide examples for all of these processes from three sign language corpora in which anonymisation of the data has been performed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The purpose of anonymisation is to ensure that no personal information is shared for which the person concerned has not given their informed consent. The discussion of what exactly informed consent is, and how to obtain it, is not a simple one (Crasborn, 2010; Rock, 2001; McEnery and Hardie, 2011; Singleton et al., 2014; Schembri et al., 2013) . The issues vary depending on among other things the size of the community in which the corpus is collected, the nature of the corpus content and the technological background of the subjects, and it is important to consult the subjects about what they would find appropriate. When describing data collection with the shared signing community in Adamorobe, Kusters (2012, p. 32) observes: \"As for anonymity it appeared that people were happy for me to use their real names. The idea of changing their names in 'a book that is about them', seemed very odd to them.\" Singleton et al. (2014, supplementary material) asked Deaf focus group participants for suggestions about how to use material in research presentations while maintaining anonymity, and they suggested the use of avatars or actors to reproduce the data, or digital editing which could obscure the subject's identity. Conversations in sign language corpora also often contain mentions of third parties, who are known to the corpus participants but have not been asked for or given any kind of consent for information about them to be shared publicly. Particularly when small communities are involved, it is often easy to identify a person from minimal amounts of information, and care should therefore be taken to obscure as much of this information as possible if videos and annotations are going to be available to the public. 
Before any analysis or annotation work is carried out on a corpus, participants should always be given a copy of their own recordings and allowed the further opportunity to refuse consent for all or any parts of the recordings to be shown or used in any way. The process of anonymisation is expensive and time-consuming, and many corpus projects have taken the decision to publicly release only parts of the data where no personal information is revealed, or to ensure that informed consent has been acquired to the best standard possible, and/or that anyone who has access to the data has signed a confidentiality agreement and understands exactly how the data may be used for further research. In this paper, we describe what the options are once the decision to carry out anonymisation has been taken, and various ways in which these can be implemented. Throughout the rest of the paper, examples of the anonymisation processes and techniques used by the three corpora briefly described below will be used.", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 260, |
|
"text": "(Crasborn, 2010;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 272, |
|
"text": "Rock, 2001;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 298, |
|
"text": "McEnery and Hardie, 2011;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 322, |
|
"text": "Singleton et al., 2014;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 345, |
|
"text": "Schembri et al., 2013)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 724, |
|
"text": "Kusters (2012, p. 32)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 911, |
|
"end": 958, |
|
"text": "Singleton et al. (2014, supplementary material)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The DGS Corpus is a corpus of German Sign Language (DGS). It consists of 560 hours of video dialogues, and about 50 hours has been made available as the Public DGS Corpus 1 (Jahn et al., 2018) . The data was elicited using 18 different tasks, some of which involved free conversation where personal information about third parties was sometimes mentioned. The Public DGS Corpus video and annotations have been anonymised to remove references which would allow the identification of third parties.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 192, |
|
"text": "(Jahn et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The NGT Corpus is a corpus of Dutch Sign Language (NGT). It consists of dialogues between 92 participants and is available online 2 (Crasborn and Zwitserlood, 2008) . A number of different elicitation tasks were used and some of the conversations involve references which could identify third parties. The available annotations have been anonymised but the video has not.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 164, |
|
"text": "(Crasborn and Zwitserlood, 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The Rudge Corpus is a small corpus of British Sign Language (BSL) collected by Luke Rudge for his PhD thesis on the topic of the use of Systemic Functional Grammar in the analysis of BSL (Rudge, 2018) . There were 12 participants who gave pre-prepared presentations about a prominent period in their lives, which sometimes revealed personal information. The videos and annotations have been anonymised but they are not publicly available.", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 200, |
|
"text": "(Rudge, 2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In sign language corpora, it is impossible to completely anonymise the video data, because both the face and hands of the participants must be fully visible for the content to be understandable (Quer and Steinbach, 2019; Hanke, 2016; Crasborn, 2010) . Chen Pichler et al. (2016, page 32) note that: \"there appears to be virtually unanimous agreement that total anonymization, long taken as a standard practice for medical data, is not feasible for language data that include audio and/or video components\". Although it is not possible to completely conceal the identities of the participants in a sign language corpus, it is nonetheless necessary to ensure that as few of their personal details are revealed as possible. In addition, care must be taken to obscure personal information of third parties who are mentioned during the dialogue, if it could lead to their identification. These third parties will not have had the opportunity to give their informed consent for any sort of appearance in the corpus. There are two main situations in which the anonymisation of sign language corpora is carried out:", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 220, |
|
"text": "(Quer and Steinbach, 2019;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 233, |
|
"text": "Hanke, 2016;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 249, |
|
"text": "Crasborn, 2010)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What to Anonymise", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 anonymisation of a whole corpus for wider distribution to a larger team or outside researchers", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What to Anonymise", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 anonymisation of single words or phrases for use in settings such as a conference talk, seminar or sign language dictionary", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What to Anonymise", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In both cases, it is first necessary to identify which information needs to be anonymised. In a small corpus it may be possible to make the selection by watching all the videos, but in a larger corpus it may be helpful to use some automatic processing. The anonymisation of videos is described in Section 3, and of annotations in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What to Anonymise", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "There are a number of different ways in which video can be anonymised. These can be divided into two categories, those which conceal all or part of a video, and those which reproduce a video. Concealing can be effected on part or all of a video frame with the use of blurring or pixellation, or by obscuring the image entirely. Reproduction can be carried out by using either an actor or a computer-generated avatar. These two approaches are generally used for different purposes. Reproduction can conceal the identity of the signers themselves, while concealing preserves the anonymity of third parties by hiding references to people or places. No detailed studies have been published about the extent to which reproduction affects the viewer's understanding of a sign language video, or what level of blurring is necessary to ensure that the movements cannot be distinguished. In the related area of spoken dialogue research, the CASE corpus of Skype dialogues experimented with video anonymisation using Adobe Premiere pixel, art, and transformation filters, and chose a contour filter. In control tests, they discovered that when this filter was used, subjects did not recognise themselves (Diemer et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1194, |
|
"end": 1215, |
|
"text": "(Diemer et al., 2016)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Anonymisation of Video", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Concealment can be used on just part of the image of a video, and usually over a small time frame. The viewer's experience is not hugely disrupted, as only a sign or two will be concealed. Inevitably some information will be lost, but this can be kept to a minimum. The concealment can be carried out by blackening all or part of the image (adding one or more black rectangles), or by blurring or pixellating all or part of the image to such an extent that the signing or mouthing is no longer recognisable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concealment", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "In the Public DGS Corpus mentions of sensitive information in videos are anonymised by blackening sections of the image (Bleicken et al., 2016). The timings from the annotation tiers (see Section 4) are used to identify the relevant timespan. Experiments were carried out which showed that if the whole timespan was blackened, this invalidated a whole sentence for linguistic analysis, because it disturbed suprasegmental signals. They therefore imposed one or more black rectangles on the image, to cover the mouth, one or both hands and/or the trunk, depending on the position of the sign. Experiments also showed that blackening was less disturbing to viewers than pixellation. OpenPose analysis (Cao et al., 2017) had already been carried out on the corpus (Schulder, 2019) , providing machine-readable information on the location of various body parts, such as hands, shoulders, and mouth, so this was used to find the location of the relevant body parts, and the size and shape of the rectangles were then adjusted by hand. An example screenshot is shown in 3.1.1, where the mouth, cheeks, right hand and right arm of the signer have been hidden, along with a portion of the torso in front of which the sign was being performed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 761, |
|
"end": 777, |
|
"text": "(Schulder, 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Blackening", |
|
"sec_num": "3.1.1." |
|
}, |
|
{ |
|
"text": "For the Rudge corpus, the author went through the video recordings and noted where participants had signed a proper name of a person, specific location or any other information which could identify a third party. The video was then loaded into editing software such as Final Cut Pro or Adobe After Effects, and a local blur or pixellation filter was applied to the signer's hands and mouth for the duration of the relevant sign, which was normally only a few tenths of a second during fluent signing (Rudge, personal communication, January 2020). This ensured that any third party information had been removed before the recordings were passed to other researchers for annotation. No screenshots are available as participants did not give consent for any images to be shown to people outside the initial small research group.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pixellation", |
|
"sec_num": "3.1.2." |
|
}, |
|
{ |
|
"text": "Reproduction of a corpus can in theory be carried out by either humans or computer-generated avatars. Some corpus examples where human actors have been used are described in Section 3.2.1 and the steps which would be necessary for avatar reproduction in Section 3.2.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reproduction", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "For total anonymity, short examples from a corpus can be reproduced by a human actor. In this case complete anonymity is assured, but there are several disadvantages as a result. The process is very labour-intensive, requiring not only the time of the signer but also of a studio and technicians to carry out the recording. In addition, no matter how well the second signer copies the original, some information will be lost. Performativity is a vital part of sign language and it is impossible to fully separate the affective and grammatical functions of facial expressions. The participants in the Rudge corpus had agreed only to their recordings being seen by the author and a limited number of other researchers who worked on verification of the data. Because the thesis is publicly available, examples used in it were reproduced by the author or another signer, to preserve the anonymity of the original participants (Rudge, 2018 and personal communication, January 2020). The DGS Corpus is being used in the compilation of a Dictionary of German Sign Language and the preference is to use examples taken directly from the corpus, for the reasons discussed in detail in Langer et al. (2018) . However, in very occasional cases where the dictionary compilers want to use an example which contains personal information about a third party, they re-record the example with a signing model and replace any personal names in the re-recording and the associated translation with a common German family name.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1175, |
|
"end": 1195, |
|
"text": "Langer et al. (2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Actors", |
|
"sec_num": "3.2.1." |
|
}, |
|
{ |
|
"text": "In practice, although avatars have been improving rapidly in quality, no large-scale avatar reproduction has been carried out. A survey of the state of the art in sign language avatars can be found in (Bragg et al., 2019) . There are a number of technical problems with the use of avatars for sign language, and some of these are related to the process of creating the content and ensuring that the correct manual and non-manual gestures are created. In the case of reproduction these particular issues are avoided, because the data for the avatar comes directly from the original videos. The problems of designing avatars which are acceptable to the Deaf community in terms of appearance and comprehensibility remain, and it is essential that the acceptability of avatars be systematically reviewed and assessed before they are used (Kipp et al., 2011) . In order to use avatars for reproduction, the original videos must first be processed using pose estimation software, which can identify particular body parts including hands, Figure 2 : Visual representation of the pose information provided by OpenPose, computed for a video from the DGS-Korpus project. Sets of keypoints are generated for the body, the face and each hand. Lines between the points are added to the visual representation to indicate the logical connection between individual keypoints.", |
|
"cite_spans": [ |
|
{ |
|
"start": 832, |
|
"end": 851, |
|
"text": "(Kipp et al., 2011)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 221, |
|
"text": "(Bragg et al., 2019)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1030, |
|
"end": 1038, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Avatars", |
|
"sec_num": "3.2.2." |
|
}, |
|
{ |
|
"text": "OpenPose analysis from the DGS corpus (Schulder, 2019) is shown in Figure 3 .2.2, illustrating the keypoints identified by the software and lines between the points to indicate logical connections between them. However, OpenPose only produces two-dimensional images, and additional (extremely time and resource intensive) processing is required to reconstruct three-dimensional images (Xiang et al., 2019) . The resulting machine-readable information on the location of various body parts could then be used to animate an avatar which would reproduce the desired data, but as far as we are aware, no sign language avatar has so far been tested on this output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 54, |
|
"text": "(Schulder, 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 406, |
|
"text": "(Xiang et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 75, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "arms, and facial features. A visual representation of an", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If OpenPose data are made publicly available, they must also be anonymised, to the same level as the videos on which they were based. If the data were later used, for example to animate an avatar, they could make personal information visible. OpenPose data are available to download as part of the Public DGS Corpus (Schulder, 2019) , and they have been anonymised to remove keypoints for timespans which were previously chosen for anonymisation as described in Section 4.1. It is possible to differentiate between keypoints which have been anonymised and those which are missing because the body part is temporarily hidden (for example when a person puts a hand behind their head), so that if the OpenPose data were used to animate an avatar, anonymised keypoints could be covered by a black square, as with video blackening (see Section 3.1.1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 332, |
|
"text": "(Schulder, 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OpenPose data", |
|
"sec_num": "3.2.3." |
|
}, |
|
{ |
|
"text": "Before the anonymisation of annotations can be carried out, the sensitive names must first be identified. In a small corpus, this may have been done by watching the video data, but where many hours of video have been translated and annotated by a team of researchers, automatic methods can also be used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Anonymisation of Annotations", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "For the Rudge corpus, names were found by manual inspection of the videos (see Section 3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Name Identification Methods", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "In the NGT corpus, information which had been manually annotated in the gloss and mouth tiers was used to identify names which needed to be anonymised (Crasborn and Bank, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 176, |
|
"text": "(Crasborn and Bank, 2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Name Identification Methods", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "The DGS-Korpus project tested a subset of the DGS corpus to see how reliable different techniques were for finding sensitive items which should be anonymised (Bleicken et al., 2016) . Because German translations had already been carried out, they could use computational linguistic tools for German which are available through WebLicht (Hinrichs et al., 2010) as pre-defined chains. They used four approaches and compared the results for each to a ground truth defined as the sum of the names correctly identified by each technique. The four approaches which they used were:", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 181, |
|
"text": "(Bleicken et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 359, |
|
"text": "(Hinrichs et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Name Identification Methods", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "\u2022 Manual inspection of the videos by a deaf annotator who was asked to mark every occurrence of a name", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Name Identification Methods", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "\u2022 Extraction of potential names from the annotations, which were then checked against the German translations; when a match was found, a manual inspection was carried out", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Name Identification Methods", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "\u2022 Use of named entity recognition on the German translations", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Name Identification Methods", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "\u2022 Checking mouthing annotations and translations against name lists 
When comparing the final outcomes of all the methods, they found that the most effective process was to combine the automatic methods with a one-pass manual inspection. The DGS-Korpus project found that they were more conservative in their selection of data which needed to be anonymised than the participants themselves had been after reviewing their own recordings. They decided therefore that it was unfair to make the participants entirely responsible for these decisions, and better to be more cautious, and carry out more anonymisation rather than less, in an effort to prevent any identifiable information on third parties being released accidentally.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Name Identification Methods", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "Once the names to be anonymised have been identified, the actual anonymisation can be done using either categorisation or pseudonymisation. Pseudonymisation involves the use of replacement names (Section 4.2). In categorisation, a name is usually replaced by a string indicating the type of proper name plus a numeric identifier, so that subsequent mentions in the same dialogue can be seen to be referring to the same entity (Section 4.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Name Identification Methods", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "When pseudonymisation is carried out, the pseudonyms can be chosen to match the original names on as many levels as desired. This could for example involve choosing replacement cities of approximately the same size, or family names which originate from the same geographical region. Anonymisation with pseudonyms was for example carried out in the spoken German FOLK corpus (Schmidt, 2016; Winterscheid, 2015) . One disadvantage of this approach is that it can be very time-consuming, as time must be spent choosing replacement names and making sure that they fit all of the chosen criteria. There are currently no sign language corpora for which a description of anonymisation using pseudonyms is available. Issues to consider would include the question of how to define \"similar\" names in terms of sign language phonology.", |
|
"cite_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 389, |
|
"text": "(Schmidt, 2016;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 409, |
|
"text": "Winterscheid, 2015)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pseudonymisation", |
|
"sec_num": "4.2." |
|
}, |
|
{ |
|
"text": "Categorisation is a quicker and simpler process than pseudonymisation because it is only necessary to identify the type of a proper name in order to create its replacement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Categorisation", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "In the NGT corpus, glosses and annotation tiers are anonymised so that it will not be possible for anyone to make a simple automatic search for names. All glosses which refer to participants and other people who are not considered to be in the public domain are replaced by the type * NAMESIGN. In mouthing and translation tiers, they are replaced by the type * eigennaam (Crasborn and Bank, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 397, |
|
"text": "(Crasborn and Bank, 2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Categorisation", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "In the Rudge corpus, the timestamps from the manual analysis of the video data (see Section 3.1.2) were used to find places in the translation and annotations where names, locations and other personal data needed to be anonymised. They were replaced with types such as [NAME] or [LOCATION] . If there were multiple instances of anonymisation in the same clause or in quick succession, a suffix was added of the form [NAME-a], [NAME-b] , etc. so that any following indicating verbs or signs requiring more complex spatio-kinetic features (e.g., placement in the signing space) could still be understood in spite of the visual noise (Rudge, 2018 and personal communication, January 2020). The DGS-Korpus project examined each person name to determine whether it belonged to someone for whom information is already available in the public domain, such as television personalities or politicians, whose names would not then be anonymised. They also defined a population threshold above which places were considered to be large enough to not require anonymisation. Proper names in the translation and mouthing annotations, and most of the gloss tier, were replaced by numbered placeholders of the form Name#1, Name#2, etc. so that it is still possible to tell when the same person or place is referred to more than once.", |
|
"cite_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 275, |
|
"text": "[NAME]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 289, |
|
"text": "[LOCATION]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 434, |
|
"text": "[NAME-b]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Categorisation", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "It must always be kept in mind that in a large corpus it is basically impossible to ensure that all possible identifiable information has been removed, and that this must be made clear to the participants as part of the process of obtaining informed consent. For example, in one dialogue from the Public DGS Corpus (English translation shown below), a place name is anonymised, but two sentences later it is mentioned that it is the previous residence of a princess from the 18th century who, as a person in the public eye, would not normally have her name anonymised:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Thoughts", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "My hometown Place#1 also has a small tourist attraction. There used to be a castle right where the German Catholic Church is located today. The Austrian princess Elisabeth used to live there.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Thoughts", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "It would therefore be theoretically possible for someone who comes from the same area or has a thorough knowledge of the history of the region to figure out the name of the participant's home town. To avoid this, the name of the princess would then also have to be anonymised, and possibly even her nationality, but at some point a decision has to be made about how far to continue the process, and in this case, it was decided that the name of the princess would not be anonymised.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final Thoughts", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "This work was supported by the BMBF (German Federal Ministry of Education and Research) Project QUEST: Quality-Established 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Bleicken, J., Hanke, T., Salden, U., and Wagner, S. (2016) . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 58, |
|
"text": "Hanke, T., Salden, U., and Wagner, S. (2016)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bibliographical References", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "http://ling.meine-dgs.de 2 https://www.ru.nl/corpusngten/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.slm.uni-hamburg.de/en/ifuu/ forschung/forschungsprojekte/quest.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "What Does \"Informed Consent\" Mean in the Internet Age? Publishing Sign Language Corpora as Open Content", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Crasborn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Sign Language Studies", |
|
"volume": "10", |
|
"issue": "2", |
|
"pages": "276--290", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Crasborn, O. (2010). What Does \"Informed Consent\" Mean in the Internet Age? Publishing Sign Language Corpora as Open Content. Sign Language Studies, 10(2):276-290.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Compiling computer-mediated spoken language corpora: Key issues and recommendations", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Diemer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-L", |
|
"middle": [], |
|
"last": "Brunner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Journal of Corpus Linguistics", |
|
"volume": "21", |
|
"issue": "3", |
|
"pages": "348--371", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diemer, S., Brunner, M.-L., and Schmidt, S. (2016). Com- piling computer-mediated spoken language corpora: Key issues and recommendations. International Journal of Corpus Linguistics, 21(3):348-371.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Towards a Visual Sign Language Corpus Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hanke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Seventh Workshop on the Representation and Processing of Sign Languages: Corpus Mining at LREC 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanke, T. (2016). Towards a Visual Sign Language Corpus Linguistics. In Proceedings of the Seventh Workshop on the Representation and Processing of Sign Languages: Corpus Mining at LREC 2016, pages 89-92, Portoro\u017e, Slovenia.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "We-bLicht: Web-Based LRT Services for German", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hinrichs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hinrichs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Zastrow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the ACL 2010 System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hinrichs, E., Hinrichs, M., and Zastrow, T. (2010). We- bLicht: Web-Based LRT Services for German. In Pro- ceedings of the ACL 2010 System Demonstrations, pages 25-29, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Publishing DGS corpus data: Different Formats for Different Needs", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Jahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Konrad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Langer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wagner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hanke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eighth Workshop on the Representation and Processing of Sign Languages: Involving the Language Community at LREC 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "83--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jahn, E., Konrad, R., Langer, G., Wagner, S., and Hanke, T. (2018). Publishing DGS corpus data: Different Formats for Different Needs. In Proceedings of the Eighth Work- shop on the Representation and Processing of Sign Lan- guages: Involving the Language Community at LREC 2018, pages 83-90, Miyazaki, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Sign Language Avatars: Animation and Comprehensibility", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kipp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Heloir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Intelligent Virtual Agents", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kipp, M., Heloir, A., and Nguyen, Q. (2011). Sign Lan- guage Avatars: Animation and Comprehensibility. In Hannes H\u00f6gni Vilhj\u00e1lmsson, et al., editors, Intelligent Virtual Agents, Lecture Notes in Computer Science, pages 113-126, Berlin, Heidelberg. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Being a deaf white anthropologist in Adamorobe: Some ethical and methodological issues", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kusters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Sign Languages in Village Communities: Anthropological and Linguistic Insights", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kusters, A. (2012). Being a deaf white anthropologist in Adamorobe: Some ethical and methodological issues. In Sign Languages in Village Communities: Anthropo- logical and Linguistic Insights, pages 27-52. De Gruyter Mouton, Berlin, Boston.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Authentic Examples in a Corpus-Based Sign Language Dictionary -Why and How", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Langer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "W\u00e4hl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bleicken", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the XVIII EURALEX International Congress: Lexicography in Global Contexts", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "483--497", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Langer, G., M\u00fcller, A., W\u00e4hl, S., and Bleicken, J. (2018). Authentic Examples in a Corpus-Based Sign Language Dictionary -Why and How. In Proceedings of the XVIII EURALEX International Congress: Lexicography in Global Contexts., pages 483-497, Ljubljana, Slovenia.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Corpus Linguistics: Method, Theory and Practice. Cambridge Textbooks in Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mcenery", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McEnery, T. and Hardie, A. (2011). Corpus Linguistics: Method, Theory and Practice. Cambridge Textbooks in Linguistics. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Handling Sign Language Data: The Impact of Modality", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Quer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Steinbach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Frontiers in Psychology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quer, J. and Steinbach, M. (2019). Handling Sign Lan- guage Data: The Impact of Modality. Frontiers in Psy- chology, 10.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Policy and Practice in the Anonymisation of Linguistic Data", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Rock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "International Journal of Corpus Linguistics", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "1--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rock, F. (2001). Policy and Practice in the Anonymisa- tion of Linguistic Data. International Journal of Corpus Linguistics, 6(1):1-26.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Analysing British Sign Language through the Lens of Systemic Functional Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Rudge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rudge, L. A. (2018). Analysing British Sign Lan- guage through the Lens of Systemic Functional Lin- guistics. Ph.D. thesis, University of the West of Eng- land. https://uwe-repository.worktribe. com/output/863200.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Building the British Sign Language Corpus. Language Documentation & Conservation", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Schembri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fenlon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Rentelis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Reynolds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Cormier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "136--154", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schembri, A., Fenlon, J., Rentelis, R., Reynolds, S., and Cormier, K. (2013). Building the British Sign Lan- guage Corpus. Language Documentation & Conserva- tion, 7:136-154.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Construction and Dissemination of a Corpus of Spoken Interaction -Tools and Workflows in the FOLK project", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Journal for Language Technology and Computational Linguistics", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "127--154", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schmidt, T. (2016). Construction and Dissemination of a Corpus of Spoken Interaction -Tools and Workflows in the FOLK project. Journal for Language Technology and Computational Linguistics, 31(1):127-154.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "OpenPose in the Public DGS Corpus", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Schulder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schulder, M. (2019). OpenPose in the Public DGS Corpus. Project Note AP06-2019-01, Institute for German Sign Language, Hamburg University, Hamburg, Germany. https://www.sign-lang.uni-hamburg.de/ dgs-korpus/arbeitspapiere/AP06-2019- 01.html.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Toward Ethical Research Practice With Deaf Participants", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Singleton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hanumantha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Empirical Research on Human Research Ethics", |
|
"volume": "9", |
|
"issue": "3", |
|
"pages": "59--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Singleton, J. L., Jones, G., and Hanumantha, S. (2014). Toward Ethical Research Practice With Deaf Partici- pants. Journal of Empirical Research on Human Re- search Ethics, 9(3):59-66.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Maskierung. Working Paper, Institut f\u00fcr Deutsche Sprache", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Winterscheid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Winterscheid, J. (2015). Maskierung. Working Pa- per, Institut f\u00fcr Deutsche Sprache, Mannheim. https://ids-pub.bsz-bw.de/frontdoor/ deliver/index/docId/3904/file/ Winterscheid_Maskierung_2015.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Monocular total capture: Posing face, body, and hands in the wild", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Joo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Sheikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10965--10974", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang, D., Joo, H., and Sheikh, Y. (2019). Monocular to- tal capture: Posing face, body, and hands in the wild. In Proceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 10965-10974, Long Beach, CA, USA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Screenshot from the DGS Corpus, anonymised through blackening with one black rectangle over the mouth and cheeks and another over the right hand and arm and the top right portion of the torso.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Using a Language Technology Infrastructure for German in order to Anonymize German Sign Language Corpus Data. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 3303-3306, Portoro\u017e, Slovenia. Bragg, D., Koller, O., Bellard, M., Berke, L., Boudreault, P., Braffort, A., Caselli, N., Huenerfauth, M., Kacorri, H., Verhoef, T., Vogler, C., and Ringel Morris, M. (2019). Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '19, pages 16-31, Pittsburgh, PA, USA. Cao, Z., Simon, T., Wei, S.-E., and Sheikh, Y. (2017). Realtime multi-person 2D pose estimation using part affinity fields. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7291-7299, Honolulu, HI, USA. Chen Pichler, Deborah, D., Hochgesang, J., Simons, Doreen, D., and Lillo-Martin, Diane, D. (2016). Community Input on Re-consenting for Data Sharing. In Proceedings of the Seventh Workshop on the Representation and Processing of Sign Languages: Corpus Processing at LREC 2016, Portoro\u017e, Slovenia. Crasborn, O. and Bank, R. (2015). Corpus NGT Anonymisation Protocol. https://www.academia.edu/ 40438732/Corpus_NGT_Anonymisation_ Protocol. Crasborn, O. A. and Zwitserlood, I. E. P. (2008). The Corpus NGT: An Online Corpus for Professionals and Laymen. In Proceedings of the Third Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora at LREC 2008, pages 44-49, Marrakech, Morocco.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |