{
"paper_id": "L16-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:02:52.759679Z"
},
"title": "Benchmarking Multimedia Technologies with the CAMOMILE Platform: the Case of Multimodal Person Discovery at MediaEval 2015",
"authors": [
{
"first": "Johann",
"middle": [],
"last": "Poignant",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "F-91405",
"settlement": "Orsay"
}
},
"email": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "Bredin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "F-91405",
"settlement": "Orsay"
}
},
"email": ""
},
{
"first": "Claude",
"middle": [],
"last": "Barras",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Paris-Saclay",
"location": {
"postCode": "F-91405",
"settlement": "Orsay"
}
},
"email": ""
},
{
"first": "Mickael",
"middle": [],
"last": "Stefas",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Pierrick",
"middle": [],
"last": "Bruneau",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Tamisier",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we claim that the CAMOMILE collaborative annotation platform (developed in the framework of the eponymous CHIST-ERA project) eases the organization of multimedia technology benchmarks, automating most of the campaign technical workflow and enabling collaborative (hence faster and cheaper) annotation of the evaluation data. This is demonstrated through the successful organization of a new multimedia task at MediaEval 2015, Multimodal Person Discovery in Broadcast TV.",
"pdf_parse": {
"paper_id": "L16-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we claim that the CAMOMILE collaborative annotation platform (developed in the framework of the eponymous CHIST-ERA project) eases the organization of multimedia technology benchmarks, automating most of the campaign technical workflow and enabling collaborative (hence faster and cheaper) annotation of the evaluation data. This is demonstrated through the successful organization of a new multimedia task at MediaEval 2015, Multimodal Person Discovery in Broadcast TV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For decades, NIST evaluation campaigns have been driving research in the field of human language technology (Martin et al., 2004) , recently followed by the CLEF (Peters and Braschler, 2002) and ESTER/ETAPE (Gravier et al., 2004) initiatives. The concept has been successfully transposed to other research areas, such as image recognition (ImageNet Large Scale Visual Recognition Challenge (Russakovsky et al., 2015) ), video (TRECVID (Smeaton et al., 2006)) or multimedia indexing (MediaEval (Larson et al., 2015) ). More generally, evaluation campaigns allow the assessment of experimental research in fields where human perception and decision must be reproduced by machine learning algorithms (Geoffrois, 2008) . The general workflow of\u00e0 la NIST evaluation campaigns comprises the following stages (Martin et al., 2004) : specification of the task; definition of the evaluation metric and provision of an automatic scoring software; design and annotation of the training, development and evaluation corpora; definition of evaluation rules, schedule, protocols and submission formats; sharing of participant results through system descriptions and workshop communications. Automatic scoring is made possible by the manual annotation of the data according to the task definition. Costly and time-consuming, this annotation step usually is the main bottleneck of evaluation campaigns. When addressing new tasks in multimodal perception, it becomes challenging (if not impossible) to pre-annotate the ever-increasing volume of multimedia data. A compromise has been successfully explored in the TREC and TRECVid campaigns, where the annotation of a small (but carefully chosen (Yilmaz and Aslam, 2006) ) subset of the test data is bootstrapped by the participants' submissions. In this paper, we claim that the CAMOMILE collaborative annotation platform (developed in the framework of the eponymous CHIST-ERA project) eases the organization of multimedia technology benchmarks, automating most of the campaign technical workflow and enabling collaborative (hence faster and cheaper) annotation of the evaluation data. This is demonstrated through the successful organi-zation of a new multimedia task at MediaEval 2015, Multimodal Person Discovery in Broadcast TV (Poignant et al., 2015b) .",
"cite_spans": [
{
"start": 108,
"end": 129,
"text": "(Martin et al., 2004)",
"ref_id": "BIBREF9"
},
{
"start": 162,
"end": 190,
"text": "(Peters and Braschler, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 207,
"end": 229,
"text": "(Gravier et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 390,
"end": 416,
"text": "(Russakovsky et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 435,
"end": 458,
"text": "(Smeaton et al., 2006))",
"ref_id": "BIBREF16"
},
{
"start": 493,
"end": 514,
"text": "(Larson et al., 2015)",
"ref_id": null
},
{
"start": 697,
"end": 714,
"text": "(Geoffrois, 2008)",
"ref_id": "BIBREF3"
},
{
"start": 802,
"end": 823,
"text": "(Martin et al., 2004)",
"ref_id": "BIBREF9"
},
{
"start": 1677,
"end": 1701,
"text": "(Yilmaz and Aslam, 2006)",
"ref_id": "BIBREF17"
},
{
"start": 2264,
"end": 2288,
"text": "(Poignant et al., 2015b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The objective of this new task is to make TV archives fully exploitable and searchable through people indexing. Participants were provided with a collection of TV broadcast recordings pre-segmented into shots. Each shot had to be automatically tagged with the names of people both speaking and appearing at the same time during the shot. Since one cannot assume that biometric models of persons of interest are available at indexing time, the main novelty of the task was that the list of persons was not provided a priori. Biometric models (either voice or face) could not be trained on external data. The only way to identify a person was by finding their name in the audio (using speech transcription -ASR) or visual (using optical character recognition -OCR) streams and associating them to the correct person -making the task completely unsupervised with respect to prior biometric models. To ensure that participants followed this strict \"no biometric supervision\" constraint, each hypothesized name had to be backed up by an \"evidence\": a unique and carefully selected shot proving that the person actually holds this name (e.g. a shot showing a text overlay introducing the person by their name). In real-world conditions, this evidence would help a human annotator double-check the automatically-generated index, even for people they did not know beforehand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Person Discovery in Broadcast TV",
"sec_num": "2."
},
{
"text": "Participants were provided with a fully functional baseline system, allowing them to only focus on some aspects of the task (e.g. speaker diarization) while still being able to rely on the baseline modules for the other ones (e.g. optical character recognition). The task was evaluated as a standard information retrieval task using a metric derived from mean average precision. Nine teams (Nishi et al., 2015; Budnik et al., 2015; Lopez-Otero et al., 2015; India et al., 2015; Poignant et al., 2015a; Bendris et al., 2015 ; dos Santos Jr et al., 2015; Le et al., 2015) managed to reach the submission deadline, amounting to a total of 70 submitted runs. For further details about the task, dataset and metrics, the interested reader can refer to (Poignant et al., 2015b) .",
"cite_spans": [
{
"start": 390,
"end": 410,
"text": "(Nishi et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 411,
"end": 431,
"text": "Budnik et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 432,
"end": 457,
"text": "Lopez-Otero et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 458,
"end": 477,
"text": "India et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 478,
"end": 501,
"text": "Poignant et al., 2015a;",
"ref_id": "BIBREF12"
},
{
"start": 502,
"end": 522,
"text": "Bendris et al., 2015",
"ref_id": "BIBREF0"
},
{
"start": 747,
"end": 771,
"text": "(Poignant et al., 2015b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multimodal Person Discovery in Broadcast TV",
"sec_num": "2."
},
{
"text": "The CAMOMILE platform was initially developed for supporting collaborative annotation of multimodal, multilingual and multimedia data (Poignant et al., 2016 A corpus is a set of media (e.g. the evaluation corpus made of all test videos). An annotation is defined by a fragment of a medium (e.g. a shot) with attached metadata (e.g. the name of the current speaker). Finally, a layer is an homogeneous set of annotations, sharing the same fragment type and the same metadata type (e.g. a complete run submitted by one participant). All these resources are accessible through a RESTful API (clients in Python and Javascript are readily available), with user authentication and permission management.",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "(Poignant et al., 2016",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Person Discovery made easy with CAMOMILE",
"sec_num": "3."
},
{
"text": "A generic queueing mechanism is also available on the CAMOMILE backend as a means to control the workflow. The CAMOMILE platform is distributed as open-source software at the following address: http://github. com/camomile-project/camomile-server.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Person Discovery made easy with CAMOMILE",
"sec_num": "3."
},
{
"text": "The upper part of Figure 2 depicts the technical workflow of the proposed evaluation campaign. The lower parts of Figure 2 summarize how we relied on the CAMOMILE platform and its Python and Javascript clients to automate most of the workflow.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 2",
"ref_id": null
},
{
"start": 114,
"end": 122,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automating the benchmarking workflow",
"sec_num": "3.1."
},
{
"text": "After the task was advertised through the MediaEval call for participation, we relied on MediaEval standard registration procedure (i.e. filling an online form and signing dataset usage agreements) to gather the list of participating teams. Through a web interface, users and groups management features of the CAMOMILE platform were used to create one group per team and one user account for each team member.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Registration",
"sec_num": "3.1.1."
},
{
"text": "Due to technical (limited internet bandwith) or copyright concerns (datasets distributed by third parties), the development and evaluation datasets were not distributed through the CAMOMILE platform. Instead, ELDA and INA took care of sending the datasets to the participants. Nevertheless, corresponding metadata for corpora (development and test sets) and layers (for each video) were created as CAMOMILE resources with read permissions for each team, then bound to a local copy of the videos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distribution",
"sec_num": "3.1.2."
},
{
"text": "While the standard MediaEval submission procedure is to ask participating teams to upload their runs into a shared online directory, we chose to distribute to all participants a submission management tool, based on the CAMOMILE Python client. This command line tool would automatically check the format of the submission files, authenticate users with their CAMOMILE credentials and creates a new layer (and associated annotations) for each submission, with read/write permissions to (and only to) every team member.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submission",
"sec_num": "3.1.3."
},
{
"text": "For the duration of the submission period, a continuous evaluation service based on the CAMOMILE Python client would update a live leaderboard computed on a secret subset of the evaluation dataset -providing feedback to participants about the performance of their current submissions. These four modules could easily be adapted to other benchmarking campaigns, as long as the reference and submissions can follow the CAMOMILE data model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.1.4."
},
{
"text": "While the development dataset had already been annotated in the framework of the past REPERE evaluation campaigns, the evaluation dataset was distributed by INA without any annotation. Thanks to the CAMOMILE platform, we were able to setup a collaborative annotation campaign where participants themselves would contribute some time to annotate the evaluation dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collaborative annotation",
"sec_num": "3.2."
},
{
"text": "Two dedicated and complementary annotation web interfaces were developed, both based on the CAMOMILE Javascript client. The first one is dedicated to the correction of the \"pieces of evidence\" submitted by participants. For each correct evidence, annotators had to draw a bounding box around the face of the person and spellcheck their hypothesized name (firstname lastname). The second one relies on the resulting mugshots to ask the annotator to decide visually if the hypothesized person is actually speaking and visible during a video shot. Moreover, a monitoring interface was also accessible to the organizers to quickly gain insight into the status of the annotation campaign (e.g. number of shots already annotated).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation interfaces",
"sec_num": "3.2.1."
},
{
"text": "As shown in Figure 4 , both annotation interfaces relied on the CAMOMILE queueing feature, thanks to a submission Every time a new run was submitted, the annotation management service would push not-yet annotated evidences into the CAMOMILE queue used as input for the evidence annotation interface. Corresponding mugshots (i.e. small picture depicting the person's face) would then be extracted automatically for later use in the label annotation interface. Similarly, not-yet annotated shots would be added into the CAMOMILE queue used as input of the label annotation interface. Once a consensus is reached (cf. next section), thoses shots would be added to the CAMOMILE groundtruth layer. Finally, a submission scoring daemon would continuously evaluate each submission, providing scores displayed by the live leaderboard.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Backend",
"sec_num": "3.2.2."
},
{
"text": "3.2.3. Reaching consensus... Table 1 summarizes the amount of work done during the annotation campaign. 7k+ \"evidence\" annotations were performed by 3 organizers while 66k+ \"label\" annotations were gathered from 20 team members -leading to the annotation of half of the evaluation corpus in less than a month. While the annotation of \"evidence\" was done by the organizers themselves, we wanted to guarantee the quality of the \"labels\" annotation done by the participants themselves. To that end, each shot was required to be annotated at least twice. Additional annotation of the same shot were requested until a consensus was found. Tables 2 and 3 show that, thanks to a simple, focused and dedicated \"label\" interface, the average number of required annotations A quick look at the few shots with 4 or more annotations reveals a few ambiguous cases that were not forecast when designing the \"label\" annotation interface: people singing or dubbed, barely audible speech, etc. ",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 1",
"ref_id": null
},
{
"start": 634,
"end": 649,
"text": "Tables 2 and 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Backend",
"sec_num": "3.2.2."
},
{
"text": "Relying entirely on the CAMOMILE annotation platform, a team of two people was able to manage a large scale multimedia technology benchmark (nine teams, 70 submissions, 30k shots) -including the development of the submission management script, the leaderboard service and the whole annotation campaign. Everything was hosted on a virtual private server with 2 cores and 2 GB of RAM and resisted the load even during the peak submission time (right before the deadline) and the concurrent collaborative annotation period. All the scripts and interfaces related to this campaign are publicly available on the CAMOMILE GitHub page. Though some were designed specifically for the proposed MediaEval Person Discovery task, we believe that a significant part of the approach is generic enough to be easily ported to a different task where manual and automatic annotation of audio-visual corpora is involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4."
}
],
"back_matter": [
{
"text": "This work was supported by France \"Agence Nationale de la Recherche\" (ANR) under grant ANR-12-CHRI-0006-01 and Luxembourg \"Fonds National de la Recherche\" (FNR). We thank ELDA and INA for supporting the task with development and evaluation datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Percolatte : A multimodal person discovery system in tv broadcast for the medieval 2015 evaluation campaign",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bendris",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Charlet",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Senay",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rouvier",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Bechet",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Damnati",
"suffix": ""
}
],
"year": 2015,
"venue": "MediaEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bendris, M., Charlet, D., Senay, G., Kim, M., Favre, B., Rouvier, M., Bechet, F., and Damnati, G. (2015). Per- colatte : A multimodal person discovery system in tv broadcast for the medieval 2015 evaluation campaign. In MediaEval.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Lig at mediaeval 2015 multimodal person discovery in broadcast tv task",
"authors": [
{
"first": "M",
"middle": [],
"last": "Budnik",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Safadi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Qu\u00e9not",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Khodabakhsh",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Demiroglu",
"suffix": ""
}
],
"year": 2015,
"venue": "MediaEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Budnik, M., Safadi, B., Besacier, L., Qu\u00e9not, G., Khod- abakhsh, A., and Demiroglu, C. (2015). Lig at medi- aeval 2015 multimodal person discovery in broadcast tv task. In MediaEval.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Ssig and irisa at multimodal person discovery",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Santos",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Gravier",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "dos Santos Jr, C. E., Gravier, G., and Schwartz, W. (2015). Ssig and irisa at multimodal person discovery. In Medi- aEval.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An economic view on human language technology evaluation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Geoffrois",
"suffix": ""
}
],
"year": 2008,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrois, E. (2008). An economic view on human lan- guage technology evaluation. In LREC.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The ester evaluation campaign of rich transcription of french broadcast news",
"authors": [
{
"first": "G",
"middle": [],
"last": "Gravier",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bonastre",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Galliano",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Geoffrois",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mc Tait",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Choukri",
"suffix": ""
}
],
"year": 2004,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gravier, G., Bonastre, J., Galliano, S., Geoffrois, E., Mc Tait, K., and Choukri, K. (2004). The ester evalu- ation campaign of rich transcription of french broadcast news. In LREC.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Upc system for the 2015 mediaeval multimodal person discovery in broadcast tv task",
"authors": [
{
"first": "M",
"middle": [],
"last": "India",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Varas",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vilaplana",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Morros",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hernando",
"suffix": ""
}
],
"year": 2015,
"venue": "Me-diaEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "India, M., Varas, D., Vilaplana, V., Morros, J., and Her- nando, J. (2015). Upc system for the 2015 mediaeval multimodal person discovery in broadcast tv task. In Me- diaEval.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Eumssi team at the mediaeval person discovery challenge",
"authors": [
{
"first": "N",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Meignier",
"suffix": ""
},
{
"first": "J.-M",
"middle": [],
"last": "Odobez",
"suffix": ""
}
],
"year": 2015,
"venue": "MediaEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le, N., Wu, D., Meignier, S., and Odobez, J.-M. (2015). Eumssi team at the mediaeval person discovery chal- lenge. In MediaEval.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Gtm-uvigo systems for person discovery task at mediaeval 2015",
"authors": [
{
"first": "P",
"middle": [],
"last": "Lopez-Otero",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barros",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Docio-Fernandez",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gonz\u00e1lez-Agulla",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Alba-Castro",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Garcia-Mateo",
"suffix": ""
}
],
"year": 2015,
"venue": "MediaEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lopez-Otero, P., Barros, R., Docio-Fernandez, L., Gonz\u00e1lez-Agulla, E., Alba-Castro, J., and Garcia-Mateo, C. (2015). Gtm-uvigo systems for person discovery task at mediaeval 2015. In MediaEval.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Nist language technology evaluation cookbook",
"authors": [
{
"first": "A",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Garofolo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fiscus",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pallett",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sanders",
"suffix": ""
}
],
"year": 2004,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin, A., Garofolo, J., Fiscus, J., Le, A., Pallett, D., Przy- bocki, M., and Sanders, G. (2004). Nist language tech- nology evaluation cookbook. In LREC.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining audio features and visual i-vector at mediaeval 2015 multimodal person discovery in broadcast tv",
"authors": [
{
"first": "F",
"middle": [],
"last": "Nishi",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Inoue",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Shinoda",
"suffix": ""
}
],
"year": 2015,
"venue": "MediaEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nishi, F., Inoue, N., and Shinoda, K. (2015). Combining audio features and visual i-vector at mediaeval 2015 mul- timodal person discovery in broadcast tv. In MediaEval.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The importance of evaluation for cross-language system development: the clef experience",
"authors": [
{
"first": "C",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Braschler",
"suffix": ""
}
],
"year": 2002,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peters, C. and Braschler, M. (2002). The importance of evaluation for cross-language system development: the clef experience. In LREC.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Limsi at mediaeval 2015: Person discovery in broadcast tv task",
"authors": [
{
"first": "J",
"middle": [],
"last": "Poignant",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bredin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Barras",
"suffix": ""
}
],
"year": 2015,
"venue": "MediaEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poignant, J., Bredin, H., and Barras, C. (2015a). Limsi at mediaeval 2015: Person discovery in broadcast tv task. In MediaEval.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multimodal person discovery in broadcast tv at mediaeval 2015",
"authors": [
{
"first": "J",
"middle": [],
"last": "Poignant",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bredin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Barras",
"suffix": ""
}
],
"year": 2015,
"venue": "MediaEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poignant, J., Bredin, H., and Barras, C. (2015b). Mul- timodal person discovery in broadcast tv at mediaeval 2015. In MediaEval.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents",
"authors": [
{
"first": "J",
"middle": [],
"last": "Poignant",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Budnik",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bredin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Barras",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stefas",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bruneau",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Adda",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ekenel",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Francopoulo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hernando",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Morros",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Qu\u00e9not",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosset",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Tamisier",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poignant, J., Budnik, M., Bredin, H., Barras, C., Ste- fas, M., Bruneau, P., Adda, G., Besacier, L., Ekenel, H., Francopoulo, G., Hernando, J., Mariani, J., Mor- ros, R., Qu\u00e9not, G., Rosset, S., and Tamisier, T. (2016). The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Docu- ments. In LREC 2016.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Imagenet large scale visual recognition challenge",
"authors": [
{
"first": "O",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Berg",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2015,
"venue": "IJCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A., and Fei-Fei, L. (2015). Imagenet large scale visual recognition challenge. In IJCV.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluation campaigns and trecvid",
"authors": [
{
"first": "A",
"middle": [],
"last": "Smeaton",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Over",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Kraaij",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smeaton, A., Over, P., and Kraaij, W. (2006). Evaluation campaigns and trecvid. In MIR.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Estimating average precision with incomplete and imperfect judgments",
"authors": [
{
"first": "E",
"middle": [],
"last": "Yilmaz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aslam",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 15th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yilmaz, E. and Aslam, J. (2006). Estimating average pre- cision with incomplete and imperfect judgments. In In Proceedings of the 15th ACM International Conference on Information and Knowledge Management.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "CAMOMILE data model",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Annotation web interfaces monitoring service that would continuously watch for new submissions and update annotation queues accordingly.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Annotation management service",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">corpus</td></tr><tr><td/><td colspan=\"2\">homogeneous</td></tr><tr><td>medium multimedia document audio image video text</td><td>collection of multimedia documents</td><td>layer homogeneous collection of annotations</td></tr><tr><td/><td colspan=\"2\">annotation</td></tr><tr><td colspan=\"2\">medium fragment</td><td>attached metadata</td></tr><tr><td/><td/><td>categorical value</td></tr><tr><td>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis ut aliquip ex ea commodo consequat. nostrud exercitation ullamco laboris nisi</td><td>time</td><td>free text numerical value raw content</td></tr></table>",
"html": null,
"text": "). The data model was kept intentionally simple and generic, with four types of resources: corpus, medium, layer and annotation."
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">: Proportion of shots with/without consensus</td></tr><tr><td># annotations</td><td># shots</td></tr><tr><td>2</td><td>22770 (81.7%)</td></tr><tr><td>3</td><td>4257 (15.3%)</td></tr><tr><td>4</td><td>658 ( 2.4%)</td></tr><tr><td>5+</td><td>188 ( 0.6%)</td></tr></table>",
"html": null,
"text": ""
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Number of annotations per shot with consensus"
}
}
}
}