qrels
Hi there,
I would like to ask about the purpose of the qrels: the score is always 1 for all languages I have checked.
Thank you
Best Regards
David Koleckar
Hi David,
yes, well observed :) It is true that the relevance of documents is judged only by a single label: 1 if the document appears to be the corresponding answer to the question/query on an FAQ page, which we extracted from a web crawl dump. We know that this kind of "sparse relevance judgment" is in some sense suboptimal, but for now we live with it, as it is common practice in many IR datasets.
Hence, we could simply omit the "score" column. The reason we nevertheless kept it in the qrels data splits is to use the exact same format as other datasets that serve as MTEB Retrieval tasks, e.g. SciFact. Beyond streamlining the formatting, there is, as I said, no deeper motivation.
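To make the format concrete, here is a minimal sketch of the three-column qrels convention (query-id, corpus-id, score) used by MTEB/BEIR-style retrieval tasks, and how evaluation code typically consumes it as a nested dict. The ids below are invented for illustration:

```python
import csv
import io

# Illustrative qrels in the three-column TSV convention (ids invented here):
qrels_tsv = "query-id\tcorpus-id\tscore\nq0\tdoc-1337\t1\nq1\tdoc-0042\t1\n"

# Evaluators such as pytrec_eval expect qrels as {query_id: {doc_id: score}}.
qrels = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(qrels)  # {'q0': {'doc-1337': 1}, 'q1': {'doc-0042': 1}}
```

Keeping the constant "score" column means standard retrieval evaluators work on our qrels unchanged.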
Best regards
Michael
Hello Michael,
thank you for your answer.
When you say "if the document appears to be a corresponding answer to the question/query in an FAQ page, which we extracted from a web crawl dump" how do you automatically evaluate/compute that score for all the pairs?
Hi David,
imagine this FAQ page: https://huggingface.co/docs/leaderboards/open_llm_leaderboard/faq
One of the extracted queries is, e.g., "Do you keep track of who submits models?"
The corresponding answer is: "Yes, we store information about which user submitted each model in the requests files here. This helps us prevent spam and encourages responsible submissions. Users are accountable for their submissions, as the community can identify who submitted each model."
In this sense, every query has exactly one relevant document: the given answer to the respective question. For such question-answer pairs, the score is set to 1.
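As a sketch of what this means for the data (the ids and the toy pipeline below are invented for illustration; the real pairs come from the web crawl extraction), each extracted question-answer pair yields one query, one document, and one qrels entry with score 1:

```python
# Invented example pair (answer abridged from the FAQ above); the real
# pipeline extracts such pairs from a web crawl dump.
faq_pairs = [
    ("Do you keep track of who submits models?",
     "Yes, we store information about which user submitted each model ..."),
]

queries, corpus, qrels = {}, {}, {}
for i, (question, answer) in enumerate(faq_pairs):
    qid, did = f"q{i}", f"d{i}"
    queries[qid] = question
    corpus[did] = answer
    qrels[qid] = {did: 1}  # exactly one relevant document per query

print(qrels)  # {'q0': {'d0': 1}}
```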