applied-ai-018 committed on
Commit
39d59a1
·
verified ·
1 Parent(s): 28bf99c

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. env-llmeval/lib/python3.10/site-packages/datasets/__init__.py +70 -0
  2. env-llmeval/lib/python3.10/site-packages/datasets/arrow_dataset.py +0 -0
  3. env-llmeval/lib/python3.10/site-packages/datasets/arrow_reader.py +661 -0
  4. env-llmeval/lib/python3.10/site-packages/datasets/arrow_writer.py +745 -0
  5. env-llmeval/lib/python3.10/site-packages/datasets/builder.py +0 -0
  6. env-llmeval/lib/python3.10/site-packages/datasets/combine.py +215 -0
  7. env-llmeval/lib/python3.10/site-packages/datasets/config.py +259 -0
  8. env-llmeval/lib/python3.10/site-packages/datasets/data_files.py +806 -0
  9. env-llmeval/lib/python3.10/site-packages/datasets/dataset_dict.py +0 -0
  10. env-llmeval/lib/python3.10/site-packages/datasets/distributed.py +39 -0
  11. env-llmeval/lib/python3.10/site-packages/datasets/exceptions.py +85 -0
  12. env-llmeval/lib/python3.10/site-packages/datasets/filesystems/__init__.py +86 -0
  13. env-llmeval/lib/python3.10/site-packages/datasets/filesystems/__pycache__/__init__.cpython-310.pyc +0 -0
  14. env-llmeval/lib/python3.10/site-packages/datasets/filesystems/__pycache__/compression.cpython-310.pyc +0 -0
  15. env-llmeval/lib/python3.10/site-packages/datasets/filesystems/__pycache__/s3filesystem.cpython-310.pyc +0 -0
  16. env-llmeval/lib/python3.10/site-packages/datasets/filesystems/compression.py +178 -0
  17. env-llmeval/lib/python3.10/site-packages/datasets/filesystems/s3filesystem.py +116 -0
  18. env-llmeval/lib/python3.10/site-packages/datasets/fingerprint.py +494 -0
  19. env-llmeval/lib/python3.10/site-packages/datasets/info.py +592 -0
  20. env-llmeval/lib/python3.10/site-packages/datasets/inspect.py +581 -0
  21. env-llmeval/lib/python3.10/site-packages/datasets/iterable_dataset.py +0 -0
  22. env-llmeval/lib/python3.10/site-packages/datasets/keyhash.py +104 -0
  23. env-llmeval/lib/python3.10/site-packages/datasets/load.py +0 -0
  24. env-llmeval/lib/python3.10/site-packages/datasets/metric.py +652 -0
  25. env-llmeval/lib/python3.10/site-packages/datasets/naming.py +84 -0
  26. env-llmeval/lib/python3.10/site-packages/datasets/search.py +779 -0
  27. env-llmeval/lib/python3.10/site-packages/datasets/splits.py +635 -0
  28. env-llmeval/lib/python3.10/site-packages/datasets/streaming.py +140 -0
  29. env-llmeval/lib/python3.10/site-packages/datasets/table.py +2360 -0
  30. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__init__.py +46 -0
  31. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/__init__.cpython-310.pyc +0 -0
  32. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/audio_classification.cpython-310.pyc +0 -0
  33. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/automatic_speech_recognition.cpython-310.pyc +0 -0
  34. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/base.cpython-310.pyc +0 -0
  35. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/image_classification.cpython-310.pyc +0 -0
  36. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/language_modeling.cpython-310.pyc +0 -0
  37. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/question_answering.cpython-310.pyc +0 -0
  38. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/summarization.cpython-310.pyc +0 -0
  39. env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/text_classification.cpython-310.pyc +0 -0
  40. env-llmeval/lib/python3.10/site-packages/datasets/tasks/audio_classification.py +33 -0
  41. env-llmeval/lib/python3.10/site-packages/datasets/tasks/automatic_speech_recognition.py +30 -0
  42. env-llmeval/lib/python3.10/site-packages/datasets/tasks/base.py +39 -0
  43. env-llmeval/lib/python3.10/site-packages/datasets/tasks/image_classification.py +33 -0
  44. env-llmeval/lib/python3.10/site-packages/datasets/tasks/language_modeling.py +18 -0
  45. env-llmeval/lib/python3.10/site-packages/datasets/tasks/question_answering.py +29 -0
  46. env-llmeval/lib/python3.10/site-packages/datasets/tasks/summarization.py +19 -0
  47. env-llmeval/lib/python3.10/site-packages/datasets/tasks/text_classification.py +34 -0
  48. env-llmeval/lib/python3.10/site-packages/dill-0.3.8.dist-info/INSTALLER +1 -0
  49. env-llmeval/lib/python3.10/site-packages/dill-0.3.8.dist-info/LICENSE +35 -0
  50. env-llmeval/lib/python3.10/site-packages/dill-0.3.8.dist-info/METADATA +280 -0
env-llmeval/lib/python3.10/site-packages/datasets/__init__.py ADDED
@@ -0,0 +1,70 @@
1
+ # ruff: noqa
2
+ # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ __version__ = "2.18.0"
17
+
18
+ from .arrow_dataset import Dataset
19
+ from .arrow_reader import ReadInstruction
20
+ from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
21
+ from .combine import concatenate_datasets, interleave_datasets
22
+ from .dataset_dict import DatasetDict, IterableDatasetDict
23
+ from .download import *
24
+ from .features import *
25
+ from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled
26
+ from .info import DatasetInfo, MetricInfo
27
+ from .inspect import (
28
+ get_dataset_config_info,
29
+ get_dataset_config_names,
30
+ get_dataset_default_config_name,
31
+ get_dataset_infos,
32
+ get_dataset_split_names,
33
+ inspect_dataset,
34
+ inspect_metric,
35
+ list_datasets,
36
+ list_metrics,
37
+ )
38
+ from .iterable_dataset import IterableDataset
39
+ from .load import load_dataset, load_dataset_builder, load_from_disk, load_metric
40
+ from .metric import Metric
41
+ from .splits import (
42
+ NamedSplit,
43
+ NamedSplitAll,
44
+ Split,
45
+ SplitBase,
46
+ SplitDict,
47
+ SplitGenerator,
48
+ SplitInfo,
49
+ SubSplitInfo,
50
+ percent,
51
+ )
52
+ from .tasks import *
53
+ from .utils import *
54
+ from .utils import logging
55
+
56
+
57
+ # deprecated modules
58
+ from datasets import arrow_dataset as _arrow_dataset # isort:skip
59
+ from datasets import utils as _utils # isort:skip
60
+ from datasets.utils import download_manager as _deprecated_download_manager # isort:skip
61
+
62
+ _arrow_dataset.concatenate_datasets = concatenate_datasets
63
+ _utils.DownloadConfig = DownloadConfig
64
+ _utils.DownloadManager = DownloadManager
65
+ _utils.DownloadMode = DownloadMode
66
+ _deprecated_download_manager.DownloadConfig = DownloadConfig
67
+ _deprecated_download_manager.DownloadMode = DownloadMode
68
+ _deprecated_download_manager.DownloadManager = DownloadManager
69
+
70
+ del _arrow_dataset, _utils, _deprecated_download_manager
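
A minimal usage sketch (illustrative only, not part of this commit) of the top-level API re-exported by the `__init__.py` above; the dataset id "imdb" is just a placeholder:

import datasets

# load_dataset and the split-slicing syntax come straight from the exports above
ds = datasets.load_dataset("imdb", split="train[:1%]")

# the same slice expressed with the ReadInstruction class re-exported from .arrow_reader
same_ds = datasets.load_dataset("imdb", split=datasets.ReadInstruction("train", to=1, unit="%"))

# concatenate_datasets is re-exported from .combine
combined = datasets.concatenate_datasets([ds, same_ds])
print(len(ds), len(same_ds), len(combined))
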
env-llmeval/lib/python3.10/site-packages/datasets/arrow_dataset.py ADDED
The diff for this file is too large to render. See raw diff
 
env-llmeval/lib/python3.10/site-packages/datasets/arrow_reader.py ADDED
@@ -0,0 +1,661 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # Lint as: python3
16
+ """Arrow ArrowReader."""
17
+
18
+ import copy
19
+ import math
20
+ import os
21
+ import re
22
+ import shutil
23
+ from dataclasses import dataclass
24
+ from functools import partial
25
+ from pathlib import Path
26
+ from typing import TYPE_CHECKING, List, Optional, Union
27
+
28
+ import pyarrow as pa
29
+ import pyarrow.parquet as pq
30
+ from tqdm.contrib.concurrent import thread_map
31
+
32
+ from .download.download_config import DownloadConfig
33
+ from .naming import _split_re, filenames_for_dataset_split
34
+ from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables
35
+ from .utils import logging
36
+ from .utils import tqdm as hf_tqdm
37
+ from .utils.file_utils import cached_path
38
+
39
+
40
+ if TYPE_CHECKING:
41
+ from .info import DatasetInfo # noqa: F401
42
+ from .splits import Split, SplitInfo # noqa: F401
43
+
44
+
45
+ logger = logging.get_logger(__name__)
46
+
47
+ HF_GCP_BASE_URL = "https://storage.googleapis.com/huggingface-nlp/cache/datasets"
48
+
49
+ _SUB_SPEC_RE = re.compile(
50
+ rf"""
51
+ ^
52
+ (?P<split>{_split_re[1:-1]})
53
+ (\[
54
+ ((?P<from>-?\d+)
55
+ (?P<from_pct>%)?)?
56
+ :
57
+ ((?P<to>-?\d+)
58
+ (?P<to_pct>%)?)?
59
+ \])?(\((?P<rounding>[^\)]*)\))?
60
+ $
61
+ """, # remove ^ and $
62
+ re.X,
63
+ )
64
+
65
+ _ADDITION_SEP_RE = re.compile(r"\s*\+\s*")
66
+
67
+
68
+ class DatasetNotOnHfGcsError(ConnectionError):
69
+ """When you can't get the dataset from the Hf google cloud storage"""
70
+
71
+ pass
72
+
73
+
74
+ class MissingFilesOnHfGcsError(ConnectionError):
75
+ """When some files are missing on the Hf google cloud storage"""
76
+
77
+ pass
78
+
79
+
80
+ @dataclass(frozen=True)
81
+ class FileInstructions:
82
+ """The file instructions associated with a split ReadInstruction.
83
+
84
+ Attributes:
85
+ num_examples: `int`, The total number of examples
86
+ file_instructions: List[dict(filename, skip, take)], the files information.
87
+ The filenames contain the relative path, not the absolute one.
88
+ skip/take indicate which examples to read in the file: `ds.slice(skip, take)`
89
+ """
90
+
91
+ num_examples: int
92
+ file_instructions: List[dict]
93
+
94
+
95
+ def make_file_instructions(
96
+ name: str,
97
+ split_infos: List["SplitInfo"],
98
+ instruction: Union[str, "ReadInstruction"],
99
+ filetype_suffix: Optional[str] = None,
100
+ prefix_path: Optional[str] = None,
101
+ ) -> FileInstructions:
102
+ """Returns instructions of the split dict.
103
+
104
+ Args:
105
+ name (`str`): Name of the dataset.
106
+ split_infos (`list` of `[SplitInfo]`): Dataset splits information.
107
+ instruction ([`ReadInstruction`] or `str`): Reading instruction for a dataset.
108
+ filetype_suffix (`str`, *optional*): Suffix of dataset files, e.g. 'arrow' or 'parquet'.
109
+ prefix_path (`str`, *optional*): Prefix of dataset files, e.g. directory name.
110
+
111
+ Returns:
112
+ [`FileInstructions`]
113
+ """
114
+ if not isinstance(name, str):
115
+ raise TypeError(f"Expected str 'name', but got: {type(name).__name__}")
116
+ elif not name:
117
+ raise ValueError("Expected non-empty str 'name'")
118
+ name2len = {info.name: info.num_examples for info in split_infos}
119
+ name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}
120
+ name2filenames = {
121
+ info.name: filenames_for_dataset_split(
122
+ path=prefix_path,
123
+ dataset_name=name,
124
+ split=info.name,
125
+ filetype_suffix=filetype_suffix,
126
+ shard_lengths=name2shard_lengths[info.name],
127
+ )
128
+ for info in split_infos
129
+ }
130
+ if not isinstance(instruction, ReadInstruction):
131
+ instruction = ReadInstruction.from_spec(instruction)
132
+ # Create the absolute instruction (per split)
133
+ absolute_instructions = instruction.to_absolute(name2len)
134
+
135
+ # For each split, return the files instruction (skip/take)
136
+ file_instructions = []
137
+ num_examples = 0
138
+ for abs_instr in absolute_instructions:
139
+ split_length = name2len[abs_instr.splitname]
140
+ filenames = name2filenames[abs_instr.splitname]
141
+ shard_lengths = name2shard_lengths[abs_instr.splitname]
142
+ from_ = 0 if abs_instr.from_ is None else abs_instr.from_
143
+ to = split_length if abs_instr.to is None else abs_instr.to
144
+ if shard_lengths is None: # not sharded
145
+ for filename in filenames:
146
+ take = to - from_
147
+ if take == 0:
148
+ continue
149
+ num_examples += take
150
+ file_instructions.append({"filename": filename, "skip": from_, "take": take})
151
+ else: # sharded
152
+ index_start = 0 # Beginning (included) of moving window.
153
+ index_end = 0 # End (excluded) of moving window.
154
+ for filename, shard_length in zip(filenames, shard_lengths):
155
+ index_end += shard_length
156
+ if from_ < index_end and to > index_start: # There is something to take.
157
+ skip = from_ - index_start if from_ > index_start else 0
158
+ take = to - index_start - skip if to < index_end else -1
159
+ if take == 0:
160
+ continue
161
+ file_instructions.append({"filename": filename, "skip": skip, "take": take})
162
+ num_examples += shard_length - skip if take == -1 else take
163
+ index_start += shard_length
164
+ return FileInstructions(
165
+ num_examples=num_examples,
166
+ file_instructions=file_instructions,
167
+ )
168
+
169
+
170
+ class BaseReader:
171
+ """
172
+ Build a Dataset object out of Instruction instance(s).
173
+ """
174
+
175
+ def __init__(self, path: str, info: Optional["DatasetInfo"]):
176
+ """Initializes ArrowReader.
177
+
178
+ Args:
179
+ path (str): path where the dataset files are stored.
180
+ info (DatasetInfo): info about the dataset.
181
+ """
182
+ self._path: str = path
183
+ self._info: Optional["DatasetInfo"] = info
184
+ self._filetype_suffix: Optional[str] = None
185
+
186
+ def _get_table_from_filename(self, filename_skip_take, in_memory=False) -> Table:
187
+ """Returns a Dataset instance from given (filename, skip, take)."""
188
+ raise NotImplementedError
189
+
190
+ def _read_files(self, files, in_memory=False) -> Table:
191
+ """Returns Dataset for given file instructions.
192
+
193
+ Args:
194
+ files: List[dict(filename, skip, take)], the files information.
195
+ The filenames contain the absolute path, not relative.
196
+ skip/take indicate which examples to read in the file: `ds.slice(skip, take)`
197
+ in_memory (bool, default False): Whether to copy the data in-memory.
198
+ """
199
+ if len(files) == 0 or not all(isinstance(f, dict) for f in files):
200
+ raise ValueError("please provide valid file information")
201
+ files = copy.deepcopy(files)
202
+ for f in files:
203
+ f["filename"] = os.path.join(self._path, f["filename"])
204
+
205
+ pa_tables = thread_map(
206
+ partial(self._get_table_from_filename, in_memory=in_memory),
207
+ files,
208
+ tqdm_class=hf_tqdm,
209
+ desc="Loading dataset shards",
210
+ # set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached
211
+ disable=len(files) <= 16 or None,
212
+ )
213
+ pa_tables = [t for t in pa_tables if len(t) > 0]
214
+ if not pa_tables and (self._info is None or self._info.features is None):
215
+ raise ValueError(
216
+ "Tried to read an empty table. Please specify at least info.features to create an empty table with the right type."
217
+ )
218
+ pa_tables = pa_tables or [InMemoryTable.from_batches([], schema=pa.schema(self._info.features.type))]
219
+ pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0]
220
+ return pa_table
221
+
222
+ def get_file_instructions(self, name, instruction, split_infos):
223
+ """Return list of dict {'filename': str, 'skip': int, 'take': int}"""
224
+ file_instructions = make_file_instructions(
225
+ name, split_infos, instruction, filetype_suffix=self._filetype_suffix, prefix_path=self._path
226
+ )
227
+ files = file_instructions.file_instructions
228
+ return files
229
+
230
+ def read(
231
+ self,
232
+ name,
233
+ instructions,
234
+ split_infos,
235
+ in_memory=False,
236
+ ):
237
+ """Returns Dataset instance(s).
238
+
239
+ Args:
240
+ name (str): name of the dataset.
241
+ instructions (ReadInstruction): instructions to read.
242
+ Instructions can be a string, which will then be passed to the Instruction
243
+ constructor as is.
244
+ split_infos (list of SplitInfo proto): the available splits for dataset.
245
+ in_memory (bool, default False): Whether to copy the data in-memory.
246
+
247
+ Returns:
248
+ kwargs to build a single Dataset instance.
249
+ """
250
+
251
+ files = self.get_file_instructions(name, instructions, split_infos)
252
+ if not files:
253
+ msg = f'Instruction "{instructions}" corresponds to no data!'
254
+ raise ValueError(msg)
255
+ return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
256
+
257
+ def read_files(
258
+ self,
259
+ files: List[dict],
260
+ original_instructions: Union[None, "ReadInstruction", "Split"] = None,
261
+ in_memory=False,
262
+ ):
263
+ """Returns single Dataset instance for the set of file instructions.
264
+
265
+ Args:
266
+ files: List[dict(filename, skip, take)], the files information.
267
+ The filenames contain the relative path, not the absolute one.
268
+ skip/take indicate which examples to read in the file: `ds.skip().take()`
269
+ original_instructions: store the original instructions used to build the dataset split in the dataset.
270
+ in_memory (bool, default False): Whether to copy the data in-memory.
271
+
272
+ Returns:
273
+ kwargs to build a Dataset instance.
274
+ """
275
+ # Prepend path to filename
276
+ pa_table = self._read_files(files, in_memory=in_memory)
277
+ # If original_instructions is not None, convert it to a human-readable NamedSplit
278
+ if original_instructions is not None:
279
+ from .splits import Split # noqa
280
+
281
+ split = Split(str(original_instructions))
282
+ else:
283
+ split = None
284
+ dataset_kwargs = {"arrow_table": pa_table, "info": self._info, "split": split}
285
+ return dataset_kwargs
286
+
287
+ def download_from_hf_gcs(self, download_config: DownloadConfig, relative_data_dir):
288
+ """
289
+ Download the dataset files from the Hf GCS
290
+
291
+ Args:
292
+ download_config: `DownloadConfig`, the download configuration used to download and cache the files
293
+ relative_data_dir: `str`, the relative directory of the remote files from
294
+ the `datasets` directory on GCS.
295
+
296
+ """
297
+ remote_cache_dir = HF_GCP_BASE_URL + "/" + relative_data_dir.replace(os.sep, "/")
298
+ try:
299
+ remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json")
300
+ downloaded_dataset_info = cached_path(
301
+ remote_dataset_info.replace(os.sep, "/"), download_config=download_config
302
+ )
303
+ shutil.move(downloaded_dataset_info, os.path.join(self._path, "dataset_info.json"))
304
+ if self._info is not None:
305
+ self._info.update(self._info.from_directory(self._path))
306
+ except FileNotFoundError as err:
307
+ raise DatasetNotOnHfGcsError(err) from None
308
+ try:
309
+ for split in self._info.splits:
310
+ file_instructions = self.get_file_instructions(
311
+ name=self._info.builder_name,
312
+ instruction=split,
313
+ split_infos=self._info.splits.values(),
314
+ )
315
+ for file_instruction in file_instructions:
316
+ file_to_download = str(Path(file_instruction["filename"]).relative_to(self._path))
317
+ remote_prepared_filename = os.path.join(remote_cache_dir, file_to_download)
318
+ downloaded_prepared_filename = cached_path(
319
+ remote_prepared_filename.replace(os.sep, "/"), download_config=download_config
320
+ )
321
+ shutil.move(downloaded_prepared_filename, file_instruction["filename"])
322
+ except FileNotFoundError as err:
323
+ raise MissingFilesOnHfGcsError(err) from None
324
+
325
+
326
+ class ArrowReader(BaseReader):
327
+ """
328
+ Build a Dataset object out of Instruction instance(s).
329
+ This Reader uses either memory mapping or file descriptors (in-memory) on arrow files.
330
+ """
331
+
332
+ def __init__(self, path: str, info: Optional["DatasetInfo"]):
333
+ """Initializes ArrowReader.
334
+
335
+ Args:
336
+ path (str): path where Arrow files are stored.
337
+ info (DatasetInfo): info about the dataset.
338
+ """
339
+ super().__init__(path, info)
340
+ self._filetype_suffix = "arrow"
341
+
342
+ def _get_table_from_filename(self, filename_skip_take, in_memory=False) -> Table:
343
+ """Returns a Dataset instance from given (filename, skip, take)."""
344
+ filename, skip, take = (
345
+ filename_skip_take["filename"],
346
+ filename_skip_take["skip"] if "skip" in filename_skip_take else None,
347
+ filename_skip_take["take"] if "take" in filename_skip_take else None,
348
+ )
349
+ table = ArrowReader.read_table(filename, in_memory=in_memory)
350
+ if take == -1:
351
+ take = len(table) - skip
352
+ # here we don't want to slice an empty table, or it may segfault
353
+ if skip is not None and take is not None and not (skip == 0 and take == len(table)):
354
+ table = table.slice(skip, take)
355
+ return table
356
+
357
+ @staticmethod
358
+ def read_table(filename, in_memory=False) -> Table:
359
+ """
360
+ Read table from file.
361
+
362
+ Args:
363
+ filename (str): File name of the table.
364
+ in_memory (bool, default=False): Whether to copy the data in-memory.
365
+
366
+ Returns:
367
+ pyarrow.Table
368
+ """
369
+ table_cls = InMemoryTable if in_memory else MemoryMappedTable
370
+ return table_cls.from_file(filename)
371
+
372
+
373
+ class ParquetReader(BaseReader):
374
+ """
375
+ Build a Dataset object out of Instruction instance(s).
376
+ This Reader uses memory mapping on parquet files.
377
+ """
378
+
379
+ def __init__(self, path: str, info: Optional["DatasetInfo"]):
380
+ """Initializes ParquetReader.
381
+
382
+ Args:
383
+ path (str): path where parquet files are stored.
384
+ info (DatasetInfo): info about the dataset.
385
+ """
386
+ super().__init__(path, info)
387
+ self._filetype_suffix = "parquet"
388
+
389
+ def _get_table_from_filename(self, filename_skip_take, **kwargs):
390
+ """Returns a Dataset instance from given (filename, skip, take)."""
391
+ filename, skip, take = (
392
+ filename_skip_take["filename"],
393
+ filename_skip_take["skip"] if "skip" in filename_skip_take else None,
394
+ filename_skip_take["take"] if "take" in filename_skip_take else None,
395
+ )
396
+ # Parquet read_table always loads data in memory, independently of memory_map
397
+ pa_table = pq.read_table(filename, memory_map=True)
398
+ # here we don't want to slice an empty table, or it may segfault
399
+ if skip is not None and take is not None and not (skip == 0 and take == len(pa_table)):
400
+ pa_table = pa_table.slice(skip, take)
401
+ return pa_table
402
+
403
+
404
+ @dataclass(frozen=True)
405
+ class _AbsoluteInstruction:
406
+ """A machine friendly slice: defined absolute positive boundaries."""
407
+
408
+ splitname: str
409
+ from_: int # uint (starting index).
410
+ to: int # uint (ending index).
411
+
412
+
413
+ @dataclass(frozen=True)
414
+ class _RelativeInstruction:
415
+ """Represents a single parsed slicing instruction, can use % and negatives."""
416
+
417
+ splitname: str
418
+ from_: Optional[int] = None # int (starting index) or None if no lower boundary.
419
+ to: Optional[int] = None # int (ending index) or None if no upper boundary.
420
+ unit: Optional[str] = None
421
+ rounding: Optional[str] = None
422
+
423
+ def __post_init__(self):
424
+ if self.unit is not None and self.unit not in ["%", "abs"]:
425
+ raise ValueError("unit must be either % or abs")
426
+ if self.rounding is not None and self.rounding not in ["closest", "pct1_dropremainder"]:
427
+ raise ValueError("rounding must be either closest or pct1_dropremainder")
428
+ if self.unit != "%" and self.rounding is not None:
429
+ raise ValueError("It is forbidden to specify rounding if not using percent slicing.")
430
+ if self.unit == "%" and self.from_ is not None and abs(self.from_) > 100:
431
+ raise ValueError("Percent slice boundaries must be > -100 and < 100.")
432
+ if self.unit == "%" and self.to is not None and abs(self.to) > 100:
433
+ raise ValueError("Percent slice boundaries must be > -100 and < 100.")
434
+ # Update via __dict__ due to instance being "frozen"
435
+ self.__dict__["rounding"] = "closest" if self.rounding is None and self.unit == "%" else self.rounding
436
+
437
+
438
+ def _str_to_read_instruction(spec):
439
+ """Returns ReadInstruction for given string."""
440
+ res = _SUB_SPEC_RE.match(spec)
441
+ if not res:
442
+ raise ValueError(f"Unrecognized instruction format: {spec}")
443
+ unit = "%" if res.group("from_pct") or res.group("to_pct") else "abs"
444
+ return ReadInstruction(
445
+ split_name=res.group("split"),
446
+ rounding=res.group("rounding"),
447
+ from_=int(res.group("from")) if res.group("from") else None,
448
+ to=int(res.group("to")) if res.group("to") else None,
449
+ unit=unit,
450
+ )
451
+
452
+
453
+ def _pct_to_abs_pct1(boundary, num_examples):
454
+ # Using math.trunc here, since -99.5% should give -99%, not -100%.
455
+ if num_examples < 100:
456
+ msg = (
457
+ 'Using "pct1_dropremainder" rounding on a split with less than 100 '
458
+ "elements is forbidden: it always results in an empty dataset."
459
+ )
460
+ raise ValueError(msg)
461
+ return boundary * math.trunc(num_examples / 100.0)
462
+
463
+
464
+ def _pct_to_abs_closest(boundary, num_examples):
465
+ return int(round(boundary * num_examples / 100.0))
466
+
467
+
468
+ def _rel_to_abs_instr(rel_instr, name2len):
469
+ """Returns _AbsoluteInstruction instance for given RelativeInstruction.
470
+
471
+ Args:
472
+ rel_instr: RelativeInstruction instance.
473
+ name2len: dict {split_name: num_examples}.
474
+ """
475
+ pct_to_abs = _pct_to_abs_closest if rel_instr.rounding == "closest" else _pct_to_abs_pct1
476
+ split = rel_instr.splitname
477
+ if split not in name2len:
478
+ raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')
479
+ num_examples = name2len[split]
480
+ from_ = rel_instr.from_
481
+ to = rel_instr.to
482
+ if rel_instr.unit == "%":
483
+ from_ = 0 if from_ is None else pct_to_abs(from_, num_examples)
484
+ to = num_examples if to is None else pct_to_abs(to, num_examples)
485
+ else:
486
+ from_ = 0 if from_ is None else from_
487
+ to = num_examples if to is None else to
488
+ if from_ < 0:
489
+ from_ = max(num_examples + from_, 0)
490
+ if to < 0:
491
+ to = max(num_examples + to, 0)
492
+ from_ = min(from_, num_examples)
493
+ to = min(to, num_examples)
494
+ return _AbsoluteInstruction(split, from_, to)
495
+
496
+
497
+ class ReadInstruction:
498
+ """Reading instruction for a dataset.
499
+
500
+ Examples::
501
+
502
+ # The following lines are equivalent:
503
+ ds = datasets.load_dataset('mnist', split='test[:33%]')
504
+ ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec('test[:33%]'))
505
+ ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction('test', to=33, unit='%'))
506
+ ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction(
507
+ 'test', from_=0, to=33, unit='%'))
508
+
509
+ # The following lines are equivalent:
510
+ ds = datasets.load_dataset('mnist', split='test[:33%]+train[1:-1]')
511
+ ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec(
512
+ 'test[:33%]+train[1:-1]'))
513
+ ds = datasets.load_dataset('mnist', split=(
514
+ datasets.ReadInstruction('test', to=33, unit='%') +
515
+ datasets.ReadInstruction('train', from_=1, to=-1, unit='abs')))
516
+
517
+ # The following lines are equivalent:
518
+ ds = datasets.load_dataset('mnist', split='test[:33%](pct1_dropremainder)')
519
+ ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec(
520
+ 'test[:33%](pct1_dropremainder)'))
521
+ ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction(
522
+ 'test', from_=0, to=33, unit='%', rounding="pct1_dropremainder"))
523
+
524
+ # 10-fold validation:
525
+ tests = datasets.load_dataset(
526
+ 'mnist',
527
+ [datasets.ReadInstruction('train', from_=k, to=k+10, unit='%')
528
+ for k in range(0, 100, 10)])
529
+ trains = datasets.load_dataset(
530
+ 'mnist',
531
+ [datasets.ReadInstruction('train', to=k, unit='%') + datasets.ReadInstruction('train', from_=k+10, unit='%')
532
+ for k in range(0, 100, 10)])
533
+
534
+ """
535
+
536
+ def _init(self, relative_instructions):
537
+ # Private initializer.
538
+ self._relative_instructions = relative_instructions
539
+
540
+ @classmethod
541
+ def _read_instruction_from_relative_instructions(cls, relative_instructions):
542
+ """Returns ReadInstruction obj initialized with relative_instructions."""
543
+ # Use __new__ to bypass __init__, which is used by the public API and is not convenient here.
544
+ result = cls.__new__(cls)
545
+ result._init(relative_instructions) # pylint: disable=protected-access
546
+ return result
547
+
548
+ def __init__(self, split_name, rounding=None, from_=None, to=None, unit=None):
549
+ """Initialize ReadInstruction.
550
+
551
+ Args:
552
+ split_name (str): name of the split to read. Eg: 'train'.
553
+ rounding (str, optional): The rounding behaviour to use when percent slicing is
554
+ used. Ignored when slicing with absolute indices.
555
+ Possible values:
556
+ - 'closest' (default): The specified percentages are rounded to the
557
+ closest value. Use this if you want specified percents to be as
558
+ exact as possible.
559
+ - 'pct1_dropremainder': the specified percentages are treated as
560
+ multiple of 1%. Use this option if you want consistency. Eg:
561
+ len(5%) == 5 * len(1%).
562
+ Using this option, one might not be able to use the full set of
563
+ examples, if the number of those is not a multiple of 100.
564
+ from_ (int):
565
+ to (int): alternative way of specifying slicing boundaries. If any of
566
+ {from_, to, unit} argument is used, slicing cannot be specified as
567
+ string.
568
+ unit (str): optional, one of:
569
+ '%': to set the slicing unit as percents of the split size.
570
+ 'abs': to set the slicing unit as absolute numbers.
571
+ """
572
+ # This constructor is not always called. See factory method
573
+ # `_read_instruction_from_relative_instructions`. Common init instructions
574
+ # MUST be placed in the _init method.
575
+ self._init([_RelativeInstruction(split_name, from_, to, unit, rounding)])
576
+
577
+ @classmethod
578
+ def from_spec(cls, spec):
579
+ """Creates a `ReadInstruction` instance out of a string spec.
580
+
581
+ Args:
582
+ spec (`str`):
583
+ Split(s) + optional slice(s) to read + optional rounding
584
+ if percents are used as the slicing unit. A slice can be specified,
585
+ using absolute numbers (`int`) or percentages (`int`).
586
+
587
+ Examples:
588
+
589
+ ```
590
+ test: test split.
591
+ test + validation: test split + validation split.
592
+ test[10:]: test split, minus its first 10 records.
593
+ test[:10%]: first 10% records of test split.
594
+ test[:20%](pct1_dropremainder): first 20% of records, rounded with the pct1_dropremainder rounding.
595
+ test[:-5%]+train[40%:60%]: first 95% of test + middle 20% of train.
596
+ ```
597
+
598
+ Returns:
599
+ ReadInstruction instance.
600
+ """
601
+ spec = str(spec) # Need to convert to str in case of NamedSplit instance.
602
+ subs = _ADDITION_SEP_RE.split(spec)
603
+ if not subs:
604
+ raise ValueError(f"No instructions could be built out of {spec}")
605
+ instruction = _str_to_read_instruction(subs[0])
606
+ return sum((_str_to_read_instruction(sub) for sub in subs[1:]), instruction)
607
+
608
+ def to_spec(self):
609
+ rel_instr_specs = []
610
+ for rel_instr in self._relative_instructions:
611
+ rel_instr_spec = rel_instr.splitname
612
+ if rel_instr.from_ is not None or rel_instr.to is not None:
613
+ from_ = rel_instr.from_
614
+ to = rel_instr.to
615
+ unit = rel_instr.unit
616
+ rounding = rel_instr.rounding
617
+ unit = unit if unit == "%" else ""
618
+ from_ = str(from_) + unit if from_ is not None else ""
619
+ to = str(to) + unit if to is not None else ""
620
+ slice_str = f"[{from_}:{to}]"
621
+ rounding_str = (
622
+ f"({rounding})" if unit == "%" and rounding is not None and rounding != "closest" else ""
623
+ )
624
+ rel_instr_spec += slice_str + rounding_str
625
+ rel_instr_specs.append(rel_instr_spec)
626
+ return "+".join(rel_instr_specs)
627
+
628
+ def __add__(self, other):
629
+ """Returns a new ReadInstruction obj, result of appending other to self."""
630
+ if not isinstance(other, ReadInstruction):
631
+ msg = "ReadInstruction can only be added to another ReadInstruction obj."
632
+ raise TypeError(msg)
633
+ self_ris = self._relative_instructions
634
+ other_ris = other._relative_instructions # pylint: disable=protected-access
635
+ if (
636
+ self_ris[0].unit != "abs"
637
+ and other_ris[0].unit != "abs"
638
+ and self._relative_instructions[0].rounding != other_ris[0].rounding
639
+ ):
640
+ raise ValueError("It is forbidden to sum ReadInstruction instances with different rounding values.")
641
+ return self._read_instruction_from_relative_instructions(self_ris + other_ris)
642
+
643
+ def __str__(self):
644
+ return self.to_spec()
645
+
646
+ def __repr__(self):
647
+ return f"ReadInstruction({self._relative_instructions})"
648
+
649
+ def to_absolute(self, name2len):
650
+ """Translate instruction into a list of absolute instructions.
651
+
652
+ Those absolute instructions are then to be added together.
653
+
654
+ Args:
655
+ name2len (`dict`):
656
+ Associating split names to number of examples.
657
+
658
+ Returns:
659
+ list of _AbsoluteInstruction instances (corresponds to the + in spec).
660
+ """
661
+ return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
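
For orientation (illustrative only, not part of the diff): the slicing grammar implemented above by `_SUB_SPEC_RE`, `ReadInstruction`, and `_rel_to_abs_instr` can be exercised directly; the split sizes below are made up:

from datasets import ReadInstruction

# parse a composite spec: first 25% of train plus the last 100 test examples
ri = ReadInstruction.from_spec("train[:25%]+test[-100:]")
print(ri)  # normalized spec via to_spec(), i.e. train[:25%]+test[-100:]

# resolve to absolute boundaries, as BaseReader.read() does via instruction.to_absolute(name2len)
abs_instrs = ri.to_absolute({"train": 1000, "test": 500})
# -> train covers examples [0, 250), test covers [400, 500)
print(abs_instrs)
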
env-llmeval/lib/python3.10/site-packages/datasets/arrow_writer.py ADDED
@@ -0,0 +1,745 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # Unless required by applicable law or agreed to in writing, software
8
+ # distributed under the License is distributed on an "AS IS" BASIS,
9
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10
+ # See the License for the specific language governing permissions and
11
+ # limitations under the License.
12
+
13
+ # Lint as: python3
14
+ """To write records into Arrow files."""
15
+
16
+ import errno
17
+ import json
18
+ import os
19
+ import sys
20
+ from pathlib import Path
21
+ from typing import Any, Dict, Iterable, List, Optional, Tuple, Union
22
+
23
+ import fsspec
24
+ import numpy as np
25
+ import pyarrow as pa
26
+ import pyarrow.parquet as pq
27
+
28
+ from . import config
29
+ from .features import Features, Image, Value
30
+ from .features.features import (
31
+ FeatureType,
32
+ _ArrayXDExtensionType,
33
+ cast_to_python_objects,
34
+ generate_from_arrow_type,
35
+ get_nested_type,
36
+ list_of_np_array_to_pyarrow_listarray,
37
+ numpy_to_pyarrow_listarray,
38
+ to_pyarrow_listarray,
39
+ )
40
+ from .filesystems import is_remote_filesystem
41
+ from .info import DatasetInfo
42
+ from .keyhash import DuplicatedKeysError, KeyHasher
43
+ from .table import array_cast, cast_array_to_feature, embed_table_storage, table_cast
44
+ from .utils import logging
45
+ from .utils import tqdm as hf_tqdm
46
+ from .utils.file_utils import hash_url_to_filename
47
+ from .utils.py_utils import asdict, first_non_null_value
48
+
49
+
50
+ logger = logging.get_logger(__name__)
51
+
52
+ type_ = type # keep python's type function
53
+
54
+
55
+ class SchemaInferenceError(ValueError):
56
+ pass
57
+
58
+
59
+ class TypedSequence:
60
+ """
61
+ This data container generalizes the typing when instantiating pyarrow arrays, tables or batches.
62
+
63
+ More specifically it adds several features:
64
+ - Support extension types like ``datasets.features.Array2DExtensionType``:
65
+ By default pyarrow arrays don't return extension arrays. One has to call
66
+ ``pa.ExtensionArray.from_storage(type, pa.array(data, type.storage_type))``
67
+ in order to get an extension array.
68
+ - Support for ``try_type`` parameter that can be used instead of ``type``:
69
+ When an array is transformed, we like to keep the same type as before if possible.
70
+ For example when calling :func:`datasets.Dataset.map`, we don't want to change the type
71
+ of each column by default.
72
+ - Better error message when a pyarrow array overflows.
73
+
74
+ Example::
75
+
76
+ from datasets.features import Array2D, Array2DExtensionType, Value
77
+ from datasets.arrow_writer import TypedSequence
78
+ import pyarrow as pa
79
+
80
+ arr = pa.array(TypedSequence([1, 2, 3], type=Value("int32")))
81
+ assert arr.type == pa.int32()
82
+
83
+ arr = pa.array(TypedSequence([1, 2, 3], try_type=Value("int32")))
84
+ assert arr.type == pa.int32()
85
+
86
+ arr = pa.array(TypedSequence(["foo", "bar"], try_type=Value("int32")))
87
+ assert arr.type == pa.string()
88
+
89
+ arr = pa.array(TypedSequence([[[1, 2, 3]]], type=Array2D((1, 3), "int64")))
90
+ assert arr.type == Array2DExtensionType((1, 3), "int64")
91
+
92
+ table = pa.Table.from_pydict({
93
+ "image": TypedSequence([[[1, 2, 3]]], type=Array2D((1, 3), "int64"))
94
+ })
95
+ assert table["image"].type == Array2DExtensionType((1, 3), "int64")
96
+
97
+ """
98
+
99
+ def __init__(
100
+ self,
101
+ data: Iterable,
102
+ type: Optional[FeatureType] = None,
103
+ try_type: Optional[FeatureType] = None,
104
+ optimized_int_type: Optional[FeatureType] = None,
105
+ ):
106
+ # assert type is None or try_type is None,
107
+ if type is not None and try_type is not None:
108
+ raise ValueError("You cannot specify both type and try_type")
109
+ # set attributes
110
+ self.data = data
111
+ self.type = type
112
+ self.try_type = try_type # is ignored if it doesn't match the data
113
+ self.optimized_int_type = optimized_int_type
114
+ # when trying a type (is ignored if data is not compatible)
115
+ self.trying_type = self.try_type is not None
116
+ self.trying_int_optimization = optimized_int_type is not None and type is None and try_type is None
117
+ # used to get back the inferred type after __arrow_array__() is called once
118
+ self._inferred_type = None
119
+
120
+ def get_inferred_type(self) -> FeatureType:
121
+ """Return the inferred feature type.
122
+ This is done by converting the sequence to an Arrow array, and getting the corresponding
123
+ feature type.
124
+
125
+ Since building the Arrow array can be expensive, the value of the inferred type is cached
126
+ as soon as pa.array is called on the typed sequence.
127
+
128
+ Returns:
129
+ FeatureType: inferred feature type of the sequence.
130
+ """
131
+ if self._inferred_type is None:
132
+ self._inferred_type = generate_from_arrow_type(pa.array(self).type)
133
+ return self._inferred_type
134
+
135
+ @staticmethod
136
+ def _infer_custom_type_and_encode(data: Iterable) -> Tuple[Iterable, Optional[FeatureType]]:
137
+ """Implement type inference for custom objects like PIL.Image.Image -> Image type.
138
+
139
+ This function is only used for custom python objects that can't be directly passed to build
140
+ an Arrow array. In such cases it infers the feature type to use, and it encodes the data so
141
+ that they can be passed to an Arrow array.
142
+
143
+ Args:
144
+ data (Iterable): array of data to infer the type, e.g. a list of PIL images.
145
+
146
+ Returns:
147
+ Tuple[Iterable, Optional[FeatureType]]: a tuple with:
148
+ - the (possibly encoded) array, if the inferred feature type requires encoding
149
+ - the inferred feature type if the array is made of supported custom objects like
150
+ PIL images, else None.
151
+ """
152
+ if config.PIL_AVAILABLE and "PIL" in sys.modules:
153
+ import PIL.Image
154
+
155
+ non_null_idx, non_null_value = first_non_null_value(data)
156
+ if isinstance(non_null_value, PIL.Image.Image):
157
+ return [Image().encode_example(value) if value is not None else None for value in data], Image()
158
+ return data, None
159
+
160
+ def __arrow_array__(self, type: Optional[pa.DataType] = None):
161
+ """This function is called when calling pa.array(typed_sequence)"""
162
+
163
+ if type is not None:
164
+ raise ValueError("TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)")
165
+ del type # make sure we don't use it
166
+ data = self.data
167
+ # automatic type inference for custom objects
168
+ if self.type is None and self.try_type is None:
169
+ data, self._inferred_type = self._infer_custom_type_and_encode(data)
170
+ if self._inferred_type is None:
171
+ type = self.try_type if self.trying_type else self.type
172
+ else:
173
+ type = self._inferred_type
174
+ pa_type = get_nested_type(type) if type is not None else None
175
+ optimized_int_pa_type = (
176
+ get_nested_type(self.optimized_int_type) if self.optimized_int_type is not None else None
177
+ )
178
+ trying_cast_to_python_objects = False
179
+ try:
180
+ # custom pyarrow types
181
+ if isinstance(pa_type, _ArrayXDExtensionType):
182
+ storage = to_pyarrow_listarray(data, pa_type)
183
+ return pa.ExtensionArray.from_storage(pa_type, storage)
184
+
185
+ # efficient np array to pyarrow array
186
+ if isinstance(data, np.ndarray):
187
+ out = numpy_to_pyarrow_listarray(data)
188
+ elif isinstance(data, list) and data and isinstance(first_non_null_value(data)[1], np.ndarray):
189
+ out = list_of_np_array_to_pyarrow_listarray(data)
190
+ else:
191
+ trying_cast_to_python_objects = True
192
+ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
193
+ # use smaller integer precisions if possible
194
+ if self.trying_int_optimization:
195
+ if pa.types.is_int64(out.type):
196
+ out = out.cast(optimized_int_pa_type)
197
+ elif pa.types.is_list(out.type):
198
+ if pa.types.is_int64(out.type.value_type):
199
+ out = array_cast(out, pa.list_(optimized_int_pa_type))
200
+ elif pa.types.is_list(out.type.value_type) and pa.types.is_int64(out.type.value_type.value_type):
201
+ out = array_cast(out, pa.list_(pa.list_(optimized_int_pa_type)))
202
+ # otherwise we can finally use the user's type
203
+ elif type is not None:
204
+ # We use cast_array_to_feature to support casting to custom types like Audio and Image
205
+ # Also, when trying type "string", we don't want to convert integers or floats to "string".
206
+ # We only do it if trying_type is False - since this is what the user asks for.
207
+ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
208
+ return out
209
+ except (
210
+ TypeError,
211
+ pa.lib.ArrowInvalid,
212
+ pa.lib.ArrowNotImplementedError,
213
+ ) as e: # handle type errors and overflows
214
+ # Ignore ArrowNotImplementedError caused by trying type, otherwise re-raise
215
+ if not self.trying_type and isinstance(e, pa.lib.ArrowNotImplementedError):
216
+ raise
217
+
218
+ if self.trying_type:
219
+ try: # second chance
220
+ if isinstance(data, np.ndarray):
221
+ return numpy_to_pyarrow_listarray(data)
222
+ elif isinstance(data, list) and data and any(isinstance(value, np.ndarray) for value in data):
223
+ return list_of_np_array_to_pyarrow_listarray(data)
224
+ else:
225
+ trying_cast_to_python_objects = True
226
+ return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
227
+ except pa.lib.ArrowInvalid as e:
228
+ if "overflow" in str(e):
229
+ raise OverflowError(
230
+ f"There was an overflow with type {type_(data)}. Try to reduce writer_batch_size to have batches smaller than 2GB.\n({e})"
231
+ ) from None
232
+ elif self.trying_int_optimization and "not in range" in str(e):
233
+ optimized_int_pa_type_str = np.dtype(optimized_int_pa_type.to_pandas_dtype()).name
234
+ logger.info(
235
+ f"Failed to cast a sequence to {optimized_int_pa_type_str}. Falling back to int64."
236
+ )
237
+ return out
238
+ elif trying_cast_to_python_objects and "Could not convert" in str(e):
239
+ out = pa.array(
240
+ cast_to_python_objects(data, only_1d_for_numpy=True, optimize_list_casting=False)
241
+ )
242
+ if type is not None:
243
+ out = cast_array_to_feature(out, type, allow_number_to_str=True)
244
+ return out
245
+ else:
246
+ raise
247
+ elif "overflow" in str(e):
248
+ raise OverflowError(
249
+ f"There was an overflow with type {type_(data)}. Try to reduce writer_batch_size to have batches smaller than 2GB.\n({e})"
250
+ ) from None
251
+ elif self.trying_int_optimization and "not in range" in str(e):
252
+ optimized_int_pa_type_str = np.dtype(optimized_int_pa_type.to_pandas_dtype()).name
253
+ logger.info(f"Failed to cast a sequence to {optimized_int_pa_type_str}. Falling back to int64.")
254
+ return out
255
+ elif trying_cast_to_python_objects and "Could not convert" in str(e):
256
+ out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True, optimize_list_casting=False))
257
+ if type is not None:
258
+ out = cast_array_to_feature(out, type, allow_number_to_str=True)
259
+ return out
260
+ else:
261
+ raise
262
+
263
+
264
+ class OptimizedTypedSequence(TypedSequence):
265
+ def __init__(
266
+ self,
267
+ data,
268
+ type: Optional[FeatureType] = None,
269
+ try_type: Optional[FeatureType] = None,
270
+ col: Optional[str] = None,
271
+ optimized_int_type: Optional[FeatureType] = None,
272
+ ):
273
+ optimized_int_type_by_col = {
274
+ "attention_mask": Value("int8"), # binary tensor
275
+ "special_tokens_mask": Value("int8"),
276
+ "input_ids": Value("int32"), # typical vocab size: 0-50k (max ~500k, never > 1M)
277
+ "token_type_ids": Value(
278
+ "int8"
279
+ ), # binary mask; some (XLNetModel) use an additional token represented by a 2
280
+ }
281
+ if type is None and try_type is None:
282
+ optimized_int_type = optimized_int_type_by_col.get(col, None)
283
+ super().__init__(data, type=type, try_type=try_type, optimized_int_type=optimized_int_type)
284
+
285
+
286
+ class ArrowWriter:
287
+ """Shuffles and writes Examples to Arrow files."""
288
+
289
+ _WRITER_CLASS = pa.RecordBatchStreamWriter
290
+
291
+ def __init__(
292
+ self,
293
+ schema: Optional[pa.Schema] = None,
294
+ features: Optional[Features] = None,
295
+ path: Optional[str] = None,
296
+ stream: Optional[pa.NativeFile] = None,
297
+ fingerprint: Optional[str] = None,
298
+ writer_batch_size: Optional[int] = None,
299
+ hash_salt: Optional[str] = None,
300
+ check_duplicates: Optional[bool] = False,
301
+ disable_nullable: bool = False,
302
+ update_features: bool = False,
303
+ with_metadata: bool = True,
304
+ unit: str = "examples",
305
+ embed_local_files: bool = False,
306
+ storage_options: Optional[dict] = None,
307
+ ):
308
+ if path is None and stream is None:
309
+ raise ValueError("At least one of path and stream must be provided.")
310
+ if features is not None:
311
+ self._features = features
312
+ self._schema = None
313
+ elif schema is not None:
314
+ self._schema: pa.Schema = schema
315
+ self._features = Features.from_arrow_schema(self._schema)
316
+ else:
317
+ self._features = None
318
+ self._schema = None
319
+
320
+ if hash_salt is not None:
321
+ # Create KeyHasher instance using split name as hash salt
322
+ self._hasher = KeyHasher(hash_salt)
323
+ else:
324
+ self._hasher = KeyHasher("")
325
+
326
+ self._check_duplicates = check_duplicates
327
+ self._disable_nullable = disable_nullable
328
+
329
+ if stream is None:
330
+ fs_token_paths = fsspec.get_fs_token_paths(path, storage_options=storage_options)
331
+ self._fs: fsspec.AbstractFileSystem = fs_token_paths[0]
332
+ self._path = (
333
+ fs_token_paths[2][0]
334
+ if not is_remote_filesystem(self._fs)
335
+ else self._fs.unstrip_protocol(fs_token_paths[2][0])
336
+ )
337
+ self.stream = self._fs.open(fs_token_paths[2][0], "wb")
338
+ self._closable_stream = True
339
+ else:
340
+ self._fs = None
341
+ self._path = None
342
+ self.stream = stream
343
+ self._closable_stream = False
344
+
345
+ self.fingerprint = fingerprint
346
+ self.disable_nullable = disable_nullable
347
+ self.writer_batch_size = writer_batch_size or config.DEFAULT_MAX_BATCH_SIZE
348
+ self.update_features = update_features
349
+ self.with_metadata = with_metadata
350
+ self.unit = unit
351
+ self.embed_local_files = embed_local_files
352
+
353
+ self._num_examples = 0
354
+ self._num_bytes = 0
355
+ self.current_examples: List[Tuple[Dict[str, Any], str]] = []
356
+ self.current_rows: List[pa.Table] = []
357
+ self.pa_writer: Optional[pa.RecordBatchStreamWriter] = None
358
+ self.hkey_record = []
359
+
360
+ def __len__(self):
361
+ """Return the number of written and staged examples"""
362
+ return self._num_examples + len(self.current_examples) + len(self.current_rows)
363
+
364
+ def __enter__(self):
365
+ return self
366
+
367
+ def __exit__(self, exc_type, exc_val, exc_tb):
368
+ self.close()
369
+
370
+ def close(self):
371
+ # Try closing if opened; if closed: pyarrow.lib.ArrowInvalid: Invalid operation on closed file
372
+ if self.pa_writer: # it might be None
373
+ try:
374
+ self.pa_writer.close()
375
+ except Exception: # pyarrow.lib.ArrowInvalid, OSError
376
+ pass
377
+ if self._closable_stream and not self.stream.closed:
378
+ self.stream.close() # This also closes self.pa_writer if it is opened
379
+
380
+ def _build_writer(self, inferred_schema: pa.Schema):
381
+ schema = self.schema
382
+ inferred_features = Features.from_arrow_schema(inferred_schema)
383
+ if self._features is not None:
384
+ if self.update_features: # keep original features if they match, or update them
385
+ fields = {field.name: field for field in self._features.type}
386
+ for inferred_field in inferred_features.type:
387
+ name = inferred_field.name
388
+ if name in fields:
389
+ if inferred_field == fields[name]:
390
+ inferred_features[name] = self._features[name]
391
+ self._features = inferred_features
392
+ schema: pa.Schema = inferred_schema
393
+ else:
394
+ self._features = inferred_features
395
+ schema: pa.Schema = inferred_features.arrow_schema
396
+ if self.disable_nullable:
397
+ schema = pa.schema(pa.field(field.name, field.type, nullable=False) for field in schema)
398
+ if self.with_metadata:
399
+ schema = schema.with_metadata(self._build_metadata(DatasetInfo(features=self._features), self.fingerprint))
400
+ else:
401
+ schema = schema.with_metadata({})
402
+ self._schema = schema
403
+ self.pa_writer = self._WRITER_CLASS(self.stream, schema)
404
+
405
+ @property
406
+ def schema(self):
407
+ _schema = (
408
+ self._schema
409
+ if self._schema is not None
410
+ else (pa.schema(self._features.type) if self._features is not None else None)
411
+ )
412
+ if self._disable_nullable and _schema is not None:
413
+ _schema = pa.schema(pa.field(field.name, field.type, nullable=False) for field in _schema)
414
+ return _schema if _schema is not None else []
415
+
416
+ @staticmethod
417
+ def _build_metadata(info: DatasetInfo, fingerprint: Optional[str] = None) -> Dict[str, str]:
418
+ info_keys = ["features"] # we can add support for more DatasetInfo keys in the future
419
+ info_as_dict = asdict(info)
420
+ metadata = {}
421
+ metadata["info"] = {key: info_as_dict[key] for key in info_keys}
422
+ if fingerprint is not None:
423
+ metadata["fingerprint"] = fingerprint
424
+ return {"huggingface": json.dumps(metadata)}
425
+
426
+ def write_examples_on_file(self):
427
+ """Write stored examples from the write-pool of examples. It makes a table out of the examples and writes it."""
428
+ if not self.current_examples:
429
+ return
430
+ # preserve the order of the columns
431
+ if self.schema:
432
+ schema_cols = set(self.schema.names)
433
+ examples_cols = self.current_examples[0][0].keys() # .keys() preserves the order (unlike set)
434
+ common_cols = [col for col in self.schema.names if col in examples_cols]
435
+ extra_cols = [col for col in examples_cols if col not in schema_cols]
436
+ cols = common_cols + extra_cols
437
+ else:
438
+ cols = list(self.current_examples[0][0])
439
+ batch_examples = {}
440
+ for col in cols:
441
+ # We use row[0][col] since current_examples contains (example, key) tuples.
442
+ # Moreover, examples could be Arrow arrays of 1 element.
443
+ # This can happen in `.map()` when we want to re-write the same Arrow data
444
+ if all(isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) for row in self.current_examples):
445
+ arrays = [row[0][col] for row in self.current_examples]
446
+ arrays = [
447
+ chunk
448
+ for array in arrays
449
+ for chunk in (array.chunks if isinstance(array, pa.ChunkedArray) else [array])
450
+ ]
451
+ batch_examples[col] = pa.concat_arrays(arrays)
452
+ else:
453
+ batch_examples[col] = [
454
+ row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]
455
+ for row in self.current_examples
456
+ ]
457
+ self.write_batch(batch_examples=batch_examples)
458
+ self.current_examples = []
459
+
460
+ def write_rows_on_file(self):
461
+ """Write stored rows from the write-pool of rows. It concatenates the single-row tables and it writes the resulting table."""
462
+ if not self.current_rows:
463
+ return
464
+ table = pa.concat_tables(self.current_rows)
465
+ self.write_table(table)
466
+ self.current_rows = []
467
+
468
+ def write(
469
+ self,
470
+ example: Dict[str, Any],
471
+ key: Optional[Union[str, int, bytes]] = None,
472
+ writer_batch_size: Optional[int] = None,
473
+ ):
474
+ """Add a given (Example,Key) pair to the write-pool of examples which is written to file.
475
+
476
+ Args:
477
+ example: the Example to add.
478
+ key: Optional, a unique identifier (str, int or bytes) associated with each example.
479
+ """
480
+ # Utilize the keys and duplicate checking when `self._check_duplicates` is passed True
481
+ if self._check_duplicates:
482
+ # Create unique hash from key and store as (key, example) pairs
483
+ hash = self._hasher.hash(key)
484
+ self.current_examples.append((example, hash))
485
+ # Maintain record of keys and their respective hashes for checking duplicates
486
+ self.hkey_record.append((hash, key))
487
+ else:
488
+ # Store example as a tuple so as to keep the structure of `self.current_examples` uniform
489
+ self.current_examples.append((example, ""))
490
+
491
+ if writer_batch_size is None:
492
+ writer_batch_size = self.writer_batch_size
493
+ if writer_batch_size is not None and len(self.current_examples) >= writer_batch_size:
494
+ if self._check_duplicates:
495
+ self.check_duplicate_keys()
496
+ # Re-initializing to an empty list for the next batch
497
+ self.hkey_record = []
498
+
499
+ self.write_examples_on_file()
500
+
501
+ def check_duplicate_keys(self):
502
+ """Raises error if duplicates found in a batch"""
503
+ tmp_record = set()
504
+ for hash, key in self.hkey_record:
505
+ if hash in tmp_record:
506
+ duplicate_key_indices = [
507
+ str(self._num_examples + index)
508
+ for index, (duplicate_hash, _) in enumerate(self.hkey_record)
509
+ if duplicate_hash == hash
510
+ ]
511
+
512
+ raise DuplicatedKeysError(key, duplicate_key_indices)
513
+ else:
514
+ tmp_record.add(hash)
515
+
516
+ def write_row(self, row: pa.Table, writer_batch_size: Optional[int] = None):
517
+ """Add a given single-row Table to the write-pool of rows which is written to file.
518
+
519
+ Args:
520
+ row: the row to add.
521
+ """
522
+ if len(row) != 1:
523
+ raise ValueError(f"Only single-row pyarrow tables are allowed but got table with {len(row)} rows.")
524
+ self.current_rows.append(row)
525
+ if writer_batch_size is None:
526
+ writer_batch_size = self.writer_batch_size
527
+ if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size:
528
+ self.write_rows_on_file()
529
+
530
+ def write_batch(
531
+ self,
532
+ batch_examples: Dict[str, List],
533
+ writer_batch_size: Optional[int] = None,
534
+ ):
535
+ """Write a batch of Example to file.
536
+ Ignores the batch if it appears to be empty,
537
+ preventing a potential schema update of unknown types.
538
+
539
+ Args:
540
+ batch_examples: the batch of examples to add.
541
+ """
542
+ if batch_examples and len(next(iter(batch_examples.values()))) == 0:
543
+ return
544
+ features = None if self.pa_writer is None and self.update_features else self._features
545
+ try_features = self._features if self.pa_writer is None and self.update_features else None
546
+ arrays = []
547
+ inferred_features = Features()
548
+ # preserve the order of the columns
549
+ if self.schema:
550
+ schema_cols = set(self.schema.names)
551
+ batch_cols = batch_examples.keys() # .keys() preserves the order (unlike set)
552
+ common_cols = [col for col in self.schema.names if col in batch_cols]
553
+ extra_cols = [col for col in batch_cols if col not in schema_cols]
554
+ cols = common_cols + extra_cols
555
+ else:
556
+ cols = list(batch_examples)
557
+ for col in cols:
558
+ col_values = batch_examples[col]
559
+ col_type = features[col] if features else None
560
+ if isinstance(col_values, (pa.Array, pa.ChunkedArray)):
561
+ array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
562
+ arrays.append(array)
563
+ inferred_features[col] = generate_from_arrow_type(col_values.type)
564
+ else:
565
+ col_try_type = try_features[col] if try_features is not None and col in try_features else None
566
+ typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)
567
+ arrays.append(pa.array(typed_sequence))
568
+ inferred_features[col] = typed_sequence.get_inferred_type()
569
+ schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
570
+ pa_table = pa.Table.from_arrays(arrays, schema=schema)
571
+ self.write_table(pa_table, writer_batch_size)
572
+
573
+ def write_table(self, pa_table: pa.Table, writer_batch_size: Optional[int] = None):
574
+ """Write a Table to file.
575
+
576
+ Args:
577
+ pa_table: the Table to add.
578
+ """
579
+ if writer_batch_size is None:
580
+ writer_batch_size = self.writer_batch_size
581
+ if self.pa_writer is None:
582
+ self._build_writer(inferred_schema=pa_table.schema)
583
+ pa_table = pa_table.combine_chunks()
584
+ pa_table = table_cast(pa_table, self._schema)
585
+ if self.embed_local_files:
586
+ pa_table = embed_table_storage(pa_table)
587
+ self._num_bytes += pa_table.nbytes
588
+ self._num_examples += pa_table.num_rows
589
+ self.pa_writer.write_table(pa_table, writer_batch_size)
590
+
591
+ def finalize(self, close_stream=True):
592
+ self.write_rows_on_file()
593
+ # In case len(current_examples) < writer_batch_size, but the user calls finalize()
594
+ if self._check_duplicates:
595
+ self.check_duplicate_keys()
596
+ # Re-initializing to an empty list for the next batch
597
+ self.hkey_record = []
598
+ self.write_examples_on_file()
599
+ # If schema is known, infer features even if no examples were written
600
+ if self.pa_writer is None and self.schema:
601
+ self._build_writer(self.schema)
602
+ if self.pa_writer is not None:
603
+ self.pa_writer.close()
604
+ self.pa_writer = None
605
+ if close_stream:
606
+ self.stream.close()
607
+ else:
608
+ if close_stream:
609
+ self.stream.close()
610
+ raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
611
+ logger.debug(
612
+ f"Done writing {self._num_examples} {self.unit} in {self._num_bytes} bytes {self._path if self._path else ''}."
613
+ )
614
+ return self._num_examples, self._num_bytes
615
+
616
+
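For orientation, here is a minimal usage sketch of the write/finalize flow above (not part of the file; the output path and feature spec are invented). With the default `with_metadata=True`, the written schema also carries the `huggingface` metadata key produced by `_build_metadata`.

```python
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

# Hypothetical output path and toy features, for illustration only.
features = Features({"id": Value("int64"), "text": Value("string")})

with ArrowWriter(features=features, path="toy.arrow") as writer:
    # Each write() buffers an (example, key) pair; full batches are flushed automatically.
    writer.write({"id": 0, "text": "hello"})
    writer.write({"id": 1, "text": "world"})
    # finalize() flushes the remaining buffer and returns (num_examples, num_bytes).
    num_examples, num_bytes = writer.finalize()

print(num_examples, num_bytes)  # 2, <size of the written Arrow data>
```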
617
+ class ParquetWriter(ArrowWriter):
618
+ _WRITER_CLASS = pq.ParquetWriter
619
+
620
+
621
+ class BeamWriter:
622
+ """
623
+ Shuffles and writes Examples to Arrow files.
624
+ The Arrow files are converted from Parquet files that are the output of Apache Beam pipelines.
625
+ """
626
+
627
+ def __init__(
628
+ self,
629
+ features: Optional[Features] = None,
630
+ schema: Optional[pa.Schema] = None,
631
+ path: Optional[str] = None,
632
+ namespace: Optional[str] = None,
633
+ cache_dir: Optional[str] = None,
634
+ ):
635
+ if features is None and schema is None:
636
+ raise ValueError("At least one of features and schema must be provided.")
637
+ if path is None:
638
+ raise ValueError("Path must be provided.")
639
+
640
+ if features is not None:
641
+ self._features: Features = features
642
+ self._schema: pa.Schema = features.arrow_schema
643
+ else:
644
+ self._schema: pa.Schema = schema
645
+ self._features: Features = Features.from_arrow_schema(schema)
646
+
647
+ self._path = path
648
+ self._parquet_path = os.path.splitext(path)[0] # remove extension
649
+ self._namespace = namespace or "default"
650
+ self._num_examples = None
651
+ self._cache_dir = cache_dir or config.HF_DATASETS_CACHE
652
+
653
+ def write_from_pcollection(self, pcoll_examples):
654
+ """Add the final steps of the beam pipeline: write to parquet files."""
655
+ import apache_beam as beam
656
+
657
+ def inc_num_examples(example):
658
+ beam.metrics.Metrics.counter(self._namespace, "num_examples").inc()
659
+
660
+ # count examples
661
+ _ = pcoll_examples | "Count N. Examples" >> beam.Map(inc_num_examples)
662
+
663
+ # save dataset
664
+ return (
665
+ pcoll_examples
666
+ | "Get values" >> beam.Values()
667
+ | "Save to parquet"
668
+ >> beam.io.parquetio.WriteToParquet(
669
+ self._parquet_path, self._schema, shard_name_template="-SSSSS-of-NNNNN.parquet"
670
+ )
671
+ )
672
+
673
+ def finalize(self, metrics_query_result: dict):
674
+ """
675
+ Run after the pipeline has finished.
676
+ It converts the resulting parquet files to arrow and completes the info from the pipeline metrics.
677
+
678
+ Args:
679
+ metrics_query_result: `dict` obtained from pipeline_results.metrics().query(m_filter). Make sure
680
+ that the filter keeps only the metrics for the considered split, under the namespace `split_name`.
681
+ """
682
+
683
+ # Beam FileSystems require the system's path separator in older versions
684
+ fs, _, [parquet_path] = fsspec.get_fs_token_paths(self._parquet_path)
685
+ parquet_path = str(Path(parquet_path)) if not is_remote_filesystem(fs) else fs.unstrip_protocol(parquet_path)
686
+
687
+ shards = fs.glob(parquet_path + "*.parquet")
688
+ num_bytes = sum(fs.sizes(shards))
689
+ shard_lengths = get_parquet_lengths(shards)
690
+
691
+ # Convert to arrow
692
+ if self._path.endswith(".arrow"):
693
+ logger.info(f"Converting parquet files {self._parquet_path} to arrow {self._path}")
694
+ try: # stream conversion
695
+ num_bytes = 0
696
+ for shard in hf_tqdm(shards, unit="shards"):
697
+ with fs.open(shard, "rb") as source:
698
+ with fs.open(shard.replace(".parquet", ".arrow"), "wb") as destination:
699
+ shard_num_bytes, _ = parquet_to_arrow(source, destination)
700
+ num_bytes += shard_num_bytes
701
+ except OSError as e: # broken pipe can happen if the connection is unstable, do local conversion instead
702
+ if e.errno != errno.EPIPE: # not a broken pipe
703
+ raise
704
+ logger.warning(
705
+ "Broken Pipe during stream conversion from parquet to arrow. Using local convert instead"
706
+ )
707
+ local_convert_dir = os.path.join(self._cache_dir, "beam_convert")
708
+ os.makedirs(local_convert_dir, exist_ok=True)
709
+ num_bytes = 0
710
+ for shard in hf_tqdm(shards, unit="shards"):
711
+ local_parquet_path = os.path.join(local_convert_dir, hash_url_to_filename(shard) + ".parquet")
712
+ fs.download(shard, local_parquet_path)
713
+ local_arrow_path = local_parquet_path.replace(".parquet", ".arrow")
714
+ shard_num_bytes, _ = parquet_to_arrow(local_parquet_path, local_arrow_path)
715
+ num_bytes += shard_num_bytes
716
+ remote_arrow_path = shard.replace(".parquet", ".arrow")
717
+ fs.upload(local_arrow_path, remote_arrow_path)
718
+
719
+ # Save metrics
720
+ counters_dict = {metric.key.metric.name: metric.result for metric in metrics_query_result["counters"]}
721
+ self._num_examples = counters_dict["num_examples"]
722
+ self._num_bytes = num_bytes
723
+ self._shard_lengths = shard_lengths
724
+ return self._num_examples, self._num_bytes
725
+
726
+
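A rough sketch of how `BeamWriter` plugs into a Beam pipeline; the schema, paths and toy examples below are invented for illustration, and the `finalize` step is only hinted at in the trailing comment.

```python
import apache_beam as beam
import pyarrow as pa

from datasets.arrow_writer import BeamWriter

# Hypothetical schema and output locations.
schema = pa.schema({"id": pa.int64(), "text": pa.string()})
writer = BeamWriter(schema=schema, path="beam_out/train.arrow", namespace="train", cache_dir="beam_cache")

with beam.Pipeline() as pipeline:
    # write_from_pcollection expects (key, example) pairs; it keeps only the example values.
    examples = pipeline | "Create" >> beam.Create([(0, {"id": 0, "text": "a"}), (1, {"id": 1, "text": "b"})])
    writer.write_from_pcollection(examples)

# After the pipeline has run, writer.finalize(metrics_query_result) converts the parquet
# shards to arrow and fills in the example/byte counts from the pipeline metrics.
```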
727
+ def get_parquet_lengths(sources) -> List[int]:
728
+ shard_lengths = []
729
+ for source in hf_tqdm(sources, unit="parquet files"):
730
+ parquet_file = pa.parquet.ParquetFile(source)
731
+ shard_lengths.append(parquet_file.metadata.num_rows)
732
+ return shard_lengths
733
+
734
+
735
+ def parquet_to_arrow(source, destination) -> List[int]:
736
+ """Convert parquet file to arrow file. Inputs can be str paths or file-like objects"""
737
+ stream = None if isinstance(destination, str) else destination
738
+ parquet_file = pa.parquet.ParquetFile(source)
739
+ # Beam can create empty Parquet files, so we need to pass the source Parquet file's schema
740
+ with ArrowWriter(schema=parquet_file.schema_arrow, path=destination, stream=stream) as writer:
741
+ for record_batch in parquet_file.iter_batches():
742
+ pa_table = pa.Table.from_batches([record_batch])
743
+ writer.write_table(pa_table)
744
+ num_examples, num_bytes = writer.finalize()  # finalize() returns (num_examples, num_bytes)
745
+ return num_bytes, num_examples
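A short usage sketch for `parquet_to_arrow`; the shard names are hypothetical, and `source` can be any readable Parquet file or file-like object.

```python
from datasets.arrow_writer import parquet_to_arrow

# Hypothetical shard names; the destination can also be a writable file-like object.
num_bytes, num_examples = parquet_to_arrow("shard-00000.parquet", "shard-00000.arrow")
print(f"wrote {num_examples} examples ({num_bytes} bytes)")
```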
env-llmeval/lib/python3.10/site-packages/datasets/builder.py ADDED
The diff for this file is too large to render. See raw diff
 
env-llmeval/lib/python3.10/site-packages/datasets/combine.py ADDED
@@ -0,0 +1,215 @@
1
+ from typing import List, Optional, TypeVar
2
+
3
+ from .arrow_dataset import Dataset, _concatenate_map_style_datasets, _interleave_map_style_datasets
4
+ from .dataset_dict import DatasetDict, IterableDatasetDict
5
+ from .info import DatasetInfo
6
+ from .iterable_dataset import IterableDataset, _concatenate_iterable_datasets, _interleave_iterable_datasets
7
+ from .splits import NamedSplit
8
+ from .utils import logging
9
+ from .utils.py_utils import Literal
10
+
11
+
12
+ logger = logging.get_logger(__name__)
13
+
14
+
15
+ DatasetType = TypeVar("DatasetType", Dataset, IterableDataset)
16
+
17
+
18
+ def interleave_datasets(
19
+ datasets: List[DatasetType],
20
+ probabilities: Optional[List[float]] = None,
21
+ seed: Optional[int] = None,
22
+ info: Optional[DatasetInfo] = None,
23
+ split: Optional[NamedSplit] = None,
24
+ stopping_strategy: Literal["first_exhausted", "all_exhausted"] = "first_exhausted",
25
+ ) -> DatasetType:
26
+ """
27
+ Interleave several datasets (sources) into a single dataset.
28
+ The new dataset is constructed by alternating between the sources to get the examples.
29
+
30
+ You can use this function on a list of [`Dataset`] objects, or on a list of [`IterableDataset`] objects.
31
+
32
+ - If `probabilities` is `None` (default) the new dataset is constructed by cycling between each source to get the examples.
33
+ - If `probabilities` is not `None`, the new dataset is constructed by getting examples from a random source at a time according to the provided probabilities.
34
+
35
+ The resulting dataset ends when one of the source datasets runs out of examples, except when `stopping_strategy` is `"all_exhausted"`,
36
+ in which case the resulting dataset ends when all datasets have run out of examples at least once.
37
+
38
+ Note for iterable datasets:
39
+
40
+ In a distributed setup or in PyTorch DataLoader workers, the stopping strategy is applied per process.
41
+ Therefore the "first_exhausted" strategy on an sharded iterable dataset can generate less samples in total (up to 1 missing sample per subdataset per worker).
42
+
43
+ Args:
44
+ datasets (`List[Dataset]` or `List[IterableDataset]`):
45
+ List of datasets to interleave.
46
+ probabilities (`List[float]`, *optional*, defaults to `None`):
47
+ If specified, the new dataset is constructed by sampling
48
+ examples from one source at a time according to these probabilities.
49
+ seed (`int`, *optional*, defaults to `None`):
50
+ The random seed used to choose a source for each example.
51
+ info ([`DatasetInfo`], *optional*):
52
+ Dataset information, like description, citation, etc.
53
+ <Added version="2.4.0"/>
54
+ split ([`NamedSplit`], *optional*):
55
+ Name of the dataset split.
56
+ <Added version="2.4.0"/>
57
+ stopping_strategy (`str`, defaults to `first_exhausted`):
58
+ Two strategies are supported for now: `first_exhausted` and `all_exhausted`.
59
+ By default, `first_exhausted` is an undersampling strategy, i.e. the dataset construction is stopped as soon as one dataset has run out of samples.
60
+ If the strategy is `all_exhausted`, we use an oversampling strategy, i.e. the dataset construction is stopped as soon as every sample of every dataset has been added at least once.
61
+ Note that if the strategy is `all_exhausted`, the interleaved dataset size can get enormous:
62
+ - with no probabilities, the resulting dataset will have `max_length_datasets*nb_dataset` samples.
63
+ - with given probabilities, the resulting dataset will have more samples if some datasets have a very low probability of being visited.
64
+ Returns:
65
+ [`Dataset`] or [`IterableDataset`]: Return type depends on the input `datasets`
66
+ parameter. `Dataset` if the input is a list of `Dataset`, `IterableDataset` if the input is a list of
67
+ `IterableDataset`.
68
+
69
+ Example:
70
+
71
+ For regular datasets (map-style):
72
+
73
+ ```python
74
+ >>> from datasets import Dataset, interleave_datasets
75
+ >>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
76
+ >>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
77
+ >>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
78
+ >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
79
+ >>> dataset["a"]
80
+ [10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22]
81
+ >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
82
+ >>> dataset["a"]
83
+ [10, 0, 11, 1, 2]
84
+ >>> dataset = interleave_datasets([d1, d2, d3])
85
+ >>> dataset["a"]
86
+ [0, 10, 20, 1, 11, 21, 2, 12, 22]
87
+ >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
88
+ >>> dataset["a"]
89
+ [0, 10, 20, 1, 11, 21, 2, 12, 22]
90
+ >>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
91
+ >>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
92
+ >>> d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
93
+ >>> dataset = interleave_datasets([d1, d2, d3])
94
+ >>> dataset["a"]
95
+ [0, 10, 20, 1, 11, 21, 2, 12, 22]
96
+ >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
97
+ >>> dataset["a"]
98
+ [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]
99
+ >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
100
+ >>> dataset["a"]
101
+ [10, 0, 11, 1, 2]
102
+ >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
103
+ >>> dataset["a"]
104
+ [10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24]
105
+ For datasets in streaming mode (iterable):
106
+
107
+ >>> from datasets import load_dataset, interleave_datasets
108
+ >>> d1 = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
109
+ >>> d2 = load_dataset("oscar", "unshuffled_deduplicated_fr", split="train", streaming=True)
110
+ >>> dataset = interleave_datasets([d1, d2])
111
+ >>> iterator = iter(dataset)
112
+ >>> next(iterator)
113
+ {'text': 'Mtendere Village was inspired by the vision...}
114
+ >>> next(iterator)
115
+ {'text': "MΓ©dia de dΓ©bat d'idΓ©es, de culture...}
116
+ ```
117
+ """
118
+ from .arrow_dataset import Dataset
119
+ from .iterable_dataset import IterableDataset
120
+
121
+ if not datasets:
122
+ raise ValueError("Unable to interleave an empty list of datasets.")
123
+ for i, dataset in enumerate(datasets):
124
+ if not isinstance(dataset, (Dataset, IterableDataset)):
125
+ if isinstance(dataset, (DatasetDict, IterableDatasetDict)):
126
+ if not dataset:
127
+ raise ValueError(
128
+ f"Expected a list of Dataset objects or a list of IterableDataset objects, but element at position {i} "
129
+ "is an empty dataset dictionary."
130
+ )
131
+ raise ValueError(
132
+ f"Dataset at position {i} has at least one split: {list(dataset)}\n"
133
+ f"Please pick one to interleave with the other datasets, for example: dataset['{next(iter(dataset))}']"
134
+ )
135
+ raise ValueError(
136
+ f"Expected a list of Dataset objects or a list of IterableDataset objects, but element at position {i} is a {type(dataset).__name__}."
137
+ )
138
+ if i == 0:
139
+ dataset_type, other_type = (
140
+ (Dataset, IterableDataset) if isinstance(dataset, Dataset) else (IterableDataset, Dataset)
141
+ )
142
+ elif not isinstance(dataset, dataset_type):
143
+ raise ValueError(
144
+ f"Unable to interleave a {dataset_type.__name__} (at position 0) with a {other_type.__name__} (at position {i}). Expected a list of Dataset objects or a list of IterableDataset objects."
145
+ )
146
+ if stopping_strategy not in ["first_exhausted", "all_exhausted"]:
147
+ raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.")
148
+ if dataset_type is Dataset:
149
+ return _interleave_map_style_datasets(
150
+ datasets, probabilities, seed, info=info, split=split, stopping_strategy=stopping_strategy
151
+ )
152
+ else:
153
+ return _interleave_iterable_datasets(
154
+ datasets, probabilities, seed, info=info, split=split, stopping_strategy=stopping_strategy
155
+ )
156
+
157
+
158
+ def concatenate_datasets(
159
+ dsets: List[DatasetType],
160
+ info: Optional[DatasetInfo] = None,
161
+ split: Optional[NamedSplit] = None,
162
+ axis: int = 0,
163
+ ) -> DatasetType:
164
+ """
165
+ Converts a list of [`Dataset`] with the same schema into a single [`Dataset`].
166
+
167
+ Args:
168
+ dsets (`List[datasets.Dataset]`):
169
+ List of Datasets to concatenate.
170
+ info (`DatasetInfo`, *optional*):
171
+ Dataset information, like description, citation, etc.
172
+ split (`NamedSplit`, *optional*):
173
+ Name of the dataset split.
174
+ axis (`{0, 1}`, defaults to `0`):
175
+ Axis to concatenate over, where `0` means over rows (vertically) and `1` means over columns
176
+ (horizontally).
177
+
178
+ <Added version="1.6.0"/>
179
+
180
+ Example:
181
+
182
+ ```py
183
+ >>> ds3 = concatenate_datasets([ds1, ds2])
184
+ ```
185
+ """
186
+
187
+ if not dsets:
188
+ raise ValueError("Unable to concatenate an empty list of datasets.")
189
+ for i, dataset in enumerate(dsets):
190
+ if not isinstance(dataset, (Dataset, IterableDataset)):
191
+ if isinstance(dataset, (DatasetDict, IterableDatasetDict)):
192
+ if not dataset:
193
+ raise ValueError(
194
+ f"Expected a list of Dataset objects or a list of IterableDataset objects, but element at position {i} "
195
+ "is an empty dataset dictionary."
196
+ )
197
+ raise ValueError(
198
+ f"Dataset at position {i} has at least one split: {list(dataset)}\n"
199
+ f"Please pick one to interleave with the other datasets, for example: dataset['{next(iter(dataset))}']"
200
+ )
201
+ raise ValueError(
202
+ f"Expected a list of Dataset objects or a list of IterableDataset objects, but element at position {i} is a {type(dataset).__name__}."
203
+ )
204
+ if i == 0:
205
+ dataset_type, other_type = (
206
+ (Dataset, IterableDataset) if isinstance(dataset, Dataset) else (IterableDataset, Dataset)
207
+ )
208
+ elif not isinstance(dataset, dataset_type):
209
+ raise ValueError(
210
+ f"Unable to interleave a {dataset_type.__name__} (at position 0) with a {other_type.__name__} (at position {i}). Expected a list of Dataset objects or a list of IterableDataset objects."
211
+ )
212
+ if dataset_type is Dataset:
213
+ return _concatenate_map_style_datasets(dsets, info=info, split=split, axis=axis)
214
+ else:
215
+ return _concatenate_iterable_datasets(dsets, info=info, split=split, axis=axis)
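To complement the docstring example above, a small sketch of the `axis=1` case, which stacks the columns of two equally sized datasets side by side (toy data invented for illustration):

```python
from datasets import Dataset, concatenate_datasets

ds_text = Dataset.from_dict({"text": ["a", "b", "c"]})
ds_label = Dataset.from_dict({"label": [0, 1, 0]})

# axis=0 would append rows; axis=1 adds the columns of ds_label next to those of ds_text.
ds = concatenate_datasets([ds_text, ds_label], axis=1)
print(ds.column_names)  # ['text', 'label']
```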
env-llmeval/lib/python3.10/site-packages/datasets/config.py ADDED
@@ -0,0 +1,259 @@
1
+ import importlib
2
+ import importlib.metadata
3
+ import logging
4
+ import os
5
+ import platform
6
+ from pathlib import Path
7
+ from typing import Optional
8
+
9
+ from packaging import version
10
+
11
+
12
+ logger = logging.getLogger(__name__.split(".", 1)[0]) # to avoid circular import from .utils.logging
13
+
14
+ # Datasets
15
+ S3_DATASETS_BUCKET_PREFIX = "https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets"
16
+ CLOUDFRONT_DATASETS_DISTRIB_PREFIX = "https://cdn-datasets.huggingface.co/datasets/datasets"
17
+ REPO_DATASETS_URL = "https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}"
18
+
19
+ # Metrics
20
+ S3_METRICS_BUCKET_PREFIX = "https://s3.amazonaws.com/datasets.huggingface.co/datasets/metrics"
21
+ CLOUDFRONT_METRICS_DISTRIB_PREFIX = "https://cdn-datasets.huggingface.co/datasets/metric"
22
+ REPO_METRICS_URL = "https://raw.githubusercontent.com/huggingface/datasets/{revision}/metrics/{path}/{name}"
23
+
24
+ # Hub
25
+ HF_ENDPOINT = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
26
+ HUB_DATASETS_URL = HF_ENDPOINT + "/datasets/{repo_id}/resolve/{revision}/{path}"
27
+ HUB_DATASETS_HFFS_URL = "hf://datasets/{repo_id}@{revision}/{path}"
28
+ HUB_DEFAULT_VERSION = "main"
29
+
30
+ PY_VERSION = version.parse(platform.python_version())
31
+
32
+ # General environment variables accepted values for booleans
33
+ ENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"}
34
+ ENV_VARS_FALSE_VALUES = {"0", "OFF", "NO", "FALSE"}
35
+ ENV_VARS_TRUE_AND_AUTO_VALUES = ENV_VARS_TRUE_VALUES.union({"AUTO"})
36
+ ENV_VARS_FALSE_AND_AUTO_VALUES = ENV_VARS_FALSE_VALUES.union({"AUTO"})
37
+
38
+
39
+ # Imports
40
+ DILL_VERSION = version.parse(importlib.metadata.version("dill"))
41
+ FSSPEC_VERSION = version.parse(importlib.metadata.version("fsspec"))
42
+ PANDAS_VERSION = version.parse(importlib.metadata.version("pandas"))
43
+ PYARROW_VERSION = version.parse(importlib.metadata.version("pyarrow"))
44
+ HF_HUB_VERSION = version.parse(importlib.metadata.version("huggingface_hub"))
45
+
46
+ USE_TF = os.environ.get("USE_TF", "AUTO").upper()
47
+ USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()
48
+ USE_JAX = os.environ.get("USE_JAX", "AUTO").upper()
49
+
50
+ TORCH_VERSION = "N/A"
51
+ TORCH_AVAILABLE = False
52
+
53
+ if USE_TORCH in ENV_VARS_TRUE_AND_AUTO_VALUES and USE_TF not in ENV_VARS_TRUE_VALUES:
54
+ TORCH_AVAILABLE = importlib.util.find_spec("torch") is not None
55
+ if TORCH_AVAILABLE:
56
+ try:
57
+ TORCH_VERSION = version.parse(importlib.metadata.version("torch"))
58
+ logger.info(f"PyTorch version {TORCH_VERSION} available.")
59
+ except importlib.metadata.PackageNotFoundError:
60
+ pass
61
+ else:
62
+ logger.info("Disabling PyTorch because USE_TF is set")
63
+
64
+ TF_VERSION = "N/A"
65
+ TF_AVAILABLE = False
66
+
67
+ if USE_TF in ENV_VARS_TRUE_AND_AUTO_VALUES and USE_TORCH not in ENV_VARS_TRUE_VALUES:
68
+ TF_AVAILABLE = importlib.util.find_spec("tensorflow") is not None
69
+ if TF_AVAILABLE:
70
+ # For the metadata, we have to look for both tensorflow and tensorflow-cpu
71
+ for package in [
72
+ "tensorflow",
73
+ "tensorflow-cpu",
74
+ "tensorflow-gpu",
75
+ "tf-nightly",
76
+ "tf-nightly-cpu",
77
+ "tf-nightly-gpu",
78
+ "intel-tensorflow",
79
+ "tensorflow-rocm",
80
+ "tensorflow-macos",
81
+ ]:
82
+ try:
83
+ TF_VERSION = version.parse(importlib.metadata.version(package))
84
+ except importlib.metadata.PackageNotFoundError:
85
+ continue
86
+ else:
87
+ break
88
+ else:
89
+ TF_AVAILABLE = False
90
+ if TF_AVAILABLE:
91
+ if TF_VERSION.major < 2:
92
+ logger.info(f"TensorFlow found but with version {TF_VERSION}. `datasets` requires version 2 minimum.")
93
+ TF_AVAILABLE = False
94
+ else:
95
+ logger.info(f"TensorFlow version {TF_VERSION} available.")
96
+ else:
97
+ logger.info("Disabling Tensorflow because USE_TORCH is set")
98
+
99
+
100
+ JAX_VERSION = "N/A"
101
+ JAX_AVAILABLE = False
102
+
103
+ if USE_JAX in ENV_VARS_TRUE_AND_AUTO_VALUES:
104
+ JAX_AVAILABLE = importlib.util.find_spec("jax") is not None and importlib.util.find_spec("jaxlib") is not None
105
+ if JAX_AVAILABLE:
106
+ try:
107
+ JAX_VERSION = version.parse(importlib.metadata.version("jax"))
108
+ logger.info(f"JAX version {JAX_VERSION} available.")
109
+ except importlib.metadata.PackageNotFoundError:
110
+ pass
111
+ else:
112
+ logger.info("Disabling JAX because USE_JAX is set to False")
113
+
114
+
115
+ USE_BEAM = os.environ.get("USE_BEAM", "AUTO").upper()
116
+ BEAM_VERSION = "N/A"
117
+ BEAM_AVAILABLE = False
118
+ if USE_BEAM in ENV_VARS_TRUE_AND_AUTO_VALUES:
119
+ try:
120
+ BEAM_VERSION = version.parse(importlib.metadata.version("apache_beam"))
121
+ BEAM_AVAILABLE = True
122
+ logger.info(f"Apache Beam version {BEAM_VERSION} available.")
123
+ except importlib.metadata.PackageNotFoundError:
124
+ pass
125
+ else:
126
+ logger.info("Disabling Apache Beam because USE_BEAM is set to False")
127
+
128
+
129
+ # Optional tools for data loading
130
+ SQLALCHEMY_AVAILABLE = importlib.util.find_spec("sqlalchemy") is not None
131
+
132
+ # Optional tools for feature decoding
133
+ PIL_AVAILABLE = importlib.util.find_spec("PIL") is not None
134
+ IS_OPUS_SUPPORTED = importlib.util.find_spec("soundfile") is not None and version.parse(
135
+ importlib.import_module("soundfile").__libsndfile_version__
136
+ ) >= version.parse("1.0.31")
137
+ IS_MP3_SUPPORTED = importlib.util.find_spec("soundfile") is not None and version.parse(
138
+ importlib.import_module("soundfile").__libsndfile_version__
139
+ ) >= version.parse("1.1.0")
140
+
141
+ # Optional compression tools
142
+ RARFILE_AVAILABLE = importlib.util.find_spec("rarfile") is not None
143
+ ZSTANDARD_AVAILABLE = importlib.util.find_spec("zstandard") is not None
144
+ LZ4_AVAILABLE = importlib.util.find_spec("lz4") is not None
145
+ PY7ZR_AVAILABLE = importlib.util.find_spec("py7zr") is not None
146
+
147
+ # Cache location
148
+ DEFAULT_XDG_CACHE_HOME = "~/.cache"
149
+ XDG_CACHE_HOME = os.getenv("XDG_CACHE_HOME", DEFAULT_XDG_CACHE_HOME)
150
+ DEFAULT_HF_CACHE_HOME = os.path.join(XDG_CACHE_HOME, "huggingface")
151
+ HF_CACHE_HOME = os.path.expanduser(os.getenv("HF_HOME", DEFAULT_HF_CACHE_HOME))
152
+
153
+ DEFAULT_HF_DATASETS_CACHE = os.path.join(HF_CACHE_HOME, "datasets")
154
+ HF_DATASETS_CACHE = Path(os.getenv("HF_DATASETS_CACHE", DEFAULT_HF_DATASETS_CACHE))
155
+
156
+ DEFAULT_HF_METRICS_CACHE = os.path.join(HF_CACHE_HOME, "metrics")
157
+ HF_METRICS_CACHE = Path(os.getenv("HF_METRICS_CACHE", DEFAULT_HF_METRICS_CACHE))
158
+
159
+ DEFAULT_HF_MODULES_CACHE = os.path.join(HF_CACHE_HOME, "modules")
160
+ HF_MODULES_CACHE = Path(os.getenv("HF_MODULES_CACHE", DEFAULT_HF_MODULES_CACHE))
161
+
162
+ DOWNLOADED_DATASETS_DIR = "downloads"
163
+ DEFAULT_DOWNLOADED_DATASETS_PATH = os.path.join(HF_DATASETS_CACHE, DOWNLOADED_DATASETS_DIR)
164
+ DOWNLOADED_DATASETS_PATH = Path(os.getenv("HF_DATASETS_DOWNLOADED_DATASETS_PATH", DEFAULT_DOWNLOADED_DATASETS_PATH))
165
+
166
+ EXTRACTED_DATASETS_DIR = "extracted"
167
+ DEFAULT_EXTRACTED_DATASETS_PATH = os.path.join(DEFAULT_DOWNLOADED_DATASETS_PATH, EXTRACTED_DATASETS_DIR)
168
+ EXTRACTED_DATASETS_PATH = Path(os.getenv("HF_DATASETS_EXTRACTED_DATASETS_PATH", DEFAULT_EXTRACTED_DATASETS_PATH))
169
+
170
+ # Download count for the website
171
+ HF_UPDATE_DOWNLOAD_COUNTS = (
172
+ os.environ.get("HF_UPDATE_DOWNLOAD_COUNTS", "AUTO").upper() in ENV_VARS_TRUE_AND_AUTO_VALUES
173
+ )
174
+
175
+ # Remote dataset scripts support
176
+ __HF_DATASETS_TRUST_REMOTE_CODE = os.environ.get("HF_DATASETS_TRUST_REMOTE_CODE", "1")
177
+ HF_DATASETS_TRUST_REMOTE_CODE: Optional[bool] = (
178
+ True
179
+ if __HF_DATASETS_TRUST_REMOTE_CODE.upper() in ENV_VARS_TRUE_VALUES
180
+ else False
181
+ if __HF_DATASETS_TRUST_REMOTE_CODE.upper() in ENV_VARS_FALSE_VALUES
182
+ else None
183
+ )
184
+ TIME_OUT_REMOTE_CODE = 15
185
+
186
+ # Datasets-server
187
+ USE_PARQUET_EXPORT = True
188
+
189
+ # Batch size constants. For more info, see:
190
+ # https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations
191
+ DEFAULT_MAX_BATCH_SIZE = 1000
192
+
193
+ # Size of the preloaded record batch in `Dataset.__iter__`
194
+ ARROW_READER_BATCH_SIZE_IN_DATASET_ITER = 10
195
+
196
+ # Max shard size in bytes (e.g. to shard parquet datasets in push_to_hub or download_and_prepare)
197
+ MAX_SHARD_SIZE = "500MB"
198
+
199
+ # Parquet configuration
200
+ PARQUET_ROW_GROUP_SIZE_FOR_AUDIO_DATASETS = 100
201
+ PARQUET_ROW_GROUP_SIZE_FOR_IMAGE_DATASETS = 100
202
+ PARQUET_ROW_GROUP_SIZE_FOR_BINARY_DATASETS = 100
203
+
204
+ # Offline mode
205
+ HF_DATASETS_OFFLINE = os.environ.get("HF_DATASETS_OFFLINE", "AUTO").upper() in ENV_VARS_TRUE_VALUES
206
+
207
+ # Here, `True` will disable progress bars globally without possibility of enabling it
208
+ # programmatically. `False` will enable them without possibility of disabling them.
209
+ # If environment variable is not set (None), then the user is free to enable/disable
210
+ # them programmatically.
211
+ # TL;DR: env variable has priority over code
212
+ __HF_DATASETS_DISABLE_PROGRESS_BARS = os.environ.get("HF_DATASETS_DISABLE_PROGRESS_BARS")
213
+ HF_DATASETS_DISABLE_PROGRESS_BARS: Optional[bool] = (
214
+ __HF_DATASETS_DISABLE_PROGRESS_BARS.upper() in ENV_VARS_TRUE_VALUES
215
+ if __HF_DATASETS_DISABLE_PROGRESS_BARS is not None
216
+ else None
217
+ )
218
+
219
+ # In-memory
220
+ DEFAULT_IN_MEMORY_MAX_SIZE = 0 # Disabled
221
+ IN_MEMORY_MAX_SIZE = float(os.environ.get("HF_DATASETS_IN_MEMORY_MAX_SIZE", DEFAULT_IN_MEMORY_MAX_SIZE))
222
+
223
+ # File names
224
+ DATASET_ARROW_FILENAME = "dataset.arrow"
225
+ DATASET_INDICES_FILENAME = "indices.arrow"
226
+ DATASET_STATE_JSON_FILENAME = "state.json"
227
+ DATASET_INFO_FILENAME = "dataset_info.json"
228
+ DATASETDICT_INFOS_FILENAME = "dataset_infos.json"
229
+ LICENSE_FILENAME = "LICENSE"
230
+ METRIC_INFO_FILENAME = "metric_info.json"
231
+ DATASETDICT_JSON_FILENAME = "dataset_dict.json"
232
+ METADATA_CONFIGS_FIELD = "configs"
233
+ REPOCARD_FILENAME = "README.md"
234
+ REPOYAML_FILENAME = ".huggingface.yaml"
235
+
236
+ MODULE_NAME_FOR_DYNAMIC_MODULES = "datasets_modules"
237
+
238
+ MAX_DATASET_CONFIG_ID_READABLE_LENGTH = 255
239
+
240
+ # Temporary cache directory prefix
241
+ TEMP_CACHE_DIR_PREFIX = "hf_datasets-"
242
+
243
+ # Streaming
244
+ STREAMING_READ_MAX_RETRIES = 20
245
+ STREAMING_READ_RETRY_INTERVAL = 5
246
+
247
+ # Datasets without script
248
+ DATA_FILES_MAX_NUMBER_FOR_MODULE_INFERENCE = 200
249
+ GLOBBED_DATA_FILES_MAX_NUMBER_FOR_MODULE_INFERENCE = 10
250
+ ARCHIVED_DATA_FILES_MAX_NUMBER_FOR_MODULE_INFERENCE = 200
251
+
252
+ # Progress bars
253
+ PBAR_REFRESH_TIME_INTERVAL = 0.05 # 20 progress updates per sec
254
+
255
+ # Maximum number of uploaded files per commit
256
+ UPLOADS_MAX_NUMBER_PER_COMMIT = 50
257
+
258
+ # Backward compatibility
259
+ MAX_TABLE_NBYTES_FOR_PICKLING = 4 << 30
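Most boolean settings in this module follow the same convention: the upper-cased environment value is compared against `ENV_VARS_TRUE_VALUES` / `ENV_VARS_FALSE_VALUES`, and `None` means "unset, decide elsewhere". The helper below is a hypothetical standalone sketch of that convention, not part of the module:

```python
import os
from typing import Optional

ENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"}
ENV_VARS_FALSE_VALUES = {"0", "OFF", "NO", "FALSE"}


def parse_tristate_env(name: str) -> Optional[bool]:
    """Return True/False for recognized values, None when unset or unrecognized."""
    raw = os.environ.get(name)
    if raw is None:
        return None
    if raw.upper() in ENV_VARS_TRUE_VALUES:
        return True
    if raw.upper() in ENV_VARS_FALSE_VALUES:
        return False
    return None


# e.g. SOME_FLAG=off -> False, SOME_FLAG=yes -> True, unset -> None
print(parse_tristate_env("SOME_FLAG"))
```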
env-llmeval/lib/python3.10/site-packages/datasets/data_files.py ADDED
@@ -0,0 +1,806 @@
1
+ import os
2
+ import re
3
+ from functools import partial
4
+ from glob import has_magic
5
+ from pathlib import Path, PurePath
6
+ from typing import Callable, Dict, List, Optional, Set, Tuple, Union
7
+
8
+ import huggingface_hub
9
+ from fsspec import get_fs_token_paths
10
+ from fsspec.implementations.http import HTTPFileSystem
11
+ from huggingface_hub import HfFileSystem
12
+ from packaging import version
13
+ from tqdm.contrib.concurrent import thread_map
14
+
15
+ from . import config
16
+ from .download import DownloadConfig
17
+ from .download.streaming_download_manager import _prepare_path_and_storage_options, xbasename, xjoin
18
+ from .naming import _split_re
19
+ from .splits import Split
20
+ from .utils import logging
21
+ from .utils import tqdm as hf_tqdm
22
+ from .utils.file_utils import is_local_path, is_relative_path
23
+ from .utils.py_utils import glob_pattern_to_regex, string_to_dict
24
+
25
+
26
+ SANITIZED_DEFAULT_SPLIT = str(Split.TRAIN)
27
+
28
+
29
+ logger = logging.get_logger(__name__)
30
+
31
+
32
+ class Url(str):
33
+ pass
34
+
35
+
36
+ class EmptyDatasetError(FileNotFoundError):
37
+ pass
38
+
39
+
40
+ SPLIT_PATTERN_SHARDED = "data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"
41
+
42
+ SPLIT_KEYWORDS = {
43
+ Split.TRAIN: ["train", "training"],
44
+ Split.VALIDATION: ["validation", "valid", "dev", "val"],
45
+ Split.TEST: ["test", "testing", "eval", "evaluation"],
46
+ }
47
+ NON_WORDS_CHARS = "-._ 0-9"
48
+ if config.FSSPEC_VERSION < version.parse("2023.9.0"):
49
+ KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = ["{keyword}[{sep}/]**", "**[{sep}/]{keyword}[{sep}/]**"]
50
+ elif config.FSSPEC_VERSION < version.parse("2023.12.0"):
51
+ KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = ["{keyword}[{sep}/]**", "**/*[{sep}/]{keyword}[{sep}/]**"]
52
+ else:
53
+ KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [
54
+ "**/{keyword}[{sep}]*",
55
+ "**/{keyword}/**",
56
+ "**/*[{sep}]{keyword}[{sep}]*",
57
+ "**/*[{sep}]{keyword}[{sep}]*/**",
58
+ "**/{keyword}[{sep}]*/**",
59
+ "**/*[{sep}]{keyword}/**",
60
+ ]
61
+
62
+ DEFAULT_SPLITS = [Split.TRAIN, Split.VALIDATION, Split.TEST]
63
+ DEFAULT_PATTERNS_SPLIT_IN_PATH_NAME = {
64
+ split: [
65
+ pattern.format(keyword=keyword, sep=NON_WORDS_CHARS)
66
+ for keyword in SPLIT_KEYWORDS[split]
67
+ for pattern in KEYWORDS_IN_PATH_NAME_BASE_PATTERNS
68
+ ]
69
+ for split in DEFAULT_SPLITS
70
+ }
71
+
72
+ DEFAULT_PATTERNS_ALL = {
73
+ Split.TRAIN: ["**"],
74
+ }
75
+
76
+ ALL_SPLIT_PATTERNS = [SPLIT_PATTERN_SHARDED]
77
+ ALL_DEFAULT_PATTERNS = [
78
+ DEFAULT_PATTERNS_SPLIT_IN_PATH_NAME,
79
+ DEFAULT_PATTERNS_ALL,
80
+ ]
81
+ if config.FSSPEC_VERSION < version.parse("2023.9.0"):
82
+ METADATA_PATTERNS = [
83
+ "metadata.csv",
84
+ "**/metadata.csv",
85
+ "metadata.jsonl",
86
+ "**/metadata.jsonl",
87
+ ] # metadata file for ImageFolder and AudioFolder
88
+ else:
89
+ METADATA_PATTERNS = [
90
+ "**/metadata.csv",
91
+ "**/metadata.jsonl",
92
+ ] # metadata file for ImageFolder and AudioFolder
93
+ WILDCARD_CHARACTERS = "*[]"
94
+ FILES_TO_IGNORE = [
95
+ "README.md",
96
+ "config.json",
97
+ "dataset_info.json",
98
+ "dataset_infos.json",
99
+ "dummy_data.zip",
100
+ "dataset_dict.json",
101
+ ]
102
+
103
+
104
+ def contains_wildcards(pattern: str) -> bool:
105
+ return any(wilcard_character in pattern for wilcard_character in WILDCARD_CHARACTERS)
106
+
107
+
108
+ def sanitize_patterns(patterns: Union[Dict, List, str]) -> Dict[str, Union[List[str], "DataFilesList"]]:
109
+ """
110
+ Take the data_files patterns from the user, and format them into a dictionary.
111
+ Each key is the name of the split, and each value is a list of data file patterns (paths or URLs).
112
+ The default split is "train".
113
+
114
+ Returns:
115
+ patterns: dictionary of split_name -> list of patterns
116
+ """
117
+ if isinstance(patterns, dict):
118
+ return {str(key): value if isinstance(value, list) else [value] for key, value in patterns.items()}
119
+ elif isinstance(patterns, str):
120
+ return {SANITIZED_DEFAULT_SPLIT: [patterns]}
121
+ elif isinstance(patterns, list):
122
+ if any(isinstance(pattern, dict) for pattern in patterns):
123
+ for pattern in patterns:
124
+ if not (
125
+ isinstance(pattern, dict)
126
+ and len(pattern) == 2
127
+ and "split" in pattern
128
+ and isinstance(pattern.get("path"), (str, list))
129
+ ):
130
+ raise ValueError(
131
+ f"Expected each split to have a 'path' key which can be a string or a list of strings, but got {pattern}"
132
+ )
133
+ splits = [pattern["split"] for pattern in patterns]
134
+ if len(set(splits)) != len(splits):
135
+ raise ValueError(f"Some splits are duplicated in data_files: {splits}")
136
+ return {
137
+ str(pattern["split"]): pattern["path"] if isinstance(pattern["path"], list) else [pattern["path"]]
138
+ for pattern in patterns
139
+ }
140
+ else:
141
+ return {SANITIZED_DEFAULT_SPLIT: patterns}
142
+ else:
143
+ return sanitize_patterns(list(patterns))
144
+
145
+
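The three accepted input shapes are easiest to see on toy patterns; per the logic above, the outputs are:

```python
from datasets.data_files import sanitize_patterns

# A single pattern string becomes the default "train" split.
print(sanitize_patterns("data/*.csv"))
# {'train': ['data/*.csv']}

# A dict keeps its split names and wraps bare strings in lists.
print(sanitize_patterns({"train": "train.csv", "test": ["test_0.csv", "test_1.csv"]}))
# {'train': ['train.csv'], 'test': ['test_0.csv', 'test_1.csv']}

# A list of {"split": ..., "path": ...} dicts is also accepted.
print(sanitize_patterns([{"split": "train", "path": "tr.csv"}, {"split": "test", "path": ["te.csv"]}]))
# {'train': ['tr.csv'], 'test': ['te.csv']}
```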
146
+ def _is_inside_unrequested_special_dir(matched_rel_path: str, pattern: str) -> bool:
147
+ """
148
+ When a path matches a pattern, we additionnally check if it's inside a special directory
149
+ we ignore by default (if it starts with a double underscore).
150
+
151
+ Users can still explicitly request a filepath inside such a directory if "__pycache__" is
152
+ mentioned explicitly in the requested pattern.
153
+
154
+ Some examples:
155
+
156
+ base directory:
157
+
158
+ ./
159
+ └── __pycache__
160
+ └── b.txt
161
+
162
+ >>> _is_inside_unrequested_special_dir("__pycache__/b.txt", "**")
163
+ True
164
+ >>> _is_inside_unrequested_special_dir("__pycache__/b.txt", "*/b.txt")
165
+ True
166
+ >>> _is_inside_unrequested_special_dir("__pycache__/b.txt", "__pycache__/*")
167
+ False
168
+ >>> _is_inside_unrequested_special_dir("__pycache__/b.txt", "__*/*")
169
+ False
170
+ """
171
+ # We just need to check if every special directories from the path is present explicly in the pattern.
172
+ # Since we assume that the path matches the pattern, it's equivalent to counting that both
173
+ # the parent path and the parent pattern have the same number of special directories.
174
+ data_dirs_to_ignore_in_path = [part for part in PurePath(matched_rel_path).parent.parts if part.startswith("__")]
175
+ data_dirs_to_ignore_in_pattern = [part for part in PurePath(pattern).parent.parts if part.startswith("__")]
176
+ return len(data_dirs_to_ignore_in_path) != len(data_dirs_to_ignore_in_pattern)
177
+
178
+
179
+ def _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(matched_rel_path: str, pattern: str) -> bool:
180
+ """
181
+ When a path matches a pattern, we additionnally check if it's a hidden file or if it's inside
182
+ a hidden directory we ignore by default, i.e. if the file name or a parent directory name starts with a dot.
183
+
184
+ Users can still explicitly request a filepath that is hidden or is inside a hidden directory
185
+ if the hidden part is mentioned explicitly in the requested pattern.
186
+
187
+ Some examples:
188
+
189
+ base directory:
190
+
191
+ ./
192
+ └── .hidden_file.txt
193
+
194
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_file.txt", "**")
195
+ True
196
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_file.txt", ".*")
197
+ False
198
+
199
+ base directory:
200
+
201
+ ./
202
+ └── .hidden_dir
203
+ └── a.txt
204
+
205
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_dir/a.txt", "**")
206
+ True
207
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_dir/a.txt", ".*/*")
208
+ False
209
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_dir/a.txt", ".hidden_dir/*")
210
+ False
211
+
212
+ base directory:
213
+
214
+ ./
215
+ └── .hidden_dir
216
+ └── .hidden_file.txt
217
+
218
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_dir/.hidden_file.txt", "**")
219
+ True
220
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_dir/.hidden_file.txt", ".*/*")
221
+ True
222
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_dir/.hidden_file.txt", ".*/.*")
223
+ False
224
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_dir/.hidden_file.txt", ".hidden_dir/*")
225
+ True
226
+ >>> _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(".hidden_dir/.hidden_file.txt", ".hidden_dir/.*")
227
+ False
228
+ """
229
+ # We just need to check if every hidden part from the path is present explicly in the pattern.
230
+ # Since we assume that the path matches the pattern, it's equivalent to counting that both
231
+ # the path and the pattern have the same number of hidden parts.
232
+ hidden_directories_in_path = [
233
+ part for part in PurePath(matched_rel_path).parts if part.startswith(".") and not set(part) == {"."}
234
+ ]
235
+ hidden_directories_in_pattern = [
236
+ part for part in PurePath(pattern).parts if part.startswith(".") and not set(part) == {"."}
237
+ ]
238
+ return len(hidden_directories_in_path) != len(hidden_directories_in_pattern)
239
+
240
+
241
+ def _get_data_files_patterns(pattern_resolver: Callable[[str], List[str]]) -> Dict[str, List[str]]:
242
+ """
243
+ Get the default pattern from a directory or repository by testing all the supported patterns.
244
+ The first patterns to return a non-empty list of data files is returned.
245
+
246
+ In order, it first tests if SPLIT_PATTERN_SHARDED works, otherwise it tests the patterns in ALL_DEFAULT_PATTERNS.
247
+ """
248
+ # first check the split patterns like data/{split}-00000-of-00001.parquet
249
+ for split_pattern in ALL_SPLIT_PATTERNS:
250
+ pattern = split_pattern.replace("{split}", "*")
251
+ try:
252
+ data_files = pattern_resolver(pattern)
253
+ except FileNotFoundError:
254
+ continue
255
+ if len(data_files) > 0:
256
+ splits: Set[str] = {
257
+ string_to_dict(xbasename(p), glob_pattern_to_regex(xbasename(split_pattern)))["split"]
258
+ for p in data_files
259
+ }
260
+ if any(not re.match(_split_re, split) for split in splits):
261
+ raise ValueError(f"Split name should match '{_split_re}'' but got '{splits}'.")
262
+ sorted_splits = [str(split) for split in DEFAULT_SPLITS if split in splits] + sorted(
263
+ splits - set(DEFAULT_SPLITS)
264
+ )
265
+ return {split: [split_pattern.format(split=split)] for split in sorted_splits}
266
+ # then check the default patterns based on train/valid/test splits
267
+ for patterns_dict in ALL_DEFAULT_PATTERNS:
268
+ non_empty_splits = []
269
+ for split, patterns in patterns_dict.items():
270
+ for pattern in patterns:
271
+ try:
272
+ data_files = pattern_resolver(pattern)
273
+ except FileNotFoundError:
274
+ continue
275
+ if len(data_files) > 0:
276
+ non_empty_splits.append(split)
277
+ break
278
+ if non_empty_splits:
279
+ return {split: patterns_dict[split] for split in non_empty_splits}
280
+ raise FileNotFoundError(f"Couldn't resolve pattern {pattern} with resolver {pattern_resolver}")
281
+
282
+
283
+ def _get_metadata_files_patterns(pattern_resolver: Callable[[str], List[str]]) -> List[str]:
284
+ """
285
+ Get the supported metadata patterns from a directory or repository.
286
+ """
287
+ non_empty_patterns = []
288
+ for pattern in METADATA_PATTERNS:
289
+ try:
290
+ metadata_files = pattern_resolver(pattern)
291
+ if len(metadata_files) > 0:
292
+ non_empty_patterns.append(pattern)
293
+ except FileNotFoundError:
294
+ pass
295
+ if non_empty_patterns:
296
+ return non_empty_patterns
297
+ raise FileNotFoundError(f"Couldn't resolve pattern {pattern} with resolver {pattern_resolver}")
298
+
299
+
300
+ def resolve_pattern(
301
+ pattern: str,
302
+ base_path: str,
303
+ allowed_extensions: Optional[List[str]] = None,
304
+ download_config: Optional[DownloadConfig] = None,
305
+ ) -> List[str]:
306
+ """
307
+ Resolve the paths and URLs of the data files from the pattern passed by the user.
308
+
309
+ You can use patterns to resolve multiple local files. Here are a few examples:
310
+ - *.csv to match all the CSV files at the first level
311
+ - **.csv to match all the CSV files at any level
312
+ - data/* to match all the files inside "data"
313
+ - data/** to match all the files inside "data" and its subdirectories
314
+
315
+ The patterns are resolved using the fsspec glob. In fsspec>=2023.12.0 this is equivalent to
316
+ Python's glob.glob, Path.glob, Path.match and fnmatch where ** is unsupported with a prefix/suffix
317
+ other than a forward slash /.
318
+
319
+ More generally:
320
+ - '*' matches any character except a forward-slash (to match just the file or directory name)
321
+ - '**' matches any character including a forward-slash /
322
+
323
+ Hidden files and directories (i.e. whose names start with a dot) are ignored, unless they are explicitly requested.
324
+ The same applies to special directories that start with a double underscore like "__pycache__".
325
+ You can still include one if the pattern explicilty mentions it:
326
+ - to include a hidden file: "*/.hidden.txt" or "*/.*"
327
+ - to include a hidden directory: ".hidden/*" or ".*/*"
328
+ - to include a special directory: "__special__/*" or "__*/*"
329
+
330
+ Example::
331
+
332
+ >>> from datasets.data_files import resolve_pattern
333
+ >>> base_path = "."
334
+ >>> resolve_pattern("docs/**/*.py", base_path)
335
+ [/Users/mariosasko/Desktop/projects/datasets/docs/source/_config.py']
336
+
337
+ Args:
338
+ pattern (str): Unix pattern or paths or URLs of the data files to resolve.
339
+ The paths can be absolute or relative to base_path.
340
+ Remote filesystems using fsspec are supported, e.g. with the hf:// protocol.
341
+ base_path (str): Base path to use when resolving relative paths.
342
+ allowed_extensions (Optional[list], optional): White-list of file extensions to use. Defaults to None (all extensions).
343
+ For example: allowed_extensions=[".csv", ".json", ".txt", ".parquet"]
344
+ Returns:
345
+ List[str]: List of paths or URLs to the local or remote files that match the patterns.
346
+ """
347
+ if is_relative_path(pattern):
348
+ pattern = xjoin(base_path, pattern)
349
+ elif is_local_path(pattern):
350
+ base_path = os.path.splitdrive(pattern)[0] + os.sep
351
+ else:
352
+ base_path = ""
353
+ pattern, storage_options = _prepare_path_and_storage_options(pattern, download_config=download_config)
354
+ fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)
355
+ fs_base_path = base_path.split("::")[0].split("://")[-1] or fs.root_marker
356
+ fs_pattern = pattern.split("::")[0].split("://")[-1]
357
+ files_to_ignore = set(FILES_TO_IGNORE) - {xbasename(pattern)}
358
+ protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
359
+ protocol_prefix = protocol + "://" if protocol != "file" else ""
360
+ glob_kwargs = {}
361
+ if protocol == "hf" and config.HF_HUB_VERSION >= version.parse("0.20.0"):
362
+ # 10 times faster glob with detail=True (ignores costly info like lastCommit)
363
+ glob_kwargs["expand_info"] = False
364
+ matched_paths = [
365
+ filepath if filepath.startswith(protocol_prefix) else protocol_prefix + filepath
366
+ for filepath, info in fs.glob(pattern, detail=True, **glob_kwargs).items()
367
+ if info["type"] == "file"
368
+ and (xbasename(filepath) not in files_to_ignore)
369
+ and not _is_inside_unrequested_special_dir(
370
+ os.path.relpath(filepath, fs_base_path), os.path.relpath(fs_pattern, fs_base_path)
371
+ )
372
+ and not _is_unrequested_hidden_file_or_is_inside_unrequested_hidden_dir(
373
+ os.path.relpath(filepath, fs_base_path), os.path.relpath(fs_pattern, fs_base_path)
374
+ )
375
+ ] # ignore .ipynb and __pycache__, but keep /../
376
+ if allowed_extensions is not None:
377
+ out = [
378
+ filepath
379
+ for filepath in matched_paths
380
+ if any("." + suffix in allowed_extensions for suffix in xbasename(filepath).split(".")[1:])
381
+ ]
382
+ if len(out) < len(matched_paths):
383
+ invalid_matched_files = list(set(matched_paths) - set(out))
384
+ logger.info(
385
+ f"Some files matched the pattern '{pattern}' but don't have valid data file extensions: {invalid_matched_files}"
386
+ )
387
+ else:
388
+ out = matched_paths
389
+ if not out:
390
+ error_msg = f"Unable to find '{pattern}'"
391
+ if allowed_extensions is not None:
392
+ error_msg += f" with any supported extension {list(allowed_extensions)}"
393
+ raise FileNotFoundError(error_msg)
394
+ return out
395
+
396
+
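A runnable sketch of `resolve_pattern` on a throwaway local directory (file names invented for the example); it returns absolute local paths, and files listed in `FILES_TO_IGNORE` such as `README.md` are skipped in any case:

```python
import os
import tempfile

from datasets.data_files import resolve_pattern

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "data"), exist_ok=True)
for name in ("data/train.csv", "data/test.csv", "README.md"):
    open(os.path.join(base, name), "w").close()

# Resolves to the two CSV files under data/, as absolute paths.
print(resolve_pattern("data/*.csv", base_path=base))
```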
397
+ def get_data_patterns(base_path: str, download_config: Optional[DownloadConfig] = None) -> Dict[str, List[str]]:
398
+ """
399
+ Get the default pattern from a directory testing all the supported patterns.
400
+ The first patterns to return a non-empty list of data files is returned.
401
+
402
+ Some examples of supported patterns:
403
+
404
+ Input:
405
+
406
+ my_dataset_repository/
407
+ β”œβ”€β”€ README.md
408
+ └── dataset.csv
409
+
410
+ Output:
411
+
412
+ {"train": ["**"]}
413
+
414
+ Input:
415
+
416
+ my_dataset_repository/
417
+ β”œβ”€β”€ README.md
418
+ β”œβ”€β”€ train.csv
419
+ └── test.csv
420
+
421
+ my_dataset_repository/
422
+ β”œβ”€β”€ README.md
423
+ └── data/
424
+ β”œβ”€β”€ train.csv
425
+ └── test.csv
426
+
427
+ my_dataset_repository/
428
+ β”œβ”€β”€ README.md
429
+ β”œβ”€β”€ train_0.csv
430
+ β”œβ”€β”€ train_1.csv
431
+ β”œβ”€β”€ train_2.csv
432
+ β”œβ”€β”€ train_3.csv
433
+ β”œβ”€β”€ test_0.csv
434
+ └── test_1.csv
435
+
436
+ Output:
437
+
438
+ {'train': ['train[-._ 0-9/]**', '**/*[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**/*[-._ 0-9/]training[-._ 0-9/]**'],
439
+ 'test': ['test[-._ 0-9/]**', '**/*[-._ 0-9/]test[-._ 0-9/]**', 'testing[-._ 0-9/]**', '**/*[-._ 0-9/]testing[-._ 0-9/]**', ...]}
440
+
441
+ Input:
442
+
443
+ my_dataset_repository/
444
+ β”œβ”€β”€ README.md
445
+ └── data/
446
+ β”œβ”€β”€ train/
447
+ β”‚ β”œβ”€β”€ shard_0.csv
448
+ β”‚ β”œβ”€β”€ shard_1.csv
449
+ β”‚ β”œβ”€β”€ shard_2.csv
450
+ β”‚ └── shard_3.csv
451
+ └── test/
452
+ β”œβ”€β”€ shard_0.csv
453
+ └── shard_1.csv
454
+
455
+ Output:
456
+
457
+ {'train': ['train[-._ 0-9/]**', '**/*[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**/*[-._ 0-9/]training[-._ 0-9/]**'],
458
+ 'test': ['test[-._ 0-9/]**', '**/*[-._ 0-9/]test[-._ 0-9/]**', 'testing[-._ 0-9/]**', '**/*[-._ 0-9/]testing[-._ 0-9/]**', ...]}
459
+
460
+ Input:
461
+
462
+ my_dataset_repository/
463
+ β”œβ”€β”€ README.md
464
+ └── data/
465
+ β”œβ”€β”€ train-00000-of-00003.csv
466
+ β”œβ”€β”€ train-00001-of-00003.csv
467
+ β”œβ”€β”€ train-00002-of-00003.csv
468
+ β”œβ”€β”€ test-00000-of-00001.csv
469
+ β”œβ”€β”€ random-00000-of-00003.csv
470
+ β”œβ”€β”€ random-00001-of-00003.csv
471
+ └── random-00002-of-00003.csv
472
+
473
+ Output:
474
+
475
+ {'train': ['data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*'],
476
+ 'test': ['data/test-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*'],
477
+ 'random': ['data/random-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*']}
478
+
479
+ In order, it first tests if SPLIT_PATTERN_SHARDED works, otherwise it tests the patterns in ALL_DEFAULT_PATTERNS.
480
+ """
481
+ resolver = partial(resolve_pattern, base_path=base_path, download_config=download_config)
482
+ try:
483
+ return _get_data_files_patterns(resolver)
484
+ except FileNotFoundError:
485
+ raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
486
+
487
+
488
+ def get_metadata_patterns(
489
+ base_path: str,
490
+ download_config: Optional[DownloadConfig] = None,
491
+ ) -> List[str]:
492
+ """
493
+ Get the supported metadata patterns from a local directory.
494
+ """
495
+ resolver = partial(resolve_pattern, base_path=base_path, download_config=download_config)
496
+ try:
497
+ return _get_metadata_files_patterns(resolver)
498
+ except FileNotFoundError:
499
+ raise FileNotFoundError(f"The directory at {base_path} doesn't contain any metadata file") from None
500
+
501
+
502
+ def _get_single_origin_metadata(
503
+ data_file: str,
504
+ download_config: Optional[DownloadConfig] = None,
505
+ ) -> Tuple[str]:
506
+ data_file, storage_options = _prepare_path_and_storage_options(data_file, download_config=download_config)
507
+ fs, _, _ = get_fs_token_paths(data_file, storage_options=storage_options)
508
+ if isinstance(fs, HfFileSystem):
509
+ resolved_path = fs.resolve_path(data_file)
510
+ return (resolved_path.repo_id, resolved_path.revision)
511
+ elif isinstance(fs, HTTPFileSystem) and data_file.startswith(config.HF_ENDPOINT):
512
+ hffs = HfFileSystem(endpoint=config.HF_ENDPOINT, token=download_config.token)
513
+ data_file = "hf://" + data_file[len(config.HF_ENDPOINT) + 1 :].replace("/resolve/", "@", 1)
514
+ resolved_path = hffs.resolve_path(data_file)
515
+ return (resolved_path.repo_id, resolved_path.revision)
516
+ info = fs.info(data_file)
517
+ # s3fs uses "ETag", gcsfs uses "etag", and for local we simply check mtime
518
+ for key in ["ETag", "etag", "mtime"]:
519
+ if key in info:
520
+ return (str(info[key]),)
521
+ return ()
522
+
523
+
524
+ def _get_origin_metadata(
525
+ data_files: List[str],
526
+ max_workers=64,
527
+ download_config: Optional[DownloadConfig] = None,
528
+ ) -> Tuple[str]:
529
+ return thread_map(
530
+ partial(_get_single_origin_metadata, download_config=download_config),
531
+ data_files,
532
+ max_workers=max_workers,
533
+ tqdm_class=hf_tqdm,
534
+ desc="Resolving data files",
535
+ # set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached
536
+ disable=len(data_files) <= 16 or None,
537
+ )
538
+
539
+
540
+ class DataFilesList(List[str]):
541
+ """
542
+ List of data files (absolute local paths or URLs).
543
+ It has two construction methods given the user's data files patterns:
544
+ - ``from_hf_repo``: resolve patterns inside a dataset repository
545
+ - ``from_local_or_remote``: resolve patterns from a local path
546
+
547
+ Moreover DataFilesList has an additional attribute ``origin_metadata``.
548
+ It can store:
549
+ - the last modified time of local files
550
+ - ETag of remote files
551
+ - commit sha of a dataset repository
552
+
553
+ Thanks to this additional attribute, it is possible to hash the list
554
+ and get a different hash if and only if at least one file changed.
555
+ This is useful for caching Dataset objects that are obtained from a list of data files.
556
+ """
557
+
558
+ def __init__(self, data_files: List[str], origin_metadata: List[Tuple[str]]):
559
+ super().__init__(data_files)
560
+ self.origin_metadata = origin_metadata
561
+
562
+ def __add__(self, other):
563
+ return DataFilesList([*self, *other], self.origin_metadata + other.origin_metadata)
564
+
565
+ @classmethod
566
+ def from_hf_repo(
567
+ cls,
568
+ patterns: List[str],
569
+ dataset_info: huggingface_hub.hf_api.DatasetInfo,
570
+ base_path: Optional[str] = None,
571
+ allowed_extensions: Optional[List[str]] = None,
572
+ download_config: Optional[DownloadConfig] = None,
573
+ ) -> "DataFilesList":
574
+ base_path = f"hf://datasets/{dataset_info.id}@{dataset_info.sha}/{base_path or ''}".rstrip("/")
575
+ return cls.from_patterns(
576
+ patterns, base_path=base_path, allowed_extensions=allowed_extensions, download_config=download_config
577
+ )
578
+
579
+ @classmethod
580
+ def from_local_or_remote(
581
+ cls,
582
+ patterns: List[str],
583
+ base_path: Optional[str] = None,
584
+ allowed_extensions: Optional[List[str]] = None,
585
+ download_config: Optional[DownloadConfig] = None,
586
+ ) -> "DataFilesList":
587
+ base_path = base_path if base_path is not None else Path().resolve().as_posix()
588
+ return cls.from_patterns(
589
+ patterns, base_path=base_path, allowed_extensions=allowed_extensions, download_config=download_config
590
+ )
591
+
592
+ @classmethod
593
+ def from_patterns(
594
+ cls,
595
+ patterns: List[str],
596
+ base_path: Optional[str] = None,
597
+ allowed_extensions: Optional[List[str]] = None,
598
+ download_config: Optional[DownloadConfig] = None,
599
+ ) -> "DataFilesList":
600
+ base_path = base_path if base_path is not None else Path().resolve().as_posix()
601
+ data_files = []
602
+ for pattern in patterns:
603
+ try:
604
+ data_files.extend(
605
+ resolve_pattern(
606
+ pattern,
607
+ base_path=base_path,
608
+ allowed_extensions=allowed_extensions,
609
+ download_config=download_config,
610
+ )
611
+ )
612
+ except FileNotFoundError:
613
+ if not has_magic(pattern):
614
+ raise
615
+ origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
616
+ return cls(data_files, origin_metadata)
617
+
618
+ def filter_extensions(self, extensions: List[str]) -> "DataFilesList":
619
+ pattern = "|".join("\\" + ext for ext in extensions)
620
+ pattern = re.compile(f".*({pattern})(\\..+)?$")
621
+ return DataFilesList(
622
+ [data_file for data_file in self if pattern.match(data_file)],
623
+ origin_metadata=self.origin_metadata,
624
+ )
625
+
626
+
627
+ class DataFilesDict(Dict[str, DataFilesList]):
628
+ """
629
+ Dict of split_name -> list of data files (absolute local paths or URLs).
630
+ It has two construction methods given the user's data files patterns:
631
+ - ``from_hf_repo``: resolve patterns inside a dataset repository
632
+ - ``from_local_or_remote``: resolve patterns from a local path
633
+
634
+ Moreover each list is a DataFilesList. It is possible to hash the dictionary
635
+ and get a different hash if and only if at least one file changed.
636
+ For more info, see ``DataFilesList``.
637
+
638
+ This is useful for caching Dataset objects that are obtained from a list of data files.
639
+
640
+ Changing the order of the keys of this dictionary also doesn't change its hash.
641
+ """
642
+
643
+ @classmethod
644
+ def from_local_or_remote(
645
+ cls,
646
+ patterns: Dict[str, Union[List[str], DataFilesList]],
647
+ base_path: Optional[str] = None,
648
+ allowed_extensions: Optional[List[str]] = None,
649
+ download_config: Optional[DownloadConfig] = None,
650
+ ) -> "DataFilesDict":
651
+ out = cls()
652
+ for key, patterns_for_key in patterns.items():
653
+ out[key] = (
654
+ DataFilesList.from_local_or_remote(
655
+ patterns_for_key,
656
+ base_path=base_path,
657
+ allowed_extensions=allowed_extensions,
658
+ download_config=download_config,
659
+ )
660
+ if not isinstance(patterns_for_key, DataFilesList)
661
+ else patterns_for_key
662
+ )
663
+ return out
664
+
665
+ @classmethod
666
+ def from_hf_repo(
667
+ cls,
668
+ patterns: Dict[str, Union[List[str], DataFilesList]],
669
+ dataset_info: huggingface_hub.hf_api.DatasetInfo,
670
+ base_path: Optional[str] = None,
671
+ allowed_extensions: Optional[List[str]] = None,
672
+ download_config: Optional[DownloadConfig] = None,
673
+ ) -> "DataFilesDict":
674
+ out = cls()
675
+ for key, patterns_for_key in patterns.items():
676
+ out[key] = (
677
+ DataFilesList.from_hf_repo(
678
+ patterns_for_key,
679
+ dataset_info=dataset_info,
680
+ base_path=base_path,
681
+ allowed_extensions=allowed_extensions,
682
+ download_config=download_config,
683
+ )
684
+ if not isinstance(patterns_for_key, DataFilesList)
685
+ else patterns_for_key
686
+ )
687
+ return out
688
+
689
+ @classmethod
690
+ def from_patterns(
691
+ cls,
692
+ patterns: Dict[str, Union[List[str], DataFilesList]],
693
+ base_path: Optional[str] = None,
694
+ allowed_extensions: Optional[List[str]] = None,
695
+ download_config: Optional[DownloadConfig] = None,
696
+ ) -> "DataFilesDict":
697
+ out = cls()
698
+ for key, patterns_for_key in patterns.items():
699
+ out[key] = (
700
+ DataFilesList.from_patterns(
701
+ patterns_for_key,
702
+ base_path=base_path,
703
+ allowed_extensions=allowed_extensions,
704
+ download_config=download_config,
705
+ )
706
+ if not isinstance(patterns_for_key, DataFilesList)
707
+ else patterns_for_key
708
+ )
709
+ return out
710
+
711
+ def filter_extensions(self, extensions: List[str]) -> "DataFilesDict":
712
+ out = type(self)()
713
+ for key, data_files_list in self.items():
714
+ out[key] = data_files_list.filter_extensions(extensions)
715
+ return out
716
+
717
+
718
+ class DataFilesPatternsList(List[str]):
719
+ """
720
+ List of data files patterns (absolute local paths or URLs).
721
+ For each pattern there should also be a list of allowed extensions
722
+ to keep, or None to keep all the files for the pattern.
723
+ """
724
+
725
+ def __init__(
726
+ self,
727
+ patterns: List[str],
728
+ allowed_extensions: List[Optional[List[str]]],
729
+ ):
730
+ super().__init__(patterns)
731
+ self.allowed_extensions = allowed_extensions
732
+
733
+ def __add__(self, other):
734
+ return DataFilesPatternsList([*self, *other], self.allowed_extensions + other.allowed_extensions)
735
+
736
+ @classmethod
737
+ def from_patterns(
738
+ cls, patterns: List[str], allowed_extensions: Optional[List[str]] = None
739
+ ) -> "DataFilesPatternsList":
740
+ return cls(patterns, [allowed_extensions] * len(patterns))
741
+
742
+ def resolve(
743
+ self,
744
+ base_path: str,
745
+ download_config: Optional[DownloadConfig] = None,
746
+ ) -> "DataFilesList":
747
+ base_path = base_path if base_path is not None else Path().resolve().as_posix()
748
+ data_files = []
749
+ for pattern, allowed_extensions in zip(self, self.allowed_extensions):
750
+ try:
751
+ data_files.extend(
752
+ resolve_pattern(
753
+ pattern,
754
+ base_path=base_path,
755
+ allowed_extensions=allowed_extensions,
756
+ download_config=download_config,
757
+ )
758
+ )
759
+ except FileNotFoundError:
760
+ if not has_magic(pattern):
761
+ raise
762
+ origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
763
+ return DataFilesList(data_files, origin_metadata)
764
+
765
+ def filter_extensions(self, extensions: List[str]) -> "DataFilesPatternsList":
766
+ return DataFilesPatternsList(
767
+ self, [allowed_extensions + extensions for allowed_extensions in self.allowed_extensions]
768
+ )
769
+
770
+
771
+ class DataFilesPatternsDict(Dict[str, DataFilesPatternsList]):
772
+ """
773
+ Dict of split_name -> list of data files patterns (absolute local paths or URLs).
774
+ """
775
+
776
+ @classmethod
777
+ def from_patterns(
778
+ cls, patterns: Dict[str, List[str]], allowed_extensions: Optional[List[str]] = None
779
+ ) -> "DataFilesPatternsDict":
780
+ out = cls()
781
+ for key, patterns_for_key in patterns.items():
782
+ out[key] = (
783
+ DataFilesPatternsList.from_patterns(
784
+ patterns_for_key,
785
+ allowed_extensions=allowed_extensions,
786
+ )
787
+ if not isinstance(patterns_for_key, DataFilesPatternsList)
788
+ else patterns_for_key
789
+ )
790
+ return out
791
+
792
+ def resolve(
793
+ self,
794
+ base_path: str,
795
+ download_config: Optional[DownloadConfig] = None,
796
+ ) -> "DataFilesDict":
797
+ out = DataFilesDict()
798
+ for key, data_files_patterns_list in self.items():
799
+ out[key] = data_files_patterns_list.resolve(base_path, download_config)
800
+ return out
801
+
802
+ def filter_extensions(self, extensions: List[str]) -> "DataFilesPatternsDict":
803
+ out = type(self)()
804
+ for key, data_files_patterns_list in self.items():
805
+ out[key] = data_files_patterns_list.filter_extensions(extensions)
806
+ return out
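For reference, a minimal usage sketch of the classes above, assuming a hypothetical local folder `my_dataset_repository/` laid out like the docstring examples (the folder name and split keys are illustrative, not part of the library):

```python
# Sketch: resolve per-split data files from glob patterns with DataFilesDict.
# "my_dataset_repository" is a placeholder directory shaped like the examples above.
from datasets.data_files import DataFilesDict

patterns = {
    "train": ["data/train-*.csv"],
    "test": ["data/test-*.csv"],
}
data_files = DataFilesDict.from_patterns(patterns, base_path="my_dataset_repository")
for split, files in data_files.items():
    # Each value is a DataFilesList carrying origin_metadata (mtime, ETag or commit sha),
    # which is what makes the resolved files hashable for caching.
    print(split, len(files), files.origin_metadata[:1])
```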
env-llmeval/lib/python3.10/site-packages/datasets/dataset_dict.py ADDED
The diff for this file is too large to render. See raw diff
 
env-llmeval/lib/python3.10/site-packages/datasets/distributed.py ADDED
@@ -0,0 +1,39 @@
1
+ from typing import TypeVar
2
+
3
+ from .arrow_dataset import Dataset, _split_by_node_map_style_dataset
4
+ from .iterable_dataset import IterableDataset, _split_by_node_iterable_dataset
5
+
6
+
7
+ DatasetType = TypeVar("DatasetType", Dataset, IterableDataset)
8
+
9
+
10
+ def split_dataset_by_node(dataset: DatasetType, rank: int, world_size: int) -> DatasetType:
11
+ """
12
+ Split a dataset for the node at rank `rank` in a pool of nodes of size `world_size`.
13
+
14
+ For map-style datasets:
15
+
16
+ Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset.
17
+ To maximize data loading throughput, chunks are made of contiguous data on disk if possible.
18
+
19
+ For iterable datasets:
20
+
21
+ If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`),
22
+ then the shards are evenly assigned across the nodes, which is the most optimized.
23
+ Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples.
24
+
25
+ Args:
26
+ dataset ([`Dataset`] or [`IterableDataset`]):
27
+ The dataset to split by node.
28
+ rank (`int`):
29
+ Rank of the current node.
30
+ world_size (`int`):
31
+ Total number of nodes.
32
+
33
+ Returns:
34
+ [`Dataset`] or [`IterableDataset`]: The dataset to be used on the node at rank `rank`.
35
+ """
36
+ if isinstance(dataset, Dataset):
37
+ return _split_by_node_map_style_dataset(dataset, rank=rank, world_size=world_size)
38
+ else:
39
+ return _split_by_node_iterable_dataset(dataset, rank=rank, world_size=world_size)
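A short, hedged usage sketch for `split_dataset_by_node`; the repository id is a placeholder and in a real job `rank`/`world_size` would come from the distributed environment (e.g. `torch.distributed`):

```python
# Sketch: give each node its own portion of a streaming dataset.
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("my_org/my_dataset", split="train", streaming=True)  # placeholder repo id
node_ds = split_dataset_by_node(ds, rank=0, world_size=8)
# If ds.n_shards % world_size == 0, whole shards are assigned per node;
# otherwise each node keeps 1 example out of every world_size examples.
for example in node_ds.take(3):
    print(example)
```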
env-llmeval/lib/python3.10/site-packages/datasets/exceptions.py ADDED
@@ -0,0 +1,85 @@
1
+ # SPDX-License-Identifier: Apache-2.0
2
+ # Copyright 2023 The HuggingFace Authors.
3
+ from typing import Any, Dict, List, Optional, Union
4
+
5
+ from huggingface_hub import HfFileSystem
6
+
7
+ from . import config
8
+ from .table import CastError
9
+ from .utils.track import TrackedIterable, tracked_list, tracked_str
10
+
11
+
12
+ class DatasetsError(Exception):
13
+ """Base class for exceptions in this library."""
14
+
15
+
16
+ class DefunctDatasetError(DatasetsError):
17
+ """The dataset has been defunct."""
18
+
19
+
20
+ class FileNotFoundDatasetsError(DatasetsError, FileNotFoundError):
21
+ """FileNotFoundError raised by this library."""
22
+
23
+
24
+ class DataFilesNotFoundError(FileNotFoundDatasetsError):
25
+ """No (supported) data files found."""
26
+
27
+
28
+ class DatasetNotFoundError(FileNotFoundDatasetsError):
29
+ """Dataset not found.
30
+
31
+ Raised when trying to access:
32
+ - a missing dataset, or
33
+ - a private/gated dataset and the user is not authenticated.
34
+ """
35
+
36
+
37
+ class DatasetBuildError(DatasetsError):
38
+ pass
39
+
40
+
41
+ class ManualDownloadError(DatasetBuildError):
42
+ pass
43
+
44
+
45
+ class FileFormatError(DatasetBuildError):
46
+ pass
47
+
48
+
49
+ class DatasetGenerationError(DatasetBuildError):
50
+ pass
51
+
52
+
53
+ class DatasetGenerationCastError(DatasetGenerationError):
54
+ @classmethod
55
+ def from_cast_error(
56
+ cls,
57
+ cast_error: CastError,
58
+ builder_name: str,
59
+ gen_kwargs: Dict[str, Any],
60
+ token: Optional[Union[bool, str]],
61
+ ) -> "DatasetGenerationCastError":
62
+ explanation_message = (
63
+ f"\n\nAll the data files must have the same columns, but at some point {cast_error.details()}"
64
+ )
65
+ formatted_tracked_gen_kwargs: List[str] = []
66
+ for gen_kwarg in gen_kwargs.values():
67
+ if not isinstance(gen_kwarg, (tracked_str, tracked_list, TrackedIterable)):
68
+ continue
69
+ while isinstance(gen_kwarg, (tracked_list, TrackedIterable)) and gen_kwarg.last_item is not None:
70
+ gen_kwarg = gen_kwarg.last_item
71
+ if isinstance(gen_kwarg, tracked_str):
72
+ gen_kwarg = gen_kwarg.get_origin()
73
+ if isinstance(gen_kwarg, str) and gen_kwarg.startswith("hf://"):
74
+ resolved_path = HfFileSystem(endpoint=config.HF_ENDPOINT, token=token).resolve_path(gen_kwarg)
75
+ gen_kwarg = "hf://" + resolved_path.unresolve()
76
+ if "@" + resolved_path.revision in gen_kwarg:
77
+ gen_kwarg = (
78
+ gen_kwarg.replace("@" + resolved_path.revision, "", 1)
79
+ + f" (at revision {resolved_path.revision})"
80
+ )
81
+ formatted_tracked_gen_kwargs.append(str(gen_kwarg))
82
+ if formatted_tracked_gen_kwargs:
83
+ explanation_message += f"\n\nThis happened while the {builder_name} dataset builder was generating data using\n\n{', '.join(formatted_tracked_gen_kwargs)}"
84
+ help_message = "\n\nPlease either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)"
85
+ return cls("An error occurred while generating the dataset" + explanation_message + help_message)
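As a rough illustration of how callers are expected to handle this hierarchy (the repository id is a placeholder; only classes defined above are caught):

```python
# Sketch: handling the datasets exception hierarchy around load_dataset.
from datasets import load_dataset
from datasets.exceptions import (
    DataFilesNotFoundError,
    DatasetGenerationError,
    DatasetNotFoundError,
)

try:
    ds = load_dataset("my_org/possibly_missing_dataset")  # placeholder repo id
except DatasetNotFoundError:
    print("Missing dataset, or a private/gated dataset without authentication.")
except DataFilesNotFoundError:
    print("The repository exists but contains no supported data files.")
except DatasetGenerationError as err:
    # DatasetGenerationCastError (a subclass) adds details about mismatched columns.
    print(f"Building the dataset failed: {err}")
```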
env-llmeval/lib/python3.10/site-packages/datasets/filesystems/__init__.py ADDED
@@ -0,0 +1,86 @@
1
+ import importlib
2
+ import shutil
3
+ import threading
4
+ import warnings
5
+ from typing import List
6
+
7
+ import fsspec
8
+ import fsspec.asyn
9
+ from fsspec.implementations.local import LocalFileSystem
10
+
11
+ from ..utils.deprecation_utils import deprecated
12
+ from . import compression
13
+
14
+
15
+ _has_s3fs = importlib.util.find_spec("s3fs") is not None
16
+
17
+ if _has_s3fs:
18
+ from .s3filesystem import S3FileSystem # noqa: F401
19
+
20
+ COMPRESSION_FILESYSTEMS: List[compression.BaseCompressedFileFileSystem] = [
21
+ compression.Bz2FileSystem,
22
+ compression.GzipFileSystem,
23
+ compression.Lz4FileSystem,
24
+ compression.XzFileSystem,
25
+ compression.ZstdFileSystem,
26
+ ]
27
+
28
+ # Register custom filesystems
29
+ for fs_class in COMPRESSION_FILESYSTEMS:
30
+ if fs_class.protocol in fsspec.registry and fsspec.registry[fs_class.protocol] is not fs_class:
31
+ warnings.warn(f"A filesystem protocol was already set for {fs_class.protocol} and will be overwritten.")
32
+ fsspec.register_implementation(fs_class.protocol, fs_class, clobber=True)
33
+
34
+
35
+ @deprecated(
36
+ "This function is deprecated and will be removed in a future version. Please use `fsspec.core.strip_protocol` instead."
37
+ )
38
+ def extract_path_from_uri(dataset_path: str) -> str:
39
+ """
40
+ Preprocesses `dataset_path` and removes remote filesystem (e.g. removing `s3://`).
41
+
42
+ Args:
43
+ dataset_path (`str`):
44
+ Path (e.g. `dataset/train`) or remote uri (e.g. `s3://my-bucket/dataset/train`) of the dataset directory.
45
+ """
46
+ if "://" in dataset_path:
47
+ dataset_path = dataset_path.split("://")[1]
48
+ return dataset_path
49
+
50
+
51
+ def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool:
52
+ """
53
+ Checks if `fs` is a remote filesystem.
54
+
55
+ Args:
56
+ fs (`fsspec.spec.AbstractFileSystem`):
57
+ An abstract super-class for pythonic file-systems, e.g. `fsspec.filesystem(\'file\')` or [`datasets.filesystems.S3FileSystem`].
58
+ """
59
+ return not isinstance(fs, LocalFileSystem)
60
+
61
+
62
+ def rename(fs: fsspec.AbstractFileSystem, src: str, dst: str):
63
+ """
64
+ Renames the file `src` in `fs` to `dst`.
65
+ """
66
+ if not is_remote_filesystem(fs):
67
+ # LocalFileSystem.mv does copy + rm, it is more efficient to simply move a local directory
68
+ shutil.move(fs._strip_protocol(src), fs._strip_protocol(dst))
69
+ else:
70
+ fs.mv(src, dst, recursive=True)
71
+
72
+
73
+ def _reset_fsspec_lock() -> None:
74
+ """
75
+ Clear reference to the loop and thread.
76
+ This is necessary otherwise HTTPFileSystem hangs in the ML training loop.
77
+ Only required for fsspec >= 0.9.0
78
+ See https://github.com/fsspec/gcsfs/issues/379
79
+ """
80
+ if hasattr(fsspec.asyn, "reset_lock"):
81
+ # for future fsspec>2022.05.0
82
+ fsspec.asyn.reset_lock()
83
+ else:
84
+ fsspec.asyn.iothread[0] = None
85
+ fsspec.asyn.loop[0] = None
86
+ fsspec.asyn.lock = threading.Lock()
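A brief sketch of the helpers defined above; paths are placeholders, and the compression protocols are usable in fsspec URLs because they are registered at import time:

```python
# Sketch: inspect the registered compression filesystems and the remote check.
import fsspec

from datasets.filesystems import COMPRESSION_FILESYSTEMS, is_remote_filesystem

print([fs_class.protocol for fs_class in COMPRESSION_FILESYSTEMS])  # ['bz2', 'gzip', 'lz4', 'xz', 'zstd']
local_fs = fsspec.filesystem("file")
print(is_remote_filesystem(local_fs))  # False: LocalFileSystem is considered local
```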
env-llmeval/lib/python3.10/site-packages/datasets/filesystems/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (2.84 kB).
 
env-llmeval/lib/python3.10/site-packages/datasets/filesystems/__pycache__/compression.cpython-310.pyc ADDED
Binary file (6.29 kB).
 
env-llmeval/lib/python3.10/site-packages/datasets/filesystems/__pycache__/s3filesystem.cpython-310.pyc ADDED
Binary file (6.06 kB).
 
env-llmeval/lib/python3.10/site-packages/datasets/filesystems/compression.py ADDED
@@ -0,0 +1,178 @@
1
+ import os
2
+ from typing import Optional
3
+
4
+ import fsspec
5
+ from fsspec.archive import AbstractArchiveFileSystem
6
+ from fsspec.utils import DEFAULT_BLOCK_SIZE
7
+
8
+
9
+ class BaseCompressedFileFileSystem(AbstractArchiveFileSystem):
10
+ """Read contents of compressed file as a filesystem with one file inside."""
11
+
12
+ root_marker = ""
13
+ protocol: str = (
14
+ None # protocol passed in prefix to the url. ex: "gzip", for gzip://file.txt::http://foo.bar/file.txt.gz
15
+ )
16
+ compression: str = None # compression type in fsspec. ex: "gzip"
17
+ extension: str = None # extension of the filename to strip. ex: ".gz" to get file.txt from file.txt.gz
18
+
19
+ def __init__(
20
+ self, fo: str = "", target_protocol: Optional[str] = None, target_options: Optional[dict] = None, **kwargs
21
+ ):
22
+ """
23
+ The compressed file system can be instantiated from any compressed file.
24
+ It reads the contents of compressed file as a filesystem with one file inside, as if it was an archive.
25
+
26
+ The single file inside the filesystem is named after the compressed file,
27
+ without the compression extension at the end of the filename.
28
+
29
+ Args:
30
+ fo (:obj:``str``): Path to compressed file. Will fetch file using ``fsspec.open()``
31
+ mode (:obj:``str``): Currently, only 'rb' accepted
32
+ target_protocol(:obj:``str``, optional): To override the FS protocol inferred from a URL.
33
+ target_options (:obj:``dict``, optional): Kwargs passed when instantiating the target FS.
34
+ """
35
+ super().__init__(self, **kwargs)
36
+ # always open as "rb" since fsspec can then use the TextIOWrapper to make it work for "r" mode
37
+ self.file = fsspec.open(
38
+ fo,
39
+ mode="rb",
40
+ protocol=target_protocol,
41
+ compression=self.compression,
42
+ client_kwargs={
43
+ "requote_redirect_url": False, # see https://github.com/huggingface/datasets/pull/5459
44
+ "trust_env": True, # Enable reading proxy env variables.
45
+ **(target_options or {}).pop("client_kwargs", {}), # To avoid issues if it was already passed.
46
+ },
47
+ **(target_options or {}),
48
+ )
49
+ self.compressed_name = os.path.basename(self.file.path.split("::")[0])
50
+ self.uncompressed_name = (
51
+ self.compressed_name[: self.compressed_name.rindex(".")]
52
+ if "." in self.compressed_name
53
+ else self.compressed_name
54
+ )
55
+ self.dir_cache = None
56
+
57
+ @classmethod
58
+ def _strip_protocol(cls, path):
59
+ # compressed file paths are always relative to the archive root
60
+ return super()._strip_protocol(path).lstrip("/")
61
+
62
+ def _get_dirs(self):
63
+ if self.dir_cache is None:
64
+ f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name}
65
+ self.dir_cache = {f["name"]: f}
66
+
67
+ def cat(self, path: str):
68
+ return self.file.open().read()
69
+
70
+ def _open(
71
+ self,
72
+ path: str,
73
+ mode: str = "rb",
74
+ block_size=None,
75
+ autocommit=True,
76
+ cache_options=None,
77
+ **kwargs,
78
+ ):
79
+ path = self._strip_protocol(path)
80
+ if mode != "rb":
81
+ raise ValueError(f"Tried to read with mode {mode} on file {self.file.path} opened with mode 'rb'")
82
+ return self.file.open()
83
+
84
+
85
+ class Bz2FileSystem(BaseCompressedFileFileSystem):
86
+ """Read contents of BZ2 file as a filesystem with one file inside."""
87
+
88
+ protocol = "bz2"
89
+ compression = "bz2"
90
+ extension = ".bz2"
91
+
92
+
93
+ class GzipFileSystem(BaseCompressedFileFileSystem):
94
+ """Read contents of GZIP file as a filesystem with one file inside."""
95
+
96
+ protocol = "gzip"
97
+ compression = "gzip"
98
+ extension = ".gz"
99
+
100
+
101
+ class Lz4FileSystem(BaseCompressedFileFileSystem):
102
+ """Read contents of LZ4 file as a filesystem with one file inside."""
103
+
104
+ protocol = "lz4"
105
+ compression = "lz4"
106
+ extension = ".lz4"
107
+
108
+
109
+ class XzFileSystem(BaseCompressedFileFileSystem):
110
+ """Read contents of .xz (LZMA) file as a filesystem with one file inside."""
111
+
112
+ protocol = "xz"
113
+ compression = "xz"
114
+ extension = ".xz"
115
+
116
+
117
+ class ZstdFileSystem(BaseCompressedFileFileSystem):
118
+ """
119
+ Read contents of zstd file as a filesystem with one file inside.
120
+
121
+ Note that reading in binary mode with fsspec isn't supported yet:
122
+ https://github.com/indygreg/python-zstandard/issues/136
123
+ """
124
+
125
+ protocol = "zstd"
126
+ compression = "zstd"
127
+ extension = ".zst"
128
+
129
+ def __init__(
130
+ self,
131
+ fo: str,
132
+ mode: str = "rb",
133
+ target_protocol: Optional[str] = None,
134
+ target_options: Optional[dict] = None,
135
+ block_size: int = DEFAULT_BLOCK_SIZE,
136
+ **kwargs,
137
+ ):
138
+ super().__init__(
139
+ fo=fo,
140
+ mode=mode,
141
+ target_protocol=target_protocol,
142
+ target_options=target_options,
143
+ block_size=block_size,
144
+ **kwargs,
145
+ )
146
+ # We need to wrap the zstd decompressor to avoid this error in fsspec==2021.7.0 and zstandard==0.15.2:
147
+ #
148
+ # File "/Users/user/.virtualenvs/hf-datasets/lib/python3.7/site-packages/fsspec/core.py", line 145, in open
149
+ # out.close = close
150
+ # AttributeError: 'zstd.ZstdDecompressionReader' object attribute 'close' is read-only
151
+ #
152
+ # see https://github.com/intake/filesystem_spec/issues/725
153
+ _enter = self.file.__enter__
154
+
155
+ class WrappedFile:
156
+ def __init__(self, file_):
157
+ self._file = file_
158
+
159
+ def __enter__(self):
160
+ self._file.__enter__()
161
+ return self
162
+
163
+ def __exit__(self, *args, **kwargs):
164
+ self._file.__exit__(*args, **kwargs)
165
+
166
+ def __iter__(self):
167
+ return iter(self._file)
168
+
169
+ def __next__(self):
170
+ return next(self._file)
171
+
172
+ def __getattr__(self, attr):
173
+ return getattr(self._file, attr)
174
+
175
+ def fixed_enter(*args, **kwargs):
176
+ return WrappedFile(_enter(*args, **kwargs))
177
+
178
+ self.file.__enter__ = fixed_enter
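Building on the chained-URL example in the `protocol` comment above, a hedged sketch of reading a compressed file through one of these filesystems (the file names are placeholders and must exist for the snippet to run):

```python
# Sketch: read a gzip-compressed text file as a one-file filesystem via fsspec.
import fsspec

# "example.txt.gz" is a placeholder local file; the inner name drops the ".gz" extension.
with fsspec.open("gzip://example.txt::file://example.txt.gz", mode="rt") as f:
    print(f.read()[:100])
```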
env-llmeval/lib/python3.10/site-packages/datasets/filesystems/s3filesystem.py ADDED
@@ -0,0 +1,116 @@
1
+ import s3fs
2
+
3
+ from ..utils.deprecation_utils import deprecated
4
+
5
+
6
+ @deprecated("Use s3fs.S3FileSystem instead.")
7
+ class S3FileSystem(s3fs.S3FileSystem):
8
+ """
9
+ `datasets.filesystems.S3FileSystem` is a subclass of [`s3fs.S3FileSystem`](https://s3fs.readthedocs.io/en/latest/api.html).
10
+
11
+ Users can use this class to access S3 as if it were a file system. It exposes a filesystem-like API (ls, cp, open, etc.) on top of S3 storage. Provide credentials either explicitly (`key=`, `secret=`) or with boto's credential methods. See botocore documentation for more information. If no credentials are available, use `anon=True`.
12
+
13
+ Args:
14
+ anon (`bool`, defaults to `False`):
15
+ Whether to use anonymous connection (public buckets only). If `False`, uses the key/secret given,
16
+ or boto's credential resolver (client_kwargs, environment variables, config files, EC2 IAM server, in that order).
17
+ key (`str`):
18
+ If not anonymous, use this access key ID, if specified.
19
+ secret (`str`):
20
+ If not anonymous, use this secret access key, if specified.
21
+ token (`str`):
22
+ If not anonymous, use this security token, if specified.
23
+ use_ssl (`bool`, defaults to `True`):
24
+ Whether to use SSL in connections to S3; may be faster without, but insecure. If `use_ssl` is
25
+ also set in `client_kwargs`, the value set in `client_kwargs` will take priority.
26
+ s3_additional_kwargs (`dict`):
27
+ Parameters that are used when calling S3 API methods. Typically used for things
28
+ like ServerSideEncryption.
29
+ client_kwargs (`dict`):
30
+ Parameters for the botocore client.
31
+ requester_pays (`bool`, defaults to `False`):
32
+ Whether `RequesterPays` buckets are supported.
33
+ default_block_size (`int`):
34
+ If given, the default block size value used for `open()`, if no specific value is given at all time.
35
+ The built-in default is 5MB.
36
+ default_fill_cache (`bool`, defaults to `True`):
37
+ Whether to use cache filling with open by default. Refer to `S3File.open`.
38
+ default_cache_type (`str`, defaults to `bytes`):
39
+ If given, the default `cache_type` value used for `open()`. Set to `none` if no
40
+ caching is desired. See fsspec's documentation for other available `cache_type` values.
41
+ version_aware (`bool`, defaults to `False`):
42
+ Whether to support bucket versioning. If enabled, this will require the user to have
43
+ the necessary IAM permissions for dealing with versioned objects.
44
+ cache_regions (`bool`, defaults to `False`):
45
+ Whether to cache bucket regions. Whenever a new bucket is used, it will
46
+ first find out which region it belongs to and then use the client for that region.
47
+ asynchronous (`bool`, defaults to `False`):
48
+ Whether this instance is to be used from inside coroutines.
49
+ config_kwargs (`dict`):
50
+ Parameters passed to `botocore.client.Config`.
51
+ **kwargs:
52
+ Other parameters for core session.
53
+ session (`aiobotocore.session.AioSession`):
54
+ Session to be used for all connections. This session will be used inplace of creating
55
+ a new session inside S3FileSystem. For example: `aiobotocore.session.AioSession(profile='test_user')`.
56
+ skip_instance_cache (`bool`):
57
+ Control reuse of instances. Passed on to `fsspec`.
58
+ use_listings_cache (`bool`):
59
+ Control reuse of directory listings. Passed on to `fsspec`.
60
+ listings_expiry_time (`int` or `float`):
61
+ Control reuse of directory listings. Passed on to `fsspec`.
62
+ max_paths (`int`): Control reuse of directory listings. Passed on to `fsspec`.
63
+
64
+ Examples:
65
+
66
+ Listing files from public S3 bucket.
67
+
68
+ ```py
69
+ >>> import datasets
70
+ >>> s3 = datasets.filesystems.S3FileSystem(anon=True) # doctest: +SKIP
71
+ >>> s3.ls('public-datasets/imdb/train') # doctest: +SKIP
72
+ ['dataset_info.json.json','dataset.arrow','state.json']
73
+ ```
74
+
75
+ Listing files from private S3 bucket using `aws_access_key_id` and `aws_secret_access_key`.
76
+
77
+ ```py
78
+ >>> import datasets
79
+ >>> s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key) # doctest: +SKIP
80
+ >>> s3.ls('my-private-datasets/imdb/train') # doctest: +SKIP
81
+ ['dataset_info.json.json','dataset.arrow','state.json']
82
+ ```
83
+
84
+ Using `S3FileSystem` with `botocore.session.Session` and custom `aws_profile`.
85
+
86
+ ```py
87
+ >>> import botocore
88
+ >>> from datasets.filesystems import S3FileSystem
89
+
90
+ >>> s3_session = botocore.session.Session(profile_name='my_profile_name')
91
+ >>> s3 = S3FileSystem(session=s3_session) # doctest: +SKIP
92
+ ```
93
+
94
+ Loading dataset from S3 using `S3FileSystem` and [`load_from_disk`].
95
+
96
+ ```py
97
+ >>> from datasets import load_from_disk
98
+ >>> from datasets.filesystems import S3FileSystem
99
+
100
+ >>> s3 = S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key) # doctest: +SKIP
101
+ >>> dataset = load_from_disk('s3://my-private-datasets/imdb/train', storage_options=s3.storage_options) # doctest: +SKIP
102
+ >>> print(len(dataset))
103
+ 25000
104
+ ```
105
+
106
+ Saving dataset to S3 using `S3FileSystem` and [`Dataset.save_to_disk`].
107
+
108
+ ```py
109
+ >>> from datasets import load_dataset
110
+ >>> from datasets.filesystems import S3FileSystem
111
+
112
+ >>> dataset = load_dataset("imdb")
113
+ >>> s3 = S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key) # doctest: +SKIP
114
+ >>> dataset.save_to_disk('s3://my-private-datasets/imdb/train', storage_options=s3.storage_options) # doctest: +SKIP
115
+ ```
116
+ """
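Since the wrapper above is deprecated in favor of `s3fs.S3FileSystem`, a minimal equivalent sketch using s3fs directly, mirroring the docstring examples (bucket path and credentials are placeholders):

```python
# Sketch: pass an s3fs filesystem's storage_options to load_from_disk.
# Credentials and the bucket path are placeholders.
import s3fs

from datasets import load_from_disk

s3 = s3fs.S3FileSystem(key="<aws_access_key_id>", secret="<aws_secret_access_key>")
dataset = load_from_disk("s3://my-private-datasets/imdb/train", storage_options=s3.storage_options)
print(len(dataset))
```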
env-llmeval/lib/python3.10/site-packages/datasets/fingerprint.py ADDED
@@ -0,0 +1,494 @@
1
+ import inspect
2
+ import os
3
+ import random
4
+ import shutil
5
+ import tempfile
6
+ import weakref
7
+ from functools import wraps
8
+ from pathlib import Path
9
+ from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Union
10
+
11
+ import numpy as np
12
+ import xxhash
13
+
14
+ from . import config
15
+ from .naming import INVALID_WINDOWS_CHARACTERS_IN_PATH
16
+ from .utils._dill import dumps
17
+ from .utils.deprecation_utils import deprecated
18
+ from .utils.logging import get_logger
19
+
20
+
21
+ if TYPE_CHECKING:
22
+ from .arrow_dataset import Dataset
23
+
24
+
25
+ logger = get_logger(__name__)
26
+
27
+
28
+ # Fingerprinting allows to have one deterministic fingerprint per dataset state.
29
+ # A dataset fingerprint is updated after each transform.
30
+ # Re-running the same transforms on a dataset in a different session results in the same fingerprint.
31
+ # This is possible thanks to a custom hashing function that works with most python objects.
32
+
33
+ # Fingerprinting is the main mechanism that enables caching.
34
+ # The caching mechanism allows to reload an existing cache file if it's already been computed.
35
+
36
+
37
+ #################
38
+ # Caching
39
+ #################
40
+
41
+ _CACHING_ENABLED = True
42
+ _TEMP_DIR_FOR_TEMP_CACHE_FILES: Optional["_TempCacheDir"] = None
43
+ _DATASETS_WITH_TABLE_IN_TEMP_DIR: Optional[weakref.WeakSet] = None
44
+
45
+
46
+ class _TempCacheDir:
47
+ """
48
+ A temporary directory for storing cached Arrow files with a cleanup that frees references to the Arrow files
49
+ before deleting the directory itself to avoid permission errors on Windows.
50
+ """
51
+
52
+ def __init__(self):
53
+ self.name = tempfile.mkdtemp(prefix=config.TEMP_CACHE_DIR_PREFIX)
54
+ self._finalizer = weakref.finalize(self, self._cleanup)
55
+
56
+ def _cleanup(self):
57
+ for dset in get_datasets_with_cache_file_in_temp_dir():
58
+ dset.__del__()
59
+ if os.path.exists(self.name):
60
+ try:
61
+ shutil.rmtree(self.name)
62
+ except Exception as e:
63
+ raise OSError(
64
+ f"An error occurred while trying to delete temporary cache directory {self.name}. Please delete it manually."
65
+ ) from e
66
+
67
+ def cleanup(self):
68
+ if self._finalizer.detach():
69
+ self._cleanup()
70
+
71
+
72
+ def maybe_register_dataset_for_temp_dir_deletion(dataset):
73
+ """
74
+ This function registers the datasets that have cache files in _TEMP_DIR_FOR_TEMP_CACHE_FILES in order
75
+ to properly delete them before deleting the temporary directory.
76
+ The temporary directory _TEMP_DIR_FOR_TEMP_CACHE_FILES is used when caching is disabled.
77
+ """
78
+ if _TEMP_DIR_FOR_TEMP_CACHE_FILES is None:
79
+ return
80
+
81
+ global _DATASETS_WITH_TABLE_IN_TEMP_DIR
82
+ if _DATASETS_WITH_TABLE_IN_TEMP_DIR is None:
83
+ _DATASETS_WITH_TABLE_IN_TEMP_DIR = weakref.WeakSet()
84
+ if any(
85
+ Path(_TEMP_DIR_FOR_TEMP_CACHE_FILES.name) in Path(cache_file["filename"]).parents
86
+ for cache_file in dataset.cache_files
87
+ ):
88
+ _DATASETS_WITH_TABLE_IN_TEMP_DIR.add(dataset)
89
+
90
+
91
+ def get_datasets_with_cache_file_in_temp_dir():
92
+ return list(_DATASETS_WITH_TABLE_IN_TEMP_DIR) if _DATASETS_WITH_TABLE_IN_TEMP_DIR is not None else []
93
+
94
+
95
+ def enable_caching():
96
+ """
97
+ When applying transforms on a dataset, the data are stored in cache files.
98
+ The caching mechanism allows to reload an existing cache file if it's already been computed.
99
+
100
+ Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated
101
+ after each transform.
102
+
103
+ If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.
104
+ More precisely, if the caching is disabled:
105
+ - cache files are always recreated
106
+ - cache files are written to a temporary directory that is deleted when session closes
107
+ - cache files are named using a random hash instead of the dataset fingerprint
108
+ - use [`~datasets.Dataset.save_to_disk`] to save a transformed dataset or it will be deleted when session closes
109
+ - caching doesn't affect [`~datasets.load_dataset`]. If you want to regenerate a dataset from scratch you should use
110
+ the `download_mode` parameter in [`~datasets.load_dataset`].
111
+ """
112
+ global _CACHING_ENABLED
113
+ _CACHING_ENABLED = True
114
+
115
+
116
+ def disable_caching():
117
+ """
118
+ When applying transforms on a dataset, the data are stored in cache files.
119
+ The caching mechanism allows to reload an existing cache file if it's already been computed.
120
+
121
+ Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated
122
+ after each transform.
123
+
124
+ If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.
125
+ More precisely, if the caching is disabled:
126
+ - cache files are always recreated
127
+ - cache files are written to a temporary directory that is deleted when session closes
128
+ - cache files are named using a random hash instead of the dataset fingerprint
129
+ - use [`~datasets.Dataset.save_to_disk`] to save a transformed dataset or it will be deleted when session closes
130
+ - caching doesn't affect [`~datasets.load_dataset`]. If you want to regenerate a dataset from scratch you should use
131
+ the `download_mode` parameter in [`~datasets.load_dataset`].
132
+ """
133
+ global _CACHING_ENABLED
134
+ _CACHING_ENABLED = False
135
+
136
+
137
+ @deprecated(
138
+ "Use datasets.enable_caching() or datasets.disable_caching() instead. This function will be removed in a future version of datasets."
139
+ )
140
+ def set_caching_enabled(boolean: bool):
141
+ """
142
+ When applying transforms on a dataset, the data are stored in cache files.
143
+ The caching mechanism allows to reload an existing cache file if it's already been computed.
144
+
145
+ Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated
146
+ after each transform.
147
+
148
+ If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.
149
+ More precisely, if the caching is disabled:
150
+ - cache files are always recreated
151
+ - cache files are written to a temporary directory that is deleted when session closes
152
+ - cache files are named using a random hash instead of the dataset fingerprint
153
+ - use :func:`datasets.Dataset.save_to_disk` to save a transformed dataset or it will be deleted when session closes
154
+ - caching doesn't affect :func:`datasets.load_dataset`. If you want to regenerate a dataset from scratch you should use
155
+ the ``download_mode`` parameter in :func:`datasets.load_dataset`.
156
+ """
157
+ global _CACHING_ENABLED
158
+ _CACHING_ENABLED = bool(boolean)
159
+
160
+
161
+ def is_caching_enabled() -> bool:
162
+ """
163
+ When applying transforms on a dataset, the data are stored in cache files.
164
+ The caching mechanism allows to reload an existing cache file if it's already been computed.
165
+
166
+ Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated
167
+ after each transform.
168
+
169
+ If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.
170
+ More precisely, if the caching is disabled:
171
+ - cache files are always recreated
172
+ - cache files are written to a temporary directory that is deleted when session closes
173
+ - cache files are named using a random hash instead of the dataset fingerprint
174
+ - use [`~datasets.Dataset.save_to_disk`]] to save a transformed dataset or it will be deleted when session closes
175
+ - caching doesn't affect [`~datasets.load_dataset`]. If you want to regenerate a dataset from scratch you should use
176
+ the `download_mode` parameter in [`~datasets.load_dataset`].
177
+ """
178
+ global _CACHING_ENABLED
179
+ return bool(_CACHING_ENABLED)
180
+
181
+
182
+ def get_temporary_cache_files_directory() -> str:
183
+ """Return a directory that is deleted when session closes."""
184
+ global _TEMP_DIR_FOR_TEMP_CACHE_FILES
185
+ if _TEMP_DIR_FOR_TEMP_CACHE_FILES is None:
186
+ _TEMP_DIR_FOR_TEMP_CACHE_FILES = _TempCacheDir()
187
+ return _TEMP_DIR_FOR_TEMP_CACHE_FILES.name
188
+
189
+
190
+ #################
191
+ # Hashing
192
+ #################
193
+
194
+
195
+ @deprecated("Use `copyreg.pickle` to register a custom reducer.")
196
+ def hashregister(*types):
197
+ def proxy(func):
198
+ for t in types:
199
+ Hasher.dispatch[t] = func
200
+ return func
201
+
202
+ return proxy
203
+
204
+
205
+ class Hasher:
206
+ """Hasher that accepts python objects as inputs."""
207
+
208
+ dispatch: Dict = {}
209
+
210
+ def __init__(self):
211
+ self.m = xxhash.xxh64()
212
+
213
+ @classmethod
214
+ def hash_bytes(cls, value: Union[bytes, List[bytes]]) -> str:
215
+ value = [value] if isinstance(value, bytes) else value
216
+ m = xxhash.xxh64()
217
+ for x in value:
218
+ m.update(x)
219
+ return m.hexdigest()
220
+
221
+ @classmethod
222
+ @deprecated("Use `Hasher.hash` instead.")
223
+ def hash_default(cls, value: Any) -> str:
224
+ return cls.hash(value)
225
+
226
+ @classmethod
227
+ def hash(cls, value: Any) -> str:
228
+ return cls.hash_bytes(dumps(value))
229
+
230
+ def update(self, value: Any) -> None:
231
+ header_for_update = f"=={type(value)}=="
232
+ value_for_update = self.hash(value)
233
+ self.m.update(header_for_update.encode("utf8"))
234
+ self.m.update(value_for_update.encode("utf-8"))
235
+
236
+ def hexdigest(self) -> str:
237
+ return self.m.hexdigest()
238
+
239
+
240
+ #################
241
+ # Fingerprinting
242
+ #################
243
+
244
+ fingerprint_rng = random.Random()
245
+ # we show a warning only once when fingerprinting fails to avoid spam
246
+ fingerprint_warnings: Dict[str, bool] = {}
247
+
248
+
249
+ def generate_fingerprint(dataset: "Dataset") -> str:
250
+ state = dataset.__dict__
251
+ hasher = Hasher()
252
+ for key in sorted(state):
253
+ if key == "_fingerprint":
254
+ continue
255
+ hasher.update(key)
256
+ hasher.update(state[key])
257
+ # hash data files last modification timestamps as well
258
+ for cache_file in dataset.cache_files:
259
+ hasher.update(os.path.getmtime(cache_file["filename"]))
260
+ return hasher.hexdigest()
261
+
262
+
263
+ def generate_random_fingerprint(nbits: int = 64) -> str:
264
+ return f"{fingerprint_rng.getrandbits(nbits):0{nbits//4}x}"
265
+
266
+
267
+ def update_fingerprint(fingerprint, transform, transform_args):
268
+ global fingerprint_warnings
269
+ hasher = Hasher()
270
+ hasher.update(fingerprint)
271
+ try:
272
+ hasher.update(transform)
273
+ except: # noqa various errors might raise here from pickle or dill
274
+ if _CACHING_ENABLED:
275
+ if not fingerprint_warnings.get("update_fingerprint_transform_hash_failed", False):
276
+ logger.warning(
277
+ f"Transform {transform} couldn't be hashed properly, a random hash was used instead. "
278
+ "Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. "
279
+ "If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. "
280
+ "This warning is only shown once. Subsequent hashing failures won't be shown."
281
+ )
282
+ fingerprint_warnings["update_fingerprint_transform_hash_failed"] = True
283
+ else:
284
+ logger.info(f"Transform {transform} couldn't be hashed properly, a random hash was used instead.")
285
+ else:
286
+ logger.info(
287
+ f"Transform {transform} couldn't be hashed properly, a random hash was used instead. This doesn't affect caching since it's disabled."
288
+ )
289
+
290
+ return generate_random_fingerprint()
291
+ for key in sorted(transform_args):
292
+ hasher.update(key)
293
+ try:
294
+ hasher.update(transform_args[key])
295
+ except: # noqa various errors might raise here from pickle or dill
296
+ if _CACHING_ENABLED:
297
+ if not fingerprint_warnings.get("update_fingerprint_transform_hash_failed", False):
298
+ logger.warning(
299
+ f"Parameter '{key}'={transform_args[key]} of the transform {transform} couldn't be hashed properly, a random hash was used instead. "
300
+ "Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. "
301
+ "If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. "
302
+ "This warning is only shown once. Subsequent hashing failures won't be shown."
303
+ )
304
+ fingerprint_warnings["update_fingerprint_transform_hash_failed"] = True
305
+ else:
306
+ logger.info(
307
+ f"Parameter '{key}'={transform_args[key]} of the transform {transform} couldn't be hashed properly, a random hash was used instead."
308
+ )
309
+ else:
310
+ logger.info(
311
+ f"Parameter '{key}'={transform_args[key]} of the transform {transform} couldn't be hashed properly, a random hash was used instead. This doesn't affect caching since it's disabled."
312
+ )
313
+ return generate_random_fingerprint()
314
+ return hasher.hexdigest()
315
+
316
+
317
+ def validate_fingerprint(fingerprint: str, max_length=64):
318
+ """
319
+ Make sure the fingerprint is a non-empty string that is not longer than max_length (64 by default),
320
+ so that the fingerprint can be used to name cache files without issues.
321
+ """
322
+ if not isinstance(fingerprint, str) or not fingerprint:
323
+ raise ValueError(f"Invalid fingerprint '{fingerprint}': it should be a non-empty string.")
324
+ for invalid_char in INVALID_WINDOWS_CHARACTERS_IN_PATH:
325
+ if invalid_char in fingerprint:
326
+ raise ValueError(
327
+ f"Invalid fingerprint. Bad characters from black list '{INVALID_WINDOWS_CHARACTERS_IN_PATH}' found in '{fingerprint}'. "
328
+ f"They could create issues when creating cache files."
329
+ )
330
+ if len(fingerprint) > max_length:
331
+ raise ValueError(
332
+ f"Invalid fingerprint. Maximum length is {max_length} but '{fingerprint}' has length {len(fingerprint)}."
333
+ "It could create issues when creating cache files."
334
+ )
335
+
336
+
337
+ def format_transform_for_fingerprint(func: Callable, version: Optional[str] = None) -> str:
338
+ """
339
+ Format a transform to the format that will be used to update the fingerprint.
340
+ """
341
+ transform = f"{func.__module__}.{func.__qualname__}"
342
+ if version is not None:
343
+ transform += f"@{version}"
344
+ return transform
345
+
346
+
347
+ def format_kwargs_for_fingerprint(
348
+ func: Callable,
349
+ args: Tuple,
350
+ kwargs: Dict[str, Any],
351
+ use_kwargs: Optional[List[str]] = None,
352
+ ignore_kwargs: Optional[List[str]] = None,
353
+ randomized_function: bool = False,
354
+ ) -> Dict[str, Any]:
355
+ """
356
+ Format the kwargs of a transform to the format that will be used to update the fingerprint.
357
+ """
358
+ kwargs_for_fingerprint = kwargs.copy()
359
+ if args:
360
+ params = [p.name for p in inspect.signature(func).parameters.values() if p.kind != p.VAR_KEYWORD]
361
+ args = args[1:] # assume the first argument is the dataset
362
+ params = params[1:]
363
+ kwargs_for_fingerprint.update(zip(params, args))
364
+ else:
365
+ del kwargs_for_fingerprint[
366
+ next(iter(inspect.signature(func).parameters))
367
+ ] # assume the first key is the dataset
368
+
369
+ # keep the right kwargs to be hashed to generate the fingerprint
370
+
371
+ if use_kwargs:
372
+ kwargs_for_fingerprint = {k: v for k, v in kwargs_for_fingerprint.items() if k in use_kwargs}
373
+ if ignore_kwargs:
374
+ kwargs_for_fingerprint = {k: v for k, v in kwargs_for_fingerprint.items() if k not in ignore_kwargs}
375
+ if randomized_function: # randomized functions have `seed` and `generator` parameters
376
+ if kwargs_for_fingerprint.get("seed") is None and kwargs_for_fingerprint.get("generator") is None:
377
+ _, seed, pos, *_ = np.random.get_state()
378
+ seed = seed[pos] if pos < 624 else seed[0]
379
+ kwargs_for_fingerprint["generator"] = np.random.default_rng(seed)
380
+
381
+ # remove kwargs that are the default values
382
+
383
+ default_values = {
384
+ p.name: p.default for p in inspect.signature(func).parameters.values() if p.default != inspect._empty
385
+ }
386
+ for default_varname, default_value in default_values.items():
387
+ if default_varname in kwargs_for_fingerprint and kwargs_for_fingerprint[default_varname] == default_value:
388
+ kwargs_for_fingerprint.pop(default_varname)
389
+ return kwargs_for_fingerprint
390
+
391
+
392
+ def fingerprint_transform(
393
+ inplace: bool,
394
+ use_kwargs: Optional[List[str]] = None,
395
+ ignore_kwargs: Optional[List[str]] = None,
396
+ fingerprint_names: Optional[List[str]] = None,
397
+ randomized_function: bool = False,
398
+ version: Optional[str] = None,
399
+ ):
400
+ """
401
+ Wrapper for dataset transforms to update the dataset fingerprint using ``update_fingerprint``
402
+ Args:
403
+ inplace (:obj:`bool`): If inplace is True, the fingerprint of the dataset is updated inplace.
404
+ Otherwise, a parameter "new_fingerprint" is passed to the wrapped method that should take care of
405
+ setting the fingerprint of the returned Dataset.
406
+ use_kwargs (:obj:`List[str]`, optional): optional white list of argument names to take into account
407
+ to update the fingerprint. These are argument names of the wrapped method; by default
408
+ all the arguments are used.
409
+ ignore_kwargs (:obj:`List[str]`, optional): optional black list of argument names to take into account
410
+ to update the fingerprint. Note that ignore_kwargs takes precedence over use_kwargs.
411
+ fingerprint_names (:obj:`List[str]`, optional, defaults to ["new_fingerprint"]):
412
+ If the dataset transforms is not inplace and returns a DatasetDict, then it can require
413
+ several fingerprints (one per dataset in the DatasetDict). By specifying fingerprint_names,
414
+ one fingerprint named after each element of fingerprint_names is going to be passed.
415
+ randomized_function (:obj:`bool`, defaults to False): If the dataset transform is random and has
416
+ optional parameters "seed" and "generator", then you can set randomized_function to True.
417
+ This way, even if users set "seed" and "generator" to None, then the fingerprint is
418
+ going to be randomly generated depending on numpy's current state. In this case, the
419
+ generator is set to np.random.default_rng(np.random.get_state()[1][0]).
420
+ version (:obj:`str`, optional): version of the transform. The version is taken into account when
421
+ computing the fingerprint. If a dataset transform changes (or at least if its cached output
422
+ changes), then one should increase the version. If the version stays the
423
+ same, then old cached data that is not compatible with the new transform could be reused.
424
+ It should be in the format "MAJOR.MINOR.PATCH".
425
+ """
426
+
427
+ if use_kwargs is not None and not isinstance(use_kwargs, list):
428
+ raise ValueError(f"use_kwargs is supposed to be a list, not {type(use_kwargs)}")
429
+
430
+ if ignore_kwargs is not None and not isinstance(ignore_kwargs, list):
431
+ raise ValueError(f"ignore_kwargs is supposed to be a list, not {type(ignore_kwargs)}")
432
+
433
+ if inplace and fingerprint_names:
434
+ raise ValueError("fingerprint_names are only used when inplace is False")
435
+
436
+ fingerprint_names = fingerprint_names if fingerprint_names is not None else ["new_fingerprint"]
437
+
438
+ def _fingerprint(func):
439
+ if not inplace and not all(name in func.__code__.co_varnames for name in fingerprint_names):
440
+ raise ValueError(f"function {func} is missing parameters {fingerprint_names} in signature")
441
+
442
+ if randomized_function: # randomized functions have seed and generator parameters
443
+ if "seed" not in func.__code__.co_varnames:
444
+ raise ValueError(f"'seed' must be in {func}'s signature")
445
+ if "generator" not in func.__code__.co_varnames:
446
+ raise ValueError(f"'generator' must be in {func}'s signature")
447
+ # this call has to be outside the wrapper since __qualname__ changes in multiprocessing
448
+ transform = format_transform_for_fingerprint(func, version=version)
449
+
450
+ @wraps(func)
451
+ def wrapper(*args, **kwargs):
452
+ kwargs_for_fingerprint = format_kwargs_for_fingerprint(
453
+ func,
454
+ args,
455
+ kwargs,
456
+ use_kwargs=use_kwargs,
457
+ ignore_kwargs=ignore_kwargs,
458
+ randomized_function=randomized_function,
459
+ )
460
+
461
+ if args:
462
+ dataset: Dataset = args[0]
463
+ args = args[1:]
464
+ else:
465
+ dataset: Dataset = kwargs.pop(next(iter(inspect.signature(func).parameters)))
466
+
467
+ # compute new_fingerprint and add it to the args of not in-place transforms
468
+ if inplace:
469
+ new_fingerprint = update_fingerprint(dataset._fingerprint, transform, kwargs_for_fingerprint)
470
+ else:
471
+ for fingerprint_name in fingerprint_names: # transforms like `train_test_split` have several hashes
472
+ if kwargs.get(fingerprint_name) is None:
473
+ kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
474
+ kwargs[fingerprint_name] = update_fingerprint(
475
+ dataset._fingerprint, transform, kwargs_for_fingerprint
476
+ )
477
+ else:
478
+ validate_fingerprint(kwargs[fingerprint_name])
479
+
480
+ # Call actual function
481
+
482
+ out = func(dataset, *args, **kwargs)
483
+
484
+ # Update fingerprint of in-place transforms + update in-place history of transforms
485
+
486
+ if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
487
+ dataset._fingerprint = new_fingerprint
488
+
489
+ return out
490
+
491
+ wrapper._decorator_name_ = "fingerprint"
492
+ return wrapper
493
+
494
+ return _fingerprint
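A short sketch of the hashing behaviour the fingerprinting machinery above relies on; the transform here is an arbitrary illustrative function:

```python
# Sketch: deterministic hashing keeps fingerprints (and cache file names) stable.
from datasets.fingerprint import Hasher, update_fingerprint

def add_prefix(example):
    # illustrative, picklable transform
    return {"text": "prefix: " + example["text"]}

assert Hasher.hash(add_prefix) == Hasher.hash(add_prefix)  # same input -> same hash

# update_fingerprint chains the previous fingerprint with the transform and its args,
# which is how transforms like Dataset.map derive the fingerprints used in cache file names.
new_fp = update_fingerprint("0123456789abcdef", "Dataset.map", {"function": add_prefix})
print(new_fp)  # a new 16-hex-character fingerprint
```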
env-llmeval/lib/python3.10/site-packages/datasets/info.py ADDED
@@ -0,0 +1,592 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # Lint as: python3
16
+ """DatasetInfo and MetricInfo record information we know about a dataset and a metric.
17
+
18
+ This includes things that we know about the dataset statically, i.e.:
19
+ - description
20
+ - canonical location
21
+ - does it have validation and tests splits
22
+ - size
23
+ - etc.
24
+
25
+ This also includes the things that can and should be computed once we've
26
+ processed the dataset as well:
27
+ - number of examples (in each split)
28
+ - etc.
29
+ """
30
+
31
+ import copy
32
+ import dataclasses
33
+ import json
34
+ import os
35
+ import posixpath
36
+ import warnings
37
+ from dataclasses import dataclass
38
+ from pathlib import Path
39
+ from typing import ClassVar, Dict, List, Optional, Union
40
+
41
+ import fsspec
42
+ from huggingface_hub import DatasetCard, DatasetCardData
43
+
44
+ from . import config
45
+ from .features import Features, Value
46
+ from .splits import SplitDict
47
+ from .tasks import TaskTemplate, task_template_from_dict
48
+ from .utils import Version
49
+ from .utils.logging import get_logger
50
+ from .utils.py_utils import asdict, unique_values
51
+
52
+
53
+ logger = get_logger(__name__)
54
+
55
+
56
+ @dataclass
57
+ class SupervisedKeysData:
58
+ input: str = ""
59
+ output: str = ""
60
+
61
+
62
+ @dataclass
63
+ class DownloadChecksumsEntryData:
64
+ key: str = ""
65
+ value: str = ""
66
+
67
+
68
+ class MissingCachedSizesConfigError(Exception):
69
+ """The expected cached sizes of the download file are missing."""
70
+
71
+
72
+ class NonMatchingCachedSizesError(Exception):
73
+ """The prepared split doesn't have expected sizes."""
74
+
75
+
76
+ @dataclass
77
+ class PostProcessedInfo:
78
+ features: Optional[Features] = None
79
+ resources_checksums: Optional[dict] = None
80
+
81
+ def __post_init__(self):
82
+ # Convert back to the correct classes when we reload from dict
83
+ if self.features is not None and not isinstance(self.features, Features):
84
+ self.features = Features.from_dict(self.features)
85
+
86
+ @classmethod
87
+ def from_dict(cls, post_processed_info_dict: dict) -> "PostProcessedInfo":
88
+ field_names = {f.name for f in dataclasses.fields(cls)}
89
+ return cls(**{k: v for k, v in post_processed_info_dict.items() if k in field_names})
90
+
91
+
92
+ @dataclass
93
+ class DatasetInfo:
94
+ """Information about a dataset.
95
+
96
+ `DatasetInfo` documents datasets, including its name, version, and features.
97
+ See the constructor arguments and properties for a full list.
98
+
99
+ Not all fields are known on construction and may be updated later.
100
+
101
+ Attributes:
102
+ description (`str`):
103
+ A description of the dataset.
104
+ citation (`str`):
105
+ A BibTeX citation of the dataset.
106
+ homepage (`str`):
107
+ A URL to the official homepage for the dataset.
108
+ license (`str`):
109
+ The dataset's license. It can be the name of the license or a paragraph containing the terms of the license.
110
+ features ([`Features`], *optional*):
111
+ The features used to specify the dataset's column types.
112
+ post_processed (`PostProcessedInfo`, *optional*):
113
+ Information regarding the resources of a possible post-processing of a dataset. For example, it can contain the information of an index.
114
+ supervised_keys (`SupervisedKeysData`, *optional*):
115
+ Specifies the input feature and the label for supervised learning if applicable for the dataset (legacy from TFDS).
116
+ builder_name (`str`, *optional*):
117
+ The name of the `GeneratorBasedBuilder` subclass used to create the dataset. Usually matched to the corresponding script name. It is also the snake_case version of the dataset builder class name.
118
+ config_name (`str`, *optional*):
119
+ The name of the configuration derived from [`BuilderConfig`].
120
+ version (`str` or [`Version`], *optional*):
121
+ The version of the dataset.
122
+ splits (`dict`, *optional*):
123
+ The mapping between split name and metadata.
124
+ download_checksums (`dict`, *optional*):
125
+ The mapping between the URL to download the dataset's checksums and corresponding metadata.
126
+ download_size (`int`, *optional*):
127
+ The size of the files to download to generate the dataset, in bytes.
128
+ post_processing_size (`int`, *optional*):
129
+ Size of the dataset in bytes after post-processing, if any.
130
+ dataset_size (`int`, *optional*):
131
+ The combined size in bytes of the Arrow tables for all splits.
132
+ size_in_bytes (`int`, *optional*):
133
+ The combined size in bytes of all files associated with the dataset (downloaded files + Arrow files).
134
+ task_templates (`List[TaskTemplate]`, *optional*):
135
+ The task templates to prepare the dataset for during training and evaluation. Each template casts the dataset's [`Features`] to standardized column names and types as detailed in `datasets.tasks`.
136
+ **config_kwargs (additional keyword arguments):
137
+ Keyword arguments to be passed to the [`BuilderConfig`] and used in the [`DatasetBuilder`].
138
+ """
139
+
140
+ # Set in the dataset scripts
141
+ description: str = dataclasses.field(default_factory=str)
142
+ citation: str = dataclasses.field(default_factory=str)
143
+ homepage: str = dataclasses.field(default_factory=str)
144
+ license: str = dataclasses.field(default_factory=str)
145
+ features: Optional[Features] = None
146
+ post_processed: Optional[PostProcessedInfo] = None
147
+ supervised_keys: Optional[SupervisedKeysData] = None
148
+ task_templates: Optional[List[TaskTemplate]] = None
149
+
150
+ # Set later by the builder
151
+ builder_name: Optional[str] = None
152
+ dataset_name: Optional[str] = None # for packaged builders, to be different from builder_name
153
+ config_name: Optional[str] = None
154
+ version: Optional[Union[str, Version]] = None
155
+ # Set later by `download_and_prepare`
156
+ splits: Optional[dict] = None
157
+ download_checksums: Optional[dict] = None
158
+ download_size: Optional[int] = None
159
+ post_processing_size: Optional[int] = None
160
+ dataset_size: Optional[int] = None
161
+ size_in_bytes: Optional[int] = None
162
+
163
+ _INCLUDED_INFO_IN_YAML: ClassVar[List[str]] = [
164
+ "config_name",
165
+ "download_size",
166
+ "dataset_size",
167
+ "features",
168
+ "splits",
169
+ ]
170
+
171
+ def __post_init__(self):
172
+ # Convert back to the correct classes when we reload from dict
173
+ if self.features is not None and not isinstance(self.features, Features):
174
+ self.features = Features.from_dict(self.features)
175
+ if self.post_processed is not None and not isinstance(self.post_processed, PostProcessedInfo):
176
+ self.post_processed = PostProcessedInfo.from_dict(self.post_processed)
177
+ if self.version is not None and not isinstance(self.version, Version):
178
+ if isinstance(self.version, str):
179
+ self.version = Version(self.version)
180
+ else:
181
+ self.version = Version.from_dict(self.version)
182
+ if self.splits is not None and not isinstance(self.splits, SplitDict):
183
+ self.splits = SplitDict.from_split_dict(self.splits)
184
+ if self.supervised_keys is not None and not isinstance(self.supervised_keys, SupervisedKeysData):
185
+ if isinstance(self.supervised_keys, (tuple, list)):
186
+ self.supervised_keys = SupervisedKeysData(*self.supervised_keys)
187
+ else:
188
+ self.supervised_keys = SupervisedKeysData(**self.supervised_keys)
189
+
190
+ # Parse and make a list of templates
191
+ if self.task_templates is not None:
192
+ if isinstance(self.task_templates, (list, tuple)):
193
+ templates = [
194
+ template if isinstance(template, TaskTemplate) else task_template_from_dict(template)
195
+ for template in self.task_templates
196
+ ]
197
+ self.task_templates = [template for template in templates if template is not None]
198
+ elif isinstance(self.task_templates, TaskTemplate):
199
+ self.task_templates = [self.task_templates]
200
+ else:
201
+ template = task_template_from_dict(self.task_templates)
202
+ self.task_templates = [template] if template is not None else []
203
+
204
+ # Align task templates with features
205
+ if self.task_templates is not None:
206
+ self.task_templates = list(self.task_templates)
207
+ if self.features is not None:
208
+ self.task_templates = [
209
+ template.align_with_features(self.features) for template in (self.task_templates)
210
+ ]
211
+
212
+ def write_to_directory(
213
+ self, dataset_info_dir, pretty_print=False, fs="deprecated", storage_options: Optional[dict] = None
214
+ ):
215
+ """Write `DatasetInfo` and license (if present) as JSON files to `dataset_info_dir`.
216
+
217
+ Args:
218
+ dataset_info_dir (`str`):
219
+ Destination directory.
220
+ pretty_print (`bool`, defaults to `False`):
221
+ If `True`, the JSON will be pretty-printed with the indent level of 4.
222
+ fs (`fsspec.spec.AbstractFileSystem`, *optional*):
223
+ Instance of the remote filesystem used to download the files from.
224
+
225
+ <Deprecated version="2.9.0">
226
+
227
+ `fs` was deprecated in version 2.9.0 and will be removed in 3.0.0.
228
+ Please use `storage_options` instead, e.g. `storage_options=fs.storage_options`.
229
+
230
+ </Deprecated>
231
+
232
+ storage_options (`dict`, *optional*):
233
+ Key/value pairs to be passed on to the file-system backend, if any.
234
+
235
+ <Added version="2.9.0"/>
236
+
237
+ Example:
238
+
239
+ ```py
240
+ >>> from datasets import load_dataset
241
+ >>> ds = load_dataset("rotten_tomatoes", split="validation")
242
+ >>> ds.info.write_to_directory("/path/to/directory/")
243
+ ```
244
+ """
245
+ if fs != "deprecated":
246
+ warnings.warn(
247
+ "'fs' was deprecated in favor of 'storage_options' in version 2.9.0 and will be removed in 3.0.0.\n"
248
+ "You can remove this warning by passing 'storage_options=fs.storage_options' instead.",
249
+ FutureWarning,
250
+ )
251
+ storage_options = fs.storage_options
252
+
253
+ fs: fsspec.AbstractFileSystem
254
+ fs, _, _ = fsspec.get_fs_token_paths(dataset_info_dir, storage_options=storage_options)
255
+ with fs.open(posixpath.join(dataset_info_dir, config.DATASET_INFO_FILENAME), "wb") as f:
256
+ self._dump_info(f, pretty_print=pretty_print)
257
+ if self.license:
258
+ with fs.open(posixpath.join(dataset_info_dir, config.LICENSE_FILENAME), "wb") as f:
259
+ self._dump_license(f)
260
+
261
+ def _dump_info(self, file, pretty_print=False):
262
+ """Dump info in `file` file-like object open in bytes mode (to support remote files)"""
263
+ file.write(json.dumps(asdict(self), indent=4 if pretty_print else None).encode("utf-8"))
264
+
265
+ def _dump_license(self, file):
266
+ """Dump license in `file` file-like object open in bytes mode (to support remote files)"""
267
+ file.write(self.license.encode("utf-8"))
268
+
269
+ @classmethod
270
+ def from_merge(cls, dataset_infos: List["DatasetInfo"]):
271
+ dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
272
+
273
+ if len(dataset_infos) > 0 and all(dataset_infos[0] == dset_info for dset_info in dataset_infos):
274
+ # if all dataset_infos are equal we don't need to merge. Just return the first.
275
+ return dataset_infos[0]
276
+
277
+ description = "\n\n".join(unique_values(info.description for info in dataset_infos)).strip()
278
+ citation = "\n\n".join(unique_values(info.citation for info in dataset_infos)).strip()
279
+ homepage = "\n\n".join(unique_values(info.homepage for info in dataset_infos)).strip()
280
+ license = "\n\n".join(unique_values(info.license for info in dataset_infos)).strip()
281
+ features = None
282
+ supervised_keys = None
283
+ task_templates = None
284
+
285
+ # Find common task templates across all dataset infos
286
+ all_task_templates = [info.task_templates for info in dataset_infos if info.task_templates is not None]
287
+ if len(all_task_templates) > 1:
288
+ task_templates = list(set(all_task_templates[0]).intersection(*all_task_templates[1:]))
289
+ elif len(all_task_templates):
290
+ task_templates = list(set(all_task_templates[0]))
291
+ # If no common task templates found, replace empty list with None
292
+ task_templates = task_templates if task_templates else None
293
+
294
+ return cls(
295
+ description=description,
296
+ citation=citation,
297
+ homepage=homepage,
298
+ license=license,
299
+ features=features,
300
+ supervised_keys=supervised_keys,
301
+ task_templates=task_templates,
302
+ )
303
+
304
+ @classmethod
305
+ def from_directory(
306
+ cls, dataset_info_dir: str, fs="deprecated", storage_options: Optional[dict] = None
307
+ ) -> "DatasetInfo":
308
+ """Create [`DatasetInfo`] from the JSON file in `dataset_info_dir`.
309
+
310
+ This function updates all the dynamically generated fields (num_examples,
311
+ hash, time of creation,...) of the [`DatasetInfo`].
312
+
313
+ This will overwrite all previous metadata.
314
+
315
+ Args:
316
+ dataset_info_dir (`str`):
317
+ The directory containing the metadata file. This
318
+ should be the root directory of a specific dataset version.
319
+ fs (`fsspec.spec.AbstractFileSystem`, *optional*):
320
+ Instance of the remote filesystem used to download the files from.
321
+
322
+ <Deprecated version="2.9.0">
323
+
324
+ `fs` was deprecated in version 2.9.0 and will be removed in 3.0.0.
325
+ Please use `storage_options` instead, e.g. `storage_options=fs.storage_options`.
326
+
327
+ </Deprecated>
328
+
329
+ storage_options (`dict`, *optional*):
330
+ Key/value pairs to be passed on to the file-system backend, if any.
331
+
332
+ <Added version="2.9.0"/>
333
+
334
+ Example:
335
+
336
+ ```py
337
+ >>> from datasets import DatasetInfo
338
+ >>> ds_info = DatasetInfo.from_directory("/path/to/directory/")
339
+ ```
340
+ """
341
+ if fs != "deprecated":
342
+ warnings.warn(
343
+ "'fs' was deprecated in favor of 'storage_options' in version 2.9.0 and will be removed in 3.0.0.\n"
344
+ "You can remove this warning by passing 'storage_options=fs.storage_options' instead.",
345
+ FutureWarning,
346
+ )
347
+ storage_options = fs.storage_options
348
+
349
+ fs: fsspec.AbstractFileSystem
350
+ fs, _, _ = fsspec.get_fs_token_paths(dataset_info_dir, storage_options=storage_options)
351
+ logger.info(f"Loading Dataset info from {dataset_info_dir}")
352
+ if not dataset_info_dir:
353
+ raise ValueError("Calling DatasetInfo.from_directory() with undefined dataset_info_dir.")
354
+ with fs.open(posixpath.join(dataset_info_dir, config.DATASET_INFO_FILENAME), "r", encoding="utf-8") as f:
355
+ dataset_info_dict = json.load(f)
356
+ return cls.from_dict(dataset_info_dict)
357
+
358
+ @classmethod
359
+ def from_dict(cls, dataset_info_dict: dict) -> "DatasetInfo":
360
+ field_names = {f.name for f in dataclasses.fields(cls)}
361
+ return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names})
362
+
363
+ def update(self, other_dataset_info: "DatasetInfo", ignore_none=True):
364
+ self_dict = self.__dict__
365
+ self_dict.update(
366
+ **{
367
+ k: copy.deepcopy(v)
368
+ for k, v in other_dataset_info.__dict__.items()
369
+ if (v is not None or not ignore_none)
370
+ }
371
+ )
372
+
373
+ def copy(self) -> "DatasetInfo":
374
+ return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
375
+
376
+ def _to_yaml_dict(self) -> dict:
377
+ yaml_dict = {}
378
+ dataset_info_dict = asdict(self)
379
+ for key in dataset_info_dict:
380
+ if key in self._INCLUDED_INFO_IN_YAML:
381
+ value = getattr(self, key)
382
+ if hasattr(value, "_to_yaml_list"): # Features, SplitDict
383
+ yaml_dict[key] = value._to_yaml_list()
384
+ elif hasattr(value, "_to_yaml_string"): # Version
385
+ yaml_dict[key] = value._to_yaml_string()
386
+ else:
387
+ yaml_dict[key] = value
388
+ return yaml_dict
389
+
390
+ @classmethod
391
+ def _from_yaml_dict(cls, yaml_data: dict) -> "DatasetInfo":
392
+ yaml_data = copy.deepcopy(yaml_data)
393
+ if yaml_data.get("features") is not None:
394
+ yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
395
+ if yaml_data.get("splits") is not None:
396
+ yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"])
397
+ field_names = {f.name for f in dataclasses.fields(cls)}
398
+ return cls(**{k: v for k, v in yaml_data.items() if k in field_names})
399
+
400
+
401
+ class DatasetInfosDict(Dict[str, DatasetInfo]):
402
+ def write_to_directory(self, dataset_infos_dir, overwrite=False, pretty_print=False) -> None:
403
+ total_dataset_infos = {}
404
+ dataset_infos_path = os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)
405
+ dataset_readme_path = os.path.join(dataset_infos_dir, config.REPOCARD_FILENAME)
406
+ if not overwrite:
407
+ total_dataset_infos = self.from_directory(dataset_infos_dir)
408
+ total_dataset_infos.update(self)
409
+ if os.path.exists(dataset_infos_path):
410
+ # for backward compatibility, let's update the JSON file if it exists
411
+ with open(dataset_infos_path, "w", encoding="utf-8") as f:
412
+ dataset_infos_dict = {
413
+ config_name: asdict(dset_info) for config_name, dset_info in total_dataset_infos.items()
414
+ }
415
+ json.dump(dataset_infos_dict, f, indent=4 if pretty_print else None)
416
+ # Dump the infos in the YAML part of the README.md file
417
+ if os.path.exists(dataset_readme_path):
418
+ dataset_card = DatasetCard.load(dataset_readme_path)
419
+ dataset_card_data = dataset_card.data
420
+ else:
421
+ dataset_card = None
422
+ dataset_card_data = DatasetCardData()
423
+ if total_dataset_infos:
424
+ total_dataset_infos.to_dataset_card_data(dataset_card_data)
425
+ dataset_card = (
426
+ DatasetCard("---\n" + str(dataset_card_data) + "\n---\n") if dataset_card is None else dataset_card
427
+ )
428
+ dataset_card.save(Path(dataset_readme_path))
429
+
430
+ @classmethod
431
+ def from_directory(cls, dataset_infos_dir) -> "DatasetInfosDict":
432
+ logger.info(f"Loading Dataset Infos from {dataset_infos_dir}")
433
+ # Load the info from the YAML part of README.md
434
+ if os.path.exists(os.path.join(dataset_infos_dir, config.REPOCARD_FILENAME)):
435
+ dataset_card_data = DatasetCard.load(Path(dataset_infos_dir) / config.REPOCARD_FILENAME).data
436
+ if "dataset_info" in dataset_card_data:
437
+ return cls.from_dataset_card_data(dataset_card_data)
438
+ if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)):
439
+ # this is just to have backward compatibility with dataset_infos.json files
440
+ with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f:
441
+ return cls(
442
+ {
443
+ config_name: DatasetInfo.from_dict(dataset_info_dict)
444
+ for config_name, dataset_info_dict in json.load(f).items()
445
+ }
446
+ )
447
+ else:
448
+ return cls()
449
+
450
+ @classmethod
451
+ def from_dataset_card_data(cls, dataset_card_data: DatasetCardData) -> "DatasetInfosDict":
452
+ if isinstance(dataset_card_data.get("dataset_info"), (list, dict)):
453
+ if isinstance(dataset_card_data["dataset_info"], list):
454
+ return cls(
455
+ {
456
+ dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict(
457
+ dataset_info_yaml_dict
458
+ )
459
+ for dataset_info_yaml_dict in dataset_card_data["dataset_info"]
460
+ }
461
+ )
462
+ else:
463
+ dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
464
+ dataset_info.config_name = dataset_card_data["dataset_info"].get("config_name", "default")
465
+ return cls({dataset_info.config_name: dataset_info})
466
+ else:
467
+ return cls()
468
+
469
+ def to_dataset_card_data(self, dataset_card_data: DatasetCardData) -> None:
470
+ if self:
471
+ # first get existing metadata info
472
+ if "dataset_info" in dataset_card_data and isinstance(dataset_card_data["dataset_info"], dict):
473
+ dataset_metadata_infos = {
474
+ dataset_card_data["dataset_info"].get("config_name", "default"): dataset_card_data["dataset_info"]
475
+ }
476
+ elif "dataset_info" in dataset_card_data and isinstance(dataset_card_data["dataset_info"], list):
477
+ dataset_metadata_infos = {
478
+ config_metadata["config_name"]: config_metadata
479
+ for config_metadata in dataset_card_data["dataset_info"]
480
+ }
481
+ else:
482
+ dataset_metadata_infos = {}
483
+ # update/rewrite existing metadata info with the one to dump
484
+ total_dataset_infos = {
485
+ **dataset_metadata_infos,
486
+ **{config_name: dset_info._to_yaml_dict() for config_name, dset_info in self.items()},
487
+ }
488
+ # the config_name from the dataset_infos_dict takes over the config_name of the DatasetInfo
489
+ for config_name, dset_info_yaml_dict in total_dataset_infos.items():
490
+ dset_info_yaml_dict["config_name"] = config_name
491
+ if len(total_dataset_infos) == 1:
492
+ # use a struct instead of a list of configurations, since there's only one
493
+ dataset_card_data["dataset_info"] = next(iter(total_dataset_infos.values()))
494
+ config_name = dataset_card_data["dataset_info"].pop("config_name", None)
495
+ if config_name != "default":
496
+ # if config_name is not "default" preserve it and put at the first position
497
+ dataset_card_data["dataset_info"] = {
498
+ "config_name": config_name,
499
+ **dataset_card_data["dataset_info"],
500
+ }
501
+ else:
502
+ dataset_card_data["dataset_info"] = []
503
+ for config_name, dataset_info_yaml_dict in sorted(total_dataset_infos.items()):
504
+ # add the config_name field in first position
505
+ dataset_info_yaml_dict.pop("config_name", None)
506
+ dataset_info_yaml_dict = {"config_name": config_name, **dataset_info_yaml_dict}
507
+ dataset_card_data["dataset_info"].append(dataset_info_yaml_dict)
508
+
509
+
510
+ @dataclass
511
+ class MetricInfo:
512
+ """Information about a metric.
513
+
514
+ `MetricInfo` documents a metric, including its name, version, and features.
515
+ See the constructor arguments and properties for a full list.
516
+
517
+ Note: Not all fields are known on construction and may be updated later.
518
+ """
519
+
520
+ # Set in the dataset scripts
521
+ description: str
522
+ citation: str
523
+ features: Features
524
+ inputs_description: str = dataclasses.field(default_factory=str)
525
+ homepage: str = dataclasses.field(default_factory=str)
526
+ license: str = dataclasses.field(default_factory=str)
527
+ codebase_urls: List[str] = dataclasses.field(default_factory=list)
528
+ reference_urls: List[str] = dataclasses.field(default_factory=list)
529
+ streamable: bool = False
530
+ format: Optional[str] = None
531
+
532
+ # Set later by the builder
533
+ metric_name: Optional[str] = None
534
+ config_name: Optional[str] = None
535
+ experiment_id: Optional[str] = None
536
+
537
+ def __post_init__(self):
538
+ if self.format is not None:
539
+ for key, value in self.features.items():
540
+ if not isinstance(value, Value):
541
+ raise ValueError(
542
+ f"When using 'numpy' format, all features should be a `datasets.Value` feature. "
543
+ f"Here {key} is an instance of {value.__class__.__name__}"
544
+ )
545
+
546
+ def write_to_directory(self, metric_info_dir, pretty_print=False):
547
+ """Write `MetricInfo` as JSON to `metric_info_dir`.
548
+ Also save the license separately in LICENSE.
549
+ If `pretty_print` is True, the JSON will be pretty-printed with the indent level of 4.
550
+
551
+ Example:
552
+
553
+ ```py
554
+ >>> from datasets import load_metric
555
+ >>> metric = load_metric("accuracy")
556
+ >>> metric.info.write_to_directory("/path/to/directory/")
557
+ ```
558
+ """
559
+ with open(os.path.join(metric_info_dir, config.METRIC_INFO_FILENAME), "w", encoding="utf-8") as f:
560
+ json.dump(asdict(self), f, indent=4 if pretty_print else None)
561
+
562
+ if self.license:
563
+ with open(os.path.join(metric_info_dir, config.LICENSE_FILENAME), "w", encoding="utf-8") as f:
564
+ f.write(self.license)
565
+
566
+ @classmethod
567
+ def from_directory(cls, metric_info_dir) -> "MetricInfo":
568
+ """Create MetricInfo from the JSON file in `metric_info_dir`.
569
+
570
+ Args:
571
+ metric_info_dir: `str` The directory containing the metadata file. This
572
+ should be the root directory of a specific dataset version.
573
+
574
+ Example:
575
+
576
+ ```py
577
+ >>> from datasets import MetricInfo
578
+ >>> metric_info = MetricInfo.from_directory("/path/to/directory/")
579
+ ```
580
+ """
581
+ logger.info(f"Loading Metric info from {metric_info_dir}")
582
+ if not metric_info_dir:
583
+ raise ValueError("Calling MetricInfo.from_directory() with undefined metric_info_dir.")
584
+
585
+ with open(os.path.join(metric_info_dir, config.METRIC_INFO_FILENAME), encoding="utf-8") as f:
586
+ metric_info_dict = json.load(f)
587
+ return cls.from_dict(metric_info_dict)
588
+
589
+ @classmethod
590
+ def from_dict(cls, metric_info_dict: dict) -> "MetricInfo":
591
+ field_names = {f.name for f in dataclasses.fields(cls)}
592
+ return cls(**{k: v for k, v in metric_info_dict.items() if k in field_names})
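As a quick orientation for the `DatasetInfo` helpers added above, the sketch below shows `from_dict` dropping unknown keys and `update` ignoring `None` fields; the field values are made up purely for illustration.

```py
from datasets import DatasetInfo

base = DatasetInfo(description="A toy dataset.", license="mit", dataset_size=1024)
patch = DatasetInfo.from_dict({"homepage": "https://example.com", "unknown_field": 1})  # unknown keys are dropped

print(patch.homepage)       # https://example.com
print(patch.dataset_size)   # None (not provided, so left unset)

info = base.copy()          # deep copy, safe to mutate independently
info.update(patch, ignore_none=True)
print(info.homepage)        # https://example.com (copied from `patch`)
print(info.dataset_size)    # 1024 (None fields of `patch` are ignored)
```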
env-llmeval/lib/python3.10/site-packages/datasets/inspect.py ADDED
@@ -0,0 +1,581 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # Lint as: python3
16
+ """List and inspect datasets."""
17
+
18
+ import inspect
19
+ import os
20
+ import shutil
21
+ import warnings
22
+ from pathlib import Path, PurePath
23
+ from typing import Dict, List, Mapping, Optional, Sequence, Union
24
+
25
+ import huggingface_hub
26
+
27
+ from . import config
28
+ from .download.download_config import DownloadConfig
29
+ from .download.download_manager import DownloadMode
30
+ from .download.streaming_download_manager import StreamingDownloadManager
31
+ from .info import DatasetInfo
32
+ from .load import (
33
+ dataset_module_factory,
34
+ get_dataset_builder_class,
35
+ import_main_class,
36
+ load_dataset_builder,
37
+ metric_module_factory,
38
+ )
39
+ from .utils.deprecation_utils import deprecated
40
+ from .utils.file_utils import relative_to_absolute_path
41
+ from .utils.logging import get_logger
42
+ from .utils.version import Version
43
+
44
+
45
+ logger = get_logger(__name__)
46
+
47
+
48
+ class SplitsNotFoundError(ValueError):
49
+ pass
50
+
51
+
52
+ @deprecated("Use 'huggingface_hub.list_datasets' instead.")
53
+ def list_datasets(with_community_datasets=True, with_details=False):
54
+ """List all the datasets scripts available on the Hugging Face Hub.
55
+
56
+ Args:
57
+ with_community_datasets (`bool`, *optional*, defaults to `True`):
58
+ Include the community provided datasets.
59
+ with_details (`bool`, *optional*, defaults to `False`):
60
+ Return the full details on the datasets instead of only the short name.
61
+
62
+ Example:
63
+
64
+ ```py
65
+ >>> from datasets import list_datasets
66
+ >>> list_datasets()
67
+ ['acronym_identification',
68
+ 'ade_corpus_v2',
69
+ 'adversarial_qa',
70
+ 'aeslc',
71
+ 'afrikaans_ner_corpus',
72
+ 'ag_news',
73
+ ...
74
+ ]
75
+ ```
76
+ """
77
+ datasets = huggingface_hub.list_datasets(full=with_details)
78
+ if not with_community_datasets:
79
+ datasets = [dataset for dataset in datasets if "/" not in dataset.id]
80
+ if not with_details:
81
+ datasets = [dataset.id for dataset in datasets]
82
+ return list(datasets)
83
+
84
+
85
+ @deprecated(
86
+ "Use 'evaluate.list_evaluation_modules' instead, from the new library πŸ€— Evaluate: https://huggingface.co/docs/evaluate"
87
+ )
88
+ def list_metrics(with_community_metrics=True, with_details=False):
89
+ """List all the metrics script available on the Hugging Face Hub.
90
+
91
+ <Deprecated version="2.5.0">
92
+
93
+ Use `evaluate.list_evaluation_modules` instead, from the new library πŸ€— Evaluate: https://huggingface.co/docs/evaluate
94
+
95
+ </Deprecated>
96
+
97
+ Args:
98
+ with_community_metrics (:obj:`bool`, optional, default ``True``): Include the community provided metrics.
99
+ with_details (:obj:`bool`, optional, default ``False``): Return the full details on the metrics instead of only the short name.
100
+
101
+ Example:
102
+
103
+ ```py
104
+ >>> from datasets import list_metrics
105
+ >>> list_metrics()
106
+ ['accuracy',
107
+ 'bertscore',
108
+ 'bleu',
109
+ 'bleurt',
110
+ 'cer',
111
+ 'chrf',
112
+ ...
113
+ ]
114
+ ```
115
+ """
116
+ metrics = huggingface_hub.list_metrics()
117
+ if not with_community_metrics:
118
+ metrics = [metric for metric in metrics if "/" not in metric.id]
119
+ if not with_details:
120
+ metrics = [metric.id for metric in metrics]
121
+ return metrics
122
+
123
+
124
+ @deprecated("Clone the dataset repository from the Hugging Face Hub instead.")
125
+ def inspect_dataset(path: str, local_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs):
126
+ """
127
+ Allow inspection/modification of a dataset script by copying on local drive at local_path.
128
+
129
+ Args:
130
+ path (`str`): Path to the dataset processing script with the dataset builder. Can be either:
131
+
132
+ - a local path to processing script or the directory containing the script (if the script has the same name
133
+ as the directory),
134
+ e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`.
135
+ - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`list_datasets`])
136
+ e.g. `'squad'`, `'glue'` or `'openai/webtext'`.
137
+ local_path (`str`):
138
+ Path to the local folder to copy the dataset script to.
139
+ download_config ([`DownloadConfig`], *optional*):
140
+ Specific download configuration parameters.
141
+ **download_kwargs (additional keyword arguments):
142
+ Optional arguments for [`DownloadConfig`] which will override
143
+ the attributes of `download_config` if supplied.
144
+ """
145
+ if download_config is None:
146
+ download_config = DownloadConfig(**download_kwargs)
147
+ if os.path.isfile(path):
148
+ path = str(Path(path).parent)
149
+ if os.path.isdir(path):
150
+ shutil.copytree(path, local_path, dirs_exist_ok=True)
151
+ else:
152
+ huggingface_hub.HfApi(endpoint=config.HF_ENDPOINT, token=download_config.token).snapshot_download(
153
+ repo_id=path, repo_type="dataset", local_dir=local_path, force_download=download_config.force_download
154
+ )
155
+ print(
156
+ f"The dataset {path} can be inspected at {local_path}. "
157
+ f'You can modify its loading script, if it has one, and use it with `datasets.load_dataset("{PurePath(local_path).as_posix()}")`.'
158
+ )
159
+
160
+
161
+ @deprecated(
162
+ "Use 'evaluate.inspect_evaluation_module' instead, from the new library πŸ€— Evaluate: https://huggingface.co/docs/evaluate"
163
+ )
164
+ def inspect_metric(path: str, local_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs):
165
+ r"""
166
+ Allow inspection/modification of a metric script by copying it on local drive at local_path.
167
+
168
+ <Deprecated version="2.5.0">
169
+
170
+ Use `evaluate.inspect_evaluation_module` instead, from the new library πŸ€— Evaluate instead: https://huggingface.co/docs/evaluate
171
+
172
+ </Deprecated>
173
+
174
+ Args:
175
+ path (``str``): path to the dataset processing script with the dataset builder. Can be either:
176
+
177
+ - a local path to processing script or the directory containing the script (if the script has the same name as the directory),
178
+ e.g. ``'./dataset/squad'`` or ``'./dataset/squad/squad.py'``
179
+ - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with ``datasets.list_datasets()``)
180
+ e.g. ``'squad'``, ``'glue'`` or ``'openai/webtext'``
181
+ local_path (``str``): path to the local folder to copy the dataset script to.
182
+ download_config (Optional ``datasets.DownloadConfig``): specific download configuration parameters.
183
+ **download_kwargs (additional keyword arguments): optional attributes for DownloadConfig() which will override the attributes in download_config if supplied.
184
+ """
185
+ metric_module = metric_module_factory(path, download_config=download_config, **download_kwargs)
186
+ metric_cls = import_main_class(metric_module.module_path, dataset=False)
187
+ module_source_path = inspect.getsourcefile(metric_cls)
188
+ module_source_dirpath = os.path.dirname(module_source_path)
189
+ for dirpath, dirnames, filenames in os.walk(module_source_dirpath):
190
+ dst_dirpath = os.path.join(local_path, os.path.relpath(dirpath, module_source_dirpath))
191
+ os.makedirs(dst_dirpath, exist_ok=True)
192
+ # skipping hidden directories; prune the search
193
+ dirnames[:] = [dirname for dirname in dirnames if not dirname.startswith((".", "__"))]
194
+ for filename in filenames:
195
+ shutil.copy2(os.path.join(dirpath, filename), os.path.join(dst_dirpath, filename))
196
+ shutil.copystat(dirpath, dst_dirpath)
197
+ local_path = relative_to_absolute_path(local_path)
198
+ print(
199
+ f"The processing scripts for metric {path} can be inspected at {local_path}. "
200
+ f"The main class is in {module_source_dirpath}. "
201
+ f'You can modify these processing scripts and use them with `datasets.load_metric("{PurePath(local_path).as_posix()}")`.'
202
+ )
203
+
204
+
205
+ def get_dataset_infos(
206
+ path: str,
207
+ data_files: Optional[Union[Dict, List, str]] = None,
208
+ download_config: Optional[DownloadConfig] = None,
209
+ download_mode: Optional[Union[DownloadMode, str]] = None,
210
+ revision: Optional[Union[str, Version]] = None,
211
+ token: Optional[Union[bool, str]] = None,
212
+ use_auth_token="deprecated",
213
+ **config_kwargs,
214
+ ):
215
+ """Get the meta information about a dataset, returned as a dict mapping config name to DatasetInfoDict.
216
+
217
+ Args:
218
+ path (`str`): path to the dataset processing script with the dataset builder. Can be either:
219
+
220
+ - a local path to processing script or the directory containing the script (if the script has the same name as the directory),
221
+ e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`
222
+ - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`datasets.list_datasets`])
223
+ e.g. `'squad'`, `'glue'` or `'openai/webtext'`
224
+ revision (`Union[str, datasets.Version]`, *optional*):
225
+ If specified, the dataset module will be loaded from the datasets repository at this version.
226
+ By default:
227
+ - it is set to the local version of the lib.
228
+ - it will also try to load it from the main branch if it's not available at the local version of the lib.
229
+ Specifying a version that is different from your local version of the lib might cause compatibility issues.
230
+ download_config ([`DownloadConfig`], *optional*):
231
+ Specific download configuration parameters.
232
+ download_mode ([`DownloadMode`] or `str`, defaults to `REUSE_DATASET_IF_EXISTS`):
233
+ Download/generate mode.
234
+ data_files (`Union[Dict, List, str]`, *optional*):
235
+ Defining the data_files of the dataset configuration.
236
+ token (`str` or `bool`, *optional*):
237
+ Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
238
+ If `True`, or not specified, will get token from `"~/.huggingface"`.
239
+ use_auth_token (`str` or `bool`, *optional*):
240
+ Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
241
+ If `True`, or not specified, will get token from `"~/.huggingface"`.
242
+
243
+ <Deprecated version="2.14.0">
244
+
245
+ `use_auth_token` was deprecated in favor of `token` in version 2.14.0 and will be removed in 3.0.0.
246
+
247
+ </Deprecated>
248
+
249
+ **config_kwargs (additional keyword arguments):
250
+ Optional attributes for builder class which will override the attributes if supplied.
251
+
252
+ Example:
253
+
254
+ ```py
255
+ >>> from datasets import get_dataset_infos
256
+ >>> get_dataset_infos('rotten_tomatoes')
257
+ {'default': DatasetInfo(description="Movie Review Dataset.\nThis is a dataset of containing 5,331 positive and 5,331 negative processed\nsentences from Rotten Tomatoes movie reviews...), ...}
258
+ ```
259
+ """
260
+ if use_auth_token != "deprecated":
261
+ warnings.warn(
262
+ "'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\n"
263
+ "You can remove this warning by passing 'token=<use_auth_token>' instead.",
264
+ FutureWarning,
265
+ )
266
+ token = use_auth_token
267
+
268
+ config_names = get_dataset_config_names(
269
+ path=path,
270
+ revision=revision,
271
+ download_config=download_config,
272
+ download_mode=download_mode,
273
+ data_files=data_files,
274
+ token=token,
275
+ )
276
+ return {
277
+ config_name: get_dataset_config_info(
278
+ path=path,
279
+ config_name=config_name,
280
+ data_files=data_files,
281
+ download_config=download_config,
282
+ download_mode=download_mode,
283
+ revision=revision,
284
+ token=token,
285
+ **config_kwargs,
286
+ )
287
+ for config_name in config_names
288
+ }
289
+
290
+
291
+ def get_dataset_config_names(
292
+ path: str,
293
+ revision: Optional[Union[str, Version]] = None,
294
+ download_config: Optional[DownloadConfig] = None,
295
+ download_mode: Optional[Union[DownloadMode, str]] = None,
296
+ dynamic_modules_path: Optional[str] = None,
297
+ data_files: Optional[Union[Dict, List, str]] = None,
298
+ **download_kwargs,
299
+ ):
300
+ """Get the list of available config names for a particular dataset.
301
+
302
+ Args:
303
+ path (`str`): path to the dataset processing script with the dataset builder. Can be either:
304
+
305
+ - a local path to processing script or the directory containing the script (if the script has the same name as the directory),
306
+ e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`
307
+ - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`datasets.list_datasets`])
308
+ e.g. `'squad'`, `'glue'` or `'openai/webtext'`
309
+ revision (`Union[str, datasets.Version]`, *optional*):
310
+ If specified, the dataset module will be loaded from the datasets repository at this version.
311
+ By default:
312
+ - it is set to the local version of the lib.
313
+ - it will also try to load it from the main branch if it's not available at the local version of the lib.
314
+ Specifying a version that is different from your local version of the lib might cause compatibility issues.
315
+ download_config ([`DownloadConfig`], *optional*):
316
+ Specific download configuration parameters.
317
+ download_mode ([`DownloadMode`] or `str`, defaults to `REUSE_DATASET_IF_EXISTS`):
318
+ Download/generate mode.
319
+ dynamic_modules_path (`str`, defaults to `~/.cache/huggingface/modules/datasets_modules`):
320
+ Optional path to the directory in which the dynamic modules are saved. It must have been initialized with `init_dynamic_modules`.
321
+ By default the datasets and metrics are stored inside the `datasets_modules` module.
322
+ data_files (`Union[Dict, List, str]`, *optional*):
323
+ Defining the data_files of the dataset configuration.
324
+ **download_kwargs (additional keyword arguments):
325
+ Optional attributes for [`DownloadConfig`] which will override the attributes in `download_config` if supplied,
326
+ for example `token`.
327
+
328
+ Example:
329
+
330
+ ```py
331
+ >>> from datasets import get_dataset_config_names
332
+ >>> get_dataset_config_names("glue")
333
+ ['cola',
334
+ 'sst2',
335
+ 'mrpc',
336
+ 'qqp',
337
+ 'stsb',
338
+ 'mnli',
339
+ 'mnli_mismatched',
340
+ 'mnli_matched',
341
+ 'qnli',
342
+ 'rte',
343
+ 'wnli',
344
+ 'ax']
345
+ ```
346
+ """
347
+ dataset_module = dataset_module_factory(
348
+ path,
349
+ revision=revision,
350
+ download_config=download_config,
351
+ download_mode=download_mode,
352
+ dynamic_modules_path=dynamic_modules_path,
353
+ data_files=data_files,
354
+ **download_kwargs,
355
+ )
356
+ builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path))
357
+ return list(builder_cls.builder_configs.keys()) or [
358
+ dataset_module.builder_kwargs.get("config_name", builder_cls.DEFAULT_CONFIG_NAME or "default")
359
+ ]
360
+
361
+
362
+ def get_dataset_default_config_name(
363
+ path: str,
364
+ revision: Optional[Union[str, Version]] = None,
365
+ download_config: Optional[DownloadConfig] = None,
366
+ download_mode: Optional[Union[DownloadMode, str]] = None,
367
+ dynamic_modules_path: Optional[str] = None,
368
+ data_files: Optional[Union[Dict, List, str]] = None,
369
+ **download_kwargs,
370
+ ) -> Optional[str]:
371
+ """Get the default config name for a particular dataset.
372
+
373
+ Args:
374
+ path (`str`): path to the dataset processing script with the dataset builder. Can be either:
375
+
376
+ - a local path to processing script or the directory containing the script (if the script has the same name as the directory),
377
+ e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`
378
+ - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`datasets.list_datasets`])
379
+ e.g. `'squad'`, `'glue'` or `'openai/webtext'`
380
+ revision (`Union[str, datasets.Version]`, *optional*):
381
+ If specified, the dataset module will be loaded from the datasets repository at this version.
382
+ By default:
383
+ - it is set to the local version of the lib.
384
+ - it will also try to load it from the main branch if it's not available at the local version of the lib.
385
+ Specifying a version that is different from your local version of the lib might cause compatibility issues.
386
+ download_config ([`DownloadConfig`], *optional*):
387
+ Specific download configuration parameters.
388
+ download_mode ([`DownloadMode`] or `str`, defaults to `REUSE_DATASET_IF_EXISTS`):
389
+ Download/generate mode.
390
+ dynamic_modules_path (`str`, defaults to `~/.cache/huggingface/modules/datasets_modules`):
391
+ Optional path to the directory in which the dynamic modules are saved. It must have been initialized with `init_dynamic_modules`.
392
+ By default the datasets and metrics are stored inside the `datasets_modules` module.
393
+ data_files (`Union[Dict, List, str]`, *optional*):
394
+ Defining the data_files of the dataset configuration.
395
+ **download_kwargs (additional keyword arguments):
396
+ Optional attributes for [`DownloadConfig`] which will override the attributes in `download_config` if supplied,
397
+ for example `token`.
398
+
399
+ Returns:
400
+ Optional[str]
401
+
402
+ Example:
403
+
404
+ ```py
405
+ >>> from datasets import get_dataset_default_config_name
406
+ >>> get_dataset_default_config_name("openbookqa")
407
+ 'main'
408
+ ```
409
+ """
410
+ dataset_module = dataset_module_factory(
411
+ path,
412
+ revision=revision,
413
+ download_config=download_config,
414
+ download_mode=download_mode,
415
+ dynamic_modules_path=dynamic_modules_path,
416
+ data_files=data_files,
417
+ **download_kwargs,
418
+ )
419
+ builder_cls = get_dataset_builder_class(dataset_module, dataset_name=os.path.basename(path))
420
+ builder_configs = list(builder_cls.builder_configs.keys())
421
+ if builder_configs:
422
+ default_config_name = builder_configs[0] if len(builder_configs) == 1 else None
423
+ else:
424
+ default_config_name = "default"
425
+ return builder_cls.DEFAULT_CONFIG_NAME or default_config_name
426
+
427
+
428
+ def get_dataset_config_info(
429
+ path: str,
430
+ config_name: Optional[str] = None,
431
+ data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,
432
+ download_config: Optional[DownloadConfig] = None,
433
+ download_mode: Optional[Union[DownloadMode, str]] = None,
434
+ revision: Optional[Union[str, Version]] = None,
435
+ token: Optional[Union[bool, str]] = None,
436
+ use_auth_token="deprecated",
437
+ **config_kwargs,
438
+ ) -> DatasetInfo:
439
+ """Get the meta information (DatasetInfo) about a dataset for a particular config
440
+
441
+ Args:
442
+ path (``str``): path to the dataset processing script with the dataset builder. Can be either:
443
+
444
+ - a local path to processing script or the directory containing the script (if the script has the same name as the directory),
445
+ e.g. ``'./dataset/squad'`` or ``'./dataset/squad/squad.py'``
446
+ - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with ``datasets.list_datasets()``)
447
+ e.g. ``'squad'``, ``'glue'`` or ``'openai/webtext'``
448
+ config_name (:obj:`str`, optional): Defining the name of the dataset configuration.
449
+ data_files (:obj:`str` or :obj:`Sequence` or :obj:`Mapping`, optional): Path(s) to source data file(s).
450
+ download_config (:class:`~download.DownloadConfig`, optional): Specific download configuration parameters.
451
+ download_mode (:class:`DownloadMode` or :obj:`str`, default ``REUSE_DATASET_IF_EXISTS``): Download/generate mode.
452
+ revision (:class:`~utils.Version` or :obj:`str`, optional): Version of the dataset script to load.
453
+ As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch.
454
+ You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.
455
+ token (``str`` or :obj:`bool`, optional): Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
456
+ If True, or not specified, will get token from `"~/.huggingface"`.
457
+ use_auth_token (``str`` or :obj:`bool`, optional): Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
458
+ If True, or not specified, will get token from `"~/.huggingface"`.
459
+
460
+ <Deprecated version="2.14.0">
461
+
462
+ `use_auth_token` was deprecated in favor of `token` in version 2.14.0 and will be removed in 3.0.0.
463
+
464
+ </Deprecated>
465
+
466
+ **config_kwargs (additional keyword arguments): optional attributes for builder class which will override the attributes if supplied.
467
+
468
+ """
469
+ if use_auth_token != "deprecated":
470
+ warnings.warn(
471
+ "'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\n"
472
+ "You can remove this warning by passing 'token=<use_auth_token>' instead.",
473
+ FutureWarning,
474
+ )
475
+ token = use_auth_token
476
+
477
+ builder = load_dataset_builder(
478
+ path,
479
+ name=config_name,
480
+ data_files=data_files,
481
+ download_config=download_config,
482
+ download_mode=download_mode,
483
+ revision=revision,
484
+ token=token,
485
+ **config_kwargs,
486
+ )
487
+ info = builder.info
488
+ if info.splits is None:
489
+ download_config = download_config.copy() if download_config else DownloadConfig()
490
+ if token is not None:
491
+ download_config.token = token
492
+ builder._check_manual_download(
493
+ StreamingDownloadManager(base_path=builder.base_path, download_config=download_config)
494
+ )
495
+ try:
496
+ info.splits = {
497
+ split_generator.name: {"name": split_generator.name, "dataset_name": path}
498
+ for split_generator in builder._split_generators(
499
+ StreamingDownloadManager(base_path=builder.base_path, download_config=download_config)
500
+ )
501
+ }
502
+ except Exception as err:
503
+ raise SplitsNotFoundError("The split names could not be parsed from the dataset config.") from err
504
+ return info
505
+
506
+
507
+ def get_dataset_split_names(
508
+ path: str,
509
+ config_name: Optional[str] = None,
510
+ data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None,
511
+ download_config: Optional[DownloadConfig] = None,
512
+ download_mode: Optional[Union[DownloadMode, str]] = None,
513
+ revision: Optional[Union[str, Version]] = None,
514
+ token: Optional[Union[bool, str]] = None,
515
+ use_auth_token="deprecated",
516
+ **config_kwargs,
517
+ ):
518
+ """Get the list of available splits for a particular config and dataset.
519
+
520
+ Args:
521
+ path (`str`): path to the dataset processing script with the dataset builder. Can be either:
522
+
523
+ - a local path to processing script or the directory containing the script (if the script has the same name as the directory),
524
+ e.g. `'./dataset/squad'` or `'./dataset/squad/squad.py'`
525
+ - a dataset identifier on the Hugging Face Hub (list all available datasets and ids with [`datasets.list_datasets`])
526
+ e.g. `'squad'`, `'glue'` or `'openai/webtext'`
527
+ config_name (`str`, *optional*):
528
+ Defining the name of the dataset configuration.
529
+ data_files (`str` or `Sequence` or `Mapping`, *optional*):
530
+ Path(s) to source data file(s).
531
+ download_config ([`DownloadConfig`], *optional*):
532
+ Specific download configuration parameters.
533
+ download_mode ([`DownloadMode`] or `str`, defaults to `REUSE_DATASET_IF_EXISTS`):
534
+ Download/generate mode.
535
+ revision ([`Version`] or `str`, *optional*):
536
+ Version of the dataset script to load.
537
+ As datasets have their own git repository on the Datasets Hub, the default version "main" corresponds to their "main" branch.
538
+ You can specify a different version than the default "main" by using a commit SHA or a git tag of the dataset repository.
539
+ token (`str` or `bool`, *optional*):
540
+ Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
541
+ If `True`, or not specified, will get token from `"~/.huggingface"`.
542
+ use_auth_token (`str` or `bool`, *optional*):
543
+ Optional string or boolean to use as Bearer token for remote files on the Datasets Hub.
544
+ If `True`, or not specified, will get token from `"~/.huggingface"`.
545
+
546
+ <Deprecated version="2.14.0">
547
+
548
+ `use_auth_token` was deprecated in favor of `token` in version 2.14.0 and will be removed in 3.0.0.
549
+
550
+ </Deprecated>
551
+
552
+ **config_kwargs (additional keyword arguments):
553
+ Optional attributes for builder class which will override the attributes if supplied.
554
+
555
+ Example:
556
+
557
+ ```py
558
+ >>> from datasets import get_dataset_split_names
559
+ >>> get_dataset_split_names('rotten_tomatoes')
560
+ ['train', 'validation', 'test']
561
+ ```
562
+ """
563
+ if use_auth_token != "deprecated":
564
+ warnings.warn(
565
+ "'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\n"
566
+ "You can remove this warning by passing 'token=<use_auth_token>' instead.",
567
+ FutureWarning,
568
+ )
569
+ token = use_auth_token
570
+
571
+ info = get_dataset_config_info(
572
+ path,
573
+ config_name=config_name,
574
+ data_files=data_files,
575
+ download_config=download_config,
576
+ download_mode=download_mode,
577
+ revision=revision,
578
+ token=token,
579
+ **config_kwargs,
580
+ )
581
+ return list(info.splits.keys())
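Since `get_dataset_config_names` and `get_dataset_split_names` are defined above, a common pattern is to combine them to enumerate every split of every config without downloading the data. The sketch below assumes network access to the Hub, and the dataset id is only an example.

```py
from datasets import get_dataset_config_names, get_dataset_split_names

dataset_id = "glue"  # example id; any Hub dataset works
for config_name in get_dataset_config_names(dataset_id):
    splits = get_dataset_split_names(dataset_id, config_name=config_name)
    print(f"{config_name}: {splits}")
```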
env-llmeval/lib/python3.10/site-packages/datasets/iterable_dataset.py ADDED
The diff for this file is too large to render. See raw diff
 
env-llmeval/lib/python3.10/site-packages/datasets/keyhash.py ADDED
@@ -0,0 +1,104 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # Lint as: python3
16
+
17
+ """
18
+ Hashing function for dataset keys using `hashlib.md5`
19
+
20
+ Requirements for the hash function:
21
+
22
+ - Provides a uniformly distributed hash from random space
23
+ - Adequately fast speed
24
+ - Working with multiple input types (in this case, `str`, `int` or `bytes`)
25
+ - Should be platform independent (generates same hash on different OS and systems)
26
+
27
+ The hashing function provides a unique 128-bit integer hash of the key provided.
28
+
29
+ The split name is used here as the hash salt so that identical keys
31
+ in different splits do not produce the same hash.
31
+ """
32
+
33
+ from typing import Union
34
+
35
+ from huggingface_hub.utils import insecure_hashlib
36
+
37
+
38
+ def _as_bytes(hash_data: Union[str, int, bytes]) -> bytes:
39
+ """
40
+ Returns the input hash_data in its bytes form
41
+
42
+ Args:
43
+ hash_data: the hash salt/key to be converted to bytes
44
+ """
45
+ if isinstance(hash_data, bytes):
46
+ # Data already in bytes, return it as is
47
+ return hash_data
48
+ elif isinstance(hash_data, str):
49
+ # We keep the data as it is for it to be later encoded to UTF-8
50
+ # However replace `\\` with `/` for Windows compatibility
51
+ hash_data = hash_data.replace("\\", "/")
52
+ elif isinstance(hash_data, int):
53
+ hash_data = str(hash_data)
54
+ else:
55
+ # If data is not of the required type, raise error
56
+ raise InvalidKeyError(hash_data)
57
+
58
+ return hash_data.encode("utf-8")
59
+
60
+
61
+ class InvalidKeyError(Exception):
62
+ """Raises an error when given key is of invalid datatype."""
63
+
64
+ def __init__(self, hash_data):
65
+ self.prefix = "\nFAILURE TO GENERATE DATASET: Invalid key type detected"
66
+ self.err_msg = f"\nFound Key {hash_data} of type {type(hash_data)}"
67
+ self.suffix = "\nKeys should be either str, int or bytes type"
68
+ super().__init__(f"{self.prefix}{self.err_msg}{self.suffix}")
69
+
70
+
71
+ class DuplicatedKeysError(Exception):
72
+ """Raise an error when duplicate key found."""
73
+
74
+ def __init__(self, key, duplicate_key_indices, fix_msg=""):
75
+ self.key = key
76
+ self.duplicate_key_indices = duplicate_key_indices
77
+ self.fix_msg = fix_msg
78
+ self.prefix = "Found multiple examples generated with the same key"
79
+ if len(duplicate_key_indices) <= 20:
80
+ self.err_msg = f"\nThe examples at index {', '.join(duplicate_key_indices)} have the key {key}"
81
+ else:
82
+ self.err_msg = f"\nThe examples at index {', '.join(duplicate_key_indices[:20])}... ({len(duplicate_key_indices) - 20} more) have the key {key}"
83
+ self.suffix = "\n" + fix_msg if fix_msg else ""
84
+ super().__init__(f"{self.prefix}{self.err_msg}{self.suffix}")
85
+
86
+
87
+ class KeyHasher:
88
+ """KeyHasher class for providing hash using md5"""
89
+
90
+ def __init__(self, hash_salt: str):
91
+ self._split_md5 = insecure_hashlib.md5(_as_bytes(hash_salt))
92
+
93
+ def hash(self, key: Union[str, int, bytes]) -> int:
94
+ """Returns 128-bits unique hash of input key
95
+
96
+ Args:
97
+ key: the input key to be hashed (should be str, int or bytes)
98
+
99
+ Returns: 128-bit int hash key"""
100
+ md5 = self._split_md5.copy()
101
+ byte_key = _as_bytes(key)
102
+ md5.update(byte_key)
103
+ # Convert to integer with hexadecimal conversion
104
+ return int(md5.hexdigest(), 16)
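A brief usage sketch of `KeyHasher` from the module above: the split name salts the MD5 state, so the same key hashes differently across splits while staying deterministic within a split.

```py
from datasets.keyhash import KeyHasher

train_hasher = KeyHasher("train")
test_hasher = KeyHasher("test")

key = "example-0"
print(train_hasher.hash(key) == KeyHasher("train").hash(key))  # True: deterministic for a given salt
print(train_hasher.hash(key) == test_hasher.hash(key))         # False: different split salt, different hash
print(train_hasher.hash(key).bit_length() <= 128)              # True: MD5 yields a 128-bit integer
```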
env-llmeval/lib/python3.10/site-packages/datasets/load.py ADDED
The diff for this file is too large to render. See raw diff
 
env-llmeval/lib/python3.10/site-packages/datasets/metric.py ADDED
@@ -0,0 +1,652 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # Lint as: python3
16
+ """Metrics base class."""
17
+
18
+ import os
19
+ import types
20
+ import uuid
21
+ from typing import Any, Dict, List, Optional, Tuple, Union
22
+
23
+ import numpy as np
24
+ import pyarrow as pa
25
+ from filelock import BaseFileLock, Timeout
26
+
27
+ from . import config
28
+ from .arrow_dataset import Dataset
29
+ from .arrow_reader import ArrowReader
30
+ from .arrow_writer import ArrowWriter
31
+ from .download.download_config import DownloadConfig
32
+ from .download.download_manager import DownloadManager
33
+ from .features import Features
34
+ from .info import DatasetInfo, MetricInfo
35
+ from .naming import camelcase_to_snakecase
36
+ from .utils._filelock import FileLock
37
+ from .utils.deprecation_utils import deprecated
38
+ from .utils.logging import get_logger
39
+ from .utils.py_utils import copyfunc, temp_seed
40
+
41
+
42
+ logger = get_logger(__name__)
43
+
44
+
45
+ class FileFreeLock(BaseFileLock):
46
+ """Thread lock until a file **cannot** be locked"""
47
+
48
+ def __init__(self, lock_file, *args, **kwargs):
49
+ self.filelock = FileLock(lock_file)
50
+ super().__init__(self.filelock.lock_file, *args, **kwargs)
51
+
52
+ def _acquire(self):
53
+ try:
54
+ self.filelock.acquire(timeout=0.01, poll_intervall=0.02) # Try to lock once
55
+ except Timeout:
56
+ # We couldn't acquire the lock, the file is locked!
57
+ self._context.lock_file_fd = self.filelock.lock_file
58
+ else:
59
+ # We were able to acquire the lock, the file is not yet locked!
60
+ self.filelock.release()
61
+ self._context.lock_file_fd = None
62
+
63
+ def _release(self):
64
+ self._context.lock_file_fd = None
65
+
66
+
67
+ # lists - summarize long lists similarly to NumPy
68
+ # arrays/tensors - let the frameworks control formatting
69
+ def summarize_if_long_list(obj):
70
+ if not type(obj) == list or len(obj) <= 6: # noqa: E721
71
+ return f"{obj}"
72
+
73
+ def format_chunk(chunk):
74
+ return ", ".join(repr(x) for x in chunk)
75
+
76
+ return f"[{format_chunk(obj[:3])}, ..., {format_chunk(obj[-3:])}]"
77
+
78
+
79
+ class MetricInfoMixin:
80
+ """This base class exposes some attributes of MetricInfo
81
+ at the base level of the Metric for easy access.
82
+
83
+ <Deprecated version="2.5.0">
84
+
85
+ Use the new library πŸ€— Evaluate instead: https://huggingface.co/docs/evaluate
86
+
87
+ </Deprecated>
88
+
89
+ """
90
+
91
+ def __init__(self, info: MetricInfo):
92
+ self._metric_info = info
93
+
94
+ @property
95
+ def info(self):
96
+ """:class:`datasets.MetricInfo` object containing all the metadata in the metric."""
97
+ return self._metric_info
98
+
99
+ @property
100
+ def name(self) -> str:
101
+ return self._metric_info.metric_name
102
+
103
+ @property
104
+ def experiment_id(self) -> Optional[str]:
105
+ return self._metric_info.experiment_id
106
+
107
+ @property
108
+ def description(self) -> str:
109
+ return self._metric_info.description
110
+
111
+ @property
112
+ def citation(self) -> str:
113
+ return self._metric_info.citation
114
+
115
+ @property
116
+ def features(self) -> Features:
117
+ return self._metric_info.features
118
+
119
+ @property
120
+ def inputs_description(self) -> str:
121
+ return self._metric_info.inputs_description
122
+
123
+ @property
124
+ def homepage(self) -> Optional[str]:
125
+ return self._metric_info.homepage
126
+
127
+ @property
128
+ def license(self) -> str:
129
+ return self._metric_info.license
130
+
131
+ @property
132
+ def codebase_urls(self) -> Optional[List[str]]:
133
+ return self._metric_info.codebase_urls
134
+
135
+ @property
136
+ def reference_urls(self) -> Optional[List[str]]:
137
+ return self._metric_info.reference_urls
138
+
139
+ @property
140
+ def streamable(self) -> bool:
141
+ return self._metric_info.streamable
142
+
143
+ @property
144
+ def format(self) -> Optional[str]:
145
+ return self._metric_info.format
146
+
147
+
148
+ class Metric(MetricInfoMixin):
149
+ """A Metric is the base class and common API for all metrics.
150
+
151
+ <Deprecated version="2.5.0">
152
+
153
+ Use the new library πŸ€— Evaluate instead: https://huggingface.co/docs/evaluate
154
+
155
+ </Deprecated>
156
+
157
+ Args:
158
+ config_name (``str``): This is used to define a hash specific to a metric computation script and prevents the metric's data
159
+ from being overridden when the metric loading script is modified.
160
+ keep_in_memory (:obj:`bool`): keep all predictions and references in memory. Not possible in distributed settings.
161
+ cache_dir (``str``): Path to a directory in which temporary prediction/references data will be stored.
162
+ The data directory should be located on a shared file-system in distributed setups.
163
+ num_process (``int``): specify the total number of nodes in a distributed setting.
164
+ This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).
165
+ process_id (``int``): specify the id of the current process in a distributed setup (between 0 and num_process-1)
166
+ This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).
167
+ seed (:obj:`int`, optional): If specified, this will temporarily set numpy's random seed when :func:`datasets.Metric.compute` is run.
168
+ experiment_id (``str``): A specific experiment id. This is used if several distributed evaluations share the same file system.
169
+ This is useful to compute metrics in distributed setups (in particular non-additive metrics like F1).
170
+ max_concurrent_cache_files (``int``): Max number of concurrent metrics cache files (default 10000).
171
+ timeout (``Union[int, float]``): Timeout in seconds for distributed setting synchronization.
172
+ """
173
+
174
+ @deprecated("Use the new library πŸ€— Evaluate instead: https://huggingface.co/docs/evaluate")
175
+ def __init__(
176
+ self,
177
+ config_name: Optional[str] = None,
178
+ keep_in_memory: bool = False,
179
+ cache_dir: Optional[str] = None,
180
+ num_process: int = 1,
181
+ process_id: int = 0,
182
+ seed: Optional[int] = None,
183
+ experiment_id: Optional[str] = None,
184
+ max_concurrent_cache_files: int = 10000,
185
+ timeout: Union[int, float] = 100,
186
+ **kwargs,
187
+ ):
188
+ # prepare info
189
+ self.config_name = config_name or "default"
190
+ info = self._info()
191
+ info.metric_name = camelcase_to_snakecase(self.__class__.__name__)
192
+ info.config_name = self.config_name
193
+ info.experiment_id = experiment_id or "default_experiment"
194
+ MetricInfoMixin.__init__(self, info) # For easy access on low level
195
+
196
+ # Safety checks on num_process and process_id
197
+ if not isinstance(process_id, int) or process_id < 0:
198
+ raise ValueError("'process_id' should be a number greater than 0")
199
+ if not isinstance(num_process, int) or num_process <= process_id:
200
+ raise ValueError("'num_process' should be a number greater than process_id")
201
+ if keep_in_memory and num_process != 1:
202
+ raise ValueError("Using 'keep_in_memory' is not possible in distributed setting (num_process > 1).")
203
+
204
+ self.num_process = num_process
205
+ self.process_id = process_id
206
+ self.max_concurrent_cache_files = max_concurrent_cache_files
207
+
208
+ self.keep_in_memory = keep_in_memory
209
+ self._data_dir_root = os.path.expanduser(cache_dir or config.HF_METRICS_CACHE)
210
+ self.data_dir = self._build_data_dir()
211
+ if seed is None:
212
+ _, seed, pos, *_ = np.random.get_state()
213
+ self.seed: int = seed[pos] if pos < 624 else seed[0]
214
+ else:
215
+ self.seed: int = seed
216
+ self.timeout: Union[int, float] = timeout
217
+
218
+ # Update 'compute' and 'add' docstring
219
+ # methods need to be copied otherwise it changes the docstrings of every instance
220
+ self.compute = types.MethodType(copyfunc(self.compute), self)
221
+ self.add_batch = types.MethodType(copyfunc(self.add_batch), self)
222
+ self.add = types.MethodType(copyfunc(self.add), self)
223
+ self.compute.__func__.__doc__ += self.info.inputs_description
224
+ self.add_batch.__func__.__doc__ += self.info.inputs_description
225
+ self.add.__func__.__doc__ += self.info.inputs_description
226
+
227
+ # self.arrow_schema = pa.schema(field for field in self.info.features.type)
228
+ self.buf_writer = None
229
+ self.writer = None
230
+ self.writer_batch_size = None
231
+ self.data = None
232
+
233
+ # This is the cache file we store our predictions/references in
234
+ # Keep it None for now so we can (cloud)pickle the object
235
+ self.cache_file_name = None
236
+ self.filelock = None
237
+ self.rendez_vous_lock = None
238
+
239
+ # This is all the cache files on which we have a lock when we are in a distributed setting
240
+ self.file_paths = None
241
+ self.filelocks = None
242
+
243
+ def __len__(self):
244
+ """Return the number of examples (predictions or predictions/references pair)
245
+ currently stored in the metric's cache.
246
+ """
247
+ return 0 if self.writer is None else len(self.writer)
248
+
249
+ def __repr__(self):
250
+ return (
251
+ f'Metric(name: "{self.name}", features: {self.features}, '
252
+ f'usage: """{self.inputs_description}""", '
253
+ f"stored examples: {len(self)})"
254
+ )
255
+
256
+ def _build_data_dir(self):
257
+ """Path of this metric in cache_dir:
258
+ Will be:
259
+ self._data_dir_root/self.name/self.config_name/self.hash (if not none)/
260
+ If any of these elements is missing or if ``with_version=False``, the corresponding subfolders are dropped.
261
+ """
262
+ builder_data_dir = self._data_dir_root
263
+ builder_data_dir = os.path.join(builder_data_dir, self.name, self.config_name)
264
+ os.makedirs(builder_data_dir, exist_ok=True)
265
+ return builder_data_dir
266
+
267
+ def _create_cache_file(self, timeout=1) -> Tuple[str, FileLock]:
268
+ """Create a new cache file. If the default cache file is used, we generate a new hash."""
269
+ file_path = os.path.join(self.data_dir, f"{self.experiment_id}-{self.num_process}-{self.process_id}.arrow")
270
+ filelock = None
271
+ for i in range(self.max_concurrent_cache_files):
272
+ filelock = FileLock(file_path + ".lock")
273
+ try:
274
+ filelock.acquire(timeout=timeout)
275
+ except Timeout:
276
+ # If we have reached the max number of attempts or we are not allowed to find a free name (distributed setup)
277
+ # We raise an error
278
+ if self.num_process != 1:
279
+ raise ValueError(
280
+ f"Error in _create_cache_file: another metric instance is already using the local cache file at {file_path}. "
281
+ f"Please specify an experiment_id (currently: {self.experiment_id}) to avoid collision "
282
+ f"between distributed metric instances."
283
+ ) from None
284
+ if i == self.max_concurrent_cache_files - 1:
285
+ raise ValueError(
286
+ f"Cannot acquire lock, too many metric instances are operating concurrently on this file system. "
287
+ f"You should set a larger value of max_concurrent_cache_files when creating the metric "
288
+ f"(current value is {self.max_concurrent_cache_files})."
289
+ ) from None
290
+ # In other cases (allowed to find a new file name + not yet at the max number of attempts) we can try to sample a new file name.
291
+ file_uuid = str(uuid.uuid4())
292
+ file_path = os.path.join(
293
+ self.data_dir, f"{self.experiment_id}-{file_uuid}-{self.num_process}-{self.process_id}.arrow"
294
+ )
295
+ else:
296
+ break
297
+
298
+ return file_path, filelock
299
+
300
+ def _get_all_cache_files(self) -> Tuple[List[str], List[FileLock]]:
301
+ """Get a lock on all the cache files in a distributed setup.
302
+ We wait for `timeout` seconds to let all the distributed nodes finish their tasks (default is 100 seconds).
303
+ """
304
+ if self.num_process == 1:
305
+ if self.cache_file_name is None:
306
+ raise ValueError(
307
+ "Metric cache file doesn't exist. Please make sure that you call `add` or `add_batch` "
308
+ "at least once before calling `compute`."
309
+ )
310
+ file_paths = [self.cache_file_name]
311
+ else:
312
+ file_paths = [
313
+ os.path.join(self.data_dir, f"{self.experiment_id}-{self.num_process}-{process_id}.arrow")
314
+ for process_id in range(self.num_process)
315
+ ]
316
+
317
+ # Let's acquire a lock on each process's file to be sure they are all finished writing
318
+ filelocks = []
319
+ for process_id, file_path in enumerate(file_paths):
320
+ if process_id == 0: # process 0 already has its lock file
321
+ filelocks.append(self.filelock)
322
+ else:
323
+ filelock = FileLock(file_path + ".lock")
324
+ try:
325
+ filelock.acquire(timeout=self.timeout)
326
+ except Timeout:
327
+ raise ValueError(
328
+ f"Cannot acquire lock on cached file {file_path} for process {process_id}."
329
+ ) from None
330
+ else:
331
+ filelocks.append(filelock)
332
+
333
+ return file_paths, filelocks
334
+
335
+ def _check_all_processes_locks(self):
336
+ expected_lock_file_names = [
337
+ os.path.join(self.data_dir, f"{self.experiment_id}-{self.num_process}-{process_id}.arrow.lock")
338
+ for process_id in range(self.num_process)
339
+ ]
340
+ for expected_lock_file_name in expected_lock_file_names:
341
+ nofilelock = FileFreeLock(expected_lock_file_name)
342
+ try:
343
+ nofilelock.acquire(timeout=self.timeout)
344
+ except Timeout:
345
+ raise ValueError(
346
+ f"Expected to find locked file {expected_lock_file_name} from process {self.process_id} but it doesn't exist."
347
+ ) from None
348
+ else:
349
+ nofilelock.release()
350
+
351
+ def _check_rendez_vous(self):
352
+ expected_lock_file_name = os.path.join(self.data_dir, f"{self.experiment_id}-{self.num_process}-0.arrow.lock")
353
+ nofilelock = FileFreeLock(expected_lock_file_name)
354
+ try:
355
+ nofilelock.acquire(timeout=self.timeout)
356
+ except Timeout:
357
+ raise ValueError(
358
+ f"Expected to find locked file {expected_lock_file_name} from process {self.process_id} but it doesn't exist."
359
+ ) from None
360
+ else:
361
+ nofilelock.release()
362
+ lock_file_name = os.path.join(self.data_dir, f"{self.experiment_id}-{self.num_process}-rdv.lock")
363
+ rendez_vous_lock = FileLock(lock_file_name)
364
+ try:
365
+ rendez_vous_lock.acquire(timeout=self.timeout)
366
+ except Timeout:
367
+ raise ValueError(f"Couldn't acquire lock on {lock_file_name} from process {self.process_id}.") from None
368
+ else:
369
+ rendez_vous_lock.release()
370
+
371
+ def _finalize(self):
372
+ """Close all the writing processes and load/gather the data
373
+ from all the nodes if on the main node or if all_process is True.
374
+ """
375
+ if self.writer is not None:
376
+ self.writer.finalize()
377
+ self.writer = None
378
+ # release the locks of the processes > 0 so that process 0 can lock them to read + delete the data
379
+ if self.filelock is not None and self.process_id > 0:
380
+ self.filelock.release()
381
+
382
+ if self.keep_in_memory:
383
+ # Read the predictions and references
384
+ reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
385
+ self.data = Dataset.from_buffer(self.buf_writer.getvalue())
386
+
387
+ elif self.process_id == 0:
388
+ # Let's acquire a lock on each node's files to be sure they are all finished writing
389
+ file_paths, filelocks = self._get_all_cache_files()
390
+
391
+ # Read the predictions and references
392
+ try:
393
+ reader = ArrowReader(path="", info=DatasetInfo(features=self.features))
394
+ self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
395
+ except FileNotFoundError:
396
+ raise ValueError(
397
+ "Error in finalize: another metric instance is already using the local cache file. "
398
+ "Please specify an experiment_id to avoid collision between distributed metric instances."
399
+ ) from None
400
+
401
+ # Store file paths and locks and we will release/delete them after the computation.
402
+ self.file_paths = file_paths
403
+ self.filelocks = filelocks
404
+
405
+ def compute(self, *, predictions=None, references=None, **kwargs) -> Optional[dict]:
406
+ """Compute the metrics.
407
+
408
+ Usage of positional arguments is not allowed to prevent mistakes.
409
+
410
+ Args:
411
+ predictions (list/array/tensor, optional): Predictions.
412
+ references (list/array/tensor, optional): References.
413
+ **kwargs (optional): Keyword arguments that will be forwarded to the metrics :meth:`_compute`
414
+ method (see details in the docstring).
415
+
416
+ Return:
417
+ dict or None
418
+
419
+ - Dictionary with the metrics if this metric is run on the main process (``process_id == 0``).
420
+ - None if the metric is not run on the main process (``process_id != 0``).
421
+
422
+ Example:
423
+
424
+ ```py
425
+ >>> from datasets import load_metric
426
+ >>> metric = load_metric("accuracy")
427
+ >>> accuracy = metric.compute(predictions=model_prediction, references=labels)
428
+ ```
429
+ """
430
+ all_kwargs = {"predictions": predictions, "references": references, **kwargs}
431
+ if predictions is None and references is None:
432
+ missing_kwargs = {k: None for k in self.features if k not in all_kwargs}
433
+ all_kwargs.update(missing_kwargs)
434
+ else:
435
+ missing_inputs = [k for k in self.features if k not in all_kwargs]
436
+ if missing_inputs:
437
+ raise ValueError(
438
+ f"Metric inputs are missing: {missing_inputs}. All required inputs are {list(self.features)}"
439
+ )
440
+ inputs = {input_name: all_kwargs[input_name] for input_name in self.features}
441
+ compute_kwargs = {k: kwargs[k] for k in kwargs if k not in self.features}
442
+
443
+ if any(v is not None for v in inputs.values()):
444
+ self.add_batch(**inputs)
445
+ self._finalize()
446
+
447
+ self.cache_file_name = None
448
+ self.filelock = None
449
+
450
+ if self.process_id == 0:
451
+ self.data.set_format(type=self.info.format)
452
+
453
+ inputs = {input_name: self.data[input_name] for input_name in self.features}
454
+ with temp_seed(self.seed):
455
+ output = self._compute(**inputs, **compute_kwargs)
456
+
457
+ if self.buf_writer is not None:
458
+ self.buf_writer = None
459
+ del self.data
460
+ self.data = None
461
+ else:
462
+ # Release locks and delete all the cache files. Process 0 is released last.
463
+ for filelock, file_path in reversed(list(zip(self.filelocks, self.file_paths))):
464
+ logger.info(f"Removing {file_path}")
465
+ del self.data
466
+ self.data = None
467
+ del self.writer
468
+ self.writer = None
469
+ os.remove(file_path)
470
+ filelock.release()
471
+
472
+ return output
473
+ else:
474
+ return None
475
+
476
+ def add_batch(self, *, predictions=None, references=None, **kwargs):
477
+ """Add a batch of predictions and references for the metric's stack.
478
+
479
+ Args:
480
+ predictions (list/array/tensor, optional): Predictions.
481
+ references (list/array/tensor, optional): References.
482
+
483
+ Example:
484
+
485
+ ```py
486
+ >>> from datasets import load_metric
487
+ >>> metric = load_metric("accuracy")
488
+ >>> metric.add_batch(predictions=model_prediction, references=labels)
489
+ ```
490
+ """
491
+ bad_inputs = [input_name for input_name in kwargs if input_name not in self.features]
492
+ if bad_inputs:
493
+ raise ValueError(f"Bad inputs for metric: {bad_inputs}. All required inputs are {list(self.features)}")
494
+ batch = {"predictions": predictions, "references": references, **kwargs}
495
+ batch = {input_name: batch[input_name] for input_name in self.features}
496
+ batch = self.info.features.encode_batch(batch)
497
+ if self.writer is None:
498
+ self._init_writer()
499
+ try:
500
+ self.writer.write_batch(batch)
501
+ except pa.ArrowInvalid:
502
+ if any(len(batch[c]) != len(next(iter(batch.values()))) for c in batch):
503
+ col0 = next(iter(batch))
504
+ bad_col = [c for c in batch if len(batch[c]) != len(batch[col0])][0]
505
+ error_msg = (
506
+ f"Mismatch in the number of {col0} ({len(batch[col0])}) and {bad_col} ({len(batch[bad_col])})"
507
+ )
508
+ elif sorted(self.features) != ["references", "predictions"]:
509
+ error_msg = f"Metric inputs don't match the expected format.\n" f"Expected format: {self.features},\n"
510
+ error_msg_inputs = ",\n".join(
511
+ f"Input {input_name}: {summarize_if_long_list(batch[input_name])}" for input_name in self.features
512
+ )
513
+ error_msg += error_msg_inputs
514
+ else:
515
+ error_msg = (
516
+ f"Predictions and/or references don't match the expected format.\n"
517
+ f"Expected format: {self.features},\n"
518
+ f"Input predictions: {summarize_if_long_list(predictions)},\n"
519
+ f"Input references: {summarize_if_long_list(references)}"
520
+ )
521
+ raise ValueError(error_msg) from None
522
+
523
+ def add(self, *, prediction=None, reference=None, **kwargs):
524
+ """Add one prediction and reference for the metric's stack.
525
+
526
+ Args:
527
+ prediction (list/array/tensor, optional): Predictions.
528
+ reference (list/array/tensor, optional): References.
529
+
530
+ Example:
531
+
532
+ ```py
533
+ >>> from datasets import load_metric
534
+ >>> metric = load_metric("accuracy")
535
+ >>> metric.add(predictions=model_predictions, references=labels)
536
+ ```
537
+ """
538
+ bad_inputs = [input_name for input_name in kwargs if input_name not in self.features]
539
+ if bad_inputs:
540
+ raise ValueError(f"Bad inputs for metric: {bad_inputs}. All required inputs are {list(self.features)}")
541
+ example = {"predictions": prediction, "references": reference, **kwargs}
542
+ example = {input_name: example[input_name] for input_name in self.features}
543
+ example = self.info.features.encode_example(example)
544
+ if self.writer is None:
545
+ self._init_writer()
546
+ try:
547
+ self.writer.write(example)
548
+ except pa.ArrowInvalid:
549
+ error_msg = f"Metric inputs don't match the expected format.\n" f"Expected format: {self.features},\n"
550
+ error_msg_inputs = ",\n".join(
551
+ f"Input {input_name}: {summarize_if_long_list(example[input_name])}" for input_name in self.features
552
+ )
553
+ error_msg += error_msg_inputs
554
+ raise ValueError(error_msg) from None
555
+
556
+ def _init_writer(self, timeout=1):
557
+ if self.num_process > 1:
558
+ if self.process_id == 0:
559
+ file_path = os.path.join(self.data_dir, f"{self.experiment_id}-{self.num_process}-rdv.lock")
560
+ self.rendez_vous_lock = FileLock(file_path)
561
+ try:
562
+ self.rendez_vous_lock.acquire(timeout=timeout)
563
+ except TimeoutError:
564
+ raise ValueError(
565
+ f"Error in _init_writer: another metric instance is already using the local cache file at {file_path}. "
566
+ f"Please specify an experiment_id (currently: {self.experiment_id}) to avoid collision "
567
+ f"between distributed metric instances."
568
+ ) from None
569
+
570
+ if self.keep_in_memory:
571
+ self.buf_writer = pa.BufferOutputStream()
572
+ self.writer = ArrowWriter(
573
+ features=self.info.features, stream=self.buf_writer, writer_batch_size=self.writer_batch_size
574
+ )
575
+ else:
576
+ self.buf_writer = None
577
+
578
+ # Get cache file name and lock it
579
+ if self.cache_file_name is None or self.filelock is None:
580
+ cache_file_name, filelock = self._create_cache_file() # get ready
581
+ self.cache_file_name = cache_file_name
582
+ self.filelock = filelock
583
+
584
+ self.writer = ArrowWriter(
585
+ features=self.info.features, path=self.cache_file_name, writer_batch_size=self.writer_batch_size
586
+ )
587
+ # Setup rendez-vous here if in a distributed setting (num_process > 1)
588
+ if self.num_process > 1:
589
+ if self.process_id == 0:
590
+ self._check_all_processes_locks() # wait for everyone to be ready
591
+ self.rendez_vous_lock.release() # let everyone go
592
+ else:
593
+ self._check_rendez_vous() # wait for master to be ready and to let everyone go
594
+
595
+ def _info(self) -> MetricInfo:
596
+ """Construct the MetricInfo object. See `MetricInfo` for details.
597
+
598
+ Warning: This function is only called once and the result is cached for all
599
+ following .info() calls.
600
+
601
+ Returns:
602
+ info: (MetricInfo) The metrics information
603
+ """
604
+ raise NotImplementedError
605
+
606
+ def download_and_prepare(
607
+ self,
608
+ download_config: Optional[DownloadConfig] = None,
609
+ dl_manager: Optional[DownloadManager] = None,
610
+ ):
611
+ """Downloads and prepares dataset for reading.
612
+
613
+ Args:
614
+ download_config (:class:`DownloadConfig`, optional): Specific download configuration parameters.
615
+ dl_manager (:class:`DownloadManager`, optional): Specific download manager to use.
616
+ """
617
+ if dl_manager is None:
618
+ if download_config is None:
619
+ download_config = DownloadConfig()
620
+ download_config.cache_dir = os.path.join(self.data_dir, "downloads")
621
+ download_config.force_download = False
622
+
623
+ dl_manager = DownloadManager(
624
+ dataset_name=self.name, download_config=download_config, data_dir=self.data_dir
625
+ )
626
+
627
+ self._download_and_prepare(dl_manager)
628
+
629
+ def _download_and_prepare(self, dl_manager):
630
+ """Downloads and prepares resources for the metric.
631
+
632
+ This is the internal implementation, meant to be overridden, that is called when the user calls
633
+ `download_and_prepare`. It should download all required resources for the metric.
634
+
635
+ Args:
636
+ dl_manager (:class:`DownloadManager`): `DownloadManager` used to download and cache data.
637
+ """
638
+ return None
639
+
640
+ def _compute(self, *, predictions=None, references=None, **kwargs) -> Dict[str, Any]:
641
+ """This method defines the common API for all the metrics in the library"""
642
+ raise NotImplementedError
643
+
644
+ def __del__(self):
645
+ if hasattr(self, "filelock") and self.filelock is not None:
646
+ self.filelock.release()
647
+ if hasattr(self, "rendez_vous_lock") and self.rendez_vous_lock is not None:
648
+ self.rendez_vous_lock.release()
649
+ if hasattr(self, "writer"): # in case it was already deleted
650
+ del self.writer
651
+ if hasattr(self, "data"): # in case it was already deleted
652
+ del self.data
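As a rough illustration of the control flow above, here is a minimal sketch of how this (deprecated) base class is typically subclassed; the `ExactMatch` name, its features and its score are made up for the example:

```py
import datasets

class ExactMatch(datasets.Metric):  # hypothetical metric, for illustration only
    def _info(self):
        return datasets.MetricInfo(
            description="Fraction of predictions that exactly match their reference.",
            citation="",
            inputs_description="predictions and references are lists of strings",
            features=datasets.Features(
                {"predictions": datasets.Value("string"), "references": datasets.Value("string")}
            ),
        )

    def _compute(self, predictions, references):
        return {"exact_match": sum(p == r for p, r in zip(predictions, references)) / len(references)}

metric = ExactMatch()
metric.add_batch(predictions=["a", "b"], references=["a", "c"])  # buffered through an ArrowWriter cache file
print(metric.compute())  # {'exact_match': 0.5}
```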
env-llmeval/lib/python3.10/site-packages/datasets/naming.py ADDED
@@ -0,0 +1,84 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # Lint as: python3
16
+ """Utilities for file names."""
17
+
18
+ import itertools
19
+ import os
20
+ import re
21
+
22
+
23
+ _uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
24
+ _lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
25
+
26
+ _single_underscore_re = re.compile(r"(?<!_)_(?!_)")
27
+ _multiple_underscores_re = re.compile(r"(_{2,})")
28
+
29
+ _split_re = r"^\w+(\.\w+)*$"
30
+
31
+ INVALID_WINDOWS_CHARACTERS_IN_PATH = r"<>:/\|?*"
32
+
33
+
34
+ def camelcase_to_snakecase(name):
35
+ """Convert camel-case string to snake-case."""
36
+ name = _uppercase_uppercase_re.sub(r"\1_\2", name)
37
+ name = _lowercase_uppercase_re.sub(r"\1_\2", name)
38
+ return name.lower()
39
+
40
+
41
+ def snakecase_to_camelcase(name):
42
+ """Convert snake-case string to camel-case string."""
43
+ name = _single_underscore_re.split(name)
44
+ name = [_multiple_underscores_re.split(n) for n in name]
45
+ return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
46
+
47
+
48
+ def filename_prefix_for_name(name):
49
+ if os.path.basename(name) != name:
50
+ raise ValueError(f"Should be a dataset name, not a path: {name}")
51
+ return camelcase_to_snakecase(name)
52
+
53
+
54
+ def filename_prefix_for_split(name, split):
55
+ if os.path.basename(name) != name:
56
+ raise ValueError(f"Should be a dataset name, not a path: {name}")
57
+ if not re.match(_split_re, split):
58
+ raise ValueError(f"Split name should match '{_split_re}' but got '{split}'.")
59
+ return f"{filename_prefix_for_name(name)}-{split}"
60
+
61
+
62
+ def filepattern_for_dataset_split(dataset_name, split, data_dir, filetype_suffix=None):
63
+ prefix = filename_prefix_for_split(dataset_name, split)
64
+ if filetype_suffix:
65
+ prefix += f".{filetype_suffix}"
66
+ filepath = os.path.join(data_dir, prefix)
67
+ return f"{filepath}*"
68
+
69
+
70
+ def filenames_for_dataset_split(path, dataset_name, split, filetype_suffix=None, shard_lengths=None):
71
+ prefix = filename_prefix_for_split(dataset_name, split)
72
+ prefix = os.path.join(path, prefix)
73
+
74
+ if shard_lengths:
75
+ num_shards = len(shard_lengths)
76
+ filenames = [f"{prefix}-{shard_id:05d}-of-{num_shards:05d}" for shard_id in range(num_shards)]
77
+ if filetype_suffix:
78
+ filenames = [filename + f".{filetype_suffix}" for filename in filenames]
79
+ return filenames
80
+ else:
81
+ filename = prefix
82
+ if filetype_suffix:
83
+ filename += f".{filetype_suffix}"
84
+ return [filename]
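A short sketch of what these helpers produce, assuming they are imported from `datasets.naming` (the cache path and dataset name are made up):

```py
from datasets.naming import camelcase_to_snakecase, filenames_for_dataset_split, snakecase_to_camelcase

print(camelcase_to_snakecase("SomeDatasetName"))    # some_dataset_name
print(snakecase_to_camelcase("some_dataset_name"))  # SomeDatasetName

# Shard file names for a split written as two Arrow shards (hypothetical cache directory).
print(filenames_for_dataset_split(
    path="/tmp/cache", dataset_name="my_dataset", split="train",
    filetype_suffix="arrow", shard_lengths=[100, 100],
))
# ['/tmp/cache/my_dataset-train-00000-of-00002.arrow',
#  '/tmp/cache/my_dataset-train-00001-of-00002.arrow']
```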
env-llmeval/lib/python3.10/site-packages/datasets/search.py ADDED
@@ -0,0 +1,779 @@
1
+ import importlib.util
2
+ import os
3
+ import tempfile
4
+ from pathlib import PurePath
5
+ from typing import TYPE_CHECKING, Dict, List, NamedTuple, Optional, Union
6
+
7
+ import fsspec
8
+ import numpy as np
9
+
10
+ from .utils import logging
11
+ from .utils import tqdm as hf_tqdm
12
+
13
+
14
+ if TYPE_CHECKING:
15
+ from .arrow_dataset import Dataset # noqa: F401
16
+
17
+ try:
18
+ from elasticsearch import Elasticsearch # noqa: F401
19
+
20
+ except ImportError:
21
+ pass
22
+ try:
23
+ import faiss # noqa: F401
24
+
25
+ except ImportError:
26
+ pass
27
+
28
+ _has_elasticsearch = importlib.util.find_spec("elasticsearch") is not None
29
+ _has_faiss = importlib.util.find_spec("faiss") is not None
30
+
31
+
32
+ logger = logging.get_logger(__name__)
33
+
34
+
35
+ class MissingIndex(Exception):
36
+ pass
37
+
38
+
39
+ class SearchResults(NamedTuple):
40
+ scores: List[float]
41
+ indices: List[int]
42
+
43
+
44
+ class BatchedSearchResults(NamedTuple):
45
+ total_scores: List[List[float]]
46
+ total_indices: List[List[int]]
47
+
48
+
49
+ class NearestExamplesResults(NamedTuple):
50
+ scores: List[float]
51
+ examples: dict
52
+
53
+
54
+ class BatchedNearestExamplesResults(NamedTuple):
55
+ total_scores: List[List[float]]
56
+ total_examples: List[dict]
57
+
58
+
59
+ class BaseIndex:
60
+ """Base class for indexing"""
61
+
62
+ def search(self, query, k: int = 10, **kwargs) -> SearchResults:
63
+ """
64
+ To implement.
65
+ This method has to return the scores and the indices of the retrieved examples given a certain query.
66
+ """
67
+ raise NotImplementedError
68
+
69
+ def search_batch(self, queries, k: int = 10, **kwargs) -> BatchedSearchResults:
70
+ """Find the nearest examples indices to the query.
71
+
72
+ Args:
73
+ queries (`Union[List[str], np.ndarray]`): The queries as a list of strings if `column` is a text index or as a numpy array if `column` is a vector index.
74
+ k (`int`): The number of examples to retrieve per query.
75
+
76
+ Output:
77
+ total_scores (`List[List[float]]`): The retrieval scores of the retrieved examples per query.
78
+ total_indices (`List[List[int]]`): The indices of the retrieved examples per query.
79
+ """
80
+ total_scores, total_indices = [], []
81
+ for query in queries:
82
+ scores, indices = self.search(query, k)
83
+ total_scores.append(scores)
84
+ total_indices.append(indices)
85
+ return BatchedSearchResults(total_scores, total_indices)
86
+
87
+ def save(self, file: Union[str, PurePath]):
88
+ """Serialize the index on disk"""
89
+ raise NotImplementedError
90
+
91
+ @classmethod
92
+ def load(cls, file: Union[str, PurePath]) -> "BaseIndex":
93
+ """Deserialize the index from disk"""
94
+ raise NotImplementedError
95
+
96
+
97
+ class ElasticSearchIndex(BaseIndex):
98
+ """
99
+ Sparse index using Elasticsearch. It is used to index text and run queries based on BM25 similarity.
100
+ An Elasticsearch server needs to be accessible, and a python client is declared with
101
+ ```
102
+ es_client = Elasticsearch([{'host': 'localhost', 'port': '9200'}])
103
+ ```
104
+ for example.
105
+ """
106
+
107
+ def __init__(
108
+ self,
109
+ host: Optional[str] = None,
110
+ port: Optional[int] = None,
111
+ es_client: Optional["Elasticsearch"] = None,
112
+ es_index_name: Optional[str] = None,
113
+ es_index_config: Optional[dict] = None,
114
+ ):
115
+ if not _has_elasticsearch:
116
+ raise ImportError(
117
+ "You must install ElasticSearch to use ElasticSearchIndex. To do so you can run `pip install elasticsearch==7.7.1` for example"
118
+ )
119
+ if es_client is not None and (host is not None or port is not None):
120
+ raise ValueError("Please specify either `es_client` or `(host, port)`, but not both.")
121
+ host = host or "localhost"
122
+ port = port or 9200
123
+
124
+ import elasticsearch.helpers # noqa: F401 - need this to properly load all the es features
125
+ from elasticsearch import Elasticsearch # noqa: F811
126
+
127
+ self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
128
+ self.es_index_name = (
129
+ es_index_name
130
+ if es_index_name is not None
131
+ else "huggingface_datasets_" + os.path.basename(tempfile.NamedTemporaryFile().name)
132
+ )
133
+ self.es_index_config = (
134
+ es_index_config
135
+ if es_index_config is not None
136
+ else {
137
+ "settings": {
138
+ "number_of_shards": 1,
139
+ "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
140
+ },
141
+ "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "BM25"}}},
142
+ }
143
+ )
144
+
145
+ def add_documents(self, documents: Union[List[str], "Dataset"], column: Optional[str] = None):
146
+ """
147
+ Add documents to the index.
148
+ If the documents are inside a certain column, you can specify it using the `column` argument.
149
+ """
150
+ index_name = self.es_index_name
151
+ index_config = self.es_index_config
152
+ self.es_client.indices.create(index=index_name, body=index_config)
153
+ number_of_docs = len(documents)
154
+ progress = hf_tqdm(unit="docs", total=number_of_docs)
155
+ successes = 0
156
+
157
+ def passage_generator():
158
+ if column is not None:
159
+ for i, example in enumerate(documents):
160
+ yield {"text": example[column], "_id": i}
161
+ else:
162
+ for i, example in enumerate(documents):
163
+ yield {"text": example, "_id": i}
164
+
165
+ # stream the documents to the ES index in bulk
166
+ import elasticsearch as es
167
+
168
+ for ok, action in es.helpers.streaming_bulk(
169
+ client=self.es_client,
170
+ index=index_name,
171
+ actions=passage_generator(),
172
+ ):
173
+ progress.update(1)
174
+ successes += ok
175
+ if successes != len(documents):
176
+ logger.warning(
177
+ f"Some documents failed to be added to ElasticSearch. Failures: {len(documents)-successes}/{len(documents)}"
178
+ )
179
+ logger.info(f"Indexed {successes:d} documents")
180
+
181
+ def search(self, query: str, k=10, **kwargs) -> SearchResults:
182
+ """Find the nearest examples indices to the query.
183
+
184
+ Args:
185
+ query (`str`): The query as a string.
186
+ k (`int`): The number of examples to retrieve.
187
+
188
+ Output:
189
+ scores (`List[float]`): The retrieval scores of the retrieved examples.
190
+ indices (`List[int]`): The indices of the retrieved examples.
191
+ """
192
+ response = self.es_client.search(
193
+ index=self.es_index_name,
194
+ body={"query": {"multi_match": {"query": query, "fields": ["text"], "type": "cross_fields"}}, "size": k},
195
+ **kwargs,
196
+ )
197
+ hits = response["hits"]["hits"]
198
+ return SearchResults([hit["_score"] for hit in hits], [int(hit["_id"]) for hit in hits])
199
+
200
+ def search_batch(self, queries, k: int = 10, max_workers=10, **kwargs) -> BatchedSearchResults:
201
+ import concurrent.futures
202
+
203
+ total_scores, total_indices = [None] * len(queries), [None] * len(queries)
204
+ with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
205
+ future_to_index = {executor.submit(self.search, query, k, **kwargs): i for i, query in enumerate(queries)}
206
+ for future in concurrent.futures.as_completed(future_to_index):
207
+ index = future_to_index[future]
208
+ results: SearchResults = future.result()
209
+ total_scores[index] = results.scores
210
+ total_indices[index] = results.indices
211
+ return BatchedSearchResults(total_indices=total_indices, total_scores=total_scores)
212
+
213
+
214
+ class FaissIndex(BaseIndex):
215
+ """
216
+ Dense index using Faiss. It is used to index vectors.
217
+ Faiss is a library for efficient similarity search and clustering of dense vectors.
218
+ It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM.
219
+ You can find more information about Faiss here:
220
+ - For index types and the string factory: https://github.com/facebookresearch/faiss/wiki/The-index-factory
221
+ - For GPU settings: https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU
222
+ """
223
+
224
+ def __init__(
225
+ self,
226
+ device: Optional[Union[int, List[int]]] = None,
227
+ string_factory: Optional[str] = None,
228
+ metric_type: Optional[int] = None,
229
+ custom_index: Optional["faiss.Index"] = None,
230
+ ):
231
+ """
232
+ Create a Dense index using Faiss. You can specify `device` if you want to run it on GPU (`device` must be the GPU index).
233
+ You can find more information about Faiss here:
234
+ - For `string factory`: https://github.com/facebookresearch/faiss/wiki/The-index-factory
235
+ """
236
+ if string_factory is not None and custom_index is not None:
237
+ raise ValueError("Please specify either `string_factory` or `custom_index` but not both.")
238
+ if device is not None and custom_index is not None:
239
+ raise ValueError(
240
+ "Cannot pass both 'custom_index' and 'device'. "
241
+ "Pass 'custom_index' already transferred to the target device instead."
242
+ )
243
+ self.device = device
244
+ self.string_factory = string_factory
245
+ self.metric_type = metric_type
246
+ self.faiss_index = custom_index
247
+ if not _has_faiss:
248
+ raise ImportError(
249
+ "You must install Faiss to use FaissIndex. To do so you can run `conda install -c pytorch faiss-cpu` or `conda install -c pytorch faiss-gpu`. "
250
+ "A community supported package is also available on pypi: `pip install faiss-cpu` or `pip install faiss-gpu`. "
251
+ "Note that pip may not have the latest version of FAISS, and thus, some of the latest features and bug fixes may not be available."
252
+ )
253
+
254
+ def add_vectors(
255
+ self,
256
+ vectors: Union[np.array, "Dataset"],
257
+ column: Optional[str] = None,
258
+ batch_size: int = 1000,
259
+ train_size: Optional[int] = None,
260
+ faiss_verbose: Optional[bool] = None,
261
+ ):
262
+ """
263
+ Add vectors to the index.
264
+ If the arrays are inside a certain column, you can specify it using the `column` argument.
265
+ """
266
+ import faiss # noqa: F811
267
+
268
+ # Create index
269
+ if self.faiss_index is None:
270
+ size = len(vectors[0]) if column is None else len(vectors[0][column])
271
+ if self.string_factory is not None:
272
+ if self.metric_type is None:
273
+ index = faiss.index_factory(size, self.string_factory)
274
+ else:
275
+ index = faiss.index_factory(size, self.string_factory, self.metric_type)
276
+ else:
277
+ if self.metric_type is None:
278
+ index = faiss.IndexFlat(size)
279
+ else:
280
+ index = faiss.IndexFlat(size, self.metric_type)
281
+
282
+ self.faiss_index = self._faiss_index_to_device(index, self.device)
283
+ logger.info(f"Created faiss index of type {type(self.faiss_index)}")
284
+
285
+ # Set verbosity level
286
+ if faiss_verbose is not None:
287
+ self.faiss_index.verbose = faiss_verbose
288
+ if hasattr(self.faiss_index, "index") and self.faiss_index.index is not None:
289
+ self.faiss_index.index.verbose = faiss_verbose
290
+ if hasattr(self.faiss_index, "quantizer") and self.faiss_index.quantizer is not None:
291
+ self.faiss_index.quantizer.verbose = faiss_verbose
292
+ if hasattr(self.faiss_index, "clustering_index") and self.faiss_index.clustering_index is not None:
293
+ self.faiss_index.clustering_index.verbose = faiss_verbose
294
+
295
+ # Train
296
+ if train_size is not None:
297
+ train_vecs = vectors[:train_size] if column is None else vectors[:train_size][column]
298
+ logger.info(f"Training the index with the first {len(train_vecs)} vectors")
299
+ self.faiss_index.train(train_vecs)
300
+ else:
301
+ logger.info("Ignored the training step of the faiss index as `train_size` is None.")
302
+
303
+ # Add vectors
304
+ logger.info(f"Adding {len(vectors)} vectors to the faiss index")
305
+ for i in hf_tqdm(range(0, len(vectors), batch_size)):
306
+ vecs = vectors[i : i + batch_size] if column is None else vectors[i : i + batch_size][column]
307
+ self.faiss_index.add(vecs)
308
+
309
+ @staticmethod
310
+ def _faiss_index_to_device(index: "faiss.Index", device: Optional[Union[int, List[int]]] = None) -> "faiss.Index":
311
+ """
312
+ Sends a faiss index to a device.
313
+ A device can either be a positive integer (GPU id), a negative integer (all GPUs),
314
+ or a list of positive integers (select GPUs to use), or `None` for CPU.
315
+ """
316
+
317
+ # If device is not specified, then it runs on CPU.
318
+ if device is None:
319
+ return index
320
+
321
+ import faiss # noqa: F811
322
+
323
+ # If the device id is given as an integer
324
+ if isinstance(device, int):
325
+ # Positive integers are directly mapped to GPU ids
326
+ if device > -1:
327
+ faiss_res = faiss.StandardGpuResources()
328
+ index = faiss.index_cpu_to_gpu(faiss_res, device, index)
329
+ # And negative integers mean using all GPUs
330
+ else:
331
+ index = faiss.index_cpu_to_all_gpus(index)
332
+ # Device ids given as a list mean mapping to those devices specified.
333
+ elif isinstance(device, (list, tuple)):
334
+ index = faiss.index_cpu_to_gpus_list(index, gpus=list(device))
335
+ else:
336
+ raise TypeError(
337
+ f"The argument type: {type(device)} is not expected. "
338
+ + "Please pass in either nothing, a positive int, a negative int, or a list of positive ints."
339
+ )
340
+
341
+ return index
342
+
343
+ def search(self, query: np.array, k=10, **kwargs) -> SearchResults:
344
+ """Find the nearest examples indices to the query.
345
+
346
+ Args:
347
+ query (`np.array`): The query as a numpy array.
348
+ k (`int`): The number of examples to retrieve.
349
+
350
+ Output:
351
+ scores (`List[float]`): The retrieval scores of the retrieved examples.
352
+ indices (`List[int]`): The indices of the retrieved examples.
353
+ """
354
+ if len(query.shape) != 1 and (len(query.shape) != 2 or query.shape[0] != 1):
355
+ raise ValueError("Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)")
356
+
357
+ queries = query.reshape(1, -1)
358
+ if not queries.flags.c_contiguous:
359
+ queries = np.asarray(queries, order="C")
360
+ scores, indices = self.faiss_index.search(queries, k, **kwargs)
361
+ return SearchResults(scores[0], indices[0].astype(int))
362
+
363
+ def search_batch(self, queries: np.array, k=10, **kwargs) -> BatchedSearchResults:
364
+ """Find the nearest examples indices to the queries.
365
+
366
+ Args:
367
+ queries (`np.array`): The queries as a numpy array.
368
+ k (`int`): The number of examples to retrieve.
369
+
370
+ Output:
371
+ total_scores (`List[List[float]]`): The retrieval scores of the retrieved examples per query.
372
+ total_indices (`List[List[int]]`): The indices of the retrieved examples per query.
373
+ """
374
+ if len(queries.shape) != 2:
375
+ raise ValueError("Shape of query must be 2D")
376
+ if not queries.flags.c_contiguous:
377
+ queries = np.asarray(queries, order="C")
378
+ scores, indices = self.faiss_index.search(queries, k, **kwargs)
379
+ return BatchedSearchResults(scores, indices.astype(int))
380
+
381
+ def save(self, file: Union[str, PurePath], storage_options: Optional[Dict] = None):
382
+ """Serialize the FaissIndex on disk"""
383
+ import faiss # noqa: F811
384
+
385
+ if self.device is not None and isinstance(self.device, (int, list, tuple)):
386
+ index = faiss.index_gpu_to_cpu(self.faiss_index)
387
+ else:
388
+ index = self.faiss_index
389
+
390
+ with fsspec.open(str(file), "wb", **(storage_options or {})) as f:
391
+ faiss.write_index(index, faiss.BufferedIOWriter(faiss.PyCallbackIOWriter(f.write)))
392
+
393
+ @classmethod
394
+ def load(
395
+ cls,
396
+ file: Union[str, PurePath],
397
+ device: Optional[Union[int, List[int]]] = None,
398
+ storage_options: Optional[Dict] = None,
399
+ ) -> "FaissIndex":
400
+ """Deserialize the FaissIndex from disk"""
401
+ import faiss # noqa: F811
402
+
403
+ # An instance of FaissIndex is essentially just a wrapper around a faiss index.
404
+ faiss_index = cls(device=device)
405
+ with fsspec.open(str(file), "rb", **(storage_options or {})) as f:
406
+ index = faiss.read_index(faiss.BufferedIOReader(faiss.PyCallbackIOReader(f.read)))
407
+ faiss_index.faiss_index = faiss_index._faiss_index_to_device(index, faiss_index.device)
408
+ return faiss_index
409
+
410
+
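As a quick illustration of the `FaissIndex` API above, a sketch assuming `faiss-cpu` and `numpy` are installed; the embedding matrix is random, made-up data:

```py
import numpy as np
from datasets.search import FaissIndex

embeddings = np.random.rand(100, 8).astype("float32")  # 100 made-up 8-dimensional vectors

index = FaissIndex()           # no string_factory/custom_index -> a flat (exact) CPU index
index.add_vectors(embeddings)  # the training step is skipped because train_size is None
scores, indices = index.search(embeddings[0], k=3)
print(indices[0])              # with an exact index, the nearest neighbour of vector 0 is vector 0 itself
```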
411
+ class IndexableMixin:
412
+ """Add indexing features to `datasets.Dataset`"""
413
+
414
+ def __init__(self):
415
+ self._indexes: Dict[str, BaseIndex] = {}
416
+
417
+ def __len__(self):
418
+ raise NotImplementedError
419
+
420
+ def __getitem__(self, key):
421
+ raise NotImplementedError
422
+
423
+ def is_index_initialized(self, index_name: str) -> bool:
424
+ return index_name in self._indexes
425
+
426
+ def _check_index_is_initialized(self, index_name: str):
427
+ if not self.is_index_initialized(index_name):
428
+ raise MissingIndex(
429
+ f"Index with index_name '{index_name}' not initialized yet. Please make sure that you call `add_faiss_index` or `add_elasticsearch_index` first."
430
+ )
431
+
432
+ def list_indexes(self) -> List[str]:
433
+ """List the `index_name`/identifiers of all the attached indexes."""
434
+ return list(self._indexes)
435
+
436
+ def get_index(self, index_name: str) -> BaseIndex:
437
+ """Get the index identified by `index_name`.
438
+
439
+ Args:
440
+ index_name (`str`): Index name.
441
+
442
+ Returns:
443
+ [`BaseIndex`]
444
+ """
445
+ self._check_index_is_initialized(index_name)
446
+ return self._indexes[index_name]
447
+
448
+ def add_faiss_index(
449
+ self,
450
+ column: str,
451
+ index_name: Optional[str] = None,
452
+ device: Optional[Union[int, List[int]]] = None,
453
+ string_factory: Optional[str] = None,
454
+ metric_type: Optional[int] = None,
455
+ custom_index: Optional["faiss.Index"] = None,
456
+ batch_size: int = 1000,
457
+ train_size: Optional[int] = None,
458
+ faiss_verbose: bool = False,
459
+ ):
460
+ """Add a dense index using Faiss for fast retrieval.
461
+ The index is created using the vectors of the specified column.
462
+ You can specify `device` if you want to run it on GPU (`device` must be the GPU index, see more below).
463
+ You can find more information about Faiss here:
464
+ - For `string factory`: https://github.com/facebookresearch/faiss/wiki/The-index-factory
465
+
466
+ Args:
467
+ column (`str`): The column of the vectors to add to the index.
468
+ index_name (Optional `str`): The index_name/identifier of the index. This is the index_name that is used to call `.get_nearest` or `.search`.
469
+ By default it corresponds to `column`.
470
+ device (Optional `Union[int, List[int]]`): If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs.
471
+ If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU.
472
+ string_factory (Optional `str`): This is passed to the index factory of Faiss to create the index. Default index class is IndexFlatIP.
473
+ metric_type (Optional `int`): Type of metric. Ex: `faiss.METRIC_INNER_PRODUCT` or `faiss.METRIC_L2`.
474
+ custom_index (Optional `faiss.Index`): Custom Faiss index that you already have instantiated and configured for your needs.
475
+ batch_size (Optional `int`): Size of the batch to use while adding vectors to the FaissIndex. Default value is 1000.
476
+ <Added version="2.4.0"/>
477
+ train_size (Optional `int`): If the index needs a training step, specifies how many vectors will be used to train the index.
478
+ faiss_verbose (`bool`, defaults to False): Enable the verbosity of the Faiss index.
479
+ """
480
+ index_name = index_name if index_name is not None else column
481
+ faiss_index = FaissIndex(
482
+ device=device, string_factory=string_factory, metric_type=metric_type, custom_index=custom_index
483
+ )
484
+ faiss_index.add_vectors(
485
+ self, column=column, batch_size=batch_size, train_size=train_size, faiss_verbose=faiss_verbose
486
+ )
487
+ self._indexes[index_name] = faiss_index
488
+
489
+ def add_faiss_index_from_external_arrays(
490
+ self,
491
+ external_arrays: np.array,
492
+ index_name: str,
493
+ device: Optional[Union[int, List[int]]] = None,
494
+ string_factory: Optional[str] = None,
495
+ metric_type: Optional[int] = None,
496
+ custom_index: Optional["faiss.Index"] = None,
497
+ batch_size: int = 1000,
498
+ train_size: Optional[int] = None,
499
+ faiss_verbose: bool = False,
500
+ ):
501
+ """Add a dense index using Faiss for fast retrieval.
502
+ The index is created using the vectors of `external_arrays`.
503
+ You can specify `device` if you want to run it on GPU (`device` must be the GPU index).
504
+ You can find more information about Faiss here:
505
+ - For `string factory`: https://github.com/facebookresearch/faiss/wiki/The-index-factory
506
+
507
+ Args:
508
+ external_arrays (`np.array`): If you want to use arrays from outside the lib for the index, you can set `external_arrays`.
509
+ It will use `external_arrays` to create the Faiss index instead of the arrays in the given `column`.
510
+ index_name (`str`): The index_name/identifier of the index. This is the index_name that is used to call `.get_nearest` or `.search`.
511
+ device (Optional `Union[int, List[int]]`): If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs.
512
+ If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU.
513
+ string_factory (Optional `str`): This is passed to the index factory of Faiss to create the index. Default index class is IndexFlatIP.
514
+ metric_type (Optional `int`): Type of metric. Ex: `faiss.METRIC_INNER_PRODUCT` or `faiss.METRIC_L2`.
515
+ custom_index (Optional `faiss.Index`): Custom Faiss index that you already have instantiated and configured for your needs.
516
+ batch_size (Optional `int`): Size of the batch to use while adding vectors to the FaissIndex. Default value is 1000.
517
+ <Added version="2.4.0"/>
518
+ train_size (Optional `int`): If the index needs a training step, specifies how many vectors will be used to train the index.
519
+ faiss_verbose (`bool`, defaults to False): Enable the verbosity of the Faiss index.
520
+ """
521
+ faiss_index = FaissIndex(
522
+ device=device, string_factory=string_factory, metric_type=metric_type, custom_index=custom_index
523
+ )
524
+ faiss_index.add_vectors(
525
+ external_arrays, column=None, batch_size=batch_size, train_size=train_size, faiss_verbose=faiss_verbose
526
+ )
527
+ self._indexes[index_name] = faiss_index
528
+
529
+ def save_faiss_index(self, index_name: str, file: Union[str, PurePath], storage_options: Optional[Dict] = None):
530
+ """Save a FaissIndex on disk.
531
+
532
+ Args:
533
+ index_name (`str`): The index_name/identifier of the index. This is the index_name that is used to call `.get_nearest` or `.search`.
534
+ file (`str`): The path to the serialized faiss index on disk or remote URI (e.g. `"s3://my-bucket/index.faiss"`).
535
+ storage_options (`dict`, *optional*):
536
+ Key/value pairs to be passed on to the file-system backend, if any.
537
+
538
+ <Added version="2.11.0"/>
539
+
540
+ """
541
+ index = self.get_index(index_name)
542
+ if not isinstance(index, FaissIndex):
543
+ raise ValueError(f"Index '{index_name}' is not a FaissIndex but a '{type(index)}'")
544
+ index.save(file, storage_options=storage_options)
545
+ logger.info(f"Saved FaissIndex {index_name} at {file}")
546
+
547
+ def load_faiss_index(
548
+ self,
549
+ index_name: str,
550
+ file: Union[str, PurePath],
551
+ device: Optional[Union[int, List[int]]] = None,
552
+ storage_options: Optional[Dict] = None,
553
+ ):
554
+ """Load a FaissIndex from disk.
555
+
556
+ If you want to do additional configurations, you can have access to the faiss index object by doing
557
+ `.get_index(index_name).faiss_index` to make it fit your needs.
558
+
559
+ Args:
560
+ index_name (`str`): The index_name/identifier of the index. This is the index_name that is used to
561
+ call `.get_nearest` or `.search`.
562
+ file (`str`): The path to the serialized faiss index on disk or remote URI (e.g. `"s3://my-bucket/index.faiss"`).
563
+ device (Optional `Union[int, List[int]]`): If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs.
564
+ If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU.
565
+ storage_options (`dict`, *optional*):
566
+ Key/value pairs to be passed on to the file-system backend, if any.
567
+
568
+ <Added version="2.11.0"/>
569
+
570
+ """
571
+ index = FaissIndex.load(file, device=device, storage_options=storage_options)
572
+ if index.faiss_index.ntotal != len(self):
573
+ raise ValueError(
574
+ f"Index size should match Dataset size, but Index '{index_name}' at {file} has {index.faiss_index.ntotal} elements while the dataset has {len(self)} examples."
575
+ )
576
+ self._indexes[index_name] = index
577
+ logger.info(f"Loaded FaissIndex {index_name} from {file}")
578
+
579
+ def add_elasticsearch_index(
580
+ self,
581
+ column: str,
582
+ index_name: Optional[str] = None,
583
+ host: Optional[str] = None,
584
+ port: Optional[int] = None,
585
+ es_client: Optional["Elasticsearch"] = None,
586
+ es_index_name: Optional[str] = None,
587
+ es_index_config: Optional[dict] = None,
588
+ ):
589
+ """Add a text index using ElasticSearch for fast retrieval.
590
+
591
+ Args:
592
+ column (`str`): The column of the documents to add to the index.
593
+ index_name (Optional `str`): The index_name/identifier of the index. This is the index name that is used to call `.get_nearest` or `.search`.
594
+ By default it corresponds to `column`.
595
+ host (Optional `str`, defaults to `localhost`):
596
+ Host where ElasticSearch is running.
597
+ port (Optional `int`, defaults to 9200):
598
+ Port where ElasticSearch is running.
599
+ es_client (Optional `elasticsearch.Elasticsearch`):
600
+ The elasticsearch client used to create the index if host and port are None.
601
+ es_index_name (Optional `str`): The elasticsearch index name used to create the index.
602
+ es_index_config (Optional `dict`):
603
+ The configuration of the elasticsearch index.
604
+ Default config is:
605
+
606
+ Config::
607
+
608
+ {
609
+ "settings": {
610
+ "number_of_shards": 1,
611
+ "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
612
+ },
613
+ "mappings": {
614
+ "properties": {
615
+ "text": {
616
+ "type": "text",
617
+ "analyzer": "standard",
618
+ "similarity": "BM25"
619
+ },
620
+ }
621
+ },
622
+ }
623
+ """
624
+ index_name = index_name if index_name is not None else column
625
+ es_index = ElasticSearchIndex(
626
+ host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
627
+ )
628
+ es_index.add_documents(self, column=column)
629
+ self._indexes[index_name] = es_index
630
+
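For orientation, here is a minimal usage sketch of the ElasticSearch index method above (a sketch only: it assumes an ElasticSearch server is reachable at `localhost:9200`, and the toy dataset is made up):

```python
from datasets import Dataset

# Sketch only: requires a running ElasticSearch server on localhost:9200.
ds = Dataset.from_dict({"text": ["the cat sat", "a dog barked", "hello world"]})
ds.add_elasticsearch_index(column="text", host="localhost", port=9200)
scores, examples = ds.get_nearest_examples("text", "cat", k=1)
print(examples["text"])  # e.g. ['the cat sat']
```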
631
+ def load_elasticsearch_index(
632
+ self,
633
+ index_name: str,
634
+ es_index_name: str,
635
+ host: Optional[str] = None,
636
+ port: Optional[int] = None,
637
+ es_client: Optional["Elasticsearch"] = None,
638
+ es_index_config: Optional[dict] = None,
639
+ ):
640
+ """Load an existing text index using ElasticSearch for fast retrieval.
641
+
642
+ Args:
643
+ index_name (`str`):
644
+ The `index_name`/identifier of the index. This is the index name that is used to call `get_nearest` or `search`.
645
+ es_index_name (`str`):
646
+ The name of elasticsearch index to load.
647
+ host (`str`, *optional*, defaults to `localhost`):
648
+ Host where ElasticSearch is running.
649
+ port (`int`, *optional*, defaults to `9200`):
650
+ Port where ElasticSearch is running.
651
+ es_client (`elasticsearch.Elasticsearch`, *optional*):
652
+ The elasticsearch client used to create the index if host and port are `None`.
653
+ es_index_config (`dict`, *optional*):
654
+ The configuration of the elasticsearch index.
655
+ Default config is:
656
+ ```
657
+ {
658
+ "settings": {
659
+ "number_of_shards": 1,
660
+ "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
661
+ },
662
+ "mappings": {
663
+ "properties": {
664
+ "text": {
665
+ "type": "text",
666
+ "analyzer": "standard",
667
+ "similarity": "BM25"
668
+ },
669
+ }
670
+ },
671
+ }
672
+ ```
673
+ """
674
+ self._indexes[index_name] = ElasticSearchIndex(
675
+ host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
676
+ )
677
+
678
+ def drop_index(self, index_name: str):
679
+ """Drop the index with the specified column.
680
+
681
+ Args:
682
+ index_name (`str`):
683
+ The `index_name`/identifier of the index.
684
+ """
685
+ del self._indexes[index_name]
686
+
687
+ def search(self, index_name: str, query: Union[str, np.array], k: int = 10, **kwargs) -> SearchResults:
688
+ """Find the nearest examples indices in the dataset to the query.
689
+
690
+ Args:
691
+ index_name (`str`):
692
+ The name/identifier of the index.
693
+ query (`Union[str, np.ndarray]`):
694
+ The query as a string if `index_name` is a text index or as a numpy array if `index_name` is a vector index.
695
+ k (`int`):
696
+ The number of examples to retrieve.
697
+
698
+ Returns:
699
+ `(scores, indices)`:
700
+ A tuple of `(scores, indices)` where:
701
+ - **scores** (`List[List[float]]`): the retrieval scores from either FAISS (`IndexFlatL2` by default) or ElasticSearch of the retrieved examples
702
+ - **indices** (`List[List[int]]`): the indices of the retrieved examples
703
+ """
704
+ self._check_index_is_initialized(index_name)
705
+ return self._indexes[index_name].search(query, k, **kwargs)
706
+
707
+ def search_batch(
708
+ self, index_name: str, queries: Union[List[str], np.array], k: int = 10, **kwargs
709
+ ) -> BatchedSearchResults:
710
+ """Find the nearest examples indices in the dataset to the query.
711
+
712
+ Args:
713
+ index_name (`str`):
714
+ The `index_name`/identifier of the index.
715
+ queries (`Union[List[str], np.ndarray]`):
716
+ The queries as a list of strings if `index_name` is a text index or as a numpy array if `index_name` is a vector index.
717
+ k (`int`):
718
+ The number of examples to retrieve per query.
719
+
720
+ Returns:
721
+ `(total_scores, total_indices)`:
722
+ A tuple of `(total_scores, total_indices)` where:
723
+ - **total_scores** (`List[List[float]]`): the retrieval scores from either FAISS (`IndexFlatL2` by default) or ElasticSearch of the retrieved examples per query
724
+ - **total_indices** (`List[List[int]]`): the indices of the retrieved examples per query
725
+ """
726
+ self._check_index_is_initialized(index_name)
727
+ return self._indexes[index_name].search_batch(queries, k, **kwargs)
728
+
729
+ def get_nearest_examples(
730
+ self, index_name: str, query: Union[str, np.array], k: int = 10, **kwargs
731
+ ) -> NearestExamplesResults:
732
+ """Find the nearest examples in the dataset to the query.
733
+
734
+ Args:
735
+ index_name (`str`):
736
+ The index_name/identifier of the index.
737
+ query (`Union[str, np.ndarray]`):
738
+ The query as a string if `index_name` is a text index or as a numpy array if `index_name` is a vector index.
739
+ k (`int`):
740
+ The number of examples to retrieve.
741
+
742
+ Returns:
743
+ `(scores, examples)`:
744
+ A tuple of `(scores, examples)` where:
745
+ - **scores** (`List[float]`): the retrieval scores from either FAISS (`IndexFlatL2` by default) or ElasticSearch of the retrieved examples
746
+ - **examples** (`dict`): the retrieved examples
747
+ """
748
+ self._check_index_is_initialized(index_name)
749
+ scores, indices = self.search(index_name, query, k, **kwargs)
750
+ top_indices = [i for i in indices if i >= 0]
751
+ return NearestExamplesResults(scores[: len(top_indices)], self[top_indices])
752
+
753
+ def get_nearest_examples_batch(
754
+ self, index_name: str, queries: Union[List[str], np.array], k: int = 10, **kwargs
755
+ ) -> BatchedNearestExamplesResults:
756
+ """Find the nearest examples in the dataset to the query.
757
+
758
+ Args:
759
+ index_name (`str`):
760
+ The `index_name`/identifier of the index.
761
+ queries (`Union[List[str], np.ndarray]`):
762
+ The queries as a list of strings if `index_name` is a text index or as a numpy array if `index_name` is a vector index.
763
+ k (`int`):
764
+ The number of examples to retrieve per query.
765
+
766
+ Returns:
767
+ `(total_scores, total_examples)`:
768
+ A tuple of `(total_scores, total_examples)` where:
769
+ - **total_scores** (`List[List[float]]`): the retrieval scores from either FAISS (`IndexFlatL2` by default) or ElasticSearch of the retrieved examples per query
770
+ - **total_examples** (`List[dict]`): the retrieved examples per query
771
+ """
772
+ self._check_index_is_initialized(index_name)
773
+ total_scores, total_indices = self.search_batch(index_name, queries, k, **kwargs)
774
+ total_scores = [
775
+ scores_i[: len([i for i in indices_i if i >= 0])]
776
+ for scores_i, indices_i in zip(total_scores, total_indices)
777
+ ]
778
+ total_samples = [self[[i for i in indices if i >= 0]] for indices in total_indices]
779
+ return BatchedNearestExamplesResults(total_scores, total_samples)
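And a minimal sketch of the Faiss side of the same API (assumes `faiss-cpu` is installed; the 8-dimensional `embeddings` column and the file name are made-up examples):

```python
import numpy as np
from datasets import Dataset

# Toy dataset with a small float32 "embeddings" column.
ds = Dataset.from_dict(
    {"text": ["a", "b", "c"], "embeddings": np.random.rand(3, 8).astype("float32").tolist()}
)

ds.add_faiss_index(column="embeddings")  # string_factory/custom_index can customize the index
scores, examples = ds.get_nearest_examples("embeddings", np.random.rand(8).astype("float32"), k=2)

ds.save_faiss_index("embeddings", "my_index.faiss")  # serialize to disk (or a remote URI)
ds.drop_index("embeddings")
ds.load_faiss_index("embeddings", "my_index.faiss")  # reload it later
```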
env-llmeval/lib/python3.10/site-packages/datasets/splits.py ADDED
@@ -0,0 +1,635 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # Lint as: python3
16
+ """Splits related API."""
17
+
18
+ import abc
19
+ import collections
20
+ import copy
21
+ import dataclasses
22
+ import re
23
+ from dataclasses import dataclass
24
+ from typing import Dict, List, Optional, Union
25
+
26
+ from .arrow_reader import FileInstructions, make_file_instructions
27
+ from .naming import _split_re
28
+ from .utils.py_utils import NonMutableDict, asdict
29
+
30
+
31
+ @dataclass
32
+ class SplitInfo:
33
+ name: str = dataclasses.field(default="", metadata={"include_in_asdict_even_if_is_default": True})
34
+ num_bytes: int = dataclasses.field(default=0, metadata={"include_in_asdict_even_if_is_default": True})
35
+ num_examples: int = dataclasses.field(default=0, metadata={"include_in_asdict_even_if_is_default": True})
36
+ shard_lengths: Optional[List[int]] = None
37
+
38
+ # Deprecated
39
+ # For backward compatibility, this field needs to always be included in files like
40
+ # dataset_infos.json and dataset_info.json files
41
+ # To do so, we always include it in the output of datasets.utils.py_utils.asdict(split_info)
42
+ dataset_name: Optional[str] = dataclasses.field(
43
+ default=None, metadata={"include_in_asdict_even_if_is_default": True}
44
+ )
45
+
46
+ @property
47
+ def file_instructions(self):
48
+ """Returns the list of dict(filename, take, skip)."""
49
+ # `self.dataset_name` is assigned in `SplitDict.add()`.
50
+ instructions = make_file_instructions(
51
+ name=self.dataset_name,
52
+ split_infos=[self],
53
+ instruction=str(self.name),
54
+ )
55
+ return instructions.file_instructions
56
+
57
+
58
+ @dataclass
59
+ class SubSplitInfo:
60
+ """Wrapper around a sub split info.
61
+ This class exposes info on the subsplit:
62
+ ```
63
+ ds, info = datasets.load_dataset(..., split='train[75%:]', with_info=True)
64
+ info.splits['train[75%:]'].num_examples
65
+ ```
66
+ """
67
+
68
+ instructions: FileInstructions
69
+
70
+ @property
71
+ def num_examples(self):
72
+ """Returns the number of example in the subsplit."""
73
+ return self.instructions.num_examples
74
+
75
+ @property
76
+ def file_instructions(self):
77
+ """Returns the list of dict(filename, take, skip)."""
78
+ return self.instructions.file_instructions
79
+
80
+
81
+ class SplitBase(metaclass=abc.ABCMeta):
82
+ # pylint: disable=line-too-long
83
+ """Abstract base class for Split compositionality.
84
+
85
+ See the
86
+ [guide on splits](../loading#slice-splits)
87
+ for more information.
88
+
89
+ There are three parts to the composition:
90
+ 1) The splits are composed (defined, merged, split,...) together before
91
+ calling the `.as_dataset()` function. This is done with the `__add__`,
92
+ `__getitem__`, which return a tree of `SplitBase` (whose leaf
93
+ are the `NamedSplit` objects)
94
+
95
+ ```
96
+ split = datasets.Split.TRAIN + datasets.Split.TEST.subsplit(datasets.percent[:50])
97
+ ```
98
+
99
+ 2) The `SplitBase` is forwarded to the `.as_dataset()` function
100
+ to be resolved into actual read instruction. This is done by the
101
+ `.get_read_instruction()` method which takes the real dataset splits
102
+ (name, number of shards,...) and parse the tree to return a
103
+ `SplitReadInstruction()` object
104
+
105
+ ```
106
+ read_instruction = split.get_read_instruction(self.info.splits)
107
+ ```
108
+
109
+ 3) The `SplitReadInstruction` is then used in the `tf.data.Dataset` pipeline
110
+ to define which files to read and how to skip examples within file.
111
+
112
+ """
113
+
114
+ # pylint: enable=line-too-long
115
+
116
+ @abc.abstractmethod
117
+ def get_read_instruction(self, split_dict):
118
+ """Parse the descriptor tree and compile all read instructions together.
119
+
120
+ Args:
121
+ split_dict: `dict`, The `dict[split_name, SplitInfo]` of the dataset
122
+
123
+ Returns:
124
+ split_read_instruction: `SplitReadInstruction`
125
+ """
126
+ raise NotImplementedError("Abstract method")
127
+
128
+ def __eq__(self, other):
129
+ """Equality: datasets.Split.TRAIN == 'train'."""
130
+ if isinstance(other, (NamedSplit, str)):
131
+ return False
132
+ raise NotImplementedError("Equality is not implemented between merged/sub splits.")
133
+
134
+ def __ne__(self, other):
135
+ """InEquality: datasets.Split.TRAIN != 'test'."""
136
+ return not self.__eq__(other)
137
+
138
+ def __add__(self, other):
139
+ """Merging: datasets.Split.TRAIN + datasets.Split.TEST."""
140
+ return _SplitMerged(self, other)
141
+
142
+ def subsplit(self, arg=None, k=None, percent=None, weighted=None): # pylint: disable=redefined-outer-name
143
+ """Divides this split into subsplits.
144
+
145
+ There are 3 ways to define subsplits, which correspond to the 3
146
+ arguments `k` (get `k` even subsplits), `percent` (get a slice of the
147
+ dataset with `datasets.percent`), and `weighted` (get subsplits with proportions
148
+ specified by `weighted`).
149
+
150
+ Example::
151
+
152
+ ```
153
+ # 50% train, 50% test
154
+ train, test = split.subsplit(k=2)
155
+ # 50% train, 25% test, 25% validation
156
+ train, test, validation = split.subsplit(weighted=[2, 1, 1])
157
+ # Extract last 20%
158
+ subsplit = split.subsplit(datasets.percent[-20:])
159
+ ```
160
+
161
+ Warning: k and weighted will be converted into percent which means that
162
+ values below the percent will be rounded up or down. The final split may be
163
+ bigger to deal with remainders. For instance:
164
+
165
+ ```
166
+ train, test, valid = split.subsplit(k=3) # 33%, 33%, 34%
167
+ s1, s2, s3, s4 = split.subsplit(weighted=[2, 2, 1, 1]) # 33%, 33%, 16%, 18%
168
+ ```
169
+
170
+ Args:
171
+ arg: If no kwargs are given, `arg` will be interpreted as one of
172
+ `k`, `percent`, or `weighted` depending on the type.
173
+ For example:
174
+ ```
175
+ split.subsplit(10) # Equivalent to split.subsplit(k=10)
176
+ split.subsplit(datasets.percent[:-20]) # percent=datasets.percent[:-20]
177
+ split.subsplit([1, 1, 2]) # weighted=[1, 1, 2]
178
+ ```
179
+ k: `int` If set, subdivide the split into `k` equal parts.
180
+ percent: `datasets.percent slice`, return a single subsplit corresponding to
181
+ a slice of the original split. For example:
182
+ `split.subsplit(datasets.percent[-20:]) # Last 20% of the dataset`.
183
+ weighted: `list[int]`, return a list of subsplits whose proportions match
184
+ the normalized sum of the list. For example:
185
+ `split.subsplit(weighted=[1, 1, 2]) # 25%, 25%, 50%`.
186
+
187
+ Returns:
188
+ A subsplit or list of subsplits extracted from this split object.
189
+ """
190
+ # Note that the percent kwargs redefine the outer name datasets.percent. This
191
+ # is done for consistency (.subsplit(percent=datasets.percent[:40]))
192
+ if sum(bool(x) for x in (arg, k, percent, weighted)) != 1:
193
+ raise ValueError("Only one argument of subsplit should be set.")
194
+
195
+ # Auto deduce k
196
+ if isinstance(arg, int):
197
+ k = arg
198
+ elif isinstance(arg, slice):
199
+ percent = arg
200
+ elif isinstance(arg, list):
201
+ weighted = arg
202
+
203
+ if not (k or percent or weighted):
204
+ raise ValueError(
205
+ f"Invalid split argument {arg}. Only list, slice and int supported. "
206
+ "One of k, weighted or percent should be set to a non empty value."
207
+ )
208
+
209
+ def assert_slices_coverage(slices):
210
+ # Ensure that the expanded slices cover all percents.
211
+ assert sum((list(range(*s.indices(100))) for s in slices), []) == list(range(100))
212
+
213
+ if k:
214
+ if not 0 < k <= 100:
215
+ raise ValueError(f"Subsplit k should be between 0 and 100, got {k}")
216
+ shift = 100 // k
217
+ slices = [slice(i * shift, (i + 1) * shift) for i in range(k)]
218
+ # Round up last element to ensure all elements are taken
219
+ slices[-1] = slice(slices[-1].start, 100)
220
+ # Internal check to ensure full coverage
221
+ assert_slices_coverage(slices)
222
+ return tuple(_SubSplit(self, s) for s in slices)
223
+ elif percent:
224
+ return _SubSplit(self, percent)
225
+ elif weighted:
226
+ # Normalize the weighted sum
227
+ total = sum(weighted)
228
+ weighted = [100 * x // total for x in weighted]
229
+ # Create the slice for each of the elements
230
+ start = 0
231
+ stop = 0
232
+ slices = []
233
+ for v in weighted:
234
+ stop += v
235
+ slices.append(slice(start, stop))
236
+ start = stop
237
+ # Round up last element to ensure all elements are taken
238
+ slices[-1] = slice(slices[-1].start, 100)
239
+ # Internal check to ensure full coverage
240
+ assert_slices_coverage(slices)
241
+ return tuple(_SubSplit(self, s) for s in slices)
242
+ else:
243
+ # Should not be possible
244
+ raise ValueError("Could not determine the split")
245
+
246
+
247
+ # 2 requirements:
248
+ # 1. datasets.percent be sliceable
249
+ # 2. datasets.percent be documented
250
+ #
251
+ # Instances are not documented, so we want datasets.percent to be a class, but to
252
+ # have it be sliceable, we need this metaclass.
253
+ class PercentSliceMeta(type):
254
+ def __getitem__(cls, slice_value):
255
+ if not isinstance(slice_value, slice):
256
+ raise ValueError(f"datasets.percent should only be called with slice, not {slice_value}")
257
+ return slice_value
258
+
259
+
260
+ class PercentSlice(metaclass=PercentSliceMeta):
261
+ # pylint: disable=line-too-long
262
+ """Syntactic sugar for defining slice subsplits: `datasets.percent[75:-5]`.
263
+
264
+ See the
265
+ [guide on splits](../loading#slice-splits)
266
+ for more information.
267
+ """
268
+
269
+ # pylint: enable=line-too-long
270
+ pass
271
+
272
+
273
+ percent = PercentSlice # pylint: disable=invalid-name
274
+
275
+
276
+ class _SplitMerged(SplitBase):
277
+ """Represent two split descriptors merged together."""
278
+
279
+ def __init__(self, split1, split2):
280
+ self._split1 = split1
281
+ self._split2 = split2
282
+
283
+ def get_read_instruction(self, split_dict):
284
+ read_instruction1 = self._split1.get_read_instruction(split_dict)
285
+ read_instruction2 = self._split2.get_read_instruction(split_dict)
286
+ return read_instruction1 + read_instruction2
287
+
288
+ def __repr__(self):
289
+ return f"({repr(self._split1)} + {repr(self._split2)})"
290
+
291
+
292
+ class _SubSplit(SplitBase):
293
+ """Represent a sub split of a split descriptor."""
294
+
295
+ def __init__(self, split, slice_value):
296
+ self._split = split
297
+ self._slice_value = slice_value
298
+
299
+ def get_read_instruction(self, split_dict):
300
+ return self._split.get_read_instruction(split_dict)[self._slice_value]
301
+
302
+ def __repr__(self):
303
+ slice_str = "{start}:{stop}"
304
+ if self._slice_value.step is not None:
305
+ slice_str += ":{step}"
306
+ slice_str = slice_str.format(
307
+ start="" if self._slice_value.start is None else self._slice_value.start,
308
+ stop="" if self._slice_value.stop is None else self._slice_value.stop,
309
+ step=self._slice_value.step,
310
+ )
311
+ return f"{repr(self._split)}(datasets.percent[{slice_str}])"
312
+
313
+
314
+ class NamedSplit(SplitBase):
315
+ """Descriptor corresponding to a named split (train, test, ...).
316
+
317
+ Example:
318
+ Each descriptor can be composed with other using addition or slice:
319
+
320
+ ```py
321
+ split = datasets.Split.TRAIN.subsplit(datasets.percent[0:25]) + datasets.Split.TEST
322
+ ```
323
+
324
+ The resulting split will correspond to 25% of the train split merged with
325
+ 100% of the test split.
326
+
327
+ A split cannot be added twice, so the following will fail:
328
+
329
+ ```py
330
+ split = (
331
+ datasets.Split.TRAIN.subsplit(datasets.percent[:25]) +
332
+ datasets.Split.TRAIN.subsplit(datasets.percent[75:])
333
+ ) # Error
334
+ split = datasets.Split.TEST + datasets.Split.ALL # Error
335
+ ```
336
+
337
+ The slices can be applied only one time. So the following are valid:
338
+
339
+ ```py
340
+ split = (
341
+ datasets.Split.TRAIN.subsplit(datasets.percent[:25]) +
342
+ datasets.Split.TEST.subsplit(datasets.percent[:50])
343
+ )
344
+ split = (datasets.Split.TRAIN + datasets.Split.TEST).subsplit(datasets.percent[:50])
345
+ ```
346
+
347
+ But this is not valid:
348
+
349
+ ```py
350
+ train = datasets.Split.TRAIN
351
+ test = datasets.Split.TEST
352
+ split = train.subsplit(datasets.percent[:25]).subsplit(datasets.percent[:25])
353
+ split = (train.subsplit(datasets.percent[:25]) + test).subsplit(datasets.percent[:50])
354
+ ```
355
+ """
356
+
357
+ def __init__(self, name):
358
+ self._name = name
359
+ split_names_from_instruction = [split_instruction.split("[")[0] for split_instruction in name.split("+")]
360
+ for split_name in split_names_from_instruction:
361
+ if not re.match(_split_re, split_name):
362
+ raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")
363
+
364
+ def __str__(self):
365
+ return self._name
366
+
367
+ def __repr__(self):
368
+ return f"NamedSplit({self._name!r})"
369
+
370
+ def __eq__(self, other):
371
+ """Equality: datasets.Split.TRAIN == 'train'."""
372
+ if isinstance(other, NamedSplit):
373
+ return self._name == other._name # pylint: disable=protected-access
374
+ elif isinstance(other, SplitBase):
375
+ return False
376
+ elif isinstance(other, str): # Other should be string
377
+ return self._name == other
378
+ else:
379
+ raise ValueError(f"Equality not supported between split {self} and {other}")
380
+
381
+ def __lt__(self, other):
382
+ return self._name < other._name # pylint: disable=protected-access
383
+
384
+ def __hash__(self):
385
+ return hash(self._name)
386
+
387
+ def get_read_instruction(self, split_dict):
388
+ return SplitReadInstruction(split_dict[self._name])
389
+
390
+
391
+ class NamedSplitAll(NamedSplit):
392
+ """Split corresponding to the union of all defined dataset splits."""
393
+
394
+ def __init__(self):
395
+ super().__init__("all")
396
+
397
+ def __repr__(self):
398
+ return "NamedSplitAll()"
399
+
400
+ def get_read_instruction(self, split_dict):
401
+ # Merge all dataset split together
402
+ read_instructions = [SplitReadInstruction(s) for s in split_dict.values()]
403
+ return sum(read_instructions, SplitReadInstruction())
404
+
405
+
406
+ class Split:
407
+ # pylint: disable=line-too-long
408
+ """`Enum` for dataset splits.
409
+
410
+ Datasets are typically split into different subsets to be used at various
411
+ stages of training and evaluation.
412
+
413
+ - `TRAIN`: the training data.
414
+ - `VALIDATION`: the validation data. If present, this is typically used as
415
+ evaluation data while iterating on a model (e.g. changing hyperparameters,
416
+ model architecture, etc.).
417
+ - `TEST`: the testing data. This is the data to report metrics on. Typically
418
+ you do not want to use this during model iteration as you may overfit to it.
419
+ - `ALL`: the union of all defined dataset splits.
420
+
421
+ All splits, including compositions inherit from `datasets.SplitBase`.
422
+
423
+ See the [guide](../load_hub#splits) on splits for more information.
424
+
425
+ Example:
426
+
427
+ ```py
428
+ >>> datasets.SplitGenerator(
429
+ ... name=datasets.Split.TRAIN,
430
+ ... gen_kwargs={"split_key": "train", "files": dl_manager.download_and extract(url)},
431
+ ... ),
432
+ ... datasets.SplitGenerator(
433
+ ... name=datasets.Split.VALIDATION,
434
+ ... gen_kwargs={"split_key": "validation", "files": dl_manager.download_and extract(url)},
435
+ ... ),
436
+ ... datasets.SplitGenerator(
437
+ ... name=datasets.Split.TEST,
438
+ ... gen_kwargs={"split_key": "test", "files": dl_manager.download_and extract(url)},
439
+ ... )
440
+ ```
441
+ """
442
+
443
+ # pylint: enable=line-too-long
444
+ TRAIN = NamedSplit("train")
445
+ TEST = NamedSplit("test")
446
+ VALIDATION = NamedSplit("validation")
447
+ ALL = NamedSplitAll()
448
+
449
+ def __new__(cls, name):
450
+ """Create a custom split with datasets.Split('custom_name')."""
451
+ return NamedSplitAll() if name == "all" else NamedSplit(name)
452
+
453
+
454
+ # Similar to SplitInfo, but contain an additional slice info
455
+ SlicedSplitInfo = collections.namedtuple(
456
+ "SlicedSplitInfo",
457
+ [
458
+ "split_info",
459
+ "slice_value",
460
+ ],
461
+ ) # noqa: E231
462
+
463
+
464
+ class SplitReadInstruction:
465
+ """Object containing the reading instruction for the dataset.
466
+
467
+ Similarly to `SplitDescriptor` nodes, this object can be composed with itself,
468
+ but the resolution happens instantaneously, instead of keeping track of the
469
+ tree, such that all instructions are compiled and flattened into a single
470
+ SplitReadInstruction object containing the list of files and slice to use.
471
+
472
+ Once resolved, the instructions can be accessed with:
473
+
474
+ ```
475
+ read_instructions.get_list_sliced_split_info() # List of splits to use
476
+ ```
477
+
478
+ """
479
+
480
+ def __init__(self, split_info=None):
481
+ self._splits = NonMutableDict(error_msg="Overlap between splits. Split {key} has been added with " "itself.")
482
+
483
+ if split_info:
484
+ self.add(SlicedSplitInfo(split_info=split_info, slice_value=None))
485
+
486
+ def add(self, sliced_split):
487
+ """Add a SlicedSplitInfo the read instructions."""
488
+ # TODO(epot): Check that the number of examples per shard % 100 == 0
489
+ # Otherwise the slices value may be unbalanced and not exactly reflect the
490
+ # requested slice.
491
+ self._splits[sliced_split.split_info.name] = sliced_split
492
+
493
+ def __add__(self, other):
494
+ """Merging split together."""
495
+ # Will raise an error if a split has already been added (NonMutableDict)
496
+ # TODO(epot): If a split is already added but there is no overlap between
497
+ # the slices, should merge the slices (ex: [:10] + [80:])
498
+ split_instruction = SplitReadInstruction()
499
+ split_instruction._splits.update(self._splits) # pylint: disable=protected-access
500
+ split_instruction._splits.update(other._splits) # pylint: disable=protected-access
501
+ return split_instruction
502
+
503
+ def __getitem__(self, slice_value):
504
+ """Sub-splits."""
505
+ # Will raise an error if a split has already been sliced
506
+ split_instruction = SplitReadInstruction()
507
+ for v in self._splits.values():
508
+ if v.slice_value is not None:
509
+ raise ValueError(f"Trying to slice Split {v.split_info.name} which has already been sliced")
510
+ v = v._asdict()
511
+ v["slice_value"] = slice_value
512
+ split_instruction.add(SlicedSplitInfo(**v))
513
+ return split_instruction
514
+
515
+ def get_list_sliced_split_info(self):
516
+ return list(self._splits.values())
517
+
518
+
519
+ class SplitDict(dict):
520
+ """Split info object."""
521
+
522
+ def __init__(self, *args, dataset_name=None, **kwargs):
523
+ super().__init__(*args, **kwargs)
524
+ self.dataset_name = dataset_name
525
+
526
+ def __getitem__(self, key: Union[SplitBase, str]):
527
+ # 1st case: The key exists: `info.splits['train']`
528
+ if str(key) in self:
529
+ return super().__getitem__(str(key))
530
+ # 2nd case: Uses instructions: `info.splits['train[50%]']`
531
+ else:
532
+ instructions = make_file_instructions(
533
+ name=self.dataset_name,
534
+ split_infos=self.values(),
535
+ instruction=key,
536
+ )
537
+ return SubSplitInfo(instructions)
538
+
539
+ def __setitem__(self, key: Union[SplitBase, str], value: SplitInfo):
540
+ if key != value.name:
541
+ raise ValueError(f"Cannot add elem. (key mismatch: '{key}' != '{value.name}')")
542
+ super().__setitem__(key, value)
543
+
544
+ def add(self, split_info: SplitInfo):
545
+ """Add the split info."""
546
+ if split_info.name in self:
547
+ raise ValueError(f"Split {split_info.name} already present")
548
+ split_info.dataset_name = self.dataset_name
549
+ super().__setitem__(split_info.name, split_info)
550
+
551
+ @property
552
+ def total_num_examples(self):
553
+ """Return the total number of examples."""
554
+ return sum(s.num_examples for s in self.values())
555
+
556
+ @classmethod
557
+ def from_split_dict(cls, split_infos: Union[List, Dict], dataset_name: Optional[str] = None):
558
+ """Returns a new SplitDict initialized from a Dict or List of `split_infos`."""
559
+ if isinstance(split_infos, dict):
560
+ split_infos = list(split_infos.values())
561
+
562
+ if dataset_name is None:
563
+ dataset_name = split_infos[0].get("dataset_name") if split_infos else None
564
+
565
+ split_dict = cls(dataset_name=dataset_name)
566
+
567
+ for split_info in split_infos:
568
+ if isinstance(split_info, dict):
569
+ split_info = SplitInfo(**split_info)
570
+ split_dict.add(split_info)
571
+
572
+ return split_dict
573
+
574
+ def to_split_dict(self):
575
+ """Returns a list of SplitInfo protos that we have."""
576
+ out = []
577
+ for split_name, split_info in self.items():
578
+ split_info = copy.deepcopy(split_info)
579
+ split_info.name = split_name
580
+ out.append(split_info)
581
+ return out
582
+
583
+ def copy(self):
584
+ return SplitDict.from_split_dict(self.to_split_dict(), self.dataset_name)
585
+
586
+ def _to_yaml_list(self) -> list:
587
+ out = [asdict(s) for s in self.to_split_dict()]
588
+ # we don't need the shard lengths in YAML, since it depends on max_shard_size and num_proc
589
+ for split_info_dict in out:
590
+ split_info_dict.pop("shard_lengths", None)
591
+ # we don't need the dataset_name attribute that is deprecated
592
+ for split_info_dict in out:
593
+ split_info_dict.pop("dataset_name", None)
594
+ return out
595
+
596
+ @classmethod
597
+ def _from_yaml_list(cls, yaml_data: list) -> "SplitDict":
598
+ return cls.from_split_dict(yaml_data)
599
+
600
+
601
+ @dataclass
602
+ class SplitGenerator:
603
+ """Defines the split information for the generator.
604
+
605
+ This should be used as returned value of
606
+ `GeneratorBasedBuilder._split_generators`.
607
+ See `GeneratorBasedBuilder._split_generators` for more info and example
608
+ of usage.
609
+
610
+ Args:
611
+ name (`str`):
612
+ Name of the `Split` for which the generator will
613
+ create the examples.
614
+ **gen_kwargs (additional keyword arguments):
615
+ Keyword arguments to forward to the `DatasetBuilder._generate_examples` method
616
+ of the builder.
617
+
618
+ Example:
619
+
620
+ ```py
621
+ >>> datasets.SplitGenerator(
622
+ ... name=datasets.Split.TRAIN,
623
+ ... gen_kwargs={"split_key": "train", "files": dl_manager.download_and_extract(url)},
624
+ ... )
625
+ ```
626
+ """
627
+
628
+ name: str
629
+ gen_kwargs: Dict = dataclasses.field(default_factory=dict)
630
+ split_info: SplitInfo = dataclasses.field(init=False)
631
+
632
+ def __post_init__(self):
633
+ self.name = str(self.name) # Make sure we convert NamedSplits in strings
634
+ NamedSplit(self.name) # check that it's a valid split name
635
+ self.split_info = SplitInfo(name=self.name)
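A short, self-contained sketch of how the split classes above behave (no dataset files needed; the dataset name is arbitrary):

```python
from datasets.splits import NamedSplit, Split, SplitDict, SplitInfo

# Named splits compare equal to their string names.
assert Split.TRAIN == "train"
assert Split("validation") == Split.VALIDATION
assert isinstance(Split("my_custom_split"), NamedSplit)

# SplitDict stores one SplitInfo per split name and keeps keys and names consistent.
splits = SplitDict(dataset_name="my_dataset")
splits.add(SplitInfo(name="train", num_examples=100))
splits.add(SplitInfo(name="test", num_examples=20))
print(splits.total_num_examples)  # 120
```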
env-llmeval/lib/python3.10/site-packages/datasets/streaming.py ADDED
@@ -0,0 +1,140 @@
1
+ import importlib
2
+ import inspect
3
+ from functools import wraps
4
+ from typing import TYPE_CHECKING, Optional
5
+
6
+ from .download.download_config import DownloadConfig
7
+ from .download.streaming_download_manager import (
8
+ xbasename,
9
+ xdirname,
10
+ xet_parse,
11
+ xexists,
12
+ xgetsize,
13
+ xglob,
14
+ xgzip_open,
15
+ xisdir,
16
+ xisfile,
17
+ xjoin,
18
+ xlistdir,
19
+ xnumpy_load,
20
+ xopen,
21
+ xpandas_read_csv,
22
+ xpandas_read_excel,
23
+ xPath,
24
+ xpyarrow_parquet_read_table,
25
+ xrelpath,
26
+ xsio_loadmat,
27
+ xsplit,
28
+ xsplitext,
29
+ xwalk,
30
+ xxml_dom_minidom_parse,
31
+ )
32
+ from .utils.logging import get_logger
33
+ from .utils.patching import patch_submodule
34
+ from .utils.py_utils import get_imports
35
+
36
+
37
+ logger = get_logger(__name__)
38
+
39
+
40
+ if TYPE_CHECKING:
41
+ from .builder import DatasetBuilder
42
+
43
+
44
+ def extend_module_for_streaming(module_path, download_config: Optional[DownloadConfig] = None):
45
+ """Extend the module to support streaming.
46
+
47
+ We patch some functions in the module to use `fsspec` to support data streaming:
48
+ - We use `fsspec.open` to open and read remote files. We patch the module function:
49
+ - `open`
50
+ - We use the "::" hop separator to join paths and navigate remote compressed/archive files. We patch the module
51
+ functions:
52
+ - `os.path.join`
53
+ - `pathlib.Path.joinpath` and `pathlib.Path.__truediv__` (called when using the "/" operator)
54
+
55
+ The patched functions are replaced with custom functions defined to work with the
56
+ :class:`~download.streaming_download_manager.StreamingDownloadManager`.
57
+
58
+ Args:
59
+ module_path: Path to the module to be extended.
60
+ download_config: mainly used for its token or storage_options, to support different platforms and auth types.
61
+ """
62
+
63
+ module = importlib.import_module(module_path)
64
+
65
+ # TODO(QL): always update the module to add subsequent new authentication without removing old ones
66
+ if hasattr(module, "_patched_for_streaming") and module._patched_for_streaming:
67
+ if isinstance(module._patched_for_streaming, DownloadConfig):
68
+ module._patched_for_streaming.token = download_config.token
69
+ module._patched_for_streaming.storage_options = download_config.storage_options
70
+ return
71
+
72
+ def wrap_auth(function):
73
+ @wraps(function)
74
+ def wrapper(*args, **kwargs):
75
+ return function(*args, download_config=download_config, **kwargs)
76
+
77
+ wrapper._decorator_name_ = "wrap_auth"
78
+ return wrapper
79
+
80
+ # open files in a streaming fashion
81
+ patch_submodule(module, "open", wrap_auth(xopen)).start()
82
+ patch_submodule(module, "os.listdir", wrap_auth(xlistdir)).start()
83
+ patch_submodule(module, "os.walk", wrap_auth(xwalk)).start()
84
+ patch_submodule(module, "glob.glob", wrap_auth(xglob)).start()
85
+ # allow to navigate in remote zip files
86
+ patch_submodule(module, "os.path.join", xjoin).start()
87
+ patch_submodule(module, "os.path.dirname", xdirname).start()
88
+ patch_submodule(module, "os.path.basename", xbasename).start()
89
+ patch_submodule(module, "os.path.relpath", xrelpath).start()
90
+ patch_submodule(module, "os.path.split", xsplit).start()
91
+ patch_submodule(module, "os.path.splitext", xsplitext).start()
92
+ # allow checks on paths
93
+ patch_submodule(module, "os.path.exists", wrap_auth(xexists)).start()
94
+ patch_submodule(module, "os.path.isdir", wrap_auth(xisdir)).start()
95
+ patch_submodule(module, "os.path.isfile", wrap_auth(xisfile)).start()
96
+ patch_submodule(module, "os.path.getsize", wrap_auth(xgetsize)).start()
97
+ patch_submodule(module, "pathlib.Path", xPath).start()
98
+ # file readers
99
+ patch_submodule(module, "gzip.open", wrap_auth(xgzip_open)).start()
100
+ patch_submodule(module, "numpy.load", wrap_auth(xnumpy_load)).start()
101
+ patch_submodule(module, "pandas.read_csv", wrap_auth(xpandas_read_csv), attrs=["__version__"]).start()
102
+ patch_submodule(module, "pandas.read_excel", wrap_auth(xpandas_read_excel), attrs=["__version__"]).start()
103
+ patch_submodule(module, "scipy.io.loadmat", wrap_auth(xsio_loadmat), attrs=["__version__"]).start()
104
+ patch_submodule(module, "xml.etree.ElementTree.parse", wrap_auth(xet_parse)).start()
105
+ patch_submodule(module, "xml.dom.minidom.parse", wrap_auth(xxml_dom_minidom_parse)).start()
106
+ # pyarrow: do not patch pyarrow attribute in packaged modules
107
+ if not module.__name__.startswith("datasets.packaged_modules."):
108
+ patch_submodule(module, "pyarrow.parquet.read_table", wrap_auth(xpyarrow_parquet_read_table)).start()
109
+ module._patched_for_streaming = download_config
110
+
111
+
112
+ def extend_dataset_builder_for_streaming(builder: "DatasetBuilder"):
113
+ """Extend the dataset builder module and the modules imported by it to support streaming.
114
+
115
+ Args:
116
+ builder (:class:`DatasetBuilder`): Dataset builder instance.
117
+ """
118
+ # this extends the open and os.path.join functions for data streaming
119
+ download_config = DownloadConfig(storage_options=builder.storage_options, token=builder.token)
120
+ extend_module_for_streaming(builder.__module__, download_config=download_config)
121
+ # if needed, we also have to extend additional internal imports (like wmt14 -> wmt_utils)
122
+ if not builder.__module__.startswith("datasets."): # check that it's not a packaged builder like csv
123
+ for imports in get_imports(inspect.getfile(builder.__class__)):
124
+ if imports[0] == "internal":
125
+ internal_import_name = imports[1]
126
+ internal_module_name = ".".join(builder.__module__.split(".")[:-1] + [internal_import_name])
127
+ extend_module_for_streaming(internal_module_name, download_config=download_config)
128
+
129
+ # builders can inherit from other builders that might use streaming functionality
130
+ # (for example, ImageFolder and AudioFolder inherit from FolderBuilder which implements examples generation)
131
+ # but these parents builders are not patched automatically as they are not instantiated, so we patch them here
132
+ from .builder import DatasetBuilder
133
+
134
+ parent_builder_modules = [
135
+ cls.__module__
136
+ for cls in type(builder).__mro__[1:] # make sure it's not the same module we've already patched
137
+ if issubclass(cls, DatasetBuilder) and cls.__module__ != DatasetBuilder.__module__
138
+ ] # check it's not a standard builder from datasets.builder
139
+ for module in parent_builder_modules:
140
+ extend_module_for_streaming(module, download_config=download_config)
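This patching is what ultimately backs `streaming=True` in `load_dataset`; a quick illustration (the dataset name is only an example):

```python
from datasets import load_dataset

# The builder's module is extended for streaming, so `open`, `os.path.join`, etc.
# operate on remote files lazily instead of downloading everything first.
streamed = load_dataset("ag_news", split="train", streaming=True)
print(next(iter(streamed)))  # first example, fetched on the fly
```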
env-llmeval/lib/python3.10/site-packages/datasets/table.py ADDED
@@ -0,0 +1,2360 @@
1
+ import copy
2
+ import os
3
+ from functools import partial
4
+ from itertools import groupby
5
+ from typing import TYPE_CHECKING, Callable, Iterator, List, Optional, Tuple, TypeVar, Union
6
+
7
+ import numpy as np
8
+ import pyarrow as pa
9
+ import pyarrow.compute as pc
10
+ import pyarrow.types
11
+
12
+ from . import config
13
+ from .utils.logging import get_logger
14
+
15
+
16
+ if TYPE_CHECKING:
17
+ from .features.features import Features, FeatureType
18
+
19
+
20
+ logger = get_logger(__name__)
21
+
22
+
23
+ def inject_arrow_table_documentation(arrow_table_method):
24
+ def wrapper(fn):
25
+ fn.__doc__ = arrow_table_method.__doc__ + (fn.__doc__ if fn.__doc__ is not None else "")
26
+ fn.__doc__ = fn.__doc__.replace("pyarrow.Table", "Table")
27
+ if hasattr(arrow_table_method, "__annotations__"):
28
+ fn.__annotations__ = arrow_table_method.__annotations__
29
+ return fn
30
+
31
+ return wrapper
32
+
33
+
34
+ def _in_memory_arrow_table_from_file(filename: str) -> pa.Table:
35
+ in_memory_stream = pa.input_stream(filename)
36
+ opened_stream = pa.ipc.open_stream(in_memory_stream)
37
+ pa_table = opened_stream.read_all()
38
+ return pa_table
39
+
40
+
41
+ def _in_memory_arrow_table_from_buffer(buffer: pa.Buffer) -> pa.Table:
42
+ stream = pa.BufferReader(buffer)
43
+ opened_stream = pa.ipc.open_stream(stream)
44
+ table = opened_stream.read_all()
45
+ return table
46
+
47
+
48
+ def _memory_mapped_record_batch_reader_from_file(filename: str) -> pa.RecordBatchStreamReader:
49
+ memory_mapped_stream = pa.memory_map(filename)
50
+ return pa.ipc.open_stream(memory_mapped_stream)
51
+
52
+
53
+ def read_schema_from_file(filename: str) -> pa.Schema:
54
+ """
55
+ Infer the Arrow table schema from a file without loading the whole file into memory.
56
+ Especially useful for very big files.
57
+ """
58
+ with pa.memory_map(filename) as memory_mapped_stream:
59
+ schema = pa.ipc.open_stream(memory_mapped_stream).schema
60
+ return schema
61
+
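A quick sketch of the schema helper above against an Arrow IPC stream file written with pyarrow (the file name is arbitrary):

```python
import pyarrow as pa
from datasets.table import read_schema_from_file

# Write a small Arrow IPC stream file, then inspect its schema without loading the data.
pa_table = pa.table({"id": [1, 2, 3], "text": ["a", "b", "c"]})
with pa.OSFile("example.arrow", "wb") as sink:
    with pa.ipc.new_stream(sink, pa_table.schema) as writer:
        writer.write_table(pa_table)

print(read_schema_from_file("example.arrow"))  # id: int64, text: string
```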
62
+
63
+ def _memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:
64
+ opened_stream = _memory_mapped_record_batch_reader_from_file(filename)
65
+ pa_table = opened_stream.read_all()
66
+ return pa_table
67
+
68
+
69
+ def _deepcopy(x, memo: dict):
70
+ """deepcopy a regular class instance"""
71
+ cls = x.__class__
72
+ result = cls.__new__(cls)
73
+ memo[id(x)] = result
74
+ for k, v in x.__dict__.items():
75
+ setattr(result, k, copy.deepcopy(v, memo))
76
+ return result
77
+
78
+
79
+ def _interpolation_search(arr: List[int], x: int) -> int:
80
+ """
81
+ Return the position i of a sorted array so that arr[i] <= x < arr[i+1]
82
+
83
+ Args:
84
+ arr (`List[int]`): non-empty sorted list of integers
85
+ x (`int`): query
86
+
87
+ Returns:
88
+ `int`: the position i so that arr[i] <= x < arr[i+1]
89
+
90
+ Raises:
91
+ `IndexError`: if the array is empty or if the query is outside the array values
92
+ """
93
+ i, j = 0, len(arr) - 1
94
+ while i < j and arr[i] <= x < arr[j]:
95
+ k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i]))
96
+ if arr[k] <= x < arr[k + 1]:
97
+ return k
98
+ elif arr[k] < x:
99
+ i, j = k + 1, j
100
+ else:
101
+ i, j = i, k
102
+ raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")
103
+
104
+
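A tiny worked example of the `_interpolation_search` helper above, on the kind of cumulative-offsets array that `IndexedTableMixin` builds below:

```python
from datasets.table import _interpolation_search

# offsets[i] is the first row index of batch i; the last entry is the total row count.
offsets = [0, 3, 8, 10]

assert _interpolation_search(offsets, 4) == 1  # batch 1, since 3 <= 4 < 8
assert _interpolation_search(offsets, 0) == 0  # batch 0, since 0 <= 0 < 3
assert _interpolation_search(offsets, 9) == 2  # batch 2, since 8 <= 9 < 10
```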
105
+ class IndexedTableMixin:
106
+ def __init__(self, table: pa.Table):
107
+ self._schema: pa.Schema = table.schema
108
+ self._batches: List[pa.RecordBatch] = [
109
+ recordbatch for recordbatch in table.to_batches() if len(recordbatch) > 0
110
+ ]
111
+ self._offsets: np.ndarray = np.cumsum([0] + [len(b) for b in self._batches], dtype=np.int64)
112
+
113
+ def fast_gather(self, indices: Union[List[int], np.ndarray]) -> pa.Table:
114
+ """
115
+ Create a pa.Table by gathering the records at the specified indices. Should be faster
116
+ than pa.concat_tables(table.fast_slice(int(i) % table.num_rows, 1) for i in indices) since NumPy can compute
117
+ the binary searches in parallel, in highly optimized C
118
+ """
119
+ if not len(indices):
120
+ raise ValueError("Indices must be non-empty")
121
+ batch_indices = np.searchsorted(self._offsets, indices, side="right") - 1
122
+ return pa.Table.from_batches(
123
+ [
124
+ self._batches[batch_idx].slice(i - self._offsets[batch_idx], 1)
125
+ for batch_idx, i in zip(batch_indices, indices)
126
+ ],
127
+ schema=self._schema,
128
+ )
129
+
130
+ def fast_slice(self, offset=0, length=None) -> pa.Table:
131
+ """
132
+ Slice the Table using interpolation search.
133
+ The behavior is the same as `pyarrow.Table.slice` but it's significantly faster.
134
+
135
+ Interpolation search is used to find the start and end indexes of the batches we want to keep.
136
+ The batches to keep are then concatenated to form the sliced Table.
137
+ """
138
+ if offset < 0:
139
+ raise IndexError("Offset must be non-negative")
140
+ elif offset >= self._offsets[-1] or (length is not None and length <= 0):
141
+ return pa.Table.from_batches([], schema=self._schema)
142
+ i = _interpolation_search(self._offsets, offset)
143
+ if length is None or length + offset >= self._offsets[-1]:
144
+ batches = self._batches[i:]
145
+ batches[0] = batches[0].slice(offset - self._offsets[i])
146
+ else:
147
+ j = _interpolation_search(self._offsets, offset + length - 1)
148
+ batches = self._batches[i : j + 1]
149
+ batches[-1] = batches[-1].slice(0, offset + length - self._offsets[j])
150
+ batches[0] = batches[0].slice(offset - self._offsets[i])
151
+ return pa.Table.from_batches(batches, schema=self._schema)
152
+
153
+
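For reference, a sketch exercising the fast paths above through a concrete subclass (`InMemoryTable`, defined further down in this file):

```python
import pyarrow as pa
from datasets.table import InMemoryTable

table = InMemoryTable(pa.table({"col": list(range(10))}))

print(table.fast_slice(offset=2, length=3).to_pydict())  # {'col': [2, 3, 4]}
print(table.fast_gather([0, 5, 9]).to_pydict())          # {'col': [0, 5, 9]}
```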
154
+ class Table(IndexedTableMixin):
155
+ """
156
+ Wraps a pyarrow Table by using composition.
157
+ This is the base class for `InMemoryTable`, `MemoryMappedTable` and `ConcatenationTable`.
158
+
159
+ It implements all the basic attributes/methods of the pyarrow Table class except
160
+ the Table transforms: `slice, filter, flatten, combine_chunks, cast, add_column,
161
+ append_column, remove_column, set_column, rename_columns` and `drop`.
162
+
163
+ The implementation of these methods differs for the subclasses.
164
+ """
165
+
166
+ def __init__(self, table: pa.Table):
167
+ super().__init__(table)
168
+ self.table = table
169
+
170
+ def __deepcopy__(self, memo: dict):
171
+ # arrow tables are immutable, so there's no need to copy self.table
172
+ # moreover calling deepcopy on a pyarrow table seems to make pa.total_allocated_bytes() decrease for some reason
173
+ # by adding it to the memo, self.table won't be copied
174
+ memo[id(self.table)] = self.table
175
+ # same for the recordbatches used by the index
176
+ memo[id(self._batches)] = list(self._batches)
177
+ return _deepcopy(self, memo)
178
+
179
+ def validate(self, *args, **kwargs):
180
+ """
181
+ Perform validation checks. An exception is raised if validation fails.
182
+
183
+ By default only cheap validation checks are run. Pass `full=True`
184
+ for thorough validation checks (potentially `O(n)`).
185
+
186
+ Args:
187
+ full (`bool`, defaults to `False`):
188
+ If `True`, run expensive checks, otherwise cheap checks only.
189
+
190
+ Raises:
191
+ `pa.lib.ArrowInvalid`: if validation fails
192
+ """
193
+ return self.table.validate(*args, **kwargs)
194
+
195
+ def equals(self, *args, **kwargs):
196
+ """
197
+ Check if contents of two tables are equal.
198
+
199
+ Args:
200
+ other ([`~datasets.table.Table`]):
201
+ Table to compare against.
202
+ check_metadata `bool`, defaults to `False`):
203
+ Whether schema metadata equality should be checked as well.
204
+
205
+ Returns:
206
+ `bool`
207
+ """
208
+ args = tuple(arg.table if isinstance(arg, Table) else arg for arg in args)
209
+ kwargs = {k: v.table if isinstance(v, Table) else v for k, v in kwargs.items()}
210
+ return self.table.equals(*args, **kwargs)
211
+
212
+ def to_batches(self, *args, **kwargs):
213
+ """
214
+ Convert Table to list of (contiguous) `RecordBatch` objects.
215
+
216
+ Args:
217
+ max_chunksize (`int`, defaults to `None`):
218
+ Maximum size for `RecordBatch` chunks. Individual chunks may be
219
+ smaller depending on the chunk layout of individual columns.
220
+
221
+ Returns:
222
+ `List[pyarrow.RecordBatch]`
223
+ """
224
+ return self.table.to_batches(*args, **kwargs)
225
+
226
+ def to_pydict(self, *args, **kwargs):
227
+ """
228
+ Convert the Table to a `dict` or `OrderedDict`.
229
+
230
+ Returns:
231
+ `dict`
232
+ """
233
+ return self.table.to_pydict(*args, **kwargs)
234
+
235
+ def to_pylist(self, *args, **kwargs):
236
+ """
237
+ Convert the Table to a list
238
+
239
+ Returns:
240
+ `list`
241
+ """
242
+ return self.table.to_pylist(*args, **kwargs)
243
+
244
+ def to_pandas(self, *args, **kwargs):
245
+ """
246
+ Convert to a pandas-compatible NumPy array or DataFrame, as appropriate.
247
+
248
+ Args:
249
+ memory_pool (`MemoryPool`, defaults to `None`):
250
+ Arrow MemoryPool to use for allocations. Uses the default memory
251
+ pool if not passed.
252
+ strings_to_categorical (`bool`, defaults to `False`):
253
+ Encode string (UTF8) and binary types to `pandas.Categorical`.
254
+ categories (`list`, defaults to `empty`):
255
+ List of fields that should be returned as `pandas.Categorical`. Only
256
+ applies to table-like data structures.
257
+ zero_copy_only (`bool`, defaults to `False`):
258
+ Raise an `ArrowException` if this function call would require copying
259
+ the underlying data.
260
+ integer_object_nulls (`bool`, defaults to `False`):
261
+ Cast integers with nulls to objects.
262
+ date_as_object (`bool`, defaults to `True`):
263
+ Cast dates to objects. If `False`, convert to `datetime64[ns]` dtype.
264
+ timestamp_as_object (`bool`, defaults to `False`):
265
+ Cast non-nanosecond timestamps (`np.datetime64`) to objects. This is
266
+ useful if you have timestamps that don't fit in the normal date
267
+ range of nanosecond timestamps (1678 CE-2262 CE).
268
+ If `False`, all timestamps are converted to `datetime64[ns]` dtype.
269
+ use_threads (`bool`, defaults to `True`):
270
+ Whether to parallelize the conversion using multiple threads.
271
+ deduplicate_objects (`bool`, defaults to `False`):
272
+ Do not create multiple copies of Python objects when created, to save
273
+ on memory use. Conversion will be slower.
274
+ ignore_metadata (`bool`, defaults to `False`):
275
+ If `True`, do not use the 'pandas' metadata to reconstruct the
276
+ DataFrame index, if present.
277
+ safe (`bool`, defaults to `True`):
278
+ For certain data types, a cast is needed in order to store the
279
+ data in a pandas DataFrame or Series (e.g. timestamps are always
280
+ stored as nanoseconds in pandas). This option controls whether it
281
+ is a safe cast or not.
282
+ split_blocks (`bool`, defaults to `False`):
283
+ If `True`, generate one internal "block" for each column when
284
+ creating a pandas.DataFrame from a `RecordBatch` or `Table`. While this
285
+ can temporarily reduce memory use, note that various pandas operations
286
+ can trigger "consolidation" which may balloon memory use.
287
+ self_destruct (`bool`, defaults to `False`):
288
+ EXPERIMENTAL: If `True`, attempt to deallocate the originating Arrow
289
+ memory while converting the Arrow object to pandas. If you use the
290
+ object after calling `to_pandas` with this option it will crash your
291
+ program.
292
+ types_mapper (`function`, defaults to `None`):
293
+ A function mapping a pyarrow DataType to a pandas `ExtensionDtype`.
294
+ This can be used to override the default pandas type for conversion
295
+ of built-in pyarrow types or in absence of `pandas_metadata` in the
296
+ Table schema. The function receives a pyarrow DataType and is
297
+ expected to return a pandas `ExtensionDtype` or `None` if the
298
+ default conversion should be used for that type. If you have
299
+ a dictionary mapping, you can pass `dict.get` as function.
300
+
301
+ Returns:
302
+ `pandas.Series` or `pandas.DataFrame`: `pandas.Series` or `pandas.DataFrame` depending on type of object
303
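+
+ Example (illustrative sketch; `table` stands for any `datasets.table.Table` instance already holding data):
+
+ ```python
+ >>> df = table.to_pandas()
+ >>> df_cat = table.to_pandas(strings_to_categorical=True)
+ ```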
+ """
304
+ return self.table.to_pandas(*args, **kwargs)
305
+
306
+ def to_string(self, *args, **kwargs):
307
+ return self.table.to_string(*args, **kwargs)
308
+
309
+ def to_reader(self, max_chunksize: Optional[int] = None):
310
+ """
311
+ Convert the Table to a RecordBatchReader.
312
+
313
+ Note that this method is zero-copy, it merely exposes the same data under a different API.
314
+
315
+ Args:
316
+ max_chunksize (`int`, defaults to `None`):
317
+ Maximum size for RecordBatch chunks. Individual chunks may be smaller depending
318
+ on the chunk layout of individual columns.
319
+
320
+ Returns:
321
+ `pyarrow.RecordBatchReader`
322
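+
+ Example (illustrative sketch; `table` stands for any `datasets.table.Table` instance):
+
+ ```python
+ >>> reader = table.to_reader(max_chunksize=1000)
+ >>> for batch in reader:  # each `batch` is a `pyarrow.RecordBatch` with at most 1000 rows
+ ...     pass
+ ```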
+ """
323
+ return self.table.to_reader(max_chunksize=max_chunksize)
324
+
325
+ def field(self, *args, **kwargs):
326
+ """
327
+ Select a schema field by its column name or numeric index.
328
+
329
+ Args:
330
+ i (`Union[int, str]`):
331
+ The index or name of the field to retrieve.
332
+
333
+ Returns:
334
+ `pyarrow.Field`
335
+ """
336
+ return self.table.field(*args, **kwargs)
337
+
338
+ def column(self, *args, **kwargs):
339
+ """
340
+ Select a column by its column name, or numeric index.
341
+
342
+ Args:
343
+ i (`Union[int, str]`):
344
+ The index or name of the column to retrieve.
345
+
346
+ Returns:
347
+ `pyarrow.ChunkedArray`
348
+ """
349
+ return self.table.column(*args, **kwargs)
350
+
351
+ def itercolumns(self, *args, **kwargs):
352
+ """
353
+ Iterator over all columns in their numerical order.
354
+
355
+ Yields:
356
+ `pyarrow.ChunkedArray`
357
+ """
358
+ return self.table.itercolumns(*args, **kwargs)
359
+
360
+ @property
361
+ def schema(self):
362
+ """
363
+ Schema of the table and its columns.
364
+
365
+ Returns:
366
+ `pyarrow.Schema`
367
+ """
368
+ return self.table.schema
369
+
370
+ @property
371
+ def columns(self):
372
+ """
373
+ List of all columns in numerical order.
374
+
375
+ Returns:
376
+ `List[pa.ChunkedArray]`
377
+ """
378
+ return self.table.columns
379
+
380
+ @property
381
+ def num_columns(self):
382
+ """
383
+ Number of columns in this table.
384
+
385
+ Returns:
386
+ int
387
+ """
388
+ return self.table.num_columns
389
+
390
+ @property
391
+ def num_rows(self):
392
+ """
393
+ Number of rows in this table.
394
+
395
+ Due to the definition of a table, all columns have the same number of
396
+ rows.
397
+
398
+ Returns:
399
+ int
400
+ """
401
+ return self.table.num_rows
402
+
403
+ @property
404
+ def shape(self):
405
+ """
406
+ Dimensions of the table: (#rows, #columns).
407
+
408
+ Returns:
409
+ `(int, int)`: Number of rows and number of columns.
410
+ """
411
+ return self.table.shape
412
+
413
+ @property
414
+ def nbytes(self):
415
+ """
416
+ Total number of bytes consumed by the elements of the table.
417
+ """
418
+ return self.table.nbytes
419
+
420
+ @property
421
+ def column_names(self):
422
+ """
423
+ Names of the table's columns.
424
+ """
425
+ return self.table.column_names
426
+
427
+ def __eq__(self, other):
428
+ return self.equals(other)
429
+
430
+ def __getitem__(self, i):
431
+ return self.table[i]
432
+
433
+ def __len__(self):
434
+ return len(self.table)
435
+
436
+ def __repr__(self):
437
+ return self.table.__repr__().replace("pyarrow.Table", self.__class__.__name__)
438
+
439
+ def __str__(self):
440
+ return self.table.__str__().replace("pyarrow.Table", self.__class__.__name__)
441
+
442
+ def slice(self, *args, **kwargs):
443
+ """
444
+ Compute zero-copy slice of this Table.
445
+
446
+ Args:
447
+ offset (`int`, defaults to `0`):
448
+ Offset from start of table to slice.
449
+ length (`int`, defaults to `None`):
450
+ Length of slice (default is until end of table starting from
451
+ offset).
452
+
453
+ Returns:
454
+ `datasets.table.Table`
455
+ """
456
+ raise NotImplementedError()
457
+
458
+ def filter(self, *args, **kwargs):
459
+ """
460
+ Select records from a Table. See `pyarrow.compute.filter` for full usage.
461
+ """
462
+ raise NotImplementedError()
463
+
464
+ def flatten(self, *args, **kwargs):
465
+ """
466
+ Flatten this Table. Each column with a struct type is flattened
467
+ into one column per struct field. Other columns are left unchanged.
468
+
469
+ Args:
470
+ memory_pool (`MemoryPool`, defaults to `None`):
471
+ For memory allocations, if required, otherwise use default pool.
472
+
473
+ Returns:
474
+ `datasets.table.Table`
475
+ """
476
+ raise NotImplementedError()
477
+
478
+ def combine_chunks(self, *args, **kwargs):
479
+ """
480
+ Make a new table by combining the chunks this table has.
481
+
482
+ All the underlying chunks in the `ChunkedArray` of each column are
483
+ concatenated into zero or one chunk.
484
+
485
+ Args:
486
+ memory_pool (`MemoryPool`, defaults to `None`):
487
+ For memory allocations, if required, otherwise use default pool.
488
+
489
+ Returns:
490
+ `datasets.table.Table`
491
+ """
492
+ raise NotImplementedError()
493
+
494
+ def cast(self, *args, **kwargs):
495
+ """
496
+ Cast table values to another schema.
497
+
498
+ Args:
499
+ target_schema (`Schema`):
500
+ Schema to cast to, the names and order of fields must match.
501
+ safe (`bool`, defaults to `True`):
502
+ Check for overflows or other unsafe conversions.
503
+
504
+ Returns:
505
+ `datasets.table.Table`
506
+ """
507
+ raise NotImplementedError()
508
+
509
+ def replace_schema_metadata(self, *args, **kwargs):
510
+ """
511
+ EXPERIMENTAL: Create shallow copy of table by replacing schema
512
+ key-value metadata with the indicated new metadata (which may be None,
513
+ which deletes any existing metadata).
514
+
515
+ Args:
516
+ metadata (`dict`, defaults to `None`):
517
+
518
+ Returns:
519
+ `datasets.table.Table`: shallow_copy
520
+ """
521
+ raise NotImplementedError()
522
+
523
+ def add_column(self, *args, **kwargs):
524
+ """
525
+ Add column to Table at position.
526
+
527
+ A new table is returned with the column added, the original table
528
+ object is left unchanged.
529
+
530
+ Args:
531
+ i (`int`):
532
+ Index to place the column at.
533
+ field_ (`Union[str, pyarrow.Field]`):
534
+ If a string is passed then the type is deduced from the column
535
+ data.
536
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
537
+ Column data.
538
+
539
+ Returns:
540
+ `datasets.table.Table`: New table with the passed column added.
541
+ """
542
+ raise NotImplementedError()
543
+
544
+ def append_column(self, *args, **kwargs):
545
+ """
546
+ Append column at end of columns.
547
+
548
+ Args:
549
+ field_ (`Union[str, pyarrow.Field]`):
550
+ If a string is passed then the type is deduced from the column
551
+ data.
552
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
553
+ Column data.
554
+
555
+ Returns:
556
+ `datasets.table.Table`: New table with the passed column added.
557
+ """
558
+ raise NotImplementedError()
559
+
560
+ def remove_column(self, *args, **kwargs):
561
+ """
562
+ Create new Table with the indicated column removed.
563
+
564
+ Args:
565
+ i (`int`):
566
+ Index of column to remove.
567
+
568
+ Returns:
569
+ `datasets.table.Table`: New table without the column.
570
+ """
571
+ raise NotImplementedError()
572
+
573
+ def set_column(self, *args, **kwargs):
574
+ """
575
+ Replace column in Table at position.
576
+
577
+ Args:
578
+ i (`int`):
579
+ Index to place the column at.
580
+ field_ (`Union[str, pyarrow.Field]`):
581
+ If a string is passed then the type is deduced from the column
582
+ data.
583
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
584
+ Column data.
585
+
586
+ Returns:
587
+ `datasets.table.Table`: New table with the passed column set.
588
+ """
589
+ raise NotImplementedError()
590
+
591
+ def rename_columns(self, *args, **kwargs):
592
+ """
593
+ Create new table with columns renamed to provided names.
594
+ """
595
+ raise NotImplementedError()
596
+
597
+ def drop(self, *args, **kwargs):
598
+ """
599
+ Drop one or more columns and return a new table.
600
+
601
+ Args:
602
+ columns (`List[str]`):
603
+ List of field names referencing existing columns.
604
+
605
+ Raises:
606
+ `KeyError`: if any of the passed column names do not exist.
607
+
608
+ Returns:
609
+ `datasets.table.Table`: New table without the columns.
610
+ """
611
+ raise NotImplementedError()
612
+
613
+ def select(self, *args, **kwargs):
614
+ """
615
+ Select columns of the table.
616
+
617
+ Returns a new table with the specified columns, and metadata preserved.
618
+
619
+ Args:
620
+ columns (:obj:`Union[List[str], List[int]]`):
621
+ The column names or integer indices to select.
622
+
623
+ Returns:
624
+ `datasets.table.Table`: table with only a subset of the columns
625
+ """
626
+ raise NotImplementedError()
627
+
628
+
629
+ class TableBlock(Table):
630
+ """
631
+ `TableBlock` is the allowed class inside a `ConcatenationTable`.
632
+ Only `MemoryMappedTable` and `InMemoryTable` are `TableBlock`.
633
+ This is because we don't want a `ConcatenationTable` made out of other `ConcatenationTables`.
634
+ """
635
+
636
+ pass
637
+
638
+
639
+ class InMemoryTable(TableBlock):
640
+ """
641
+ A table is said to be in-memory when it is loaded into the user's RAM.
642
+
643
+ Pickling it copies all the data into memory.
644
+ Its implementation is simple and uses the underlying pyarrow Table methods directly.
645
+
646
+ This is different from the `MemoryMappedTable`, for which pickling doesn't copy all the
647
+ data into memory. For a `MemoryMappedTable`, unpickling instead reloads the table from the disk.
648
+
649
+ `InMemoryTable` must be used when the data fits in memory, while `MemoryMappedTable` is reserved for
650
+ data bigger than memory or when you want the memory footprint of your application to
651
+ stay low.
652
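+
+ Example (a minimal sketch with made-up data):
+
+ ```python
+ >>> import pyarrow as pa
+ >>> from datasets.table import InMemoryTable
+ >>> t = InMemoryTable(pa.table({"text": ["foo", "bar"], "label": [0, 1]}))
+ >>> t.num_rows
+ 2
+ ```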
+ """
653
+
654
+ @classmethod
655
+ def from_file(cls, filename: str):
656
+ table = _in_memory_arrow_table_from_file(filename)
657
+ return cls(table)
658
+
659
+ @classmethod
660
+ def from_buffer(cls, buffer: pa.Buffer):
661
+ table = _in_memory_arrow_table_from_buffer(buffer)
662
+ return cls(table)
663
+
664
+ @classmethod
665
+ def from_pandas(cls, *args, **kwargs):
666
+ """
667
+ Convert pandas.DataFrame to an Arrow Table.
668
+
669
+ The column types in the resulting Arrow Table are inferred from the
670
+ dtypes of the pandas.Series in the DataFrame. In the case of non-object
671
+ Series, the NumPy dtype is translated to its Arrow equivalent. In the
672
+ case of `object`, we need to guess the datatype by looking at the
673
+ Python objects in this Series.
674
+
675
+ Be aware that Series of the `object` dtype don't carry enough
676
+ information to always lead to a meaningful Arrow type. In the case that
677
+ we cannot infer a type, e.g. because the DataFrame is of length 0 or
678
+ the Series only contains `None/nan` objects, the type is set to
679
+ null. This behavior can be avoided by constructing an explicit schema
680
+ and passing it to this function.
681
+
682
+ Args:
683
+ df (`pandas.DataFrame`):
684
+ schema (`pyarrow.Schema`, *optional*):
685
+ The expected schema of the Arrow Table. This can be used to
686
+ indicate the type of columns if we cannot infer it automatically.
687
+ If passed, the output will have exactly this schema. Columns
688
+ specified in the schema that are not found in the DataFrame columns
689
+ or its index will raise an error. Additional columns or index
690
+ levels in the DataFrame which are not specified in the schema will
691
+ be ignored.
692
+ preserve_index (`bool`, *optional*):
693
+ Whether to store the index as an additional column in the resulting
694
+ `Table`. The default of None will store the index as a column,
695
+ except for RangeIndex which is stored as metadata only. Use
696
+ `preserve_index=True` to force it to be stored as a column.
697
+ nthreads (`int`, defaults to `None` (may use up to system CPU count threads)):
698
+ If greater than 1, convert columns to Arrow in parallel using
699
+ indicated number of threads.
700
+ columns (`List[str]`, *optional*):
701
+ List of column to be converted. If `None`, use all columns.
702
+ safe (`bool`, defaults to `True`):
703
+ Check for overflows or other unsafe conversions.
704
+
705
+ Returns:
706
+ `datasets.table.Table`:
707
+
708
+ Examples:
709
+ ```python
710
+ >>> import pandas as pd
711
+ >>> import pyarrow as pa
712
+ >>> df = pd.DataFrame({
713
+ ... 'int': [1, 2],
714
+ ... 'str': ['a', 'b']
715
+ ... })
716
+ >>> pa.Table.from_pandas(df)
717
+ <pyarrow.lib.Table object at 0x7f05d1fb1b40>
718
+ ```
719
+ """
720
+ return cls(pa.Table.from_pandas(*args, **kwargs))
721
+
722
+ @classmethod
723
+ def from_arrays(cls, *args, **kwargs):
724
+ """
725
+ Construct a Table from Arrow arrays.
726
+
727
+ Args:
728
+ arrays (`List[Union[pyarrow.Array, pyarrow.ChunkedArray]]`):
729
+ Equal-length arrays that should form the table.
730
+ names (`List[str]`, *optional*):
731
+ Names for the table columns. If not passed, schema must be passed.
732
+ schema (`Schema`, defaults to `None`):
733
+ Schema for the created table. If not passed, names must be passed.
734
+ metadata (`Union[dict, Mapping]`, defaults to `None`):
735
+ Optional metadata for the schema (if inferred).
736
+
737
+ Returns:
738
+ `datasets.table.Table`
739
+ """
740
+ return cls(pa.Table.from_arrays(*args, **kwargs))
741
+
742
+ @classmethod
743
+ def from_pydict(cls, *args, **kwargs):
744
+ """
745
+ Construct a Table from Arrow arrays or columns.
746
+
747
+ Args:
748
+ mapping (`Union[dict, Mapping]`):
749
+ A mapping of strings to Arrays or Python lists.
750
+ schema (`Schema`, defaults to `None`):
751
+ If not passed, will be inferred from the Mapping values
752
+ metadata (`Union[dict, Mapping]`, defaults to `None`):
753
+ Optional metadata for the schema (if inferred).
754
+
755
+ Returns:
756
+ `datasets.table.Table`
757
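+
+ Example (illustrative values):
+
+ ```python
+ >>> InMemoryTable.from_pydict({"a": [1, 2], "b": ["x", "y"]})
+ ```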
+ """
758
+ return cls(pa.Table.from_pydict(*args, **kwargs))
759
+
760
+ @classmethod
761
+ def from_pylist(cls, mapping, *args, **kwargs):
762
+ """
763
+ Construct a Table from list of rows / dictionaries.
764
+
765
+ Args:
766
+ mapping (`List[dict]`):
767
+ A list of dicts, each mapping column names to row values.
768
+ schema (`Schema`, defaults to `None`):
769
+ If not passed, will be inferred from the Mapping values
770
+ metadata (`Union[dict, Mapping]`, defaults to `None`):
771
+ Optional metadata for the schema (if inferred).
772
+
773
+ Returns:
774
+ `datasets.table.Table`
775
+ """
776
+ return cls(pa.Table.from_pylist(mapping, *args, **kwargs))
777
+
778
+ @classmethod
779
+ def from_batches(cls, *args, **kwargs):
780
+ """
781
+ Construct a Table from a sequence or iterator of Arrow `RecordBatches`.
782
+
783
+ Args:
784
+ batches (`Union[Sequence[pyarrow.RecordBatch], Iterator[pyarrow.RecordBatch]]`):
785
+ Sequence of `RecordBatch` to be converted, all schemas must be equal.
786
+ schema (`Schema`, defaults to `None`):
787
+ If not passed, will be inferred from the first `RecordBatch`.
788
+
789
+ Returns:
790
+ `datasets.table.Table`:
791
+ """
792
+ return cls(pa.Table.from_batches(*args, **kwargs))
793
+
794
+ def slice(self, offset=0, length=None):
795
+ """
796
+ Compute zero-copy slice of this Table.
797
+
798
+ Args:
799
+ offset (`int`, defaults to `0`):
800
+ Offset from start of table to slice.
801
+ length (`int`, defaults to `None`):
802
+ Length of slice (default is until end of table starting from
803
+ offset).
804
+
805
+ Returns:
806
+ `datasets.table.Table`
807
+ """
808
+ # Use fast slicing here
809
+ return InMemoryTable(self.fast_slice(offset=offset, length=length))
810
+
811
+ def filter(self, *args, **kwargs):
812
+ """
813
+ Select records from a Table. See `pyarrow.compute.filter` for full usage.
814
+ """
815
+ return InMemoryTable(self.table.filter(*args, **kwargs))
816
+
817
+ def flatten(self, *args, **kwargs):
818
+ """
819
+ Flatten this Table. Each column with a struct type is flattened
820
+ into one column per struct field. Other columns are left unchanged.
821
+
822
+ Args:
823
+ memory_pool (`MemoryPool`, defaults to `None`):
824
+ For memory allocations, if required, otherwise use default pool.
825
+
826
+ Returns:
827
+ `datasets.table.Table`
828
+ """
829
+ return InMemoryTable(table_flatten(self.table, *args, **kwargs))
830
+
831
+ def combine_chunks(self, *args, **kwargs):
832
+ """
833
+ Make a new table by combining the chunks this table has.
834
+
835
+ All the underlying chunks in the `ChunkedArray` of each column are
836
+ concatenated into zero or one chunk.
837
+
838
+ Args:
839
+ memory_pool (`MemoryPool`, defaults to `None`):
840
+ For memory allocations, if required, otherwise use default pool.
841
+
842
+ Returns:
843
+ `datasets.table.Table`
844
+ """
845
+ return InMemoryTable(self.table.combine_chunks(*args, **kwargs))
846
+
847
+ def cast(self, *args, **kwargs):
848
+ """
849
+ Cast table values to another schema.
850
+
851
+ Args:
852
+ target_schema (`Schema`):
853
+ Schema to cast to, the names and order of fields must match.
854
+ safe (`bool`, defaults to `True`):
855
+ Check for overflows or other unsafe conversions.
856
+
857
+ Returns:
858
+ `datasets.table.Table`
859
+ """
860
+ return InMemoryTable(table_cast(self.table, *args, **kwargs))
861
+
862
+ def replace_schema_metadata(self, *args, **kwargs):
863
+ """
864
+ EXPERIMENTAL: Create shallow copy of table by replacing schema
865
+ key-value metadata with the indicated new metadata (which may be `None`,
866
+ which deletes any existing metadata).
867
+
868
+ Args:
869
+ metadata (`dict`, defaults to `None`):
870
+
871
+ Returns:
872
+ `datasets.table.Table`: shallow_copy
873
+ """
874
+ return InMemoryTable(self.table.replace_schema_metadata(*args, **kwargs))
875
+
876
+ def add_column(self, *args, **kwargs):
877
+ """
878
+ Add column to Table at position.
879
+
880
+ A new table is returned with the column added, the original table
881
+ object is left unchanged.
882
+
883
+ Args:
884
+ i (`int`):
885
+ Index to place the column at.
886
+ field_ (`Union[str, pyarrow.Field]`):
887
+ If a string is passed then the type is deduced from the column
888
+ data.
889
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
890
+ Column data.
891
+
892
+ Returns:
893
+ `datasets.table.Table`: New table with the passed column added.
894
+ """
895
+ return InMemoryTable(self.table.add_column(*args, **kwargs))
896
+
897
+ def append_column(self, *args, **kwargs):
898
+ """
899
+ Append column at end of columns.
900
+
901
+ Args:
902
+ field_ (`Union[str, pyarrow.Field]`):
903
+ If a string is passed then the type is deduced from the column
904
+ data.
905
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
906
+ Column data.
907
+
908
+ Returns:
909
+ `datasets.table.Table`:
910
+ New table with the passed column added.
911
+ """
912
+ return InMemoryTable(self.table.append_column(*args, **kwargs))
913
+
914
+ def remove_column(self, *args, **kwargs):
915
+ """
916
+ Create new Table with the indicated column removed.
917
+
918
+ Args:
919
+ i (`int`):
920
+ Index of column to remove.
921
+
922
+ Returns:
923
+ `datasets.table.Table`:
924
+ New table without the column.
925
+ """
926
+ return InMemoryTable(self.table.remove_column(*args, **kwargs))
927
+
928
+ def set_column(self, *args, **kwargs):
929
+ """
930
+ Replace column in Table at position.
931
+
932
+ Args:
933
+ i (`int`):
934
+ Index to place the column at.
935
+ field_ (`Union[str, pyarrow.Field]`):
936
+ If a string is passed then the type is deduced from the column
937
+ data.
938
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
939
+ Column data.
940
+
941
+ Returns:
942
+ `datasets.table.Table`:
943
+ New table with the passed column set.
944
+ """
945
+ return InMemoryTable(self.table.set_column(*args, **kwargs))
946
+
947
+ def rename_columns(self, *args, **kwargs):
948
+ """
949
+ Create new table with columns renamed to provided names.
950
+ """
951
+ return InMemoryTable(self.table.rename_columns(*args, **kwargs))
952
+
953
+ def drop(self, *args, **kwargs):
954
+ """
955
+ Drop one or more columns and return a new table.
956
+
957
+ Args:
958
+ columns (`List[str]`):
959
+ List of field names referencing existing columns.
960
+
961
+ Raises:
962
+ `KeyError`: if any of the passed column names do not exist.
963
+
964
+ Returns:
965
+ `datasets.table.Table`:
966
+ New table without the columns.
967
+ """
968
+ return InMemoryTable(self.table.drop(*args, **kwargs))
969
+
970
+ def select(self, *args, **kwargs):
971
+ """
972
+ Select columns of the table.
973
+
974
+ Returns a new table with the specified columns, and metadata preserved.
975
+
976
+ Args:
977
+ columns (:obj:`Union[List[str], List[int]]`):
978
+ The column names or integer indices to select.
979
+
980
+ Returns:
981
+ :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved.
982
+ """
983
+ return InMemoryTable(self.table.select(*args, **kwargs))
984
+
985
+
986
+ # The MemoryMappedTable needs replays to properly reload tables from the disk
987
+ Replay = Tuple[str, tuple, dict]
988
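+ # A replay is (method name, positional args, keyword args); for example a recorded transform
+ # might look like ("drop", (["label"],), {}) -- illustrative values only.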
+
989
+
990
+ class MemoryMappedTable(TableBlock):
991
+ """
992
+ A table is said to be memory-mapped when it doesn't use the user's RAM but loads the data
993
+ from the disk instead.
994
+
995
+ Pickling it doesn't copy the data into memory.
996
+ Instead, only the path to the memory mapped arrow file is pickled, as well as the list
997
+ of transforms to "replay" when reloading the table from the disk.
998
+
999
+ Its implementation requires storing a history of all the transforms that were applied
1000
+ to the underlying pyarrow Table, so that they can be "replayed" when reloading the Table
1001
+ from the disk.
1002
+
1003
+ This is different from the `InMemoryTable` table, for which pickling does copy all the
1004
+ data in memory.
1005
+
1006
+ `InMemoryTable` must be used when the data fits in memory, while `MemoryMappedTable` is reserved for
1007
+ data bigger than memory or when you want the memory footprint of your application to
1008
+ stay low.
1009
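+
+ Example (a sketch; "path/to/data.arrow" is a hypothetical Arrow IPC file on disk):
+
+ ```python
+ >>> from datasets.table import MemoryMappedTable
+ >>> t = MemoryMappedTable.from_file("path/to/data.arrow")
+ >>> t = t.slice(0, 100)  # recorded as a replay and re-applied when the table is reloaded from disk
+ ```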
+ """
1010
+
1011
+ def __init__(self, table: pa.Table, path: str, replays: Optional[List[Replay]] = None):
1012
+ super().__init__(table)
1013
+ self.path = os.path.abspath(path)
1014
+ self.replays: List[Replay] = replays if replays is not None else []
1015
+
1016
+ @classmethod
1017
+ def from_file(cls, filename: str, replays=None):
1018
+ table = _memory_mapped_arrow_table_from_file(filename)
1019
+ table = cls._apply_replays(table, replays)
1020
+ return cls(table, filename, replays)
1021
+
1022
+ def __getstate__(self):
1023
+ return {"path": self.path, "replays": self.replays}
1024
+
1025
+ def __setstate__(self, state):
1026
+ path = state["path"]
1027
+ replays = state["replays"]
1028
+ table = _memory_mapped_arrow_table_from_file(path)
1029
+ table = self._apply_replays(table, replays)
1030
+ MemoryMappedTable.__init__(self, table, path=path, replays=replays)
1031
+
1032
+ @staticmethod
1033
+ def _apply_replays(table: pa.Table, replays: Optional[List[Replay]] = None) -> pa.Table:
1034
+ if replays is not None:
1035
+ for name, args, kwargs in replays:
1036
+ if name == "cast":
1037
+ table = table_cast(table, *args, **kwargs)
1038
+ elif name == "flatten":
1039
+ table = table_flatten(table, *args, **kwargs)
1040
+ else:
1041
+ table = getattr(table, name)(*args, **kwargs)
1042
+ return table
1043
+
1044
+ def _append_replay(self, replay: Replay) -> List[Replay]:
1045
+ replays = copy.deepcopy(self.replays)
1046
+ replays.append(replay)
1047
+ return replays
1048
+
1049
+ def slice(self, offset=0, length=None):
1050
+ """
1051
+ Compute zero-copy slice of this Table.
1052
+
1053
+ Args:
1054
+ offset (`int`, defaults to `0`):
1055
+ Offset from start of table to slice.
1056
+ length (`int`, defaults to `None`):
1057
+ Length of slice (default is until end of table starting from
1058
+ offset).
1059
+
1060
+ Returns:
1061
+ `datasets.table.Table`
1062
+ """
1063
+ replay = ("slice", (offset, length), {})
1064
+ replays = self._append_replay(replay)
1065
+ # Use fast slicing here
1066
+ return MemoryMappedTable(self.fast_slice(offset=offset, length=length), self.path, replays)
1067
+
1068
+ def filter(self, *args, **kwargs):
1069
+ """
1070
+ Select records from a Table. See `pyarrow.compute.filter` for full usage.
1071
+ """
1072
+ replay = ("filter", copy.deepcopy(args), copy.deepcopy(kwargs))
1073
+ replays = self._append_replay(replay)
1074
+ return MemoryMappedTable(self.table.filter(*args, **kwargs), self.path, replays)
1075
+
1076
+ def flatten(self, *args, **kwargs):
1077
+ """
1078
+ Flatten this Table. Each column with a struct type is flattened
1079
+ into one column per struct field. Other columns are left unchanged.
1080
+
1081
+ Args:
1082
+ memory_pool (`MemoryPool`, defaults to `None`):
1083
+ For memory allocations, if required, otherwise use default pool.
1084
+
1085
+ Returns:
1086
+ `datasets.table.Table`
1087
+ """
1088
+ replay = ("flatten", copy.deepcopy(args), copy.deepcopy(kwargs))
1089
+ replays = self._append_replay(replay)
1090
+ return MemoryMappedTable(table_flatten(self.table, *args, **kwargs), self.path, replays)
1091
+
1092
+ def combine_chunks(self, *args, **kwargs):
1093
+ """
1094
+ Make a new table by combining the chunks this table has.
1095
+
1096
+ All the underlying chunks in the ChunkedArray of each column are
1097
+ concatenated into zero or one chunk.
1098
+
1099
+ Args:
1100
+ memory_pool (`MemoryPool`, defaults to `None`):
1101
+ For memory allocations, if required, otherwise use default pool.
1102
+
1103
+ Returns:
1104
+ `datasets.table.Table`
1105
+ """
1106
+ replay = ("combine_chunks", copy.deepcopy(args), copy.deepcopy(kwargs))
1107
+ replays = self._append_replay(replay)
1108
+ return MemoryMappedTable(self.table.combine_chunks(*args, **kwargs), self.path, replays)
1109
+
1110
+ def cast(self, *args, **kwargs):
1111
+ """
1112
+ Cast table values to another schema.
1113
+
1114
+ Args:
1115
+ target_schema (`Schema`):
1116
+ Schema to cast to, the names and order of fields must match.
1117
+ safe (`bool`, defaults to `True`):
1118
+ Check for overflows or other unsafe conversions.
1119
+
1120
+ Returns:
1121
+ `datasets.table.Table`
1122
+ """
1123
+ replay = ("cast", copy.deepcopy(args), copy.deepcopy(kwargs))
1124
+ replays = self._append_replay(replay)
1125
+ return MemoryMappedTable(table_cast(self.table, *args, **kwargs), self.path, replays)
1126
+
1127
+ def replace_schema_metadata(self, *args, **kwargs):
1128
+ """
1129
+ EXPERIMENTAL: Create shallow copy of table by replacing schema
1130
+ key-value metadata with the indicated new metadata (which may be None,
1131
+ which deletes any existing metadata).
1132
+
1133
+ Args:
1134
+ metadata (`dict`, defaults to `None`):
1135
+
1136
+ Returns:
1137
+ `datasets.table.Table`: shallow_copy
1138
+ """
1139
+ replay = ("replace_schema_metadata", copy.deepcopy(args), copy.deepcopy(kwargs))
1140
+ replays = self._append_replay(replay)
1141
+ return MemoryMappedTable(self.table.replace_schema_metadata(*args, **kwargs), self.path, replays)
1142
+
1143
+ def add_column(self, *args, **kwargs):
1144
+ """
1145
+ Add column to Table at position.
1146
+
1147
+ A new table is returned with the column added, the original table
1148
+ object is left unchanged.
1149
+
1150
+ Args:
1151
+ i (`int`):
1152
+ Index to place the column at.
1153
+ field_ (`Union[str, pyarrow.Field]`):
1154
+ If a string is passed then the type is deduced from the column
1155
+ data.
1156
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
1157
+ Column data.
1158
+
1159
+ Returns:
1160
+ `datasets.table.Table`: New table with the passed column added.
1161
+ """
1162
+ replay = ("add_column", copy.deepcopy(args), copy.deepcopy(kwargs))
1163
+ replays = self._append_replay(replay)
1164
+ return MemoryMappedTable(self.table.add_column(*args, **kwargs), self.path, replays)
1165
+
1166
+ def append_column(self, *args, **kwargs):
1167
+ """
1168
+ Append column at end of columns.
1169
+
1170
+ Args:
1171
+ field_ (`Union[str, pyarrow.Field]`):
1172
+ If a string is passed then the type is deduced from the column
1173
+ data.
1174
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
1175
+ Column data.
1176
+
1177
+ Returns:
1178
+ `datasets.table.Table`:
1179
+ New table with the passed column added.
1180
+ """
1181
+ replay = ("append_column", copy.deepcopy(args), copy.deepcopy(kwargs))
1182
+ replays = self._append_replay(replay)
1183
+ return MemoryMappedTable(self.table.append_column(*args, **kwargs), self.path, replays)
1184
+
1185
+ def remove_column(self, *args, **kwargs):
1186
+ """
1187
+ Create new Table with the indicated column removed.
1188
+
1189
+ Args:
1190
+ i (`int`):
1191
+ Index of column to remove.
1192
+
1193
+ Returns:
1194
+ `datasets.table.Table`:
1195
+ New table without the column.
1196
+ """
1197
+ replay = ("remove_column", copy.deepcopy(args), copy.deepcopy(kwargs))
1198
+ replays = self._append_replay(replay)
1199
+ return MemoryMappedTable(self.table.remove_column(*args, **kwargs), self.path, replays)
1200
+
1201
+ def set_column(self, *args, **kwargs):
1202
+ """
1203
+ Replace column in Table at position.
1204
+
1205
+ Args:
1206
+ i (`int`):
1207
+ Index to place the column at.
1208
+ field_ (`Union[str, pyarrow.Field]`):
1209
+ If a string is passed then the type is deduced from the column
1210
+ data.
1211
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
1212
+ Column data.
1213
+
1214
+ Returns:
1215
+ `datasets.table.Table`:
1216
+ New table with the passed column set.
1217
+ """
1218
+ replay = ("set_column", copy.deepcopy(args), copy.deepcopy(kwargs))
1219
+ replays = self._append_replay(replay)
1220
+ return MemoryMappedTable(self.table.set_column(*args, **kwargs), self.path, replays)
1221
+
1222
+ def rename_columns(self, *args, **kwargs):
1223
+ """
1224
+ Create new table with columns renamed to provided names.
1225
+ """
1226
+ replay = ("rename_columns", copy.deepcopy(args), copy.deepcopy(kwargs))
1227
+ replays = self._append_replay(replay)
1228
+ return MemoryMappedTable(self.table.rename_columns(*args, **kwargs), self.path, replays)
1229
+
1230
+ def drop(self, *args, **kwargs):
1231
+ """
1232
+ Drop one or more columns and return a new table.
1233
+
1234
+ Args:
1235
+ columns (`List[str]`):
1236
+ List of field names referencing existing columns.
1237
+
1238
+ Raises:
1239
+ `KeyError`: if any of the passed column names do not exist.
1240
+
1241
+ Returns:
1242
+ `datasets.table.Table`:
1243
+ New table without the columns.
1244
+ """
1245
+ replay = ("drop", copy.deepcopy(args), copy.deepcopy(kwargs))
1246
+ replays = self._append_replay(replay)
1247
+ return MemoryMappedTable(self.table.drop(*args, **kwargs), self.path, replays)
1248
+
1249
+ def select(self, *args, **kwargs):
1250
+ """
1251
+ Select columns of the table.
1252
+
1253
+ Returns a new table with the specified columns, and metadata preserved.
1254
+
1255
+ Args:
1256
+ columns (:obj:`Union[List[str], List[int]]`):
1257
+ The column names or integer indices to select.
1258
+
1259
+ Returns:
1260
+ :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved.
1261
+ """
1262
+ replay = ("select", copy.deepcopy(args), copy.deepcopy(kwargs))
1263
+ replays = self._append_replay(replay)
1264
+ return MemoryMappedTable(self.table.select(*args, **kwargs), self.path, replays)
1265
+
1266
+
1267
+ # A ConcatenationTable is the concatenation of several tables.
1268
+ # The ``blocks`` attribute stores a list of lists of blocks.
1269
+ # The first axis concatenates the tables along the axis 0 (it appends rows),
1270
+ # while the second axis concatenates tables along the axis 1 (it appends columns).
1271
+ TableBlockContainer = TypeVar("TableBlockContainer", TableBlock, List[TableBlock], List[List[TableBlock]])
1272
+
1273
+
1274
+ class ConcatenationTable(Table):
1275
+ """
1276
+ The table comes from the concatenation of several tables called blocks.
1277
+ It enables concatenation on both axis 0 (append rows) and axis 1 (append columns).
1278
+
1279
+ The underlying tables are called "blocks" and can be either `InMemoryTable`
1280
+ or `MemoryMappedTable` objects.
1281
+ This makes it possible to combine tables that come from memory or that are memory mapped.
1282
+ When a `ConcatenationTable` is pickled, each block is pickled:
1283
+ - the `InMemoryTable` objects are pickled by copying all the data in memory.
1284
+ - the `MemoryMappedTable` objects are pickled without copying the data into memory.
1285
+ Instead, only the path to the memory mapped arrow file is pickled, as well as the list
1286
+ of transforms to "replays" when reloading the table from the disk.
1287
+
1288
+ Its implementation requires storing each block separately.
1289
+ The `blocks` attribute stores a list of lists of blocks.
1290
+ The first axis concatenates the tables along the axis 0 (it appends rows),
1291
+ while the second axis concatenates tables along the axis 1 (it appends columns).
1292
+
1293
+ If some columns are missing when concatenating on axis 0, they are filled with null values.
1294
+ This is done using `pyarrow.concat_tables` with `promote=True` (or `promote_options="default"` on pyarrow>=14).
1295
+
1296
+ You can access the fully combined table by accessing the `ConcatenationTable.table` attribute,
1297
+ and the blocks by accessing the `ConcatenationTable.blocks` attribute.
1298
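+
+ Example (a sketch with small in-memory blocks; real usage typically mixes memory-mapped blocks):
+
+ ```python
+ >>> import pyarrow as pa
+ >>> from datasets.table import InMemoryTable, ConcatenationTable
+ >>> t1 = InMemoryTable(pa.table({"a": [0, 1]}))
+ >>> t2 = InMemoryTable(pa.table({"b": ["x", "y"]}))
+ >>> t = ConcatenationTable.from_tables([t1, t2], axis=1)  # append columns
+ >>> t.column_names
+ ['a', 'b']
+ ```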
+ """
1299
+
1300
+ def __init__(self, table: pa.Table, blocks: List[List[TableBlock]]):
1301
+ super().__init__(table)
1302
+ self.blocks = blocks
1303
+ # Check that all the blocks have the right type.
1304
+ # Only InMemoryTable and MemoryMappedTable are allowed.
1305
+ for subtables in blocks:
1306
+ for subtable in subtables:
1307
+ if not isinstance(subtable, TableBlock):
1308
+ raise TypeError(
1309
+ "The blocks of a ConcatenationTable must be InMemoryTable or MemoryMappedTable objects"
1310
+ f", but got {subtable}."
1311
+ )
1312
+
1313
+ def __getstate__(self):
1314
+ return {"blocks": self.blocks}
1315
+
1316
+ def __setstate__(self, state):
1317
+ blocks = state["blocks"]
1318
+ table = self._concat_blocks_horizontally_and_vertically(blocks)
1319
+ ConcatenationTable.__init__(self, table, blocks=blocks)
1320
+
1321
+ @staticmethod
1322
+ def _concat_blocks(blocks: List[Union[TableBlock, pa.Table]], axis: int = 0) -> pa.Table:
1323
+ pa_tables = [table.table if hasattr(table, "table") else table for table in blocks]
1324
+ if axis == 0:
1325
+ # we set promote=True to fill missing columns with null values
1326
+ if config.PYARROW_VERSION.major < 14:
1327
+ return pa.concat_tables(pa_tables, promote=True)
1328
+ else:
1329
+ return pa.concat_tables(pa_tables, promote_options="default")
1330
+ elif axis == 1:
1331
+ for i, table in enumerate(pa_tables):
1332
+ if i == 0:
1333
+ pa_table = table
1334
+ else:
1335
+ for name, col in zip(table.column_names, table.columns):
1336
+ pa_table = pa_table.append_column(name, col)
1337
+ return pa_table
1338
+ else:
1339
+ raise ValueError("'axis' must be either 0 or 1")
1340
+
1341
+ @classmethod
1342
+ def _concat_blocks_horizontally_and_vertically(cls, blocks: List[List[TableBlock]]) -> pa.Table:
1343
+ pa_tables_to_concat_vertically = []
1344
+ for i, tables in enumerate(blocks):
1345
+ if not tables:
1346
+ continue
1347
+ pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1)
1348
+ pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated)
1349
+ return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0)
1350
+
1351
+ @classmethod
1352
+ def _merge_blocks(cls, blocks: TableBlockContainer, axis: Optional[int] = None) -> TableBlockContainer:
1353
+ if axis is not None:
1354
+ merged_blocks = []
1355
+ for is_in_memory, block_group in groupby(blocks, key=lambda x: isinstance(x, InMemoryTable)):
1356
+ if is_in_memory:
1357
+ block_group = [InMemoryTable(cls._concat_blocks(list(block_group), axis=axis))]
1358
+ merged_blocks += list(block_group)
1359
+ else: # both
1360
+ merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks]
1361
+ if all(len(row_block) == 1 for row_block in merged_blocks):
1362
+ merged_blocks = cls._merge_blocks(
1363
+ [block for row_block in merged_blocks for block in row_block], axis=0
1364
+ )
1365
+ return merged_blocks
1366
+
1367
+ @classmethod
1368
+ def _consolidate_blocks(cls, blocks: TableBlockContainer) -> TableBlockContainer:
1369
+ if isinstance(blocks, TableBlock):
1370
+ return blocks
1371
+ elif isinstance(blocks[0], TableBlock):
1372
+ return cls._merge_blocks(blocks, axis=0)
1373
+ else:
1374
+ return cls._merge_blocks(blocks)
1375
+
1376
+ @classmethod
1377
+ def from_blocks(cls, blocks: TableBlockContainer) -> "ConcatenationTable":
1378
+ blocks = cls._consolidate_blocks(blocks)
1379
+ if isinstance(blocks, TableBlock):
1380
+ table = blocks
1381
+ return cls(table.table, [[table]])
1382
+ elif isinstance(blocks[0], TableBlock):
1383
+ table = cls._concat_blocks(blocks, axis=0)
1384
+ blocks = [[t] for t in blocks]
1385
+ return cls(table, blocks)
1386
+ else:
1387
+ table = cls._concat_blocks_horizontally_and_vertically(blocks)
1388
+ return cls(table, blocks)
1389
+
1390
+ @classmethod
1391
+ def from_tables(cls, tables: List[Union[pa.Table, Table]], axis: int = 0) -> "ConcatenationTable":
1392
+ """Create `ConcatenationTable` from list of tables.
1393
+
1394
+ Args:
1395
+ tables (list of `Table` or list of `pyarrow.Table`):
1396
+ List of tables.
1397
+ axis (`{0, 1}`, defaults to `0`, meaning over rows):
1398
+ Axis to concatenate over, where `0` means over rows (vertically) and `1` means over columns
1399
+ (horizontally).
1400
+
1401
+ <Added version="1.6.0"/>
1402
+ """
1403
+
1404
+ def to_blocks(table: Union[pa.Table, Table]) -> List[List[TableBlock]]:
1405
+ if isinstance(table, pa.Table):
1406
+ return [[InMemoryTable(table)]]
1407
+ elif isinstance(table, ConcatenationTable):
1408
+ return copy.deepcopy(table.blocks)
1409
+ else:
1410
+ return [[table]]
1411
+
1412
+ def _slice_row_block(row_block: List[TableBlock], length: int) -> Tuple[List[TableBlock], List[TableBlock]]:
1413
+ sliced = [table.slice(0, length) for table in row_block]
1414
+ remainder = [table.slice(length, len(row_block[0]) - length) for table in row_block]
1415
+ return sliced, remainder
1416
+
1417
+ def _split_both_like(
1418
+ result: List[List[TableBlock]], blocks: List[List[TableBlock]]
1419
+ ) -> Tuple[List[List[TableBlock]], List[List[TableBlock]]]:
1420
+ """
1421
+ Make sure each row_block contains the same num_rows so they can be concatenated on axis=1.
1422
+
1423
+ To do so, we modify both blocks sets to have the same row_blocks boundaries.
1424
+ For example, if `result` has 2 row_blocks of 3 rows and `blocks` has 3 row_blocks of 2 rows,
1425
+ we modify both to have 4 row_blocks of size 2, 1, 1 and 2:
1426
+
1427
+ [ x x x | x x x ]
1428
+ + [ y y | y y | y y ]
1429
+ -----------------------------
1430
+ = [ x x | x | x | x x ]
1431
+ [ y y | y | y | y y ]
1432
+
1433
+ """
1434
+ result, blocks = list(result), list(blocks)
1435
+ new_result, new_blocks = [], []
1436
+ while result and blocks:
1437
+ # we slice the longest row block to save two row blocks of same length
1438
+ # and we replace the long row block by its remainder if necessary
1439
+ if len(result[0][0]) > len(blocks[0][0]):
1440
+ new_blocks.append(blocks[0])
1441
+ sliced, result[0] = _slice_row_block(result[0], len(blocks.pop(0)[0]))
1442
+ new_result.append(sliced)
1443
+ elif len(result[0][0]) < len(blocks[0][0]):
1444
+ new_result.append(result[0])
1445
+ sliced, blocks[0] = _slice_row_block(blocks[0], len(result.pop(0)[0]))
1446
+ new_blocks.append(sliced)
1447
+ else:
1448
+ new_result.append(result.pop(0))
1449
+ new_blocks.append(blocks.pop(0))
1450
+ if result or blocks:
1451
+ raise ValueError("Failed to concatenate on axis=1 because tables don't have the same number of rows")
1452
+ return new_result, new_blocks
1453
+
1454
+ def _extend_blocks(
1455
+ result: List[List[TableBlock]], blocks: List[List[TableBlock]], axis: int = 0
1456
+ ) -> List[List[TableBlock]]:
1457
+ if axis == 0:
1458
+ result.extend(blocks)
1459
+ elif axis == 1:
1460
+ # We make sure each row_block has the same num_rows
1461
+ result, blocks = _split_both_like(result, blocks)
1462
+ for i, row_block in enumerate(blocks):
1463
+ result[i].extend(row_block)
1464
+ return result
1465
+
1466
+ blocks = to_blocks(tables[0])
1467
+ for table in tables[1:]:
1468
+ table_blocks = to_blocks(table)
1469
+ blocks = _extend_blocks(blocks, table_blocks, axis=axis)
1470
+ return cls.from_blocks(blocks)
1471
+
1472
+ @property
1473
+ def _slices(self):
1474
+ offset = 0
1475
+ for tables in self.blocks:
1476
+ length = len(tables[0])
1477
+ yield (offset, length)
1478
+ offset += length
1479
+
1480
+ def slice(self, offset=0, length=None):
1481
+ """
1482
+ Compute zero-copy slice of this Table.
1483
+
1484
+ Args:
1485
+ offset (`int`, defaults to `0`):
1486
+ Offset from start of table to slice.
1487
+ length (`int`, defaults to `None`):
1488
+ Length of slice (default is until end of table starting from
1489
+ offset).
1490
+
1491
+ Returns:
1492
+ `datasets.table.Table`
1493
+ """
1494
+ table = self.table.slice(offset, length=length)
1495
+ length = length if length is not None else self.num_rows - offset
1496
+ blocks = []
1497
+ for tables in self.blocks:
1498
+ n_rows = len(tables[0])
1499
+ if length == 0:
1500
+ break
1501
+ elif n_rows <= offset:
1502
+ offset = offset - n_rows
1503
+ elif n_rows <= offset + length:
1504
+ blocks.append([t.slice(offset) for t in tables])
1505
+ length, offset = length + offset - n_rows, 0
1506
+ else:
1507
+ blocks.append([t.slice(offset, length) for t in tables])
1508
+ length, offset = 0, 0
1509
+ return ConcatenationTable(table, blocks)
1510
+
1511
+ def filter(self, mask, *args, **kwargs):
1512
+ """
1513
+ Select records from a Table. See `pyarrow.compute.filter` for full usage.
1514
+ """
1515
+ table = self.table.filter(mask, *args, **kwargs)
1516
+ blocks = []
1517
+ for (offset, length), tables in zip(self._slices, self.blocks):
1518
+ submask = mask.slice(offset, length)
1519
+ blocks.append([t.filter(submask, *args, **kwargs) for t in tables])
1520
+ return ConcatenationTable(table, blocks)
1521
+
1522
+ def flatten(self, *args, **kwargs):
1523
+ """
1524
+ Flatten this Table. Each column with a struct type is flattened
1525
+ into one column per struct field. Other columns are left unchanged.
1526
+
1527
+ Args:
1528
+ memory_pool (`MemoryPool`, defaults to `None`):
1529
+ For memory allocations, if required, otherwise use default pool.
1530
+
1531
+ Returns:
1532
+ `datasets.table.Table`
1533
+ """
1534
+ table = table_flatten(self.table, *args, **kwargs)
1535
+ blocks = []
1536
+ for tables in self.blocks:
1537
+ blocks.append([t.flatten(*args, **kwargs) for t in tables])
1538
+ return ConcatenationTable(table, blocks)
1539
+
1540
+ def combine_chunks(self, *args, **kwargs):
1541
+ """
1542
+ Make a new table by combining the chunks this table has.
1543
+
1544
+ All the underlying chunks in the `ChunkedArray` of each column are
1545
+ concatenated into zero or one chunk.
1546
+
1547
+ Args:
1548
+ memory_pool (`MemoryPool`, defaults to `None`):
1549
+ For memory allocations, if required, otherwise use default pool.
1550
+
1551
+ Returns:
1552
+ `datasets.table.Table`
1553
+ """
1554
+ table = self.table.combine_chunks(*args, **kwargs)
1555
+ blocks = []
1556
+ for tables in self.blocks:
1557
+ blocks.append([t.combine_chunks(*args, **kwargs) for t in tables])
1558
+ return ConcatenationTable(table, blocks)
1559
+
1560
+ def cast(self, target_schema, *args, **kwargs):
1561
+ """
1562
+ Cast table values to another schema.
1563
+
1564
+ Args:
1565
+ target_schema (`Schema`):
1566
+ Schema to cast to, the names and order of fields must match.
1567
+ safe (`bool`, defaults to `True`):
1568
+ Check for overflows or other unsafe conversions.
1569
+
1570
+ Returns:
1571
+ `datasets.table.Table`
1572
+ """
1573
+ from .features import Features
1574
+
1575
+ table = table_cast(self.table, target_schema, *args, **kwargs)
1576
+ target_features = Features.from_arrow_schema(target_schema)
1577
+ blocks = []
1578
+ for subtables in self.blocks:
1579
+ new_tables = []
1580
+ fields = list(target_schema)
1581
+ for subtable in subtables:
1582
+ subfields = []
1583
+ for name in subtable.column_names:
1584
+ subfields.append(fields.pop(next(i for i, field in enumerate(fields) if field.name == name)))
1585
+ subfeatures = Features({subfield.name: target_features[subfield.name] for subfield in subfields})
1586
+ subschema = subfeatures.arrow_schema
1587
+ new_tables.append(subtable.cast(subschema, *args, **kwargs))
1588
+ blocks.append(new_tables)
1589
+ return ConcatenationTable(table, blocks)
1590
+
1591
+ def replace_schema_metadata(self, *args, **kwargs):
1592
+ """
1593
+ EXPERIMENTAL: Create shallow copy of table by replacing schema
1594
+ key-value metadata with the indicated new metadata (which may be `None`,
1595
+ which deletes any existing metadata).
1596
+
1597
+ Args:
1598
+ metadata (`dict`, defaults to `None`):
1599
+
1600
+ Returns:
1601
+ `datasets.table.Table`: shallow_copy
1602
+ """
1603
+ table = self.table.replace_schema_metadata(*args, **kwargs)
1604
+ blocks = []
1605
+ for tables in self.blocks:
1606
+ blocks.append([t.replace_schema_metadata(*args, **kwargs) for t in tables])
1607
+ return ConcatenationTable(table, blocks)
1608
+
1609
+ def add_column(self, *args, **kwargs):
1610
+ """
1611
+ Add column to Table at position.
1612
+
1613
+ A new table is returned with the column added, the original table
1614
+ object is left unchanged.
1615
+
1616
+ Args:
1617
+ i (`int`):
1618
+ Index to place the column at.
1619
+ field_ (`Union[str, pyarrow.Field]`):
1620
+ If a string is passed then the type is deduced from the column
1621
+ data.
1622
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
1623
+ Column data.
1624
+
1625
+ Returns:
1626
+ `datasets.table.Table`: New table with the passed column added.
1627
+ """
1628
+ raise NotImplementedError()
1629
+
1630
+ def append_column(self, *args, **kwargs):
1631
+ """
1632
+ Append column at end of columns.
1633
+
1634
+ Args:
1635
+ field_ (`Union[str, pyarrow.Field]`):
1636
+ If a string is passed then the type is deduced from the column
1637
+ data.
1638
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
1639
+ Column data.
1640
+
1641
+ Returns:
1642
+ `datasets.table.Table`:
1643
+ New table with the passed column added.
1644
+ """
1645
+ raise NotImplementedError()
1646
+
1647
+ def remove_column(self, i, *args, **kwargs):
1648
+ """
1649
+ Create new Table with the indicated column removed.
1650
+
1651
+ Args:
1652
+ i (`int`):
1653
+ Index of column to remove.
1654
+
1655
+ Returns:
1656
+ `datasets.table.Table`:
1657
+ New table without the column.
1658
+ """
1659
+ table = self.table.remove_column(i, *args, **kwargs)
1660
+ name = self.table.column_names[i]
1661
+ blocks = []
1662
+ for tables in self.blocks:
1663
+ blocks.append(
1664
+ [
1665
+ t.remove_column(t.column_names.index(name), *args, **kwargs) if name in t.column_names else t
1666
+ for t in tables
1667
+ ]
1668
+ )
1669
+ return ConcatenationTable(table, blocks)
1670
+
1671
+ def set_column(self, *args, **kwargs):
1672
+ """
1673
+ Replace column in Table at position.
1674
+
1675
+ Args:
1676
+ i (`int`):
1677
+ Index to place the column at.
1678
+ field_ (`Union[str, pyarrow.Field]`):
1679
+ If a string is passed then the type is deduced from the column
1680
+ data.
1681
+ column (`Union[pyarrow.Array, List[pyarrow.Array]]`):
1682
+ Column data.
1683
+
1684
+ Returns:
1685
+ `datasets.table.Table`:
1686
+ New table with the passed column set.
1687
+ """
1688
+ raise NotImplementedError()
1689
+
1690
+ def rename_columns(self, names, *args, **kwargs):
1691
+ """
1692
+ Create new table with columns renamed to provided names.
1693
+ """
1694
+ table = self.table.rename_columns(names, *args, **kwargs)
1695
+ names = dict(zip(self.table.column_names, names))
1696
+ blocks = []
1697
+ for tables in self.blocks:
1698
+ blocks.append(
1699
+ [t.rename_columns([names[name] for name in t.column_names], *args, **kwargs) for t in tables]
1700
+ )
1701
+ return ConcatenationTable(table, blocks)
1702
+
1703
+ def drop(self, columns, *args, **kwargs):
1704
+ """
1705
+ Drop one or more columns and return a new table.
1706
+
1707
+ Args:
1708
+ columns (`List[str]`):
1709
+ List of field names referencing existing columns.
1710
+
1711
+ Raises:
1712
+ `KeyError`: if any of the passed column names do not exist.
1713
+
1714
+ Returns:
1715
+ `datasets.table.Table`:
1716
+ New table without the columns.
1717
+ """
1718
+ table = self.table.drop(columns, *args, **kwargs)
1719
+ blocks = []
1720
+ for tables in self.blocks:
1721
+ blocks.append([t.drop([c for c in columns if c in t.column_names], *args, **kwargs) for t in tables])
1722
+ return ConcatenationTable(table, blocks)
1723
+
1724
+ def select(self, columns, *args, **kwargs):
1725
+ """
1726
+ Select columns of the table.
1727
+
1728
+ Returns a new table with the specified columns, and metadata preserved.
1729
+
1730
+ Args:
1731
+ columns (:obj:`Union[List[str], List[int]]`):
1732
+ The column names or integer indices to select.
1733
+
1734
+ Returns:
1735
+ :class:`datasets.table.Table`: New table with the specified columns, and metadata preserved.
1736
+ """
1737
+ table = self.table.select(columns, *args, **kwargs)
1738
+ blocks = []
1739
+ for tables in self.blocks:
1740
+ blocks.append([t.select([c for c in columns if c in t.column_names], *args, **kwargs) for t in tables])
1741
+ return ConcatenationTable(table, blocks)
1742
+
1743
+
1744
+ def concat_tables(tables: List[Table], axis: int = 0) -> Table:
1745
+ """
1746
+ Concatenate tables.
1747
+
1748
+ Args:
1749
+ tables (list of `Table`):
1750
+ List of tables to be concatenated.
1751
+ axis (`{0, 1}`, defaults to `0`, meaning over rows):
1752
+ Axis to concatenate over, where `0` means over rows (vertically) and `1` means over columns
1753
+ (horizontally).
1754
+
1755
+ <Added version="1.6.0"/>
1756
+ Returns:
1757
+ `datasets.table.Table`:
1758
+ If the number of input tables is > 1, then the returned table is a `datasets.table.ConcatenationTable`.
1759
+ Otherwise if there's only one table, it is returned as is.
1760
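+
+ Example (illustrative, with two small in-memory tables):
+
+ ```python
+ >>> import pyarrow as pa
+ >>> from datasets.table import InMemoryTable, concat_tables
+ >>> t1 = InMemoryTable(pa.table({"a": [0, 1]}))
+ >>> t2 = InMemoryTable(pa.table({"a": [2, 3]}))
+ >>> concat_tables([t1, t2]).num_rows
+ 4
+ ```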
+ """
1761
+ tables = list(tables)
1762
+ if len(tables) == 1:
1763
+ return tables[0]
1764
+ return ConcatenationTable.from_tables(tables, axis=axis)
1765
+
1766
+
1767
+ def list_table_cache_files(table: Table) -> List[str]:
1768
+ """
1769
+ Get the cache files that are loaded by the table.
1770
+ Cache files are used when parts of the table come from the disk via memory mapping.
1771
+
1772
+ Returns:
1773
+ `List[str]`:
1774
+ A list of paths to the cache files loaded by the table.
1775
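+
+ Example (sketch; a purely in-memory table has no cache files):
+
+ ```python
+ >>> import pyarrow as pa
+ >>> from datasets.table import InMemoryTable, list_table_cache_files
+ >>> list_table_cache_files(InMemoryTable(pa.table({"a": [1]})))
+ []
+ ```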
+ """
1776
+ if isinstance(table, ConcatenationTable):
1777
+ cache_files = []
1778
+ for subtables in table.blocks:
1779
+ for subtable in subtables:
1780
+ cache_files += list_table_cache_files(subtable)
1781
+ return cache_files
1782
+ elif isinstance(table, MemoryMappedTable):
1783
+ return [table.path]
1784
+ else:
1785
+ return []
1786
+
1787
+
1788
+ def _wrap_for_chunked_arrays(func):
1789
+ """Apply the function on each chunk of a `pyarrow.ChunkedArray`, or on the array directly"""
1790
+
1791
+ def wrapper(array, *args, **kwargs):
1792
+ if isinstance(array, pa.ChunkedArray):
1793
+ return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1794
+ else:
1795
+ return func(array, *args, **kwargs)
1796
+
1797
+ return wrapper
1798
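+
+ # Illustrative sketch (not part of the library): a function written for a plain `pa.Array`
+ # can also accept a `pa.ChunkedArray` once decorated, because it is applied chunk by chunk, e.g.:
+ #
+ #     @_wrap_for_chunked_arrays
+ #     def _double(array):
+ #         return pc.multiply(array, 2)
+ #
+ #     _double(pa.chunked_array([[1, 2], [3]]))  # -> chunks [2, 4] and [6]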
+
1799
+
1800
+ def _are_list_values_of_length(array: pa.ListArray, length: int) -> bool:
1801
+ """Check if all the sub-lists of a `pa.ListArray` have the specified length."""
1802
+ return pc.all(pc.equal(array.value_lengths(), length)).as_py() or array.null_count == len(array)
1803
+
1804
+
1805
+ def _combine_list_array_offsets_with_mask(array: pa.ListArray) -> pa.Array:
1806
+ """Add the null bitmap to the offsets of a `pa.ListArray`."""
1807
+ offsets = array.offsets
1808
+ if array.null_count > 0:
1809
+ offsets = pa.concat_arrays(
1810
+ [
1811
+ pc.replace_with_mask(offsets[:-1], array.is_null(), pa.nulls(len(array), pa.int32())),
1812
+ offsets[-1:],
1813
+ ]
1814
+ )
1815
+ return offsets
1816
+
1817
+
1818
+ def _storage_type(type: pa.DataType) -> pa.DataType:
1819
+ """Convert a (possibly nested) `pa.ExtensionType` to its storage type."""
1820
+ if isinstance(type, pa.ExtensionType):
1821
+ return _storage_type(type.storage_type)
1822
+ elif isinstance(type, pa.StructType):
1823
+ return pa.struct([pa.field(field.name, _storage_type(field.type)) for field in type])
1824
+ elif isinstance(type, pa.ListType):
1825
+ return pa.list_(_storage_type(type.value_type))
1826
+ elif isinstance(type, pa.FixedSizeListType):
1827
+ return pa.list_(_storage_type(type.value_type), type.list_size)
1828
+ return type
1829
+
1830
+
1831
+ @_wrap_for_chunked_arrays
1832
+ def array_cast(array: pa.Array, pa_type: pa.DataType, allow_number_to_str=True):
1833
+ """Improved version of `pa.Array.cast`
1834
+
1835
+ It supports casting `pa.StructArray` objects to re-order the fields.
1836
+ It also lets you control certain aspects of the casting, e.g. whether
1837
+ to disable casting numbers (`floats` or `ints`) to strings.
1838
+
1839
+ Args:
1840
+ array (`pa.Array`):
1841
+ PyArrow array to cast
1842
+ pa_type (`pa.DataType`):
1843
+ Target PyArrow type
1844
+ allow_number_to_str (`bool`, defaults to `True`):
1845
+ Whether to allow casting numbers to strings.
1846
+ Defaults to `True`.
1847
+
1848
+ Raises:
1849
+ `pa.ArrowInvalidError`: if the arrow data casting fails
1850
+ `TypeError`: if the target type is not supported, e.g.
1851
+
1852
+ - if a field is missing
1853
+ - if casting from numbers to strings and `allow_number_to_str` is `False`
1854
+
1855
+ Returns:
1856
+ `pa.Array`: the casted array
1857
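+
+ Example (illustrative; shows the number-to-string guard):
+
+ ```python
+ >>> array_cast(pa.array([1, 2]), pa.string())  # allowed by default -> ["1", "2"]
+ >>> array_cast(pa.array([1, 2]), pa.string(), allow_number_to_str=False)  # raises TypeError
+ ```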
+ """
1858
+ _c = partial(array_cast, allow_number_to_str=allow_number_to_str)
1859
+ if isinstance(array, pa.ExtensionArray):
1860
+ array = array.storage
1861
+ if isinstance(pa_type, pa.ExtensionType):
1862
+ return pa_type.wrap_array(_c(array, pa_type.storage_type))
1863
+ elif array.type == pa_type:
1864
+ return array
1865
+ elif pa.types.is_struct(array.type):
1866
+ if pa.types.is_struct(pa_type) and ({field.name for field in pa_type} == {field.name for field in array.type}):
1867
+ if array.type.num_fields == 0:
1868
+ return array
1869
+ arrays = [_c(array.field(field.name), field.type) for field in pa_type]
1870
+ return pa.StructArray.from_arrays(arrays, fields=list(pa_type), mask=array.is_null())
1871
+ elif pa.types.is_list(array.type):
1872
+ if pa.types.is_fixed_size_list(pa_type):
1873
+ if _are_list_values_of_length(array, pa_type.list_size):
1874
+ if array.null_count > 0:
1875
+ # Ensure each null value in the array translates to [null] * pa_type.list_size in the array's values array
1876
+ array_type = array.type
1877
+ storage_type = _storage_type(array_type)
1878
+ if array_type != storage_type:
1879
+ # Temporarily convert to the storage type to support extension types in the slice operation
1880
+ array = _c(array, storage_type)
1881
+ array = pc.list_slice(array, 0, pa_type.list_size, return_fixed_size_list=True)
1882
+ array = _c(array, array_type)
1883
+ else:
1884
+ array = pc.list_slice(array, 0, pa_type.list_size, return_fixed_size_list=True)
1885
+ array_values = array.values
1886
+ if config.PYARROW_VERSION.major < 15:
1887
+ return pa.Array.from_buffers(
1888
+ pa_type,
1889
+ len(array),
1890
+ [array.is_valid().buffers()[1]],
1891
+ children=[_c(array_values, pa_type.value_type)],
1892
+ )
1893
+ else:
1894
+ return pa.FixedSizeListArray.from_arrays(
1895
+ _c(array_values, pa_type.value_type), pa_type.list_size, mask=array.is_null()
1896
+ )
1897
+ else:
1898
+ array_values = array.values[
1899
+ array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length
1900
+ ]
1901
+ return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size)
1902
+ elif pa.types.is_list(pa_type):
1903
+ # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError
1904
+ array_offsets = _combine_list_array_offsets_with_mask(array)
1905
+ return pa.ListArray.from_arrays(array_offsets, _c(array.values, pa_type.value_type))
1906
+ elif pa.types.is_fixed_size_list(array.type):
1907
+ if pa.types.is_fixed_size_list(pa_type):
1908
+ if pa_type.list_size == array.type.list_size:
1909
+ array_values = array.values[
1910
+ array.offset * array.type.list_size : (array.offset + len(array)) * array.type.list_size
1911
+ ]
1912
+ if config.PYARROW_VERSION.major < 15:
1913
+ return pa.Array.from_buffers(
1914
+ pa_type,
1915
+ len(array),
1916
+ [array.is_valid().buffers()[1]],
1917
+ children=[_c(array_values, pa_type.value_type)],
1918
+ )
1919
+ else:
1920
+ return pa.FixedSizeListArray.from_arrays(
1921
+ _c(array_values, pa_type.value_type), pa_type.list_size, mask=array.is_null()
1922
+ )
1923
+ elif pa.types.is_list(pa_type):
1924
+ array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size
1925
+ return pa.ListArray.from_arrays(array_offsets, _c(array.values, pa_type.value_type), mask=array.is_null())
1926
+ else:
1927
+ if (
1928
+ not allow_number_to_str
1929
+ and pa.types.is_string(pa_type)
1930
+ and (pa.types.is_floating(array.type) or pa.types.is_integer(array.type))
1931
+ ):
1932
+ raise TypeError(
1933
+ f"Couldn't cast array of type {array.type} to {pa_type} since allow_number_to_str is set to {allow_number_to_str}"
1934
+ )
1935
+ if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
1936
+ raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
1937
+ return array.cast(pa_type)
1938
+ raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
1939
+
1940
+
1941
+ @_wrap_for_chunked_arrays
1942
+ def cast_array_to_feature(array: pa.Array, feature: "FeatureType", allow_number_to_str=True):
1943
+ """Cast an array to the arrow type that corresponds to the requested feature type.
1944
+ For custom features like [`Audio`] or [`Image`], it takes into account the "cast_storage" methods
1945
+ they define to enable casting from other arrow types.
1946
+
1947
+ Args:
1948
+ array (`pa.Array`):
1949
+ The PyArrow array to cast.
1950
+ feature (`datasets.features.FeatureType`):
1951
+ The target feature type.
1952
+ allow_number_to_str (`bool`, defaults to `True`):
1953
+ Whether to allow casting numbers to strings.
1954
+ Defaults to `True`.
1955
+
1956
+ Raises:
1957
+ `pa.ArrowInvalidError`: if the arrow data casting fails
1958
+ `TypeError`: if the target type is not supported, e.g.
1959
+
1960
+ - if a field is missing
1961
+ - if casting from numbers to strings and `allow_number_to_str` is `False`
1962
+
1963
+ Returns:
1964
+ array (`pyarrow.Array`): the casted array
1965
+ """
1966
+ from .features.features import Sequence, get_nested_type
1967
+
1968
+ _c = partial(cast_array_to_feature, allow_number_to_str=allow_number_to_str)
1969
+
1970
+ if isinstance(array, pa.ExtensionArray):
1971
+ array = array.storage
1972
+ if hasattr(feature, "cast_storage"):
1973
+ return feature.cast_storage(array)
1974
+
1975
+ elif pa.types.is_struct(array.type):
1976
+ # feature must be a dict or Sequence(subfeatures_dict)
1977
+ if isinstance(feature, Sequence) and isinstance(feature.feature, dict):
1978
+ feature = {
1979
+ name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items()
1980
+ }
1981
+ if isinstance(feature, dict) and {field.name for field in array.type} == set(feature):
1982
+ if array.type.num_fields == 0:
1983
+ return array
1984
+ arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1985
+ return pa.StructArray.from_arrays(arrays, names=list(feature), mask=array.is_null())
1986
+ elif pa.types.is_list(array.type):
1987
+ # feature must be either [subfeature] or Sequence(subfeature)
1988
+ if isinstance(feature, list):
1989
+ casted_array_values = _c(array.values, feature[0])
1990
+ if casted_array_values.type == array.values.type:
1991
+ return array
1992
+ else:
1993
+ # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError
1994
+ array_offsets = _combine_list_array_offsets_with_mask(array)
1995
+ return pa.ListArray.from_arrays(array_offsets, casted_array_values)
1996
+ elif isinstance(feature, Sequence):
1997
+ if feature.length > -1:
1998
+ if _are_list_values_of_length(array, feature.length):
1999
+ if array.null_count > 0:
2000
+ # Ensure each null value in the array translates to [null] * feature.length in the array's values array
2001
+ array_type = array.type
2002
+ storage_type = _storage_type(array_type)
2003
+ if array_type != storage_type:
2004
+ # Temporarily convert to the storage type to support extension types in the slice operation
2005
+ array = array_cast(array, storage_type, allow_number_to_str=allow_number_to_str)
2006
+ array = pc.list_slice(array, 0, feature.length, return_fixed_size_list=True)
2007
+ array = array_cast(array, array_type, allow_number_to_str=allow_number_to_str)
2008
+ else:
2009
+ array = pc.list_slice(array, 0, feature.length, return_fixed_size_list=True)
2010
+ array_values = array.values
2011
+ casted_array_values = _c(array_values, feature.feature)
2012
+ if config.PYARROW_VERSION.major < 15:
2013
+ return pa.Array.from_buffers(
2014
+ pa.list_(casted_array_values.type, feature.length),
2015
+ len(array),
2016
+ [array.is_valid().buffers()[1]],
2017
+ children=[casted_array_values],
2018
+ )
2019
+ else:
2020
+ return pa.FixedSizeListArray.from_arrays(
2021
+ casted_array_values, feature.length, mask=array.is_null()
2022
+ )
2023
+ else:
2024
+ array_values = array.values[
2025
+ array.offset * feature.length : (array.offset + len(array)) * feature.length
2026
+ ]
2027
+ return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length)
2028
+ else:
2029
+ casted_array_values = _c(array.values, feature.feature)
2030
+ if casted_array_values.type == array.values.type:
2031
+ return array
2032
+ else:
2033
+ # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError
2034
+ array_offsets = _combine_list_array_offsets_with_mask(array)
2035
+ return pa.ListArray.from_arrays(array_offsets, casted_array_values)
2036
+ elif pa.types.is_fixed_size_list(array.type):
2037
+ # feature must be either [subfeature] or Sequence(subfeature)
2038
+ if isinstance(feature, list):
2039
+ array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size
2040
+ return pa.ListArray.from_arrays(array_offsets, _c(array.values, feature[0]), mask=array.is_null())
2041
+ elif isinstance(feature, Sequence):
2042
+ if feature.length > -1:
2043
+ if feature.length == array.type.list_size:
2044
+ array_values = array.values[
2045
+ array.offset * array.type.list_size : (array.offset + len(array)) * array.type.list_size
2046
+ ]
2047
+ casted_array_values = _c(array_values, feature.feature)
2048
+ if config.PYARROW_VERSION.major < 15:
2049
+ return pa.Array.from_buffers(
2050
+ pa.list_(casted_array_values.type, feature.length),
2051
+ len(array),
2052
+ [array.is_valid().buffers()[1]],
2053
+ children=[casted_array_values],
2054
+ )
2055
+ else:
2056
+ return pa.FixedSizeListArray.from_arrays(
2057
+ casted_array_values, feature.length, mask=array.is_null()
2058
+ )
2059
+ else:
2060
+ array_offsets = (np.arange(len(array) + 1) + array.offset) * array.type.list_size
2061
+ return pa.ListArray.from_arrays(array_offsets, _c(array.values, feature.feature), mask=array.is_null())
2062
+ if pa.types.is_null(array.type):
2063
+ return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
2064
+ elif not isinstance(feature, (Sequence, dict, list, tuple)):
2065
+ return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
2066
+ raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
2067
+
2068
+
2069
+ @_wrap_for_chunked_arrays
2070
+ def embed_array_storage(array: pa.Array, feature: "FeatureType"):
2071
+ """Embed data into an arrays's storage.
2072
+ For custom features like Audio or Image, it takes into account the "embed_storage" methods
2073
+ they define to embed external data (e.g. an image file) into an array.
2074
+
2075
+ <Added version="2.4.0"/>
2076
+
2077
+ Args:
2078
+ array (`pa.Array`):
2079
+ The PyArrow array in which to embed data.
2080
+ feature (`datasets.features.FeatureType`):
2081
+ Array features.
2082
+
2083
+ Raises:
2084
+ `TypeError`: if the target type is not supported, e.g.
2085
+
2086
+ - if a field is missing
2087
+
2088
+ Returns:
2089
+ array (`pyarrow.Array`): the array with embedded data
2090
+ """
2091
+ from .features import Sequence
2092
+
2093
+ _e = embed_array_storage
2094
+
2095
+ if isinstance(array, pa.ExtensionArray):
2096
+ array = array.storage
2097
+ if hasattr(feature, "embed_storage"):
2098
+ return feature.embed_storage(array)
2099
+ elif pa.types.is_struct(array.type):
2100
+ # feature must be a dict or Sequence(subfeatures_dict)
2101
+ if isinstance(feature, Sequence) and isinstance(feature.feature, dict):
2102
+ feature = {
2103
+ name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items()
2104
+ }
2105
+ if isinstance(feature, dict):
2106
+ arrays = [_e(array.field(name), subfeature) for name, subfeature in feature.items()]
2107
+ return pa.StructArray.from_arrays(arrays, names=list(feature), mask=array.is_null())
2108
+ elif pa.types.is_list(array.type):
2109
+ # feature must be either [subfeature] or Sequence(subfeature)
2110
+ # Merge offsets with the null bitmap to avoid the "Null bitmap with offsets slice not supported" ArrowNotImplementedError
2111
+ array_offsets = _combine_list_array_offsets_with_mask(array)
2112
+ if isinstance(feature, list):
2113
+ return pa.ListArray.from_arrays(array_offsets, _e(array.values, feature[0]))
2114
+ if isinstance(feature, Sequence) and feature.length == -1:
2115
+ return pa.ListArray.from_arrays(array_offsets, _e(array.values, feature.feature))
2116
+ elif pa.types.is_fixed_size_list(array.type):
2117
+ # feature must be Sequence(subfeature)
2118
+ if isinstance(feature, Sequence) and feature.length > -1:
2119
+ array_values = array.values[
2120
+ array.offset * array.type.list_size : (array.offset + len(array)) * array.type.list_size
2121
+ ]
2122
+ embedded_array_values = _e(array_values, feature.feature)
2123
+ if config.PYARROW_VERSION.major < 15:
2124
+ return pa.Array.from_buffers(
2125
+ pa.list_(array_values.type, feature.length),
2126
+ len(array),
2127
+ [array.is_valid().buffers()[1]],
2128
+ children=[embedded_array_values],
2129
+ )
2130
+ else:
2131
+ return pa.FixedSizeListArray.from_arrays(embedded_array_values, feature.length, mask=array.is_null())
2132
+ if not isinstance(feature, (Sequence, dict, list, tuple)):
2133
+ return array
2134
+ raise TypeError(f"Couldn't embed array of type\n{array.type}\nwith\n{feature}")
2135
+
2136
+
2137
+ class CastError(ValueError):
2138
+ """When it's not possible to cast an Arrow table to a specific schema or set of features"""
2139
+
2140
+ def __init__(self, *args, table_column_names: List[str], requested_column_names: List[str]) -> None:
2141
+ super().__init__(*args)
2142
+ self.table_column_names = table_column_names
2143
+ self.requested_column_names = requested_column_names
2144
+
2145
+ def details(self):
2146
+ new_columns = set(self.table_column_names) - set(self.requested_column_names)
2147
+ missing_columns = set(self.requested_column_names) - set(self.table_column_names)
2148
+ if new_columns and missing_columns:
2149
+ return f"there are {len(new_columns)} new columns ({', '.join(new_columns)}) and {len(missing_columns)} missing columns ({', '.join(missing_columns)})."
2150
+ elif new_columns:
2151
+ return f"there are {len(new_columns)} new columns ({new_columns})"
2152
+ else:
2153
+ return f"there are {len(missing_columns)} missing columns ({missing_columns})"
2154
+
2155
+
2156
+ def cast_table_to_features(table: pa.Table, features: "Features"):
2157
+ """Cast a table to the arrow schema that corresponds to the requested features.
2158
+
2159
+ Args:
2160
+ table (`pyarrow.Table`):
2161
+ PyArrow table to cast.
2162
+ features ([`Features`]):
2163
+ Target features.
2164
+
2165
+ Returns:
2166
+ table (`pyarrow.Table`): the casted table
2167
+ """
2168
+ if sorted(table.column_names) != sorted(features):
2169
+ raise CastError(
2170
+ f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match",
2171
+ table_column_names=table.column_names,
2172
+ requested_column_names=list(features),
2173
+ )
2174
+ arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
2175
+ return pa.Table.from_arrays(arrays, schema=features.arrow_schema)
2176
+
2177
+
2178
+ def cast_table_to_schema(table: pa.Table, schema: pa.Schema):
2179
+ """Cast a table to the arrow schema. Different from `cast_table_to_features`, this method can preserve nullability.
2180
+
2181
+ Args:
2182
+ table (`pa.Table`):
2183
+ PyArrow table to cast.
2184
+ schema (`pa.Schema`):
2185
+ Target PyArrow schema.
2186
+
2187
+ Returns:
2188
+ `pa.Table`: the casted table
2189
+ """
2190
+ from .features import Features
2191
+
2192
+ features = Features.from_arrow_schema(schema)
2193
+ if sorted(table.column_names) != sorted(features):
2194
+ raise CastError(
2195
+ f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match",
2196
+ table_column_names=table.column_names,
2197
+ requested_column_names=list(features),
2198
+ )
2199
+ arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
2200
+ return pa.Table.from_arrays(arrays, schema=schema)
2201
+
2202
+
2203
+ def embed_table_storage(table: pa.Table):
2204
+ """Embed external data into a table's storage.
2205
+
2206
+ <Added version="2.4.0"/>
2207
+
2208
+ Args:
2209
+ table (`pyarrow.Table`):
2210
+ PyArrow table in which to embed data.
2211
+
2212
+ Returns:
2213
+ table (`pyarrow.Table`): the table with embedded data
2214
+ """
2215
+ from .features.features import Features, require_storage_embed
2216
+
2217
+ features = Features.from_arrow_schema(table.schema)
2218
+ arrays = [
2219
+ embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name]
2220
+ for name, feature in features.items()
2221
+ ]
2222
+ return pa.Table.from_arrays(arrays, schema=features.arrow_schema)
2223
+
2224
+
2225
+ def table_cast(table: pa.Table, schema: pa.Schema):
2226
+ """Improved version of `pa.Table.cast`.
2227
+
2228
+ It supports casting to feature types stored in the schema metadata.
2229
+
2230
+ Args:
2231
+ table (`pyarrow.Table`):
2232
+ PyArrow table to cast.
2233
+ schema (`pyarrow.Schema`):
2234
+ Target PyArrow schema.
2235
+
2236
+ Returns:
2237
+ table (`pyarrow.Table`): the casted table
2238
+ """
2239
+ if table.schema != schema:
2240
+ return cast_table_to_schema(table, schema)
2241
+ elif table.schema.metadata != schema.metadata:
2242
+ return table.replace_schema_metadata(schema.metadata)
2243
+ else:
2244
+ return table
2245
+
2246
+
2247
+ def table_flatten(table: pa.Table):
2248
+ """Improved version of `pa.Table.flatten`.
2249
+
2250
+ It behaves like `pa.Table.flatten` in the sense that it does a one-step flatten of the columns with a struct type into one column per struct field,
2251
+ but updates the metadata and skips decodable features unless the `decode` attribute of these features is set to False.
2252
+
2253
+ Args:
2254
+ table (`pa.Table`):
2255
+ PyArrow table to flatten.
2256
+
2257
+ Returns:
2258
+ `Table`: the flattened table
2259
+ """
2260
+ from .features import Features
2261
+
2262
+ features = Features.from_arrow_schema(table.schema)
2263
+ if any(hasattr(subfeature, "flatten") and subfeature.flatten() == subfeature for subfeature in features.values()):
2264
+ flat_arrays = []
2265
+ flat_column_names = []
2266
+ for field in table.schema:
2267
+ array = table.column(field.name)
2268
+ subfeature = features[field.name]
2269
+ if pa.types.is_struct(field.type) and (
2270
+ not hasattr(subfeature, "flatten") or subfeature.flatten() != subfeature
2271
+ ):
2272
+ flat_arrays.extend(array.flatten())
2273
+ flat_column_names.extend([f"{field.name}.{subfield.name}" for subfield in field.type])
2274
+ else:
2275
+ flat_arrays.append(array)
2276
+ flat_column_names.append(field.name)
2277
+ flat_table = pa.Table.from_arrays(
2278
+ flat_arrays,
2279
+ names=flat_column_names,
2280
+ )
2281
+ else:
2282
+ flat_table = table.flatten()
2283
+ # Preserve complex types in the metadata
2284
+ flat_features = features.flatten(max_depth=2)
2285
+ flat_features = Features({column_name: flat_features[column_name] for column_name in flat_table.column_names})
2286
+ return flat_table.replace_schema_metadata(flat_features.arrow_schema.metadata)
2287
+
2288
+
2289
+ def table_visitor(table: pa.Table, function: Callable[[pa.Array, "FeatureType"], None]):
2290
+ """Visit all arrays in a table and apply a function to them.
2291
+
2292
+ Args:
2293
+ table (`pyarrow.Table`):
2294
+ PyArrow table to visit.
2295
+ function (`Callable[[pa.Array, FeatureType], None]`):
2296
+ Function to apply to each array and its corresponding feature.
2297
+ """
2298
+ from .features import Features, Sequence
2299
+
2300
+ features = Features.from_arrow_schema(table.schema)
2301
+
2302
+ def _visit(array, feature):
2303
+ if isinstance(array, pa.ChunkedArray):
2304
+ for chunk in array.chunks:
2305
+ _visit(chunk, feature)
2306
+ else:
2307
+ if isinstance(array, pa.ExtensionArray):
2308
+ array = array.storage
2309
+ function(array, feature)
2310
+ if pa.types.is_struct(array.type) and not hasattr(feature, "cast_storage"):
2311
+ if isinstance(feature, Sequence) and isinstance(feature.feature, dict):
2312
+ feature = {
2313
+ name: Sequence(subfeature, length=feature.length)
2314
+ for name, subfeature in feature.feature.items()
2315
+ }
2316
+ for name, subfeature in feature.items():
2317
+ _visit(array.field(name), subfeature)
2318
+ elif pa.types.is_list(array.type):
2319
+ if isinstance(feature, list):
2320
+ _visit(array.values, feature[0])
2321
+ elif isinstance(feature, Sequence):
2322
+ _visit(array.values, feature.feature)
2323
+
2324
+ for name, feature in features.items():
2325
+ _visit(table[name], feature)
2326
+
2327
+
2328
+ def table_iter(table: Table, batch_size: int, drop_last_batch=False) -> Iterator[pa.Table]:
2329
+ """Iterate over sub-tables of size `batch_size`.
2330
+
2331
+ Args:
2332
+ table (`pyarrow.Table`):
2333
+ PyArrow table to iterate over.
2334
+ batch_size (`int`):
2335
+ Size of each sub-table to yield.
2336
+ drop_last_batch (`bool`, defaults to `False`):
2337
+ Drop the last batch if it is smaller than `batch_size`.
2338
+ """
2339
+ chunks_buffer = []
2340
+ chunks_buffer_size = 0
2341
+ for chunk in table.to_reader(max_chunksize=batch_size):
2342
+ if len(chunk) == 0:
2343
+ continue
2344
+ elif chunks_buffer_size + len(chunk) < batch_size:
2345
+ chunks_buffer.append(chunk)
2346
+ chunks_buffer_size += len(chunk)
2347
+ continue
2348
+ elif chunks_buffer_size + len(chunk) == batch_size:
2349
+ chunks_buffer.append(chunk)
2350
+ yield pa.Table.from_batches(chunks_buffer)
2351
+ chunks_buffer = []
2352
+ chunks_buffer_size = 0
2353
+ else:
2354
+ cropped_chunk_length = batch_size - chunks_buffer_size
2355
+ chunks_buffer.append(chunk.slice(0, cropped_chunk_length))
2356
+ yield pa.Table.from_batches(chunks_buffer)
2357
+ chunks_buffer = [chunk.slice(cropped_chunk_length, len(chunk) - cropped_chunk_length)]
2358
+ chunks_buffer_size = len(chunk) - cropped_chunk_length
2359
+ if not drop_last_batch and chunks_buffer:
2360
+ yield pa.Table.from_batches(chunks_buffer)
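As a quick illustration of the `table_iter` helper added above, here is a minimal usage sketch. It is not part of the diff; it assumes `pyarrow` and the `datasets` package from this commit are importable, and the column names and values are made up.

    import pyarrow as pa
    from datasets.table import table_iter

    # Small in-memory table; "id" and "text" are illustrative column names.
    table = pa.table({"id": list(range(10)), "text": [f"row {i}" for i in range(10)]})

    # Yields sub-tables of 4, 4 and 2 rows; the short final batch is kept
    # because drop_last_batch defaults to False.
    for batch in table_iter(table, batch_size=4):
        print(batch.num_rows, batch.column_names)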
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__init__.py ADDED
@@ -0,0 +1,46 @@
1
+ from typing import Optional
2
+
3
+ from ..utils.logging import get_logger
4
+ from .audio_classification import AudioClassification
5
+ from .automatic_speech_recognition import AutomaticSpeechRecognition
6
+ from .base import TaskTemplate
7
+ from .image_classification import ImageClassification
8
+ from .language_modeling import LanguageModeling
9
+ from .question_answering import QuestionAnsweringExtractive
10
+ from .summarization import Summarization
11
+ from .text_classification import TextClassification
12
+
13
+
14
+ __all__ = [
15
+ "AutomaticSpeechRecognition",
16
+ "AudioClassification",
17
+ "ImageClassification",
18
+ "LanguageModeling",
19
+ "QuestionAnsweringExtractive",
20
+ "Summarization",
21
+ "TaskTemplate",
22
+ "TextClassification",
23
+ ]
24
+
25
+ logger = get_logger(__name__)
26
+
27
+
28
+ NAME2TEMPLATE = {
29
+ AutomaticSpeechRecognition.task: AutomaticSpeechRecognition,
30
+ AudioClassification.task: AudioClassification,
31
+ ImageClassification.task: ImageClassification,
32
+ LanguageModeling.task: LanguageModeling,
33
+ QuestionAnsweringExtractive.task: QuestionAnsweringExtractive,
34
+ Summarization.task: Summarization,
35
+ TextClassification.task: TextClassification,
36
+ }
37
+
38
+
39
+ def task_template_from_dict(task_template_dict: dict) -> Optional[TaskTemplate]:
40
+ """Create one of the supported task templates in :py:mod:`datasets.tasks` from a dictionary."""
41
+ task_name = task_template_dict.get("task")
42
+ if task_name is None:
43
+ logger.warning(f"Couldn't find template for task '{task_name}'. Available templates: {list(NAME2TEMPLATE)}")
44
+ return None
45
+ template = NAME2TEMPLATE.get(task_name)
46
+ return template.from_dict(task_template_dict)
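For context, a minimal sketch of how `task_template_from_dict` can rebuild a template from a serialized dict. It is not part of the diff; the input dict is hypothetical and assumes the `datasets` package from this commit.

    from datasets.tasks import task_template_from_dict

    # A template as it might appear serialized in dataset metadata (hypothetical values).
    template = task_template_from_dict({"task": "text-classification", "text_column": "sentence"})
    print(type(template).__name__)  # TextClassification
    print(template.column_mapping)  # {'sentence': 'text', 'labels': 'labels'}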
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (1.4 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/audio_classification.cpython-310.pyc ADDED
Binary file (1.63 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/automatic_speech_recognition.cpython-310.pyc ADDED
Binary file (1.69 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/base.cpython-310.pyc ADDED
Binary file (1.96 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/image_classification.cpython-310.pyc ADDED
Binary file (1.63 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/language_modeling.cpython-310.pyc ADDED
Binary file (1.09 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/question_answering.cpython-310.pyc ADDED
Binary file (1.36 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/summarization.cpython-310.pyc ADDED
Binary file (1.15 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/__pycache__/text_classification.cpython-310.pyc ADDED
Binary file (1.64 kB).
env-llmeval/lib/python3.10/site-packages/datasets/tasks/audio_classification.py ADDED
@@ -0,0 +1,33 @@
1
+ import copy
2
+ from dataclasses import dataclass, field
3
+ from typing import ClassVar, Dict
4
+
5
+ from ..features import Audio, ClassLabel, Features
6
+ from .base import TaskTemplate
7
+
8
+
9
+ @dataclass(frozen=True)
10
+ class AudioClassification(TaskTemplate):
11
+ task: str = field(default="audio-classification", metadata={"include_in_asdict_even_if_is_default": True})
12
+ input_schema: ClassVar[Features] = Features({"audio": Audio()})
13
+ label_schema: ClassVar[Features] = Features({"labels": ClassLabel})
14
+ audio_column: str = "audio"
15
+ label_column: str = "labels"
16
+
17
+ def align_with_features(self, features):
18
+ if self.label_column not in features:
19
+ raise ValueError(f"Column {self.label_column} is not present in features.")
20
+ if not isinstance(features[self.label_column], ClassLabel):
21
+ raise ValueError(f"Column {self.label_column} is not a ClassLabel.")
22
+ task_template = copy.deepcopy(self)
23
+ label_schema = self.label_schema.copy()
24
+ label_schema["labels"] = features[self.label_column]
25
+ task_template.__dict__["label_schema"] = label_schema
26
+ return task_template
27
+
28
+ @property
29
+ def column_mapping(self) -> Dict[str, str]:
30
+ return {
31
+ self.audio_column: "audio",
32
+ self.label_column: "labels",
33
+ }
env-llmeval/lib/python3.10/site-packages/datasets/tasks/automatic_speech_recognition.py ADDED
@@ -0,0 +1,30 @@
1
+ import copy
2
+ from dataclasses import dataclass, field
3
+ from typing import ClassVar, Dict
4
+
5
+ from ..features import Audio, Features, Value
6
+ from .base import TaskTemplate
7
+
8
+
9
+ @dataclass(frozen=True)
10
+ class AutomaticSpeechRecognition(TaskTemplate):
11
+ task: str = field(default="automatic-speech-recognition", metadata={"include_in_asdict_even_if_is_default": True})
12
+ input_schema: ClassVar[Features] = Features({"audio": Audio()})
13
+ label_schema: ClassVar[Features] = Features({"transcription": Value("string")})
14
+ audio_column: str = "audio"
15
+ transcription_column: str = "transcription"
16
+
17
+ def align_with_features(self, features):
18
+ if self.audio_column not in features:
19
+ raise ValueError(f"Column {self.audio_column} is not present in features.")
20
+ if not isinstance(features[self.audio_column], Audio):
21
+ raise ValueError(f"Column {self.audio_column} is not an Audio type.")
22
+ task_template = copy.deepcopy(self)
23
+ input_schema = self.input_schema.copy()
24
+ input_schema["audio"] = features[self.audio_column]
25
+ task_template.__dict__["input_schema"] = input_schema
26
+ return task_template
27
+
28
+ @property
29
+ def column_mapping(self) -> Dict[str, str]:
30
+ return {self.audio_column: "audio", self.transcription_column: "transcription"}
env-llmeval/lib/python3.10/site-packages/datasets/tasks/base.py ADDED
@@ -0,0 +1,39 @@
1
+ import abc
2
+ import copy
3
+ import dataclasses
4
+ from dataclasses import dataclass
5
+ from typing import ClassVar, Dict, Type, TypeVar
6
+
7
+ from ..features import Features
8
+
9
+
10
+ T = TypeVar("T", bound="TaskTemplate")
11
+
12
+
13
+ @dataclass(frozen=True)
14
+ class TaskTemplate(abc.ABC):
15
+ # `task` is not a ClassVar since we want it to be part of the `asdict` output for JSON serialization
16
+ task: str
17
+ input_schema: ClassVar[Features]
18
+ label_schema: ClassVar[Features]
19
+
20
+ def align_with_features(self: T, features: Features) -> T:
21
+ """
22
+ Align features with the task template.
23
+ """
24
+ # No-op
25
+ return copy.deepcopy(self)
26
+
27
+ @property
28
+ def features(self) -> Features:
29
+ return Features(**self.input_schema, **self.label_schema)
30
+
31
+ @property
32
+ @abc.abstractmethod
33
+ def column_mapping(self) -> Dict[str, str]:
34
+ raise NotImplementedError
35
+
36
+ @classmethod
37
+ def from_dict(cls: Type[T], template_dict: dict) -> T:
38
+ field_names = {f.name for f in dataclasses.fields(cls)}
39
+ return cls(**{k: v for k, v in template_dict.items() if k in field_names})
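A short sketch of the shared `TaskTemplate` behavior defined above, using the `AutomaticSpeechRecognition` template from earlier in this diff; the column names and the extra key are illustrative assumptions.

    from datasets.tasks import AutomaticSpeechRecognition

    # from_dict keeps only real dataclass fields, so unknown keys are silently dropped.
    asr = AutomaticSpeechRecognition.from_dict(
        {"task": "automatic-speech-recognition", "audio_column": "speech", "unknown_key": 123}
    )
    print(asr.column_mapping)  # {'speech': 'audio', 'transcription': 'transcription'}
    print(asr.features)        # input_schema and label_schema merged into one Features mapping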
env-llmeval/lib/python3.10/site-packages/datasets/tasks/image_classification.py ADDED
@@ -0,0 +1,33 @@
1
+ import copy
2
+ from dataclasses import dataclass, field
3
+ from typing import ClassVar, Dict
4
+
5
+ from ..features import ClassLabel, Features, Image
6
+ from .base import TaskTemplate
7
+
8
+
9
+ @dataclass(frozen=True)
10
+ class ImageClassification(TaskTemplate):
11
+ task: str = field(default="image-classification", metadata={"include_in_asdict_even_if_is_default": True})
12
+ input_schema: ClassVar[Features] = Features({"image": Image()})
13
+ label_schema: ClassVar[Features] = Features({"labels": ClassLabel})
14
+ image_column: str = "image"
15
+ label_column: str = "labels"
16
+
17
+ def align_with_features(self, features):
18
+ if self.label_column not in features:
19
+ raise ValueError(f"Column {self.label_column} is not present in features.")
20
+ if not isinstance(features[self.label_column], ClassLabel):
21
+ raise ValueError(f"Column {self.label_column} is not a ClassLabel.")
22
+ task_template = copy.deepcopy(self)
23
+ label_schema = self.label_schema.copy()
24
+ label_schema["labels"] = features[self.label_column]
25
+ task_template.__dict__["label_schema"] = label_schema
26
+ return task_template
27
+
28
+ @property
29
+ def column_mapping(self) -> Dict[str, str]:
30
+ return {
31
+ self.image_column: "image",
32
+ self.label_column: "labels",
33
+ }
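A minimal sketch (hypothetical column names, not part of the diff) of how `align_with_features` copies a dataset's `ClassLabel` into the template's label schema, assuming the `datasets` package from this commit.

    from datasets import ClassLabel, Features, Image
    from datasets.tasks import ImageClassification

    task = ImageClassification(image_column="img", label_column="label")
    features = Features({"img": Image(), "label": ClassLabel(names=["cat", "dog"])})

    # The aligned copy carries the dataset's label names in its label schema.
    aligned = task.align_with_features(features)
    print(aligned.label_schema["labels"].names)  # ['cat', 'dog']
    print(aligned.column_mapping)                # {'img': 'image', 'label': 'labels'}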
env-llmeval/lib/python3.10/site-packages/datasets/tasks/language_modeling.py ADDED
@@ -0,0 +1,18 @@
1
+ from dataclasses import dataclass, field
2
+ from typing import ClassVar, Dict
3
+
4
+ from ..features import Features, Value
5
+ from .base import TaskTemplate
6
+
7
+
8
+ @dataclass(frozen=True)
9
+ class LanguageModeling(TaskTemplate):
10
+ task: str = field(default="language-modeling", metadata={"include_in_asdict_even_if_is_default": True})
11
+
12
+ input_schema: ClassVar[Features] = Features({"text": Value("string")})
13
+ label_schema: ClassVar[Features] = Features({})
14
+ text_column: str = "text"
15
+
16
+ @property
17
+ def column_mapping(self) -> Dict[str, str]:
18
+ return {self.text_column: "text"}
env-llmeval/lib/python3.10/site-packages/datasets/tasks/question_answering.py ADDED
@@ -0,0 +1,29 @@
1
+ from dataclasses import dataclass, field
2
+ from typing import ClassVar, Dict
3
+
4
+ from ..features import Features, Sequence, Value
5
+ from .base import TaskTemplate
6
+
7
+
8
+ @dataclass(frozen=True)
9
+ class QuestionAnsweringExtractive(TaskTemplate):
10
+ # `task` is not a ClassVar since we want it to be part of the `asdict` output for JSON serialization
11
+ task: str = field(default="question-answering-extractive", metadata={"include_in_asdict_even_if_is_default": True})
12
+ input_schema: ClassVar[Features] = Features({"question": Value("string"), "context": Value("string")})
13
+ label_schema: ClassVar[Features] = Features(
14
+ {
15
+ "answers": Sequence(
16
+ {
17
+ "text": Value("string"),
18
+ "answer_start": Value("int32"),
19
+ }
20
+ )
21
+ }
22
+ )
23
+ question_column: str = "question"
24
+ context_column: str = "context"
25
+ answers_column: str = "answers"
26
+
27
+ @property
28
+ def column_mapping(self) -> Dict[str, str]:
29
+ return {self.question_column: "question", self.context_column: "context", self.answers_column: "answers"}
env-llmeval/lib/python3.10/site-packages/datasets/tasks/summarization.py ADDED
@@ -0,0 +1,19 @@
1
+ from dataclasses import dataclass, field
2
+ from typing import ClassVar, Dict
3
+
4
+ from ..features import Features, Value
5
+ from .base import TaskTemplate
6
+
7
+
8
+ @dataclass(frozen=True)
9
+ class Summarization(TaskTemplate):
10
+ # `task` is not a ClassVar since we want it to be part of the `asdict` output for JSON serialization
11
+ task: str = field(default="summarization", metadata={"include_in_asdict_even_if_is_default": True})
12
+ input_schema: ClassVar[Features] = Features({"text": Value("string")})
13
+ label_schema: ClassVar[Features] = Features({"summary": Value("string")})
14
+ text_column: str = "text"
15
+ summary_column: str = "summary"
16
+
17
+ @property
18
+ def column_mapping(self) -> Dict[str, str]:
19
+ return {self.text_column: "text", self.summary_column: "summary"}
env-llmeval/lib/python3.10/site-packages/datasets/tasks/text_classification.py ADDED
@@ -0,0 +1,34 @@
1
+ import copy
2
+ from dataclasses import dataclass, field
3
+ from typing import ClassVar, Dict
4
+
5
+ from ..features import ClassLabel, Features, Value
6
+ from .base import TaskTemplate
7
+
8
+
9
+ @dataclass(frozen=True)
10
+ class TextClassification(TaskTemplate):
11
+ # `task` is not a ClassVar since we want it to be part of the `asdict` output for JSON serialization
12
+ task: str = field(default="text-classification", metadata={"include_in_asdict_even_if_is_default": True})
13
+ input_schema: ClassVar[Features] = Features({"text": Value("string")})
14
+ label_schema: ClassVar[Features] = Features({"labels": ClassLabel})
15
+ text_column: str = "text"
16
+ label_column: str = "labels"
17
+
18
+ def align_with_features(self, features):
19
+ if self.label_column not in features:
20
+ raise ValueError(f"Column {self.label_column} is not present in features.")
21
+ if not isinstance(features[self.label_column], ClassLabel):
22
+ raise ValueError(f"Column {self.label_column} is not a ClassLabel.")
23
+ task_template = copy.deepcopy(self)
24
+ label_schema = self.label_schema.copy()
25
+ label_schema["labels"] = features[self.label_column]
26
+ task_template.__dict__["label_schema"] = label_schema
27
+ return task_template
28
+
29
+ @property
30
+ def column_mapping(self) -> Dict[str, str]:
31
+ return {
32
+ self.text_column: "text",
33
+ self.label_column: "labels",
34
+ }
env-llmeval/lib/python3.10/site-packages/dill-0.3.8.dist-info/INSTALLER ADDED
@@ -0,0 +1 @@
1
+ pip
env-llmeval/lib/python3.10/site-packages/dill-0.3.8.dist-info/LICENSE ADDED
@@ -0,0 +1,35 @@
1
+ Copyright (c) 2004-2016 California Institute of Technology.
2
+ Copyright (c) 2016-2024 The Uncertainty Quantification Foundation.
3
+ All rights reserved.
4
+
5
+ This software is available subject to the conditions and terms laid
6
+ out below. By downloading and using this software you are agreeing
7
+ to the following conditions.
8
+
9
+ Redistribution and use in source and binary forms, with or without
10
+ modification, are permitted provided that the following conditions
11
+ are met:
12
+
13
+ - Redistributions of source code must retain the above copyright
14
+ notice, this list of conditions and the following disclaimer.
15
+
16
+ - Redistributions in binary form must reproduce the above copyright
17
+ notice, this list of conditions and the following disclaimer in the
18
+ documentation and/or other materials provided with the distribution.
19
+
20
+ - Neither the names of the copyright holders nor the names of any of
21
+ the contributors may be used to endorse or promote products derived
22
+ from this software without specific prior written permission.
23
+
24
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
25
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
26
+ TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
27
+ PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
28
+ CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
29
+ EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
30
+ PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
31
+ OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
32
+ WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
33
+ OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
34
+ ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
35
+
env-llmeval/lib/python3.10/site-packages/dill-0.3.8.dist-info/METADATA ADDED
@@ -0,0 +1,280 @@
1
+ Metadata-Version: 2.1
2
+ Name: dill
3
+ Version: 0.3.8
4
+ Summary: serialize all of Python
5
+ Home-page: https://github.com/uqfoundation/dill
6
+ Author: Mike McKerns
7
+ Author-email: [email protected]
8
+ Maintainer: Mike McKerns
9
+ Maintainer-email: [email protected]
10
+ License: BSD-3-Clause
11
+ Download-URL: https://pypi.org/project/dill/#files
12
+ Project-URL: Documentation, http://dill.rtfd.io
13
+ Project-URL: Source Code, https://github.com/uqfoundation/dill
14
+ Project-URL: Bug Tracker, https://github.com/uqfoundation/dill/issues
15
+ Platform: Linux
16
+ Platform: Windows
17
+ Platform: Mac
18
+ Classifier: Development Status :: 5 - Production/Stable
19
+ Classifier: Intended Audience :: Developers
20
+ Classifier: Intended Audience :: Science/Research
21
+ Classifier: License :: OSI Approved :: BSD License
22
+ Classifier: Programming Language :: Python :: 3
23
+ Classifier: Programming Language :: Python :: 3.8
24
+ Classifier: Programming Language :: Python :: 3.9
25
+ Classifier: Programming Language :: Python :: 3.10
26
+ Classifier: Programming Language :: Python :: 3.11
27
+ Classifier: Programming Language :: Python :: 3.12
28
+ Classifier: Programming Language :: Python :: Implementation :: CPython
29
+ Classifier: Programming Language :: Python :: Implementation :: PyPy
30
+ Classifier: Topic :: Scientific/Engineering
31
+ Classifier: Topic :: Software Development
32
+ Requires-Python: >=3.8
33
+ Provides-Extra: graph
34
+ Requires-Dist: objgraph (>=1.7.2) ; extra == 'graph'
35
+ Provides-Extra: profile
36
+ Requires-Dist: gprof2dot (>=2022.7.29) ; extra == 'profile'
37
+ Provides-Extra: readline
38
+
39
+ -----------------------------
40
+ dill: serialize all of Python
41
+ -----------------------------
42
+
43
+ About Dill
44
+ ==========
45
+
46
+ ``dill`` extends Python's ``pickle`` module for serializing and de-serializing
47
+ Python objects to the majority of the built-in Python types. Serialization
48
+ is the process of converting an object to a byte stream, and the inverse
49
+ of which is converting a byte stream back to a Python object hierarchy.
50
+
51
+ ``dill`` provides the user the same interface as the ``pickle`` module, and
52
+ also includes some additional features. In addition to pickling Python
53
+ objects, ``dill`` provides the ability to save the state of an interpreter
54
+ session in a single command. Hence, it would be feasible to save an
55
+ interpreter session, close the interpreter, ship the pickled file to
56
+ another computer, open a new interpreter, unpickle the session and
57
+ thus continue from the 'saved' state of the original interpreter
58
+ session.
59
+
60
+ ``dill`` can be used to store Python objects to a file, but the primary
61
+ usage is to send Python objects across the network as a byte stream.
62
+ ``dill`` is quite flexible, and allows arbitrary user defined classes
63
+ and functions to be serialized. Thus ``dill`` is not intended to be
64
+ secure against erroneously or maliciously constructed data. It is
65
+ left to the user to decide whether the data they unpickle is from
66
+ a trustworthy source.
67
+
68
+ ``dill`` is part of ``pathos``, a Python framework for heterogeneous computing.
69
+ ``dill`` is in active development, so any user feedback, bug reports, comments,
70
+ or suggestions are highly appreciated. A list of issues is located at
71
+ https://github.com/uqfoundation/dill/issues, with a legacy list maintained at
72
+ https://uqfoundation.github.io/project/pathos/query.
73
+
74
+
75
+ Major Features
76
+ ==============
77
+
78
+ ``dill`` can pickle the following standard types:
79
+
80
+ - none, type, bool, int, float, complex, bytes, str,
81
+ - tuple, list, dict, file, buffer, builtin,
82
+ - Python classes, namedtuples, dataclasses, metaclasses,
83
+ - instances of classes,
84
+ - set, frozenset, array, functions, exceptions
85
+
86
+ ``dill`` can also pickle more 'exotic' standard types:
87
+
88
+ - functions with yields, nested functions, lambdas,
89
+ - cell, method, unboundmethod, module, code, methodwrapper,
90
+ - methoddescriptor, getsetdescriptor, memberdescriptor, wrapperdescriptor,
91
+ - dictproxy, slice, notimplemented, ellipsis, quit
92
+
93
+ ``dill`` cannot yet pickle these standard types:
94
+
95
+ - frame, generator, traceback
96
+
97
+ ``dill`` also provides the capability to:
98
+
99
+ - save and load Python interpreter sessions
100
+ - save and extract the source code from functions and classes
101
+ - interactively diagnose pickling errors
102
+
103
+
104
+ Current Release
105
+ ===============
106
+
107
+ The latest released version of ``dill`` is available from:
108
+
109
+ https://pypi.org/project/dill
110
+
111
+ ``dill`` is distributed under a 3-clause BSD license.
112
+
113
+
114
+ Development Version
115
+ ===================
116
+
117
+ You can get the latest development version with all the shiny new features at:
118
+
119
+ https://github.com/uqfoundation
120
+
121
+ If you have a new contribution, please submit a pull request.
122
+
123
+
124
+ Installation
125
+ ============
126
+
127
+ ``dill`` can be installed with ``pip``::
128
+
129
+ $ pip install dill
130
+
131
+ To optionally include the ``objgraph`` diagnostic tool in the install::
132
+
133
+ $ pip install dill[graph]
134
+
135
+ To optionally include the ``gprof2dot`` diagnostic tool in the install::
136
+
137
+ $ pip install dill[profile]
138
+
139
+ For windows users, to optionally install session history tools::
140
+
141
+ $ pip install dill[readline]
142
+
143
+
144
+ Requirements
145
+ ============
146
+
147
+ ``dill`` requires:
148
+
149
+ - ``python`` (or ``pypy``), **>=3.8**
150
+ - ``setuptools``, **>=42**
151
+
152
+ Optional requirements:
153
+
154
+ - ``objgraph``, **>=1.7.2**
155
+ - ``gprof2dot``, **>=2022.7.29**
156
+ - ``pyreadline``, **>=1.7.1** (on windows)
157
+
158
+
159
+ Basic Usage
160
+ ===========
161
+
162
+ ``dill`` is a drop-in replacement for ``pickle``. Existing code can be
163
+ updated to allow complete pickling using::
164
+
165
+ >>> import dill as pickle
166
+
167
+ or::
168
+
169
+ >>> from dill import dumps, loads
170
+
171
+ ``dumps`` converts the object to a unique byte string, and ``loads`` performs
172
+ the inverse operation::
173
+
174
+ >>> squared = lambda x: x**2
175
+ >>> loads(dumps(squared))(3)
176
+ 9
177
+
178
+ There are a number of options to control serialization which are provided
179
+ as keyword arguments to several ``dill`` functions:
180
+
181
+ * with *protocol*, the pickle protocol level can be set. This uses the
182
+ same value as the ``pickle`` module, *DEFAULT_PROTOCOL*.
183
+ * with *byref=True*, ``dill`` behaves a lot more like pickle, with
184
+ certain objects (like modules) pickled by reference as opposed to
185
+ attempting to pickle the object itself.
186
+ * with *recurse=True*, objects referred to in the global dictionary are
187
+ recursively traced and pickled, instead of the default behavior of
188
+ attempting to store the entire global dictionary.
189
+ * with *fmode*, the contents of the file can be pickled along with the file
190
+ handle, which is useful if the object is being sent over the wire to a
191
+ remote system which does not have the original file on disk. Options are
192
+ *HANDLE_FMODE* for just the handle, *CONTENTS_FMODE* for the file content
193
+ and *FILE_FMODE* for content and handle.
194
+ * with *ignore=False*, objects reconstructed with types defined in the
195
+ top-level script environment use the existing type in the environment
196
+ rather than a possibly different reconstructed type.
197
+
198
+ The default serialization can also be set globally in *dill.settings*.
199
+ Thus, we can modify how ``dill`` handles references to the global dictionary
200
+ locally or globally::
201
+
202
+ >>> import dill.settings
203
+ >>> dumps(absolute) == dumps(absolute, recurse=True)
204
+ False
205
+ >>> dill.settings['recurse'] = True
206
+ >>> dumps(absolute) == dumps(absolute, recurse=True)
207
+ True
208
+
209
+ ``dill`` also includes source code inspection, as an alternate to pickling::
210
+
211
+ >>> import dill.source
212
+ >>> print(dill.source.getsource(squared))
213
+ squared = lambda x:x**2
214
+
215
+ To aid in debugging pickling issues, use *dill.detect* which provides
216
+ tools like pickle tracing::
217
+
218
+ >>> import dill.detect
219
+ >>> with dill.detect.trace():
220
+ >>> dumps(squared)
221
+ ┬ F1: <function <lambda> at 0x7fe074f8c280>
222
+ β”œβ”¬ F2: <function _create_function at 0x7fe074c49c10>
223
+ β”‚β”” # F2 [34 B]
224
+ β”œβ”¬ Co: <code object <lambda> at 0x7fe07501eb30, file "<stdin>", line 1>
225
+ β”‚β”œβ”¬ F2: <function _create_code at 0x7fe074c49ca0>
226
+ β”‚β”‚β”” # F2 [19 B]
227
+ β”‚β”” # Co [87 B]
228
+ β”œβ”¬ D1: <dict object at 0x7fe0750d4680>
229
+ β”‚β”” # D1 [22 B]
230
+ β”œβ”¬ D2: <dict object at 0x7fe074c5a1c0>
231
+ β”‚β”” # D2 [2 B]
232
+ β”œβ”¬ D2: <dict object at 0x7fe074f903c0>
233
+ β”‚β”œβ”¬ D2: <dict object at 0x7fe074f8ebc0>
234
+ β”‚β”‚β”” # D2 [2 B]
235
+ β”‚β”” # D2 [23 B]
236
+ β”” # F1 [180 B]
237
+
238
+ With trace, we see how ``dill`` stored the lambda (``F1``) by first storing
239
+ ``_create_function``, the underlying code object (``Co``) and ``_create_code``
240
+ (which is used to handle code objects), then we handle the reference to
241
+ the global dict (``D2``) plus other dictionaries (``D1`` and ``D2``) that
242
+ save the lambda object's state. A ``#`` marks when the object is actually stored.
243
+
244
+
245
+ More Information
246
+ ================
247
+
248
+ Probably the best way to get started is to look at the documentation at
249
+ http://dill.rtfd.io. Also see ``dill.tests`` for a set of scripts that
250
+ demonstrate how ``dill`` can serialize different Python objects. You can
251
+ run the test suite with ``python -m dill.tests``. The contents of any
252
+ pickle file can be examined with ``undill``. As ``dill`` conforms to
253
+ the ``pickle`` interface, the examples and documentation found at
254
+ http://docs.python.org/library/pickle.html also apply to ``dill``
255
+ if one will ``import dill as pickle``. The source code is also generally
256
+ well documented, so further questions may be resolved by inspecting the
257
+ code itself. Please feel free to submit a ticket on github, or ask a
258
+ question on stackoverflow (**@Mike McKerns**).
259
+ If you would like to share how you use ``dill`` in your work, please send
260
+ an email (to **mmckerns at uqfoundation dot org**).
261
+
262
+
263
+ Citation
264
+ ========
265
+
266
+ If you use ``dill`` to do research that leads to publication, we ask that you
267
+ acknowledge use of ``dill`` by citing the following in your publication::
268
+
269
+ M.M. McKerns, L. Strand, T. Sullivan, A. Fang, M.A.G. Aivazis,
270
+ "Building a framework for predictive science", Proceedings of
271
+ the 10th Python in Science Conference, 2011;
272
+ http://arxiv.org/pdf/1202.1056
273
+
274
+ Michael McKerns and Michael Aivazis,
275
+ "pathos: a framework for heterogeneous computing", 2010- ;
276
+ https://uqfoundation.github.io/project/pathos
277
+
278
+ Please see https://uqfoundation.github.io/project/pathos or
279
+ http://arxiv.org/pdf/1202.1056 for further information.
280
+
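To complement the option list in the METADATA above, a minimal sketch (not part of the packaged file) of passing those serialization options explicitly to `dill`, assuming dill 0.3.8 as installed by this commit; the function being pickled is made up.

    import dill

    def add_one(x):
        return x + 1

    # protocol, byref and recurse mirror the keyword options documented above.
    payload = dill.dumps(add_one, protocol=dill.DEFAULT_PROTOCOL, byref=False, recurse=True)
    restored = dill.loads(payload)
    print(restored(41))  # 42

    # The same behavior can also be switched on globally instead of per call.
    dill.settings["recurse"] = True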