applied-ai-018 commited on
Commit
d42a351
·
verified ·
1 Parent(s): e2f0b14

Add files using upload-large-folder tool

Browse files
This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50) hide show
  1. ckpts/universal/global_step40/zero/23.attention.dense.weight/exp_avg.pt +3 -0
  2. ckpts/universal/global_step40/zero/23.mlp.dense_4h_to_h.weight/exp_avg.pt +3 -0
  3. ckpts/universal/global_step40/zero/23.mlp.dense_4h_to_h.weight/exp_avg_sq.pt +3 -0
  4. ckpts/universal/global_step40/zero/23.mlp.dense_4h_to_h.weight/fp32.pt +3 -0
  5. ckpts/universal/global_step40/zero/24.mlp.dense_h_to_4h.weight/exp_avg.pt +3 -0
  6. ckpts/universal/global_step40/zero/24.mlp.dense_h_to_4h.weight/exp_avg_sq.pt +3 -0
  7. ckpts/universal/global_step40/zero/24.mlp.dense_h_to_4h.weight/fp32.pt +3 -0
  8. ckpts/universal/global_step40/zero/25.input_layernorm.weight/exp_avg_sq.pt +3 -0
  9. ckpts/universal/global_step40/zero/25.mlp.dense_4h_to_h.weight/exp_avg.pt +3 -0
  10. ckpts/universal/global_step40/zero/25.mlp.dense_4h_to_h.weight/exp_avg_sq.pt +3 -0
  11. ckpts/universal/global_step40/zero/25.mlp.dense_4h_to_h.weight/fp32.pt +3 -0
  12. ckpts/universal/global_step40/zero/4.attention.dense.weight/exp_avg.pt +3 -0
  13. ckpts/universal/global_step40/zero/4.attention.dense.weight/exp_avg_sq.pt +3 -0
  14. ckpts/universal/global_step40/zero/4.attention.dense.weight/fp32.pt +3 -0
  15. venv/lib/python3.10/site-packages/sklearn/datasets/_arff_parser.py +542 -0
  16. venv/lib/python3.10/site-packages/sklearn/datasets/_base.py +1441 -0
  17. venv/lib/python3.10/site-packages/sklearn/datasets/_covtype.py +236 -0
  18. venv/lib/python3.10/site-packages/sklearn/datasets/_olivetti_faces.py +156 -0
  19. venv/lib/python3.10/site-packages/sklearn/datasets/_openml.py +1158 -0
  20. venv/lib/python3.10/site-packages/sklearn/datasets/_twenty_newsgroups.py +561 -0
  21. venv/lib/python3.10/site-packages/sklearn/datasets/images/README.txt +21 -0
  22. venv/lib/python3.10/site-packages/sklearn/datasets/images/__init__.py +0 -0
  23. venv/lib/python3.10/site-packages/sklearn/datasets/images/__pycache__/__init__.cpython-310.pyc +0 -0
  24. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__init__.py +0 -0
  25. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/__init__.cpython-310.pyc +0 -0
  26. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_20news.cpython-310.pyc +0 -0
  27. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_arff_parser.cpython-310.pyc +0 -0
  28. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_base.cpython-310.pyc +0 -0
  29. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_california_housing.cpython-310.pyc +0 -0
  30. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_common.cpython-310.pyc +0 -0
  31. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_covtype.cpython-310.pyc +0 -0
  32. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_kddcup99.cpython-310.pyc +0 -0
  33. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_lfw.cpython-310.pyc +0 -0
  34. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_olivetti_faces.cpython-310.pyc +0 -0
  35. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_openml.cpython-310.pyc +0 -0
  36. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_rcv1.cpython-310.pyc +0 -0
  37. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_samples_generator.cpython-310.pyc +0 -0
  38. venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_svmlight_format.cpython-310.pyc +0 -0
  39. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/__init__.py +0 -0
  40. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/__pycache__/__init__.cpython-310.pyc +0 -0
  41. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/__init__.py +0 -0
  42. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/__pycache__/__init__.cpython-310.pyc +0 -0
  43. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1/__init__.py +0 -0
  44. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1/__pycache__/__init__.cpython-310.pyc +0 -0
  45. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1119/__init__.py +0 -0
  46. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1119/__pycache__/__init__.cpython-310.pyc +0 -0
  47. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1590/__init__.py +0 -0
  48. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1590/__pycache__/__init__.cpython-310.pyc +0 -0
  49. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_2/__init__.py +0 -0
  50. venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_2/__pycache__/__init__.cpython-310.pyc +0 -0
ckpts/universal/global_step40/zero/23.attention.dense.weight/exp_avg.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa9f765aa7814461dbecc370b8515288885948e8afbbbbe4b565514bee1d1481
3
+ size 16778396
ckpts/universal/global_step40/zero/23.mlp.dense_4h_to_h.weight/exp_avg.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:41075e102137caa1e8745c571f82083723c462bf9ff1249ca75595f67f475897
3
+ size 33555612
ckpts/universal/global_step40/zero/23.mlp.dense_4h_to_h.weight/exp_avg_sq.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:48f20aad6c18db2d99498dd71e920c8bb03025779589319272f087c6248d2453
3
+ size 33555627
ckpts/universal/global_step40/zero/23.mlp.dense_4h_to_h.weight/fp32.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e31ca62821543e3cbbac929cd9187fa3c82f07a9a41942a3f2dd8698ddc209eb
3
+ size 33555533
ckpts/universal/global_step40/zero/24.mlp.dense_h_to_4h.weight/exp_avg.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4534d085bc382985eb6cd40e84a61a5079e049be0ea2c05e40cd890f0eda71cd
3
+ size 33555612
ckpts/universal/global_step40/zero/24.mlp.dense_h_to_4h.weight/exp_avg_sq.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ef8be073710cc5ec015a0dcd6891e7ed954253af39e802e1844bae546db26dc6
3
+ size 33555627
ckpts/universal/global_step40/zero/24.mlp.dense_h_to_4h.weight/fp32.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c82ab44226b2005a8b55040ac6cac7a56f4cbb9344f4a332697041b457d67478
3
+ size 33555533
ckpts/universal/global_step40/zero/25.input_layernorm.weight/exp_avg_sq.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8543fd9fe7fff423557d08ddedc20c96ddda4fd6e4db89e9f4848bf7342e1898
3
+ size 9387
ckpts/universal/global_step40/zero/25.mlp.dense_4h_to_h.weight/exp_avg.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8621b78afe936756a9952fa31d3ce1f3f0cc64ac84d1e0b4f3ac2ba23f89fee6
3
+ size 33555612
ckpts/universal/global_step40/zero/25.mlp.dense_4h_to_h.weight/exp_avg_sq.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3db1a383ebabc03297f2d5f8d907b1185c656124131e1fd7fdb34465d6decddc
3
+ size 33555627
ckpts/universal/global_step40/zero/25.mlp.dense_4h_to_h.weight/fp32.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c0a3307676c70e489cc33ee5db5e70a3ea50c1c3be1e49668a31a330c32f1811
3
+ size 33555533
ckpts/universal/global_step40/zero/4.attention.dense.weight/exp_avg.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b77231c886bbd3f915b506fcb52525d543f3bbf983ec097d70106ecc3cb270a3
3
+ size 16778396
ckpts/universal/global_step40/zero/4.attention.dense.weight/exp_avg_sq.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:13ce3d24964147d1243e591e91c30aa056e0015a6b6d43de7a5678538af3a9fa
3
+ size 16778411
ckpts/universal/global_step40/zero/4.attention.dense.weight/fp32.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:212a1c16128c00a974a895b7ce9ac104655436f387808b79dfe1154da1d5ed19
3
+ size 16778317
venv/lib/python3.10/site-packages/sklearn/datasets/_arff_parser.py ADDED
@@ -0,0 +1,542 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """Implementation of ARFF parsers: via LIAC-ARFF and pandas."""
2
+ import itertools
3
+ import re
4
+ from collections import OrderedDict
5
+ from collections.abc import Generator
6
+ from typing import List
7
+
8
+ import numpy as np
9
+ import scipy as sp
10
+
11
+ from ..externals import _arff
12
+ from ..externals._arff import ArffSparseDataType
13
+ from ..utils import (
14
+ _chunk_generator,
15
+ check_pandas_support,
16
+ get_chunk_n_rows,
17
+ )
18
+ from ..utils.fixes import pd_fillna
19
+
20
+
21
+ def _split_sparse_columns(
22
+ arff_data: ArffSparseDataType, include_columns: List
23
+ ) -> ArffSparseDataType:
24
+ """Obtains several columns from sparse ARFF representation. Additionally,
25
+ the column indices are re-labelled, given the columns that are not
26
+ included. (e.g., when including [1, 2, 3], the columns will be relabelled
27
+ to [0, 1, 2]).
28
+
29
+ Parameters
30
+ ----------
31
+ arff_data : tuple
32
+ A tuple of three lists of equal size; first list indicating the value,
33
+ second the x coordinate and the third the y coordinate.
34
+
35
+ include_columns : list
36
+ A list of columns to include.
37
+
38
+ Returns
39
+ -------
40
+ arff_data_new : tuple
41
+ Subset of arff data with only the include columns indicated by the
42
+ include_columns argument.
43
+ """
44
+ arff_data_new: ArffSparseDataType = (list(), list(), list())
45
+ reindexed_columns = {
46
+ column_idx: array_idx for array_idx, column_idx in enumerate(include_columns)
47
+ }
48
+ for val, row_idx, col_idx in zip(arff_data[0], arff_data[1], arff_data[2]):
49
+ if col_idx in include_columns:
50
+ arff_data_new[0].append(val)
51
+ arff_data_new[1].append(row_idx)
52
+ arff_data_new[2].append(reindexed_columns[col_idx])
53
+ return arff_data_new
54
+
55
+
56
+ def _sparse_data_to_array(
57
+ arff_data: ArffSparseDataType, include_columns: List
58
+ ) -> np.ndarray:
59
+ # turns the sparse data back into an array (can't use toarray() function,
60
+ # as this does only work on numeric data)
61
+ num_obs = max(arff_data[1]) + 1
62
+ y_shape = (num_obs, len(include_columns))
63
+ reindexed_columns = {
64
+ column_idx: array_idx for array_idx, column_idx in enumerate(include_columns)
65
+ }
66
+ # TODO: improve for efficiency
67
+ y = np.empty(y_shape, dtype=np.float64)
68
+ for val, row_idx, col_idx in zip(arff_data[0], arff_data[1], arff_data[2]):
69
+ if col_idx in include_columns:
70
+ y[row_idx, reindexed_columns[col_idx]] = val
71
+ return y
72
+
73
+
74
+ def _post_process_frame(frame, feature_names, target_names):
75
+ """Post process a dataframe to select the desired columns in `X` and `y`.
76
+
77
+ Parameters
78
+ ----------
79
+ frame : dataframe
80
+ The dataframe to split into `X` and `y`.
81
+
82
+ feature_names : list of str
83
+ The list of feature names to populate `X`.
84
+
85
+ target_names : list of str
86
+ The list of target names to populate `y`.
87
+
88
+ Returns
89
+ -------
90
+ X : dataframe
91
+ The dataframe containing the features.
92
+
93
+ y : {series, dataframe} or None
94
+ The series or dataframe containing the target.
95
+ """
96
+ X = frame[feature_names]
97
+ if len(target_names) >= 2:
98
+ y = frame[target_names]
99
+ elif len(target_names) == 1:
100
+ y = frame[target_names[0]]
101
+ else:
102
+ y = None
103
+ return X, y
104
+
105
+
106
+ def _liac_arff_parser(
107
+ gzip_file,
108
+ output_arrays_type,
109
+ openml_columns_info,
110
+ feature_names_to_select,
111
+ target_names_to_select,
112
+ shape=None,
113
+ ):
114
+ """ARFF parser using the LIAC-ARFF library coded purely in Python.
115
+
116
+ This parser is quite slow but consumes a generator. Currently it is needed
117
+ to parse sparse datasets. For dense datasets, it is recommended to instead
118
+ use the pandas-based parser, although it does not always handles the
119
+ dtypes exactly the same.
120
+
121
+ Parameters
122
+ ----------
123
+ gzip_file : GzipFile instance
124
+ The file compressed to be read.
125
+
126
+ output_arrays_type : {"numpy", "sparse", "pandas"}
127
+ The type of the arrays that will be returned. The possibilities ara:
128
+
129
+ - `"numpy"`: both `X` and `y` will be NumPy arrays;
130
+ - `"sparse"`: `X` will be sparse matrix and `y` will be a NumPy array;
131
+ - `"pandas"`: `X` will be a pandas DataFrame and `y` will be either a
132
+ pandas Series or DataFrame.
133
+
134
+ columns_info : dict
135
+ The information provided by OpenML regarding the columns of the ARFF
136
+ file.
137
+
138
+ feature_names_to_select : list of str
139
+ A list of the feature names to be selected.
140
+
141
+ target_names_to_select : list of str
142
+ A list of the target names to be selected.
143
+
144
+ Returns
145
+ -------
146
+ X : {ndarray, sparse matrix, dataframe}
147
+ The data matrix.
148
+
149
+ y : {ndarray, dataframe, series}
150
+ The target.
151
+
152
+ frame : dataframe or None
153
+ A dataframe containing both `X` and `y`. `None` if
154
+ `output_array_type != "pandas"`.
155
+
156
+ categories : list of str or None
157
+ The names of the features that are categorical. `None` if
158
+ `output_array_type == "pandas"`.
159
+ """
160
+
161
+ def _io_to_generator(gzip_file):
162
+ for line in gzip_file:
163
+ yield line.decode("utf-8")
164
+
165
+ stream = _io_to_generator(gzip_file)
166
+
167
+ # find which type (dense or sparse) ARFF type we will have to deal with
168
+ return_type = _arff.COO if output_arrays_type == "sparse" else _arff.DENSE_GEN
169
+ # we should not let LIAC-ARFF to encode the nominal attributes with NumPy
170
+ # arrays to have only numerical values.
171
+ encode_nominal = not (output_arrays_type == "pandas")
172
+ arff_container = _arff.load(
173
+ stream, return_type=return_type, encode_nominal=encode_nominal
174
+ )
175
+ columns_to_select = feature_names_to_select + target_names_to_select
176
+
177
+ categories = {
178
+ name: cat
179
+ for name, cat in arff_container["attributes"]
180
+ if isinstance(cat, list) and name in columns_to_select
181
+ }
182
+ if output_arrays_type == "pandas":
183
+ pd = check_pandas_support("fetch_openml with as_frame=True")
184
+
185
+ columns_info = OrderedDict(arff_container["attributes"])
186
+ columns_names = list(columns_info.keys())
187
+
188
+ # calculate chunksize
189
+ first_row = next(arff_container["data"])
190
+ first_df = pd.DataFrame([first_row], columns=columns_names, copy=False)
191
+
192
+ row_bytes = first_df.memory_usage(deep=True).sum()
193
+ chunksize = get_chunk_n_rows(row_bytes)
194
+
195
+ # read arff data with chunks
196
+ columns_to_keep = [col for col in columns_names if col in columns_to_select]
197
+ dfs = [first_df[columns_to_keep]]
198
+ for data in _chunk_generator(arff_container["data"], chunksize):
199
+ dfs.append(
200
+ pd.DataFrame(data, columns=columns_names, copy=False)[columns_to_keep]
201
+ )
202
+ # dfs[0] contains only one row, which may not have enough data to infer to
203
+ # column's dtype. Here we use `dfs[1]` to configure the dtype in dfs[0]
204
+ if len(dfs) >= 2:
205
+ dfs[0] = dfs[0].astype(dfs[1].dtypes)
206
+
207
+ # liac-arff parser does not depend on NumPy and uses None to represent
208
+ # missing values. To be consistent with the pandas parser, we replace
209
+ # None with np.nan.
210
+ frame = pd.concat(dfs, ignore_index=True)
211
+ frame = pd_fillna(pd, frame)
212
+ del dfs, first_df
213
+
214
+ # cast the columns frame
215
+ dtypes = {}
216
+ for name in frame.columns:
217
+ column_dtype = openml_columns_info[name]["data_type"]
218
+ if column_dtype.lower() == "integer":
219
+ # Use a pandas extension array instead of np.int64 to be able
220
+ # to support missing values.
221
+ dtypes[name] = "Int64"
222
+ elif column_dtype.lower() == "nominal":
223
+ dtypes[name] = "category"
224
+ else:
225
+ dtypes[name] = frame.dtypes[name]
226
+ frame = frame.astype(dtypes)
227
+
228
+ X, y = _post_process_frame(
229
+ frame, feature_names_to_select, target_names_to_select
230
+ )
231
+ else:
232
+ arff_data = arff_container["data"]
233
+
234
+ feature_indices_to_select = [
235
+ int(openml_columns_info[col_name]["index"])
236
+ for col_name in feature_names_to_select
237
+ ]
238
+ target_indices_to_select = [
239
+ int(openml_columns_info[col_name]["index"])
240
+ for col_name in target_names_to_select
241
+ ]
242
+
243
+ if isinstance(arff_data, Generator):
244
+ if shape is None:
245
+ raise ValueError(
246
+ "shape must be provided when arr['data'] is a Generator"
247
+ )
248
+ if shape[0] == -1:
249
+ count = -1
250
+ else:
251
+ count = shape[0] * shape[1]
252
+ data = np.fromiter(
253
+ itertools.chain.from_iterable(arff_data),
254
+ dtype="float64",
255
+ count=count,
256
+ )
257
+ data = data.reshape(*shape)
258
+ X = data[:, feature_indices_to_select]
259
+ y = data[:, target_indices_to_select]
260
+ elif isinstance(arff_data, tuple):
261
+ arff_data_X = _split_sparse_columns(arff_data, feature_indices_to_select)
262
+ num_obs = max(arff_data[1]) + 1
263
+ X_shape = (num_obs, len(feature_indices_to_select))
264
+ X = sp.sparse.coo_matrix(
265
+ (arff_data_X[0], (arff_data_X[1], arff_data_X[2])),
266
+ shape=X_shape,
267
+ dtype=np.float64,
268
+ )
269
+ X = X.tocsr()
270
+ y = _sparse_data_to_array(arff_data, target_indices_to_select)
271
+ else:
272
+ # This should never happen
273
+ raise ValueError(
274
+ f"Unexpected type for data obtained from arff: {type(arff_data)}"
275
+ )
276
+
277
+ is_classification = {
278
+ col_name in categories for col_name in target_names_to_select
279
+ }
280
+ if not is_classification:
281
+ # No target
282
+ pass
283
+ elif all(is_classification):
284
+ y = np.hstack(
285
+ [
286
+ np.take(
287
+ np.asarray(categories.pop(col_name), dtype="O"),
288
+ y[:, i : i + 1].astype(int, copy=False),
289
+ )
290
+ for i, col_name in enumerate(target_names_to_select)
291
+ ]
292
+ )
293
+ elif any(is_classification):
294
+ raise ValueError(
295
+ "Mix of nominal and non-nominal targets is not currently supported"
296
+ )
297
+
298
+ # reshape y back to 1-D array, if there is only 1 target column;
299
+ # back to None if there are not target columns
300
+ if y.shape[1] == 1:
301
+ y = y.reshape((-1,))
302
+ elif y.shape[1] == 0:
303
+ y = None
304
+
305
+ if output_arrays_type == "pandas":
306
+ return X, y, frame, None
307
+ return X, y, None, categories
308
+
309
+
310
+ def _pandas_arff_parser(
311
+ gzip_file,
312
+ output_arrays_type,
313
+ openml_columns_info,
314
+ feature_names_to_select,
315
+ target_names_to_select,
316
+ read_csv_kwargs=None,
317
+ ):
318
+ """ARFF parser using `pandas.read_csv`.
319
+
320
+ This parser uses the metadata fetched directly from OpenML and skips the metadata
321
+ headers of ARFF file itself. The data is loaded as a CSV file.
322
+
323
+ Parameters
324
+ ----------
325
+ gzip_file : GzipFile instance
326
+ The GZip compressed file with the ARFF formatted payload.
327
+
328
+ output_arrays_type : {"numpy", "sparse", "pandas"}
329
+ The type of the arrays that will be returned. The possibilities are:
330
+
331
+ - `"numpy"`: both `X` and `y` will be NumPy arrays;
332
+ - `"sparse"`: `X` will be sparse matrix and `y` will be a NumPy array;
333
+ - `"pandas"`: `X` will be a pandas DataFrame and `y` will be either a
334
+ pandas Series or DataFrame.
335
+
336
+ openml_columns_info : dict
337
+ The information provided by OpenML regarding the columns of the ARFF
338
+ file.
339
+
340
+ feature_names_to_select : list of str
341
+ A list of the feature names to be selected to build `X`.
342
+
343
+ target_names_to_select : list of str
344
+ A list of the target names to be selected to build `y`.
345
+
346
+ read_csv_kwargs : dict, default=None
347
+ Keyword arguments to pass to `pandas.read_csv`. It allows to overwrite
348
+ the default options.
349
+
350
+ Returns
351
+ -------
352
+ X : {ndarray, sparse matrix, dataframe}
353
+ The data matrix.
354
+
355
+ y : {ndarray, dataframe, series}
356
+ The target.
357
+
358
+ frame : dataframe or None
359
+ A dataframe containing both `X` and `y`. `None` if
360
+ `output_array_type != "pandas"`.
361
+
362
+ categories : list of str or None
363
+ The names of the features that are categorical. `None` if
364
+ `output_array_type == "pandas"`.
365
+ """
366
+ import pandas as pd
367
+
368
+ # read the file until the data section to skip the ARFF metadata headers
369
+ for line in gzip_file:
370
+ if line.decode("utf-8").lower().startswith("@data"):
371
+ break
372
+
373
+ dtypes = {}
374
+ for name in openml_columns_info:
375
+ column_dtype = openml_columns_info[name]["data_type"]
376
+ if column_dtype.lower() == "integer":
377
+ # Use Int64 to infer missing values from data
378
+ # XXX: this line is not covered by our tests. Is this really needed?
379
+ dtypes[name] = "Int64"
380
+ elif column_dtype.lower() == "nominal":
381
+ dtypes[name] = "category"
382
+ # since we will not pass `names` when reading the ARFF file, we need to translate
383
+ # `dtypes` from column names to column indices to pass to `pandas.read_csv`
384
+ dtypes_positional = {
385
+ col_idx: dtypes[name]
386
+ for col_idx, name in enumerate(openml_columns_info)
387
+ if name in dtypes
388
+ }
389
+
390
+ default_read_csv_kwargs = {
391
+ "header": None,
392
+ "index_col": False, # always force pandas to not use the first column as index
393
+ "na_values": ["?"], # missing values are represented by `?`
394
+ "keep_default_na": False, # only `?` is a missing value given the ARFF specs
395
+ "comment": "%", # skip line starting by `%` since they are comments
396
+ "quotechar": '"', # delimiter to use for quoted strings
397
+ "skipinitialspace": True, # skip spaces after delimiter to follow ARFF specs
398
+ "escapechar": "\\",
399
+ "dtype": dtypes_positional,
400
+ }
401
+ read_csv_kwargs = {**default_read_csv_kwargs, **(read_csv_kwargs or {})}
402
+ frame = pd.read_csv(gzip_file, **read_csv_kwargs)
403
+ try:
404
+ # Setting the columns while reading the file will select the N first columns
405
+ # and not raise a ParserError. Instead, we set the columns after reading the
406
+ # file and raise a ParserError if the number of columns does not match the
407
+ # number of columns in the metadata given by OpenML.
408
+ frame.columns = [name for name in openml_columns_info]
409
+ except ValueError as exc:
410
+ raise pd.errors.ParserError(
411
+ "The number of columns provided by OpenML does not match the number of "
412
+ "columns inferred by pandas when reading the file."
413
+ ) from exc
414
+
415
+ columns_to_select = feature_names_to_select + target_names_to_select
416
+ columns_to_keep = [col for col in frame.columns if col in columns_to_select]
417
+ frame = frame[columns_to_keep]
418
+
419
+ # `pd.read_csv` automatically handles double quotes for quoting non-numeric
420
+ # CSV cell values. Contrary to LIAC-ARFF, `pd.read_csv` cannot be configured to
421
+ # consider either single quotes and double quotes as valid quoting chars at
422
+ # the same time since this case does not occur in regular (non-ARFF) CSV files.
423
+ # To mimic the behavior of LIAC-ARFF parser, we manually strip single quotes
424
+ # on categories as a post-processing steps if needed.
425
+ #
426
+ # Note however that we intentionally do not attempt to do this kind of manual
427
+ # post-processing of (non-categorical) string-typed columns because we cannot
428
+ # resolve the ambiguity of the case of CSV cell with nesting quoting such as
429
+ # `"'some string value'"` with pandas.
430
+ single_quote_pattern = re.compile(r"^'(?P<contents>.*)'$")
431
+
432
+ def strip_single_quotes(input_string):
433
+ match = re.search(single_quote_pattern, input_string)
434
+ if match is None:
435
+ return input_string
436
+
437
+ return match.group("contents")
438
+
439
+ categorical_columns = [
440
+ name
441
+ for name, dtype in frame.dtypes.items()
442
+ if isinstance(dtype, pd.CategoricalDtype)
443
+ ]
444
+ for col in categorical_columns:
445
+ frame[col] = frame[col].cat.rename_categories(strip_single_quotes)
446
+
447
+ X, y = _post_process_frame(frame, feature_names_to_select, target_names_to_select)
448
+
449
+ if output_arrays_type == "pandas":
450
+ return X, y, frame, None
451
+ else:
452
+ X, y = X.to_numpy(), y.to_numpy()
453
+
454
+ categories = {
455
+ name: dtype.categories.tolist()
456
+ for name, dtype in frame.dtypes.items()
457
+ if isinstance(dtype, pd.CategoricalDtype)
458
+ }
459
+ return X, y, None, categories
460
+
461
+
462
+ def load_arff_from_gzip_file(
463
+ gzip_file,
464
+ parser,
465
+ output_type,
466
+ openml_columns_info,
467
+ feature_names_to_select,
468
+ target_names_to_select,
469
+ shape=None,
470
+ read_csv_kwargs=None,
471
+ ):
472
+ """Load a compressed ARFF file using a given parser.
473
+
474
+ Parameters
475
+ ----------
476
+ gzip_file : GzipFile instance
477
+ The file compressed to be read.
478
+
479
+ parser : {"pandas", "liac-arff"}
480
+ The parser used to parse the ARFF file. "pandas" is recommended
481
+ but only supports loading dense datasets.
482
+
483
+ output_type : {"numpy", "sparse", "pandas"}
484
+ The type of the arrays that will be returned. The possibilities ara:
485
+
486
+ - `"numpy"`: both `X` and `y` will be NumPy arrays;
487
+ - `"sparse"`: `X` will be sparse matrix and `y` will be a NumPy array;
488
+ - `"pandas"`: `X` will be a pandas DataFrame and `y` will be either a
489
+ pandas Series or DataFrame.
490
+
491
+ openml_columns_info : dict
492
+ The information provided by OpenML regarding the columns of the ARFF
493
+ file.
494
+
495
+ feature_names_to_select : list of str
496
+ A list of the feature names to be selected.
497
+
498
+ target_names_to_select : list of str
499
+ A list of the target names to be selected.
500
+
501
+ read_csv_kwargs : dict, default=None
502
+ Keyword arguments to pass to `pandas.read_csv`. It allows to overwrite
503
+ the default options.
504
+
505
+ Returns
506
+ -------
507
+ X : {ndarray, sparse matrix, dataframe}
508
+ The data matrix.
509
+
510
+ y : {ndarray, dataframe, series}
511
+ The target.
512
+
513
+ frame : dataframe or None
514
+ A dataframe containing both `X` and `y`. `None` if
515
+ `output_array_type != "pandas"`.
516
+
517
+ categories : list of str or None
518
+ The names of the features that are categorical. `None` if
519
+ `output_array_type == "pandas"`.
520
+ """
521
+ if parser == "liac-arff":
522
+ return _liac_arff_parser(
523
+ gzip_file,
524
+ output_type,
525
+ openml_columns_info,
526
+ feature_names_to_select,
527
+ target_names_to_select,
528
+ shape,
529
+ )
530
+ elif parser == "pandas":
531
+ return _pandas_arff_parser(
532
+ gzip_file,
533
+ output_type,
534
+ openml_columns_info,
535
+ feature_names_to_select,
536
+ target_names_to_select,
537
+ read_csv_kwargs,
538
+ )
539
+ else:
540
+ raise ValueError(
541
+ f"Unknown parser: '{parser}'. Should be 'liac-arff' or 'pandas'."
542
+ )
venv/lib/python3.10/site-packages/sklearn/datasets/_base.py ADDED
@@ -0,0 +1,1441 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Base IO code for all datasets
3
+ """
4
+
5
+ # Copyright (c) 2007 David Cournapeau <[email protected]>
6
+ # 2010 Fabian Pedregosa <[email protected]>
7
+ # 2010 Olivier Grisel <[email protected]>
8
+ # License: BSD 3 clause
9
+ import csv
10
+ import gzip
11
+ import hashlib
12
+ import os
13
+ import shutil
14
+ from collections import namedtuple
15
+ from importlib import resources
16
+ from numbers import Integral
17
+ from os import environ, listdir, makedirs
18
+ from os.path import expanduser, isdir, join, splitext
19
+ from pathlib import Path
20
+ from urllib.request import urlretrieve
21
+
22
+ import numpy as np
23
+
24
+ from ..preprocessing import scale
25
+ from ..utils import Bunch, check_pandas_support, check_random_state
26
+ from ..utils._param_validation import Interval, StrOptions, validate_params
27
+
28
+ DATA_MODULE = "sklearn.datasets.data"
29
+ DESCR_MODULE = "sklearn.datasets.descr"
30
+ IMAGES_MODULE = "sklearn.datasets.images"
31
+
32
+ RemoteFileMetadata = namedtuple("RemoteFileMetadata", ["filename", "url", "checksum"])
33
+
34
+
35
+ @validate_params(
36
+ {
37
+ "data_home": [str, os.PathLike, None],
38
+ },
39
+ prefer_skip_nested_validation=True,
40
+ )
41
+ def get_data_home(data_home=None) -> str:
42
+ """Return the path of the scikit-learn data directory.
43
+
44
+ This folder is used by some large dataset loaders to avoid downloading the
45
+ data several times.
46
+
47
+ By default the data directory is set to a folder named 'scikit_learn_data' in the
48
+ user home folder.
49
+
50
+ Alternatively, it can be set by the 'SCIKIT_LEARN_DATA' environment
51
+ variable or programmatically by giving an explicit folder path. The '~'
52
+ symbol is expanded to the user home folder.
53
+
54
+ If the folder does not already exist, it is automatically created.
55
+
56
+ Parameters
57
+ ----------
58
+ data_home : str or path-like, default=None
59
+ The path to scikit-learn data directory. If `None`, the default path
60
+ is `~/scikit_learn_data`.
61
+
62
+ Returns
63
+ -------
64
+ data_home: str
65
+ The path to scikit-learn data directory.
66
+ """
67
+ if data_home is None:
68
+ data_home = environ.get("SCIKIT_LEARN_DATA", join("~", "scikit_learn_data"))
69
+ data_home = expanduser(data_home)
70
+ makedirs(data_home, exist_ok=True)
71
+ return data_home
72
+
73
+
74
+ @validate_params(
75
+ {
76
+ "data_home": [str, os.PathLike, None],
77
+ },
78
+ prefer_skip_nested_validation=True,
79
+ )
80
+ def clear_data_home(data_home=None):
81
+ """Delete all the content of the data home cache.
82
+
83
+ Parameters
84
+ ----------
85
+ data_home : str or path-like, default=None
86
+ The path to scikit-learn data directory. If `None`, the default path
87
+ is `~/scikit_learn_data`.
88
+
89
+ Examples
90
+ --------
91
+ >>> from sklearn.datasets import clear_data_home
92
+ >>> clear_data_home() # doctest: +SKIP
93
+ """
94
+ data_home = get_data_home(data_home)
95
+ shutil.rmtree(data_home)
96
+
97
+
98
+ def _convert_data_dataframe(
99
+ caller_name, data, target, feature_names, target_names, sparse_data=False
100
+ ):
101
+ pd = check_pandas_support("{} with as_frame=True".format(caller_name))
102
+ if not sparse_data:
103
+ data_df = pd.DataFrame(data, columns=feature_names, copy=False)
104
+ else:
105
+ data_df = pd.DataFrame.sparse.from_spmatrix(data, columns=feature_names)
106
+
107
+ target_df = pd.DataFrame(target, columns=target_names)
108
+ combined_df = pd.concat([data_df, target_df], axis=1)
109
+ X = combined_df[feature_names]
110
+ y = combined_df[target_names]
111
+ if y.shape[1] == 1:
112
+ y = y.iloc[:, 0]
113
+ return combined_df, X, y
114
+
115
+
116
+ @validate_params(
117
+ {
118
+ "container_path": [str, os.PathLike],
119
+ "description": [str, None],
120
+ "categories": [list, None],
121
+ "load_content": ["boolean"],
122
+ "shuffle": ["boolean"],
123
+ "encoding": [str, None],
124
+ "decode_error": [StrOptions({"strict", "ignore", "replace"})],
125
+ "random_state": ["random_state"],
126
+ "allowed_extensions": [list, None],
127
+ },
128
+ prefer_skip_nested_validation=True,
129
+ )
130
+ def load_files(
131
+ container_path,
132
+ *,
133
+ description=None,
134
+ categories=None,
135
+ load_content=True,
136
+ shuffle=True,
137
+ encoding=None,
138
+ decode_error="strict",
139
+ random_state=0,
140
+ allowed_extensions=None,
141
+ ):
142
+ """Load text files with categories as subfolder names.
143
+
144
+ Individual samples are assumed to be files stored a two levels folder
145
+ structure such as the following:
146
+
147
+ container_folder/
148
+ category_1_folder/
149
+ file_1.txt
150
+ file_2.txt
151
+ ...
152
+ file_42.txt
153
+ category_2_folder/
154
+ file_43.txt
155
+ file_44.txt
156
+ ...
157
+
158
+ The folder names are used as supervised signal label names. The individual
159
+ file names are not important.
160
+
161
+ This function does not try to extract features into a numpy array or scipy
162
+ sparse matrix. In addition, if load_content is false it does not try to
163
+ load the files in memory.
164
+
165
+ To use text files in a scikit-learn classification or clustering algorithm,
166
+ you will need to use the :mod:`~sklearn.feature_extraction.text` module to
167
+ build a feature extraction transformer that suits your problem.
168
+
169
+ If you set load_content=True, you should also specify the encoding of the
170
+ text using the 'encoding' parameter. For many modern text files, 'utf-8'
171
+ will be the correct encoding. If you leave encoding equal to None, then the
172
+ content will be made of bytes instead of Unicode, and you will not be able
173
+ to use most functions in :mod:`~sklearn.feature_extraction.text`.
174
+
175
+ Similar feature extractors should be built for other kind of unstructured
176
+ data input such as images, audio, video, ...
177
+
178
+ If you want files with a specific file extension (e.g. `.txt`) then you
179
+ can pass a list of those file extensions to `allowed_extensions`.
180
+
181
+ Read more in the :ref:`User Guide <datasets>`.
182
+
183
+ Parameters
184
+ ----------
185
+ container_path : str
186
+ Path to the main folder holding one subfolder per category.
187
+
188
+ description : str, default=None
189
+ A paragraph describing the characteristic of the dataset: its source,
190
+ reference, etc.
191
+
192
+ categories : list of str, default=None
193
+ If None (default), load all the categories. If not None, list of
194
+ category names to load (other categories ignored).
195
+
196
+ load_content : bool, default=True
197
+ Whether to load or not the content of the different files. If true a
198
+ 'data' attribute containing the text information is present in the data
199
+ structure returned. If not, a filenames attribute gives the path to the
200
+ files.
201
+
202
+ shuffle : bool, default=True
203
+ Whether or not to shuffle the data: might be important for models that
204
+ make the assumption that the samples are independent and identically
205
+ distributed (i.i.d.), such as stochastic gradient descent.
206
+
207
+ encoding : str, default=None
208
+ If None, do not try to decode the content of the files (e.g. for images
209
+ or other non-text content). If not None, encoding to use to decode text
210
+ files to Unicode if load_content is True.
211
+
212
+ decode_error : {'strict', 'ignore', 'replace'}, default='strict'
213
+ Instruction on what to do if a byte sequence is given to analyze that
214
+ contains characters not of the given `encoding`. Passed as keyword
215
+ argument 'errors' to bytes.decode.
216
+
217
+ random_state : int, RandomState instance or None, default=0
218
+ Determines random number generation for dataset shuffling. Pass an int
219
+ for reproducible output across multiple function calls.
220
+ See :term:`Glossary <random_state>`.
221
+
222
+ allowed_extensions : list of str, default=None
223
+ List of desired file extensions to filter the files to be loaded.
224
+
225
+ Returns
226
+ -------
227
+ data : :class:`~sklearn.utils.Bunch`
228
+ Dictionary-like object, with the following attributes.
229
+
230
+ data : list of str
231
+ Only present when `load_content=True`.
232
+ The raw text data to learn.
233
+ target : ndarray
234
+ The target labels (integer index).
235
+ target_names : list
236
+ The names of target classes.
237
+ DESCR : str
238
+ The full description of the dataset.
239
+ filenames: ndarray
240
+ The filenames holding the dataset.
241
+
242
+ Examples
243
+ --------
244
+ >>> from sklearn.datasets import load_files
245
+ >>> container_path = "./"
246
+ >>> load_files(container_path) # doctest: +SKIP
247
+ """
248
+
249
+ target = []
250
+ target_names = []
251
+ filenames = []
252
+
253
+ folders = [
254
+ f for f in sorted(listdir(container_path)) if isdir(join(container_path, f))
255
+ ]
256
+
257
+ if categories is not None:
258
+ folders = [f for f in folders if f in categories]
259
+
260
+ if allowed_extensions is not None:
261
+ allowed_extensions = frozenset(allowed_extensions)
262
+
263
+ for label, folder in enumerate(folders):
264
+ target_names.append(folder)
265
+ folder_path = join(container_path, folder)
266
+ files = sorted(listdir(folder_path))
267
+ if allowed_extensions is not None:
268
+ documents = [
269
+ join(folder_path, file)
270
+ for file in files
271
+ if os.path.splitext(file)[1] in allowed_extensions
272
+ ]
273
+ else:
274
+ documents = [join(folder_path, file) for file in files]
275
+ target.extend(len(documents) * [label])
276
+ filenames.extend(documents)
277
+
278
+ # convert to array for fancy indexing
279
+ filenames = np.array(filenames)
280
+ target = np.array(target)
281
+
282
+ if shuffle:
283
+ random_state = check_random_state(random_state)
284
+ indices = np.arange(filenames.shape[0])
285
+ random_state.shuffle(indices)
286
+ filenames = filenames[indices]
287
+ target = target[indices]
288
+
289
+ if load_content:
290
+ data = []
291
+ for filename in filenames:
292
+ data.append(Path(filename).read_bytes())
293
+ if encoding is not None:
294
+ data = [d.decode(encoding, decode_error) for d in data]
295
+ return Bunch(
296
+ data=data,
297
+ filenames=filenames,
298
+ target_names=target_names,
299
+ target=target,
300
+ DESCR=description,
301
+ )
302
+
303
+ return Bunch(
304
+ filenames=filenames, target_names=target_names, target=target, DESCR=description
305
+ )
306
+
307
+
308
+ def load_csv_data(
309
+ data_file_name,
310
+ *,
311
+ data_module=DATA_MODULE,
312
+ descr_file_name=None,
313
+ descr_module=DESCR_MODULE,
314
+ encoding="utf-8",
315
+ ):
316
+ """Loads `data_file_name` from `data_module with `importlib.resources`.
317
+
318
+ Parameters
319
+ ----------
320
+ data_file_name : str
321
+ Name of csv file to be loaded from `data_module/data_file_name`.
322
+ For example `'wine_data.csv'`.
323
+
324
+ data_module : str or module, default='sklearn.datasets.data'
325
+ Module where data lives. The default is `'sklearn.datasets.data'`.
326
+
327
+ descr_file_name : str, default=None
328
+ Name of rst file to be loaded from `descr_module/descr_file_name`.
329
+ For example `'wine_data.rst'`. See also :func:`load_descr`.
330
+ If not None, also returns the corresponding description of
331
+ the dataset.
332
+
333
+ descr_module : str or module, default='sklearn.datasets.descr'
334
+ Module where `descr_file_name` lives. See also :func:`load_descr`.
335
+ The default is `'sklearn.datasets.descr'`.
336
+
337
+ Returns
338
+ -------
339
+ data : ndarray of shape (n_samples, n_features)
340
+ A 2D array with each row representing one sample and each column
341
+ representing the features of a given sample.
342
+
343
+ target : ndarry of shape (n_samples,)
344
+ A 1D array holding target variables for all the samples in `data`.
345
+ For example target[0] is the target variable for data[0].
346
+
347
+ target_names : ndarry of shape (n_samples,)
348
+ A 1D array containing the names of the classifications. For example
349
+ target_names[0] is the name of the target[0] class.
350
+
351
+ descr : str, optional
352
+ Description of the dataset (the content of `descr_file_name`).
353
+ Only returned if `descr_file_name` is not None.
354
+
355
+ encoding : str, optional
356
+ Text encoding of the CSV file.
357
+
358
+ .. versionadded:: 1.4
359
+ """
360
+ data_path = resources.files(data_module) / data_file_name
361
+ with data_path.open("r", encoding="utf-8") as csv_file:
362
+ data_file = csv.reader(csv_file)
363
+ temp = next(data_file)
364
+ n_samples = int(temp[0])
365
+ n_features = int(temp[1])
366
+ target_names = np.array(temp[2:])
367
+ data = np.empty((n_samples, n_features))
368
+ target = np.empty((n_samples,), dtype=int)
369
+
370
+ for i, ir in enumerate(data_file):
371
+ data[i] = np.asarray(ir[:-1], dtype=np.float64)
372
+ target[i] = np.asarray(ir[-1], dtype=int)
373
+
374
+ if descr_file_name is None:
375
+ return data, target, target_names
376
+ else:
377
+ assert descr_module is not None
378
+ descr = load_descr(descr_module=descr_module, descr_file_name=descr_file_name)
379
+ return data, target, target_names, descr
380
+
381
+
382
+ def load_gzip_compressed_csv_data(
383
+ data_file_name,
384
+ *,
385
+ data_module=DATA_MODULE,
386
+ descr_file_name=None,
387
+ descr_module=DESCR_MODULE,
388
+ encoding="utf-8",
389
+ **kwargs,
390
+ ):
391
+ """Loads gzip-compressed with `importlib.resources`.
392
+
393
+ 1) Open resource file with `importlib.resources.open_binary`
394
+ 2) Decompress file obj with `gzip.open`
395
+ 3) Load decompressed data with `np.loadtxt`
396
+
397
+ Parameters
398
+ ----------
399
+ data_file_name : str
400
+ Name of gzip-compressed csv file (`'*.csv.gz'`) to be loaded from
401
+ `data_module/data_file_name`. For example `'diabetes_data.csv.gz'`.
402
+
403
+ data_module : str or module, default='sklearn.datasets.data'
404
+ Module where data lives. The default is `'sklearn.datasets.data'`.
405
+
406
+ descr_file_name : str, default=None
407
+ Name of rst file to be loaded from `descr_module/descr_file_name`.
408
+ For example `'wine_data.rst'`. See also :func:`load_descr`.
409
+ If not None, also returns the corresponding description of
410
+ the dataset.
411
+
412
+ descr_module : str or module, default='sklearn.datasets.descr'
413
+ Module where `descr_file_name` lives. See also :func:`load_descr`.
414
+ The default is `'sklearn.datasets.descr'`.
415
+
416
+ encoding : str, default="utf-8"
417
+ Name of the encoding that the gzip-decompressed file will be
418
+ decoded with. The default is 'utf-8'.
419
+
420
+ **kwargs : dict, optional
421
+ Keyword arguments to be passed to `np.loadtxt`;
422
+ e.g. delimiter=','.
423
+
424
+ Returns
425
+ -------
426
+ data : ndarray of shape (n_samples, n_features)
427
+ A 2D array with each row representing one sample and each column
428
+ representing the features and/or target of a given sample.
429
+
430
+ descr : str, optional
431
+ Description of the dataset (the content of `descr_file_name`).
432
+ Only returned if `descr_file_name` is not None.
433
+ """
434
+ data_path = resources.files(data_module) / data_file_name
435
+ with data_path.open("rb") as compressed_file:
436
+ compressed_file = gzip.open(compressed_file, mode="rt", encoding=encoding)
437
+ data = np.loadtxt(compressed_file, **kwargs)
438
+
439
+ if descr_file_name is None:
440
+ return data
441
+ else:
442
+ assert descr_module is not None
443
+ descr = load_descr(descr_module=descr_module, descr_file_name=descr_file_name)
444
+ return data, descr
445
+
446
+
447
+ def load_descr(descr_file_name, *, descr_module=DESCR_MODULE, encoding="utf-8"):
448
+ """Load `descr_file_name` from `descr_module` with `importlib.resources`.
449
+
450
+ Parameters
451
+ ----------
452
+ descr_file_name : str, default=None
453
+ Name of rst file to be loaded from `descr_module/descr_file_name`.
454
+ For example `'wine_data.rst'`. See also :func:`load_descr`.
455
+ If not None, also returns the corresponding description of
456
+ the dataset.
457
+
458
+ descr_module : str or module, default='sklearn.datasets.descr'
459
+ Module where `descr_file_name` lives. See also :func:`load_descr`.
460
+ The default is `'sklearn.datasets.descr'`.
461
+
462
+ encoding : str, default="utf-8"
463
+ Name of the encoding that `descr_file_name` will be decoded with.
464
+ The default is 'utf-8'.
465
+
466
+ .. versionadded:: 1.4
467
+
468
+ Returns
469
+ -------
470
+ fdescr : str
471
+ Content of `descr_file_name`.
472
+ """
473
+ path = resources.files(descr_module) / descr_file_name
474
+ return path.read_text(encoding=encoding)
475
+
476
+
477
+ @validate_params(
478
+ {
479
+ "return_X_y": ["boolean"],
480
+ "as_frame": ["boolean"],
481
+ },
482
+ prefer_skip_nested_validation=True,
483
+ )
484
+ def load_wine(*, return_X_y=False, as_frame=False):
485
+ """Load and return the wine dataset (classification).
486
+
487
+ .. versionadded:: 0.18
488
+
489
+ The wine dataset is a classic and very easy multi-class classification
490
+ dataset.
491
+
492
+ ================= ==============
493
+ Classes 3
494
+ Samples per class [59,71,48]
495
+ Samples total 178
496
+ Dimensionality 13
497
+ Features real, positive
498
+ ================= ==============
499
+
500
+ The copy of UCI ML Wine Data Set dataset is downloaded and modified to fit
501
+ standard format from:
502
+ https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data
503
+
504
+ Read more in the :ref:`User Guide <wine_dataset>`.
505
+
506
+ Parameters
507
+ ----------
508
+ return_X_y : bool, default=False
509
+ If True, returns ``(data, target)`` instead of a Bunch object.
510
+ See below for more information about the `data` and `target` object.
511
+
512
+ as_frame : bool, default=False
513
+ If True, the data is a pandas DataFrame including columns with
514
+ appropriate dtypes (numeric). The target is
515
+ a pandas DataFrame or Series depending on the number of target columns.
516
+ If `return_X_y` is True, then (`data`, `target`) will be pandas
517
+ DataFrames or Series as described below.
518
+
519
+ .. versionadded:: 0.23
520
+
521
+ Returns
522
+ -------
523
+ data : :class:`~sklearn.utils.Bunch`
524
+ Dictionary-like object, with the following attributes.
525
+
526
+ data : {ndarray, dataframe} of shape (178, 13)
527
+ The data matrix. If `as_frame=True`, `data` will be a pandas
528
+ DataFrame.
529
+ target: {ndarray, Series} of shape (178,)
530
+ The classification target. If `as_frame=True`, `target` will be
531
+ a pandas Series.
532
+ feature_names: list
533
+ The names of the dataset columns.
534
+ target_names: list
535
+ The names of target classes.
536
+ frame: DataFrame of shape (178, 14)
537
+ Only present when `as_frame=True`. DataFrame with `data` and
538
+ `target`.
539
+
540
+ .. versionadded:: 0.23
541
+ DESCR: str
542
+ The full description of the dataset.
543
+
544
+ (data, target) : tuple if ``return_X_y`` is True
545
+ A tuple of two ndarrays by default. The first contains a 2D array of shape
546
+ (178, 13) with each row representing one sample and each column representing
547
+ the features. The second array of shape (178,) contains the target samples.
548
+
549
+ Examples
550
+ --------
551
+ Let's say you are interested in the samples 10, 80, and 140, and want to
552
+ know their class name.
553
+
554
+ >>> from sklearn.datasets import load_wine
555
+ >>> data = load_wine()
556
+ >>> data.target[[10, 80, 140]]
557
+ array([0, 1, 2])
558
+ >>> list(data.target_names)
559
+ ['class_0', 'class_1', 'class_2']
560
+ """
561
+
562
+ data, target, target_names, fdescr = load_csv_data(
563
+ data_file_name="wine_data.csv", descr_file_name="wine_data.rst"
564
+ )
565
+
566
+ feature_names = [
567
+ "alcohol",
568
+ "malic_acid",
569
+ "ash",
570
+ "alcalinity_of_ash",
571
+ "magnesium",
572
+ "total_phenols",
573
+ "flavanoids",
574
+ "nonflavanoid_phenols",
575
+ "proanthocyanins",
576
+ "color_intensity",
577
+ "hue",
578
+ "od280/od315_of_diluted_wines",
579
+ "proline",
580
+ ]
581
+
582
+ frame = None
583
+ target_columns = [
584
+ "target",
585
+ ]
586
+ if as_frame:
587
+ frame, data, target = _convert_data_dataframe(
588
+ "load_wine", data, target, feature_names, target_columns
589
+ )
590
+
591
+ if return_X_y:
592
+ return data, target
593
+
594
+ return Bunch(
595
+ data=data,
596
+ target=target,
597
+ frame=frame,
598
+ target_names=target_names,
599
+ DESCR=fdescr,
600
+ feature_names=feature_names,
601
+ )
602
+
603
+
604
+ @validate_params(
605
+ {"return_X_y": ["boolean"], "as_frame": ["boolean"]},
606
+ prefer_skip_nested_validation=True,
607
+ )
608
+ def load_iris(*, return_X_y=False, as_frame=False):
609
+ """Load and return the iris dataset (classification).
610
+
611
+ The iris dataset is a classic and very easy multi-class classification
612
+ dataset.
613
+
614
+ ================= ==============
615
+ Classes 3
616
+ Samples per class 50
617
+ Samples total 150
618
+ Dimensionality 4
619
+ Features real, positive
620
+ ================= ==============
621
+
622
+ Read more in the :ref:`User Guide <iris_dataset>`.
623
+
624
+ Parameters
625
+ ----------
626
+ return_X_y : bool, default=False
627
+ If True, returns ``(data, target)`` instead of a Bunch object. See
628
+ below for more information about the `data` and `target` object.
629
+
630
+ .. versionadded:: 0.18
631
+
632
+ as_frame : bool, default=False
633
+ If True, the data is a pandas DataFrame including columns with
634
+ appropriate dtypes (numeric). The target is
635
+ a pandas DataFrame or Series depending on the number of target columns.
636
+ If `return_X_y` is True, then (`data`, `target`) will be pandas
637
+ DataFrames or Series as described below.
638
+
639
+ .. versionadded:: 0.23
640
+
641
+ Returns
642
+ -------
643
+ data : :class:`~sklearn.utils.Bunch`
644
+ Dictionary-like object, with the following attributes.
645
+
646
+ data : {ndarray, dataframe} of shape (150, 4)
647
+ The data matrix. If `as_frame=True`, `data` will be a pandas
648
+ DataFrame.
649
+ target: {ndarray, Series} of shape (150,)
650
+ The classification target. If `as_frame=True`, `target` will be
651
+ a pandas Series.
652
+ feature_names: list
653
+ The names of the dataset columns.
654
+ target_names: list
655
+ The names of target classes.
656
+ frame: DataFrame of shape (150, 5)
657
+ Only present when `as_frame=True`. DataFrame with `data` and
658
+ `target`.
659
+
660
+ .. versionadded:: 0.23
661
+ DESCR: str
662
+ The full description of the dataset.
663
+ filename: str
664
+ The path to the location of the data.
665
+
666
+ .. versionadded:: 0.20
667
+
668
+ (data, target) : tuple if ``return_X_y`` is True
669
+ A tuple of two ndarray. The first containing a 2D array of shape
670
+ (n_samples, n_features) with each row representing one sample and
671
+ each column representing the features. The second ndarray of shape
672
+ (n_samples,) containing the target samples.
673
+
674
+ .. versionadded:: 0.18
675
+
676
+ Notes
677
+ -----
678
+ .. versionchanged:: 0.20
679
+ Fixed two wrong data points according to Fisher's paper.
680
+ The new version is the same as in R, but not as in the UCI
681
+ Machine Learning Repository.
682
+
683
+ Examples
684
+ --------
685
+ Let's say you are interested in the samples 10, 25, and 50, and want to
686
+ know their class name.
687
+
688
+ >>> from sklearn.datasets import load_iris
689
+ >>> data = load_iris()
690
+ >>> data.target[[10, 25, 50]]
691
+ array([0, 0, 1])
692
+ >>> list(data.target_names)
693
+ ['setosa', 'versicolor', 'virginica']
694
+
695
+ See :ref:`sphx_glr_auto_examples_datasets_plot_iris_dataset.py` for a more
696
+ detailed example of how to work with the iris dataset.
697
+ """
698
+ data_file_name = "iris.csv"
699
+ data, target, target_names, fdescr = load_csv_data(
700
+ data_file_name=data_file_name, descr_file_name="iris.rst"
701
+ )
702
+
703
+ feature_names = [
704
+ "sepal length (cm)",
705
+ "sepal width (cm)",
706
+ "petal length (cm)",
707
+ "petal width (cm)",
708
+ ]
709
+
710
+ frame = None
711
+ target_columns = [
712
+ "target",
713
+ ]
714
+ if as_frame:
715
+ frame, data, target = _convert_data_dataframe(
716
+ "load_iris", data, target, feature_names, target_columns
717
+ )
718
+
719
+ if return_X_y:
720
+ return data, target
721
+
722
+ return Bunch(
723
+ data=data,
724
+ target=target,
725
+ frame=frame,
726
+ target_names=target_names,
727
+ DESCR=fdescr,
728
+ feature_names=feature_names,
729
+ filename=data_file_name,
730
+ data_module=DATA_MODULE,
731
+ )
732
+
733
+
734
+ @validate_params(
735
+ {"return_X_y": ["boolean"], "as_frame": ["boolean"]},
736
+ prefer_skip_nested_validation=True,
737
+ )
738
+ def load_breast_cancer(*, return_X_y=False, as_frame=False):
739
+ """Load and return the breast cancer wisconsin dataset (classification).
740
+
741
+ The breast cancer dataset is a classic and very easy binary classification
742
+ dataset.
743
+
744
+ ================= ==============
745
+ Classes 2
746
+ Samples per class 212(M),357(B)
747
+ Samples total 569
748
+ Dimensionality 30
749
+ Features real, positive
750
+ ================= ==============
751
+
752
+ The copy of UCI ML Breast Cancer Wisconsin (Diagnostic) dataset is
753
+ downloaded from:
754
+ https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic
755
+
756
+ Read more in the :ref:`User Guide <breast_cancer_dataset>`.
757
+
758
+ Parameters
759
+ ----------
760
+ return_X_y : bool, default=False
761
+ If True, returns ``(data, target)`` instead of a Bunch object.
762
+ See below for more information about the `data` and `target` object.
763
+
764
+ .. versionadded:: 0.18
765
+
766
+ as_frame : bool, default=False
767
+ If True, the data is a pandas DataFrame including columns with
768
+ appropriate dtypes (numeric). The target is
769
+ a pandas DataFrame or Series depending on the number of target columns.
770
+ If `return_X_y` is True, then (`data`, `target`) will be pandas
771
+ DataFrames or Series as described below.
772
+
773
+ .. versionadded:: 0.23
774
+
775
+ Returns
776
+ -------
777
+ data : :class:`~sklearn.utils.Bunch`
778
+ Dictionary-like object, with the following attributes.
779
+
780
+ data : {ndarray, dataframe} of shape (569, 30)
781
+ The data matrix. If `as_frame=True`, `data` will be a pandas
782
+ DataFrame.
783
+ target : {ndarray, Series} of shape (569,)
784
+ The classification target. If `as_frame=True`, `target` will be
785
+ a pandas Series.
786
+ feature_names : ndarray of shape (30,)
787
+ The names of the dataset columns.
788
+ target_names : ndarray of shape (2,)
789
+ The names of target classes.
790
+ frame : DataFrame of shape (569, 31)
791
+ Only present when `as_frame=True`. DataFrame with `data` and
792
+ `target`.
793
+
794
+ .. versionadded:: 0.23
795
+ DESCR : str
796
+ The full description of the dataset.
797
+ filename : str
798
+ The path to the location of the data.
799
+
800
+ .. versionadded:: 0.20
801
+
802
+ (data, target) : tuple if ``return_X_y`` is True
803
+ A tuple of two ndarrays by default. The first contains a 2D ndarray of
804
+ shape (569, 30) with each row representing one sample and each column
805
+ representing the features. The second ndarray of shape (569,) contains
806
+ the target samples. If `as_frame=True`, both arrays are pandas objects,
807
+ i.e. `X` a dataframe and `y` a series.
808
+
809
+ .. versionadded:: 0.18
810
+
811
+ Examples
812
+ --------
813
+ Let's say you are interested in the samples 10, 50, and 85, and want to
814
+ know their class name.
815
+
816
+ >>> from sklearn.datasets import load_breast_cancer
817
+ >>> data = load_breast_cancer()
818
+ >>> data.target[[10, 50, 85]]
819
+ array([0, 1, 0])
820
+ >>> list(data.target_names)
821
+ ['malignant', 'benign']
822
+ """
823
+ data_file_name = "breast_cancer.csv"
824
+ data, target, target_names, fdescr = load_csv_data(
825
+ data_file_name=data_file_name, descr_file_name="breast_cancer.rst"
826
+ )
827
+
828
+ feature_names = np.array(
829
+ [
830
+ "mean radius",
831
+ "mean texture",
832
+ "mean perimeter",
833
+ "mean area",
834
+ "mean smoothness",
835
+ "mean compactness",
836
+ "mean concavity",
837
+ "mean concave points",
838
+ "mean symmetry",
839
+ "mean fractal dimension",
840
+ "radius error",
841
+ "texture error",
842
+ "perimeter error",
843
+ "area error",
844
+ "smoothness error",
845
+ "compactness error",
846
+ "concavity error",
847
+ "concave points error",
848
+ "symmetry error",
849
+ "fractal dimension error",
850
+ "worst radius",
851
+ "worst texture",
852
+ "worst perimeter",
853
+ "worst area",
854
+ "worst smoothness",
855
+ "worst compactness",
856
+ "worst concavity",
857
+ "worst concave points",
858
+ "worst symmetry",
859
+ "worst fractal dimension",
860
+ ]
861
+ )
862
+
863
+ frame = None
864
+ target_columns = [
865
+ "target",
866
+ ]
867
+ if as_frame:
868
+ frame, data, target = _convert_data_dataframe(
869
+ "load_breast_cancer", data, target, feature_names, target_columns
870
+ )
871
+
872
+ if return_X_y:
873
+ return data, target
874
+
875
+ return Bunch(
876
+ data=data,
877
+ target=target,
878
+ frame=frame,
879
+ target_names=target_names,
880
+ DESCR=fdescr,
881
+ feature_names=feature_names,
882
+ filename=data_file_name,
883
+ data_module=DATA_MODULE,
884
+ )
885
+
886
+
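A small sketch of the `return_X_y` path for this loader (class counts as documented in the table above):

>>> from sklearn.datasets import load_breast_cancer
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X.shape, y.shape
((569, 30), (569,))
>>> int(y.sum())  # benign samples are encoded as 1
357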
887
+ @validate_params(
888
+ {
889
+ "n_class": [Interval(Integral, 1, 10, closed="both")],
890
+ "return_X_y": ["boolean"],
891
+ "as_frame": ["boolean"],
892
+ },
893
+ prefer_skip_nested_validation=True,
894
+ )
895
+ def load_digits(*, n_class=10, return_X_y=False, as_frame=False):
896
+ """Load and return the digits dataset (classification).
897
+
898
+ Each datapoint is a 8x8 image of a digit.
899
+
900
+ ================= ==============
901
+ Classes 10
902
+ Samples per class ~180
903
+ Samples total 1797
904
+ Dimensionality 64
905
+ Features integers 0-16
906
+ ================= ==============
907
+
908
+ This is a copy of the test set of the UCI ML hand-written digits datasets
909
+ https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
910
+
911
+ Read more in the :ref:`User Guide <digits_dataset>`.
912
+
913
+ Parameters
914
+ ----------
915
+ n_class : int, default=10
916
+ The number of classes to return. Between 1 and 10.
917
+
918
+ return_X_y : bool, default=False
919
+ If True, returns ``(data, target)`` instead of a Bunch object.
920
+ See below for more information about the `data` and `target` object.
921
+
922
+ .. versionadded:: 0.18
923
+
924
+ as_frame : bool, default=False
925
+ If True, the data is a pandas DataFrame including columns with
926
+ appropriate dtypes (numeric). The target is
927
+ a pandas DataFrame or Series depending on the number of target columns.
928
+ If `return_X_y` is True, then (`data`, `target`) will be pandas
929
+ DataFrames or Series as described below.
930
+
931
+ .. versionadded:: 0.23
932
+
933
+ Returns
934
+ -------
935
+ data : :class:`~sklearn.utils.Bunch`
936
+ Dictionary-like object, with the following attributes.
937
+
938
+ data : {ndarray, dataframe} of shape (1797, 64)
939
+ The flattened data matrix. If `as_frame=True`, `data` will be
940
+ a pandas DataFrame.
941
+ target: {ndarray, Series} of shape (1797,)
942
+ The classification target. If `as_frame=True`, `target` will be
943
+ a pandas Series.
944
+ feature_names: list
945
+ The names of the dataset columns.
946
+ target_names: list
947
+ The names of target classes.
948
+
949
+ .. versionadded:: 0.20
950
+
951
+ frame: DataFrame of shape (1797, 65)
952
+ Only present when `as_frame=True`. DataFrame with `data` and
953
+ `target`.
954
+
955
+ .. versionadded:: 0.23
956
+ images: {ndarray} of shape (1797, 8, 8)
957
+ The raw image data.
958
+ DESCR: str
959
+ The full description of the dataset.
960
+
961
+ (data, target) : tuple if ``return_X_y`` is True
962
+ A tuple of two ndarrays by default. The first contains a 2D ndarray of
963
+ shape (1797, 64) with each row representing one sample and each column
964
+ representing the features. The second ndarray of shape (1797) contains
965
+ the target samples. If `as_frame=True`, both arrays are pandas objects,
966
+ i.e. `X` a dataframe and `y` a series.
967
+
968
+ .. versionadded:: 0.18
969
+
970
+ Examples
971
+ --------
972
+ To load the data and visualize the images::
973
+
974
+ >>> from sklearn.datasets import load_digits
975
+ >>> digits = load_digits()
976
+ >>> print(digits.data.shape)
977
+ (1797, 64)
978
+ >>> import matplotlib.pyplot as plt
979
+ >>> plt.gray()
980
+ >>> plt.matshow(digits.images[0])
981
+ <...>
982
+ >>> plt.show()
983
+ """
984
+
985
+ data, fdescr = load_gzip_compressed_csv_data(
986
+ data_file_name="digits.csv.gz", descr_file_name="digits.rst", delimiter=","
987
+ )
988
+
989
+ target = data[:, -1].astype(int, copy=False)
990
+ flat_data = data[:, :-1]
991
+ images = flat_data.view()
992
+ images.shape = (-1, 8, 8)
993
+
994
+ if n_class < 10:
995
+ idx = target < n_class
996
+ flat_data, target = flat_data[idx], target[idx]
997
+ images = images[idx]
998
+
999
+ feature_names = [
1000
+ "pixel_{}_{}".format(row_idx, col_idx)
1001
+ for row_idx in range(8)
1002
+ for col_idx in range(8)
1003
+ ]
1004
+
1005
+ frame = None
1006
+ target_columns = [
1007
+ "target",
1008
+ ]
1009
+ if as_frame:
1010
+ frame, flat_data, target = _convert_data_dataframe(
1011
+ "load_digits", flat_data, target, feature_names, target_columns
1012
+ )
1013
+
1014
+ if return_X_y:
1015
+ return flat_data, target
1016
+
1017
+ return Bunch(
1018
+ data=flat_data,
1019
+ target=target,
1020
+ frame=frame,
1021
+ feature_names=feature_names,
1022
+ target_names=np.arange(10),
1023
+ images=images,
1024
+ DESCR=fdescr,
1025
+ )
1026
+
1027
+
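A short sketch of the `n_class` filtering described above (only class labels below `n_class` are kept):

>>> import numpy as np
>>> from sklearn.datasets import load_digits
>>> digits = load_digits(n_class=3)
>>> np.unique(digits.target)
array([0, 1, 2])
>>> digits.data.shape[1]  # the 8x8 images stay flattened to 64 features
64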
1028
+ @validate_params(
1029
+ {"return_X_y": ["boolean"], "as_frame": ["boolean"], "scaled": ["boolean"]},
1030
+ prefer_skip_nested_validation=True,
1031
+ )
1032
+ def load_diabetes(*, return_X_y=False, as_frame=False, scaled=True):
1033
+ """Load and return the diabetes dataset (regression).
1034
+
1035
+ ============== ==================
1036
+ Samples total 442
1037
+ Dimensionality 10
1038
+ Features real, -.2 < x < .2
1039
+ Targets integer 25 - 346
1040
+ ============== ==================
1041
+
1042
+ .. note::
1043
+ The meaning of each feature (i.e. `feature_names`) might be unclear
1044
+ (especially for `ltg`) as the documentation of the original dataset is
1045
+ not explicit. We provide information that seems correct with regard to
1046
+ the scientific literature in this field of research.
1047
+
1048
+ Read more in the :ref:`User Guide <diabetes_dataset>`.
1049
+
1050
+ Parameters
1051
+ ----------
1052
+ return_X_y : bool, default=False
1053
+ If True, returns ``(data, target)`` instead of a Bunch object.
1054
+ See below for more information about the `data` and `target` object.
1055
+
1056
+ .. versionadded:: 0.18
1057
+
1058
+ as_frame : bool, default=False
1059
+ If True, the data is a pandas DataFrame including columns with
1060
+ appropriate dtypes (numeric). The target is
1061
+ a pandas DataFrame or Series depending on the number of target columns.
1062
+ If `return_X_y` is True, then (`data`, `target`) will be pandas
1063
+ DataFrames or Series as described below.
1064
+
1065
+ .. versionadded:: 0.23
1066
+
1067
+ scaled : bool, default=True
1068
+ If True, the feature variables are mean centered and scaled by the
1069
+ standard deviation times the square root of `n_samples`.
1070
+ If False, raw data is returned for the feature variables.
1071
+
1072
+ .. versionadded:: 1.1
1073
+
1074
+ Returns
1075
+ -------
1076
+ data : :class:`~sklearn.utils.Bunch`
1077
+ Dictionary-like object, with the following attributes.
1078
+
1079
+ data : {ndarray, dataframe} of shape (442, 10)
1080
+ The data matrix. If `as_frame=True`, `data` will be a pandas
1081
+ DataFrame.
1082
+ target: {ndarray, Series} of shape (442,)
1083
+ The regression target. If `as_frame=True`, `target` will be
1084
+ a pandas Series.
1085
+ feature_names: list
1086
+ The names of the dataset columns.
1087
+ frame: DataFrame of shape (442, 11)
1088
+ Only present when `as_frame=True`. DataFrame with `data` and
1089
+ `target`.
1090
+
1091
+ .. versionadded:: 0.23
1092
+ DESCR: str
1093
+ The full description of the dataset.
1094
+ data_filename: str
1095
+ The path to the location of the data.
1096
+ target_filename: str
1097
+ The path to the location of the target.
1098
+
1099
+ (data, target) : tuple if ``return_X_y`` is True
1100
+ Returns a tuple of two ndarrays. The first is a 2D array of shape
1101
+ (n_samples, n_features) with each row representing one sample and each
1102
+ column a feature; the second, of shape (n_samples,), holds the targets.
1103
+
1104
+ .. versionadded:: 0.18
1105
+
1106
+ Examples
1107
+ --------
1108
+ >>> from sklearn.datasets import load_diabetes
1109
+ >>> diabetes = load_diabetes()
1110
+ >>> diabetes.target[:3]
1111
+ array([151., 75., 141.])
1112
+ >>> diabetes.data.shape
1113
+ (442, 10)
1114
+ """
1115
+ data_filename = "diabetes_data_raw.csv.gz"
1116
+ target_filename = "diabetes_target.csv.gz"
1117
+ data = load_gzip_compressed_csv_data(data_filename)
1118
+ target = load_gzip_compressed_csv_data(target_filename)
1119
+
1120
+ if scaled:
1121
+ data = scale(data, copy=False)
1122
+ data /= data.shape[0] ** 0.5
1123
+
1124
+ fdescr = load_descr("diabetes.rst")
1125
+
1126
+ feature_names = ["age", "sex", "bmi", "bp", "s1", "s2", "s3", "s4", "s5", "s6"]
1127
+
1128
+ frame = None
1129
+ target_columns = [
1130
+ "target",
1131
+ ]
1132
+ if as_frame:
1133
+ frame, data, target = _convert_data_dataframe(
1134
+ "load_diabetes", data, target, feature_names, target_columns
1135
+ )
1136
+
1137
+ if return_X_y:
1138
+ return data, target
1139
+
1140
+ return Bunch(
1141
+ data=data,
1142
+ target=target,
1143
+ frame=frame,
1144
+ DESCR=fdescr,
1145
+ feature_names=feature_names,
1146
+ data_filename=data_filename,
1147
+ target_filename=target_filename,
1148
+ data_module=DATA_MODULE,
1149
+ )
1150
+
1151
+
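A brief sketch contrasting the scaled and raw feature matrices controlled by the `scaled` parameter (the numeric check is indicative, not re-executed here):

>>> from sklearn.datasets import load_diabetes
>>> X_scaled, y = load_diabetes(return_X_y=True)              # scaled=True by default
>>> X_raw, _ = load_diabetes(return_X_y=True, scaled=False)
>>> X_scaled.shape == X_raw.shape == (442, 10)
True
>>> bool(abs(X_scaled[:, 0].mean()) < 1e-12)  # scaled columns are mean centered
True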
1152
+ @validate_params(
1153
+ {
1154
+ "return_X_y": ["boolean"],
1155
+ "as_frame": ["boolean"],
1156
+ },
1157
+ prefer_skip_nested_validation=True,
1158
+ )
1159
+ def load_linnerud(*, return_X_y=False, as_frame=False):
1160
+ """Load and return the physical exercise Linnerud dataset.
1161
+
1162
+ This dataset is suitable for multi-output regression tasks.
1163
+
1164
+ ============== ============================
1165
+ Samples total 20
1166
+ Dimensionality 3 (for both data and target)
1167
+ Features integer
1168
+ Targets integer
1169
+ ============== ============================
1170
+
1171
+ Read more in the :ref:`User Guide <linnerrud_dataset>`.
1172
+
1173
+ Parameters
1174
+ ----------
1175
+ return_X_y : bool, default=False
1176
+ If True, returns ``(data, target)`` instead of a Bunch object.
1177
+ See below for more information about the `data` and `target` object.
1178
+
1179
+ .. versionadded:: 0.18
1180
+
1181
+ as_frame : bool, default=False
1182
+ If True, the data is a pandas DataFrame including columns with
1183
+ appropriate dtypes (numeric, string or categorical). The target is
1184
+ a pandas DataFrame or Series depending on the number of target columns.
1185
+ If `return_X_y` is True, then (`data`, `target`) will be pandas
1186
+ DataFrames or Series as described below.
1187
+
1188
+ .. versionadded:: 0.23
1189
+
1190
+ Returns
1191
+ -------
1192
+ data : :class:`~sklearn.utils.Bunch`
1193
+ Dictionary-like object, with the following attributes.
1194
+
1195
+ data : {ndarray, dataframe} of shape (20, 3)
1196
+ The data matrix. If `as_frame=True`, `data` will be a pandas
1197
+ DataFrame.
1198
+ target: {ndarray, dataframe} of shape (20, 3)
1199
+ The regression targets. If `as_frame=True`, `target` will be
1200
+ a pandas DataFrame.
1201
+ feature_names: list
1202
+ The names of the dataset columns.
1203
+ target_names: list
1204
+ The names of the target columns.
1205
+ frame: DataFrame of shape (20, 6)
1206
+ Only present when `as_frame=True`. DataFrame with `data` and
1207
+ `target`.
1208
+
1209
+ .. versionadded:: 0.23
1210
+ DESCR: str
1211
+ The full description of the dataset.
1212
+ data_filename: str
1213
+ The path to the location of the data.
1214
+ target_filename: str
1215
+ The path to the location of the target.
1216
+
1217
+ .. versionadded:: 0.20
1218
+
1219
+ (data, target) : tuple if ``return_X_y`` is True
1220
+ Returns a tuple of two ndarrays or dataframes of shape
1221
+ `(20, 3)`. Each row represents one sample and each column represents the
1222
+ features in `X` and a target in `y` of a given sample.
1223
+
1224
+ .. versionadded:: 0.18
1225
+ """
1226
+ data_filename = "linnerud_exercise.csv"
1227
+ target_filename = "linnerud_physiological.csv"
1228
+
1229
+ data_module_path = resources.files(DATA_MODULE)
1230
+ # Read header and data
1231
+ data_path = data_module_path / data_filename
1232
+ with data_path.open("r", encoding="utf-8") as f:
1233
+ header_exercise = f.readline().split()
1234
+ f.seek(0) # reset file obj
1235
+ data_exercise = np.loadtxt(f, skiprows=1)
1236
+
1237
+ target_path = data_module_path / target_filename
1238
+ with target_path.open("r", encoding="utf-8") as f:
1239
+ header_physiological = f.readline().split()
1240
+ f.seek(0) # reset file obj
1241
+ data_physiological = np.loadtxt(f, skiprows=1)
1242
+
1243
+ fdescr = load_descr("linnerud.rst")
1244
+
1245
+ frame = None
1246
+ if as_frame:
1247
+ (frame, data_exercise, data_physiological) = _convert_data_dataframe(
1248
+ "load_linnerud",
1249
+ data_exercise,
1250
+ data_physiological,
1251
+ header_exercise,
1252
+ header_physiological,
1253
+ )
1254
+ if return_X_y:
1255
+ return data_exercise, data_physiological
1256
+
1257
+ return Bunch(
1258
+ data=data_exercise,
1259
+ feature_names=header_exercise,
1260
+ target=data_physiological,
1261
+ target_names=header_physiological,
1262
+ frame=frame,
1263
+ DESCR=fdescr,
1264
+ data_filename=data_filename,
1265
+ target_filename=target_filename,
1266
+ data_module=DATA_MODULE,
1267
+ )
1268
+
1269
+
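A minimal multi-output regression sketch on this dataset, as suggested by the description above:

>>> from sklearn.datasets import load_linnerud
>>> from sklearn.linear_model import LinearRegression
>>> X, Y = load_linnerud(return_X_y=True)
>>> X.shape, Y.shape
((20, 3), (20, 3))
>>> LinearRegression().fit(X, Y).predict(X).shape
(20, 3)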
1270
+ def load_sample_images():
1271
+ """Load sample images for image manipulation.
1272
+
1273
+ Loads both the ``china`` and ``flower`` sample images.
1274
+
1275
+ Read more in the :ref:`User Guide <sample_images>`.
1276
+
1277
+ Returns
1278
+ -------
1279
+ data : :class:`~sklearn.utils.Bunch`
1280
+ Dictionary-like object, with the following attributes.
1281
+
1282
+ images : list of ndarray of shape (427, 640, 3)
1283
+ The two sample images.
1284
+ filenames : list
1285
+ The filenames for the images.
1286
+ DESCR : str
1287
+ The full description of the dataset.
1288
+
1289
+ Examples
1290
+ --------
1291
+ To load the data and visualize the images:
1292
+
1293
+ >>> from sklearn.datasets import load_sample_images
1294
+ >>> dataset = load_sample_images() #doctest: +SKIP
1295
+ >>> len(dataset.images) #doctest: +SKIP
1296
+ 2
1297
+ >>> first_img_data = dataset.images[0] #doctest: +SKIP
1298
+ >>> first_img_data.shape #doctest: +SKIP
1299
+ (427, 640, 3)
1300
+ >>> first_img_data.dtype #doctest: +SKIP
1301
+ dtype('uint8')
1302
+ """
1303
+ try:
1304
+ from PIL import Image
1305
+ except ImportError:
1306
+ raise ImportError(
1307
+ "The Python Imaging Library (PIL) is required to load data "
1308
+ "from jpeg files. Please refer to "
1309
+ "https://pillow.readthedocs.io/en/stable/installation.html "
1310
+ "for installing PIL."
1311
+ )
1312
+
1313
+ descr = load_descr("README.txt", descr_module=IMAGES_MODULE)
1314
+
1315
+ filenames, images = [], []
1316
+
1317
+ jpg_paths = sorted(
1318
+ resource
1319
+ for resource in resources.files(IMAGES_MODULE).iterdir()
1320
+ if resource.is_file() and resource.match("*.jpg")
1321
+ )
1322
+
1323
+ for path in jpg_paths:
1324
+ filenames.append(str(path))
1325
+ with path.open("rb") as image_file:
1326
+ pil_image = Image.open(image_file)
1327
+ image = np.asarray(pil_image)
1328
+ images.append(image)
1329
+
1330
+ return Bunch(images=images, filenames=filenames, DESCR=descr)
1331
+
1332
+
1333
+ @validate_params(
1334
+ {
1335
+ "image_name": [StrOptions({"china.jpg", "flower.jpg"})],
1336
+ },
1337
+ prefer_skip_nested_validation=True,
1338
+ )
1339
+ def load_sample_image(image_name):
1340
+ """Load the numpy array of a single sample image.
1341
+
1342
+ Read more in the :ref:`User Guide <sample_images>`.
1343
+
1344
+ Parameters
1345
+ ----------
1346
+ image_name : {`china.jpg`, `flower.jpg`}
1347
+ The name of the sample image loaded.
1348
+
1349
+ Returns
1350
+ -------
1351
+ img : 3D array
1352
+ The image as a numpy array: height x width x color.
1353
+
1354
+ Examples
1355
+ --------
1356
+
1357
+ >>> from sklearn.datasets import load_sample_image
1358
+ >>> china = load_sample_image('china.jpg') # doctest: +SKIP
1359
+ >>> china.dtype # doctest: +SKIP
1360
+ dtype('uint8')
1361
+ >>> china.shape # doctest: +SKIP
1362
+ (427, 640, 3)
1363
+ >>> flower = load_sample_image('flower.jpg') # doctest: +SKIP
1364
+ >>> flower.dtype # doctest: +SKIP
1365
+ dtype('uint8')
1366
+ >>> flower.shape # doctest: +SKIP
1367
+ (427, 640, 3)
1368
+ """
1369
+ images = load_sample_images()
1370
+ index = None
1371
+ for i, filename in enumerate(images.filenames):
1372
+ if filename.endswith(image_name):
1373
+ index = i
1374
+ break
1375
+ if index is None:
1376
+ raise AttributeError("Cannot find sample image: %s" % image_name)
1377
+ return images.images[index]
1378
+
1379
+
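A hedged sketch of preparing a sample image for pixel-level processing (requires Pillow, hence the skipped doctests; the pixel count follows from the documented 427 x 640 shape):

>>> from sklearn.datasets import load_sample_image
>>> china = load_sample_image("china.jpg")      # doctest: +SKIP
>>> pixels = china.reshape(-1, 3) / 255.0       # doctest: +SKIP
>>> pixels.shape                                # doctest: +SKIP
(273280, 3)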
1380
+ def _pkl_filepath(*args, **kwargs):
1381
+ """Return filename for Python 3 pickles
1382
+
1383
+ args[-1] is expected to be the ".pkl" filename. For compatibility with
1384
+ older scikit-learn versions, a suffix is inserted before the extension.
1385
+
1386
+ _pkl_filepath('/path/to/folder', 'filename.pkl') returns
1387
+ '/path/to/folder/filename_py3.pkl'
1388
+
1389
+ """
1390
+ py3_suffix = kwargs.get("py3_suffix", "_py3")
1391
+ basename, ext = splitext(args[-1])
1392
+ basename += py3_suffix
1393
+ new_args = args[:-1] + (basename + ext,)
1394
+ return join(*new_args)
1395
+
1396
+
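An illustrative call of the multi-argument form of this private helper (the result shown assumes a POSIX path separator):

>>> _pkl_filepath("/tmp/scikit_learn_data", "covertype", "samples.pkl")  # doctest: +SKIP
'/tmp/scikit_learn_data/covertype/samples_py3.pkl'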
1397
+ def _sha256(path):
1398
+ """Calculate the sha256 hash of the file at path."""
1399
+ sha256hash = hashlib.sha256()
1400
+ chunk_size = 8192
1401
+ with open(path, "rb") as f:
1402
+ while True:
1403
+ buffer = f.read(chunk_size)
1404
+ if not buffer:
1405
+ break
1406
+ sha256hash.update(buffer)
1407
+ return sha256hash.hexdigest()
1408
+
1409
+
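The same chunked-hashing idea can be written with `iter` and a sentinel; `sha256_of` below is an illustrative name, not part of the module:

import hashlib

def sha256_of(path, chunk_size=8192):
    """Stream a file through SHA-256 without loading it fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()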
1410
+ def _fetch_remote(remote, dirname=None):
1411
+ """Helper function to download a remote dataset into path
1412
+
1413
+ Fetch a dataset pointed by remote's url, save into path using remote's
1414
+ filename and ensure its integrity based on the SHA256 Checksum of the
1415
+ downloaded file.
1416
+
1417
+ Parameters
1418
+ ----------
1419
+ remote : RemoteFileMetadata
1420
+ Named tuple containing remote dataset meta information: url, filename
1421
+ and checksum
1422
+
1423
+ dirname : str
1424
+ Directory to save the file to.
1425
+
1426
+ Returns
1427
+ -------
1428
+ file_path: str
1429
+ Full path of the created file.
1430
+ """
1431
+
1432
+ file_path = remote.filename if dirname is None else join(dirname, remote.filename)
1433
+ urlretrieve(remote.url, file_path)
1434
+ checksum = _sha256(file_path)
1435
+ if remote.checksum != checksum:
1436
+ raise OSError(
1437
+ "{} has an SHA256 checksum ({}) "
1438
+ "differing from expected ({}), "
1439
+ "file may be corrupted.".format(file_path, checksum, remote.checksum)
1440
+ )
1441
+ return file_path
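A hypothetical sketch of how a fetcher wires `RemoteFileMetadata` and `_fetch_remote` together (the URL and checksum below are placeholders, mirroring the real callers in the fetchers that follow):

EXAMPLE_ARCHIVE = RemoteFileMetadata(
    filename="example.data.gz",
    url="https://example.org/example.data.gz",  # placeholder URL
    checksum="0" * 64,                          # placeholder SHA256 digest
)
# archive_path = _fetch_remote(EXAMPLE_ARCHIVE, dirname="/tmp")
# _fetch_remote raises OSError if the downloaded file's SHA256 differs.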
venv/lib/python3.10/site-packages/sklearn/datasets/_covtype.py ADDED
@@ -0,0 +1,236 @@
1
+ """Forest covertype dataset.
2
+
3
+ A classic dataset for classification benchmarks, featuring categorical and
4
+ real-valued features.
5
+
6
+ The dataset page is available from UCI Machine Learning Repository
7
+
8
+ https://archive.ics.uci.edu/ml/datasets/Covertype
9
+
10
+ Courtesy of Jock A. Blackard and Colorado State University.
11
+ """
12
+
13
+ # Author: Lars Buitinck
14
+ # Peter Prettenhofer <[email protected]>
15
+ # License: BSD 3 clause
16
+
17
+ import logging
18
+ import os
19
+ from gzip import GzipFile
20
+ from os.path import exists, join
21
+ from tempfile import TemporaryDirectory
22
+
23
+ import joblib
24
+ import numpy as np
25
+
26
+ from ..utils import Bunch, check_random_state
27
+ from ..utils._param_validation import validate_params
28
+ from . import get_data_home
29
+ from ._base import (
30
+ RemoteFileMetadata,
31
+ _convert_data_dataframe,
32
+ _fetch_remote,
33
+ _pkl_filepath,
34
+ load_descr,
35
+ )
36
+
37
+ # The original data can be found in:
38
+ # https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz
39
+ ARCHIVE = RemoteFileMetadata(
40
+ filename="covtype.data.gz",
41
+ url="https://ndownloader.figshare.com/files/5976039",
42
+ checksum="614360d0257557dd1792834a85a1cdebfadc3c4f30b011d56afee7ffb5b15771",
43
+ )
44
+
45
+ logger = logging.getLogger(__name__)
46
+
47
+ # Column names reference:
48
+ # https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.info
49
+ FEATURE_NAMES = [
50
+ "Elevation",
51
+ "Aspect",
52
+ "Slope",
53
+ "Horizontal_Distance_To_Hydrology",
54
+ "Vertical_Distance_To_Hydrology",
55
+ "Horizontal_Distance_To_Roadways",
56
+ "Hillshade_9am",
57
+ "Hillshade_Noon",
58
+ "Hillshade_3pm",
59
+ "Horizontal_Distance_To_Fire_Points",
60
+ ]
61
+ FEATURE_NAMES += [f"Wilderness_Area_{i}" for i in range(4)]
62
+ FEATURE_NAMES += [f"Soil_Type_{i}" for i in range(40)]
63
+ TARGET_NAMES = ["Cover_Type"]
64
+
65
+
66
+ @validate_params(
67
+ {
68
+ "data_home": [str, os.PathLike, None],
69
+ "download_if_missing": ["boolean"],
70
+ "random_state": ["random_state"],
71
+ "shuffle": ["boolean"],
72
+ "return_X_y": ["boolean"],
73
+ "as_frame": ["boolean"],
74
+ },
75
+ prefer_skip_nested_validation=True,
76
+ )
77
+ def fetch_covtype(
78
+ *,
79
+ data_home=None,
80
+ download_if_missing=True,
81
+ random_state=None,
82
+ shuffle=False,
83
+ return_X_y=False,
84
+ as_frame=False,
85
+ ):
86
+ """Load the covertype dataset (classification).
87
+
88
+ Download it if necessary.
89
+
90
+ ================= ============
91
+ Classes 7
92
+ Samples total 581012
93
+ Dimensionality 54
94
+ Features int
95
+ ================= ============
96
+
97
+ Read more in the :ref:`User Guide <covtype_dataset>`.
98
+
99
+ Parameters
100
+ ----------
101
+ data_home : str or path-like, default=None
102
+ Specify another download and cache folder for the datasets. By default
103
+ all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
104
+
105
+ download_if_missing : bool, default=True
106
+ If False, raise an OSError if the data is not locally available
107
+ instead of trying to download the data from the source site.
108
+
109
+ random_state : int, RandomState instance or None, default=None
110
+ Determines random number generation for dataset shuffling. Pass an int
111
+ for reproducible output across multiple function calls.
112
+ See :term:`Glossary <random_state>`.
113
+
114
+ shuffle : bool, default=False
115
+ Whether to shuffle dataset.
116
+
117
+ return_X_y : bool, default=False
118
+ If True, returns ``(data.data, data.target)`` instead of a Bunch
119
+ object.
120
+
121
+ .. versionadded:: 0.20
122
+
123
+ as_frame : bool, default=False
124
+ If True, the data is a pandas DataFrame including columns with
125
+ appropriate dtypes (numeric). The target is a pandas DataFrame or
126
+ Series depending on the number of target columns. If `return_X_y` is
127
+ True, then (`data`, `target`) will be pandas DataFrames or Series as
128
+ described below.
129
+
130
+ .. versionadded:: 0.24
131
+
132
+ Returns
133
+ -------
134
+ dataset : :class:`~sklearn.utils.Bunch`
135
+ Dictionary-like object, with the following attributes.
136
+
137
+ data : ndarray of shape (581012, 54)
138
+ Each row corresponds to the 54 features in the dataset.
139
+ target : ndarray of shape (581012,)
140
+ Each value corresponds to one of
141
+ the 7 forest covertypes with values
142
+ ranging between 1 to 7.
143
+ frame : dataframe of shape (581012, 55)
144
+ Only present when `as_frame=True`. Contains `data` and `target`.
145
+ DESCR : str
146
+ Description of the forest covertype dataset.
147
+ feature_names : list
148
+ The names of the dataset columns.
149
+ target_names: list
150
+ The names of the target columns.
151
+
152
+ (data, target) : tuple if ``return_X_y`` is True
153
+ A tuple of two ndarray. The first containing a 2D array of
154
+ shape (n_samples, n_features) with each row representing one
155
+ sample and each column representing the features. The second
156
+ ndarray of shape (n_samples,) containing the target samples.
157
+
158
+ .. versionadded:: 0.20
159
+
160
+ Examples
161
+ --------
162
+ >>> from sklearn.datasets import fetch_covtype
163
+ >>> cov_type = fetch_covtype()
164
+ >>> cov_type.data.shape
165
+ (581012, 54)
166
+ >>> cov_type.target.shape
167
+ (581012,)
168
+ >>> # Let's check the 4 first feature names
169
+ >>> cov_type.feature_names[:4]
170
+ ['Elevation', 'Aspect', 'Slope', 'Horizontal_Distance_To_Hydrology']
171
+ """
172
+ data_home = get_data_home(data_home=data_home)
173
+ covtype_dir = join(data_home, "covertype")
174
+ samples_path = _pkl_filepath(covtype_dir, "samples")
175
+ targets_path = _pkl_filepath(covtype_dir, "targets")
176
+ available = exists(samples_path) and exists(targets_path)
177
+
178
+ if download_if_missing and not available:
179
+ os.makedirs(covtype_dir, exist_ok=True)
180
+
181
+ # Creating temp_dir as a direct subdirectory of the target directory
182
+ # guarantees that both reside on the same filesystem, so that we can use
183
+ # os.rename to atomically move the data files to their target location.
184
+ with TemporaryDirectory(dir=covtype_dir) as temp_dir:
185
+ logger.info(f"Downloading {ARCHIVE.url}")
186
+ archive_path = _fetch_remote(ARCHIVE, dirname=temp_dir)
187
+ Xy = np.genfromtxt(GzipFile(filename=archive_path), delimiter=",")
188
+
189
+ X = Xy[:, :-1]
190
+ y = Xy[:, -1].astype(np.int32, copy=False)
191
+
192
+ samples_tmp_path = _pkl_filepath(temp_dir, "samples")
193
+ joblib.dump(X, samples_tmp_path, compress=9)
194
+ os.rename(samples_tmp_path, samples_path)
195
+
196
+ targets_tmp_path = _pkl_filepath(temp_dir, "targets")
197
+ joblib.dump(y, targets_tmp_path, compress=9)
198
+ os.rename(targets_tmp_path, targets_path)
199
+
200
+ elif not available and not download_if_missing:
201
+ raise OSError("Data not found and `download_if_missing` is False")
202
+ try:
203
+ X, y
204
+ except NameError:
205
+ X = joblib.load(samples_path)
206
+ y = joblib.load(targets_path)
207
+
208
+ if shuffle:
209
+ ind = np.arange(X.shape[0])
210
+ rng = check_random_state(random_state)
211
+ rng.shuffle(ind)
212
+ X = X[ind]
213
+ y = y[ind]
214
+
215
+ fdescr = load_descr("covtype.rst")
216
+
217
+ frame = None
218
+ if as_frame:
219
+ frame, X, y = _convert_data_dataframe(
220
+ caller_name="fetch_covtype",
221
+ data=X,
222
+ target=y,
223
+ feature_names=FEATURE_NAMES,
224
+ target_names=TARGET_NAMES,
225
+ )
226
+ if return_X_y:
227
+ return X, y
228
+
229
+ return Bunch(
230
+ data=X,
231
+ target=y,
232
+ frame=frame,
233
+ target_names=TARGET_NAMES,
234
+ feature_names=FEATURE_NAMES,
235
+ DESCR=fdescr,
236
+ )
venv/lib/python3.10/site-packages/sklearn/datasets/_olivetti_faces.py ADDED
@@ -0,0 +1,156 @@
1
+ """Modified Olivetti faces dataset.
2
+
3
+ The original database was available from (now defunct)
4
+
5
+ https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
6
+
7
+ The version retrieved here comes in MATLAB format from the personal
8
+ web page of Sam Roweis:
9
+
10
+ https://cs.nyu.edu/~roweis/
11
+ """
12
+
13
+ # Copyright (c) 2011 David Warde-Farley <wardefar at iro dot umontreal dot ca>
14
+ # License: BSD 3 clause
15
+
16
+ from os import PathLike, makedirs, remove
17
+ from os.path import exists
18
+
19
+ import joblib
20
+ import numpy as np
21
+ from scipy.io import loadmat
22
+
23
+ from ..utils import Bunch, check_random_state
24
+ from ..utils._param_validation import validate_params
25
+ from . import get_data_home
26
+ from ._base import RemoteFileMetadata, _fetch_remote, _pkl_filepath, load_descr
27
+
28
+ # The original data can be found at:
29
+ # https://cs.nyu.edu/~roweis/data/olivettifaces.mat
30
+ FACES = RemoteFileMetadata(
31
+ filename="olivettifaces.mat",
32
+ url="https://ndownloader.figshare.com/files/5976027",
33
+ checksum="b612fb967f2dc77c9c62d3e1266e0c73d5fca46a4b8906c18e454d41af987794",
34
+ )
35
+
36
+
37
+ @validate_params(
38
+ {
39
+ "data_home": [str, PathLike, None],
40
+ "shuffle": ["boolean"],
41
+ "random_state": ["random_state"],
42
+ "download_if_missing": ["boolean"],
43
+ "return_X_y": ["boolean"],
44
+ },
45
+ prefer_skip_nested_validation=True,
46
+ )
47
+ def fetch_olivetti_faces(
48
+ *,
49
+ data_home=None,
50
+ shuffle=False,
51
+ random_state=0,
52
+ download_if_missing=True,
53
+ return_X_y=False,
54
+ ):
55
+ """Load the Olivetti faces data-set from AT&T (classification).
56
+
57
+ Download it if necessary.
58
+
59
+ ================= =====================
60
+ Classes 40
61
+ Samples total 400
62
+ Dimensionality 4096
63
+ Features real, between 0 and 1
64
+ ================= =====================
65
+
66
+ Read more in the :ref:`User Guide <olivetti_faces_dataset>`.
67
+
68
+ Parameters
69
+ ----------
70
+ data_home : str or path-like, default=None
71
+ Specify another download and cache folder for the datasets. By default
72
+ all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
73
+
74
+ shuffle : bool, default=False
75
+ If True the order of the dataset is shuffled to avoid having
76
+ images of the same person grouped.
77
+
78
+ random_state : int, RandomState instance or None, default=0
79
+ Determines random number generation for dataset shuffling. Pass an int
80
+ for reproducible output across multiple function calls.
81
+ See :term:`Glossary <random_state>`.
82
+
83
+ download_if_missing : bool, default=True
84
+ If False, raise an OSError if the data is not locally available
85
+ instead of trying to download the data from the source site.
86
+
87
+ return_X_y : bool, default=False
88
+ If True, returns `(data, target)` instead of a `Bunch` object. See
89
+ below for more information about the `data` and `target` object.
90
+
91
+ .. versionadded:: 0.22
92
+
93
+ Returns
94
+ -------
95
+ data : :class:`~sklearn.utils.Bunch`
96
+ Dictionary-like object, with the following attributes.
97
+
98
+ data: ndarray, shape (400, 4096)
99
+ Each row corresponds to a ravelled
100
+ face image of original size 64 x 64 pixels.
101
+ images : ndarray, shape (400, 64, 64)
102
+ Each row is a face image
103
+ corresponding to one of the 40 subjects of the dataset.
104
+ target : ndarray, shape (400,)
105
+ Labels associated to each face image.
106
+ Those labels are ranging from 0-39 and correspond to the
107
+ Subject IDs.
108
+ DESCR : str
109
+ Description of the modified Olivetti Faces Dataset.
110
+
111
+ (data, target) : tuple if `return_X_y=True`
112
+ Tuple with the `data` and `target` objects described above.
113
+
114
+ .. versionadded:: 0.22
115
+ """
116
+ data_home = get_data_home(data_home=data_home)
117
+ if not exists(data_home):
118
+ makedirs(data_home)
119
+ filepath = _pkl_filepath(data_home, "olivetti.pkz")
120
+ if not exists(filepath):
121
+ if not download_if_missing:
122
+ raise OSError("Data not found and `download_if_missing` is False")
123
+
124
+ print("downloading Olivetti faces from %s to %s" % (FACES.url, data_home))
125
+ mat_path = _fetch_remote(FACES, dirname=data_home)
126
+ mfile = loadmat(file_name=mat_path)
127
+ # delete raw .mat data
128
+ remove(mat_path)
129
+
130
+ faces = mfile["faces"].T.copy()
131
+ joblib.dump(faces, filepath, compress=6)
132
+ del mfile
133
+ else:
134
+ faces = joblib.load(filepath)
135
+
136
+ # We want floating point data, but float32 is enough (there is only
137
+ # one byte of precision in the original uint8s anyway)
138
+ faces = np.float32(faces)
139
+ faces = faces - faces.min()
140
+ faces /= faces.max()
141
+ faces = faces.reshape((400, 64, 64)).transpose(0, 2, 1)
142
+ # 10 images per class, 400 images total, each class is contiguous.
143
+ target = np.array([i // 10 for i in range(400)])
144
+ if shuffle:
145
+ random_state = check_random_state(random_state)
146
+ order = random_state.permutation(len(faces))
147
+ faces = faces[order]
148
+ target = target[order]
149
+ faces_vectorized = faces.reshape(len(faces), -1)
150
+
151
+ fdescr = load_descr("olivetti_faces.rst")
152
+
153
+ if return_X_y:
154
+ return faces_vectorized, target
155
+
156
+ return Bunch(data=faces_vectorized, images=faces, target=target, DESCR=fdescr)
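A usage sketch matching the shapes documented above (network download on first call, hence the skipped doctests):

>>> from sklearn.datasets import fetch_olivetti_faces
>>> faces = fetch_olivetti_faces(shuffle=True, random_state=0)      # doctest: +SKIP
>>> faces.data.shape, faces.images.shape, faces.target.shape        # doctest: +SKIP
((400, 4096), (400, 64, 64), (400,))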
venv/lib/python3.10/site-packages/sklearn/datasets/_openml.py ADDED
@@ -0,0 +1,1158 @@
1
+ import gzip
2
+ import hashlib
3
+ import json
4
+ import os
5
+ import shutil
6
+ import time
7
+ from contextlib import closing
8
+ from functools import wraps
9
+ from os.path import join
10
+ from tempfile import TemporaryDirectory
11
+ from typing import Any, Callable, Dict, List, Optional, Tuple, Union
12
+ from urllib.error import HTTPError, URLError
13
+ from urllib.request import Request, urlopen
14
+ from warnings import warn
15
+
16
+ import numpy as np
17
+
18
+ from ..utils import (
19
+ Bunch,
20
+ check_pandas_support, # noqa
21
+ )
22
+ from ..utils._param_validation import (
23
+ Integral,
24
+ Interval,
25
+ Real,
26
+ StrOptions,
27
+ validate_params,
28
+ )
29
+ from . import get_data_home
30
+ from ._arff_parser import load_arff_from_gzip_file
31
+
32
+ __all__ = ["fetch_openml"]
33
+
34
+ _OPENML_PREFIX = "https://api.openml.org/"
35
+ _SEARCH_NAME = "api/v1/json/data/list/data_name/{}/limit/2"
36
+ _DATA_INFO = "api/v1/json/data/{}"
37
+ _DATA_FEATURES = "api/v1/json/data/features/{}"
38
+ _DATA_QUALITIES = "api/v1/json/data/qualities/{}"
39
+ _DATA_FILE = "data/v1/download/{}"
40
+
41
+ OpenmlQualitiesType = List[Dict[str, str]]
42
+ OpenmlFeaturesType = List[Dict[str, str]]
43
+
44
+
45
+ def _get_local_path(openml_path: str, data_home: str) -> str:
46
+ return os.path.join(data_home, "openml.org", openml_path + ".gz")
47
+
48
+
49
+ def _retry_with_clean_cache(
50
+ openml_path: str,
51
+ data_home: Optional[str],
52
+ no_retry_exception: Optional[Exception] = None,
53
+ ) -> Callable:
54
+ """If the first call to the decorated function fails, the local cached
55
+ file is removed, and the function is called again. If ``data_home`` is
56
+ ``None``, then the function is called once. We can provide a specific
57
+ exception to not retry on using `no_retry_exception` parameter.
58
+ """
59
+
60
+ def decorator(f):
61
+ @wraps(f)
62
+ def wrapper(*args, **kw):
63
+ if data_home is None:
64
+ return f(*args, **kw)
65
+ try:
66
+ return f(*args, **kw)
67
+ except URLError:
68
+ raise
69
+ except Exception as exc:
70
+ if no_retry_exception is not None and isinstance(
71
+ exc, no_retry_exception
72
+ ):
73
+ raise
74
+ warn("Invalid cache, redownloading file", RuntimeWarning)
75
+ local_path = _get_local_path(openml_path, data_home)
76
+ if os.path.exists(local_path):
77
+ os.unlink(local_path)
78
+ return f(*args, **kw)
79
+
80
+ return wrapper
81
+
82
+ return decorator
83
+
84
+
85
+ def _retry_on_network_error(
86
+ n_retries: int = 3, delay: float = 1.0, url: str = ""
87
+ ) -> Callable:
88
+ """If the function call results in a network error, call the function again
89
+ up to ``n_retries`` times with a ``delay`` between each call. If the error
90
+ has a 412 status code, don't call the function again as this is a specific
91
+ OpenML error.
92
+ The url parameter is used to give more information to the user about the
93
+ error.
94
+ """
95
+
96
+ def decorator(f):
97
+ @wraps(f)
98
+ def wrapper(*args, **kwargs):
99
+ retry_counter = n_retries
100
+ while True:
101
+ try:
102
+ return f(*args, **kwargs)
103
+ except (URLError, TimeoutError) as e:
104
+ # 412 is a specific OpenML error code.
105
+ if isinstance(e, HTTPError) and e.code == 412:
106
+ raise
107
+ if retry_counter == 0:
108
+ raise
109
+ warn(
110
+ f"A network error occurred while downloading {url}. Retrying..."
111
+ )
112
+ retry_counter -= 1
113
+ time.sleep(delay)
114
+
115
+ return wrapper
116
+
117
+ return decorator
118
+
119
+
120
+ def _open_openml_url(
121
+ openml_path: str, data_home: Optional[str], n_retries: int = 3, delay: float = 1.0
122
+ ):
123
+ """
124
+ Returns a resource from OpenML.org. Caches it to data_home if required.
125
+
126
+ Parameters
127
+ ----------
128
+ openml_path : str
129
+ OpenML URL that will be accessed. This will be prefixes with
130
+ _OPENML_PREFIX.
131
+
132
+ data_home : str
133
+ Directory to which the files will be cached. If None, no caching will
134
+ be applied.
135
+
136
+ n_retries : int, default=3
137
+ Number of retries when HTTP errors are encountered. Error with status
138
+ code 412 won't be retried as they represent OpenML generic errors.
139
+
140
+ delay : float, default=1.0
141
+ Number of seconds between retries.
142
+
143
+ Returns
144
+ -------
145
+ result : stream
146
+ A stream to the OpenML resource.
147
+ """
148
+
149
+ def is_gzip_encoded(_fsrc):
150
+ return _fsrc.info().get("Content-Encoding", "") == "gzip"
151
+
152
+ req = Request(_OPENML_PREFIX + openml_path)
153
+ req.add_header("Accept-encoding", "gzip")
154
+
155
+ if data_home is None:
156
+ fsrc = _retry_on_network_error(n_retries, delay, req.full_url)(urlopen)(req)
157
+ if is_gzip_encoded(fsrc):
158
+ return gzip.GzipFile(fileobj=fsrc, mode="rb")
159
+ return fsrc
160
+
161
+ local_path = _get_local_path(openml_path, data_home)
162
+ dir_name, file_name = os.path.split(local_path)
163
+ if not os.path.exists(local_path):
164
+ os.makedirs(dir_name, exist_ok=True)
165
+ try:
166
+ # Create a tmpdir as a subfolder of dir_name where the final file will
167
+ # be moved to if the download is successful. This guarantees that the
168
+ # renaming operation to the final location is atomic to ensure the
169
+ # concurrence safety of the dataset caching mechanism.
170
+ with TemporaryDirectory(dir=dir_name) as tmpdir:
171
+ with closing(
172
+ _retry_on_network_error(n_retries, delay, req.full_url)(urlopen)(
173
+ req
174
+ )
175
+ ) as fsrc:
176
+ opener: Callable
177
+ if is_gzip_encoded(fsrc):
178
+ opener = open
179
+ else:
180
+ opener = gzip.GzipFile
181
+ with opener(os.path.join(tmpdir, file_name), "wb") as fdst:
182
+ shutil.copyfileobj(fsrc, fdst)
183
+ shutil.move(fdst.name, local_path)
184
+ except Exception:
185
+ if os.path.exists(local_path):
186
+ os.unlink(local_path)
187
+ raise
188
+
189
+ # XXX: First time, decompression will not be necessary (by using fsrc), but
190
+ # it will happen nonetheless
191
+ return gzip.GzipFile(local_path, "rb")
192
+
193
+
194
+ class OpenMLError(ValueError):
195
+ """HTTP 412 is a specific OpenML error code, indicating a generic error"""
196
+
197
+ pass
198
+
199
+
200
+ def _get_json_content_from_openml_api(
201
+ url: str,
202
+ error_message: Optional[str],
203
+ data_home: Optional[str],
204
+ n_retries: int = 3,
205
+ delay: float = 1.0,
206
+ ) -> Dict:
207
+ """
208
+ Loads json data from the openml api.
209
+
210
+ Parameters
211
+ ----------
212
+ url : str
213
+ The URL to load from. Should be an official OpenML endpoint.
214
+
215
+ error_message : str or None
216
+ The error message to raise if an acceptable OpenML error is thrown
217
+ (acceptable error is, e.g., data id not found. Other errors, like 404's
218
+ will throw the native error message).
219
+
220
+ data_home : str or None
221
+ Location to cache the response. None if no cache is required.
222
+
223
+ n_retries : int, default=3
224
+ Number of retries when HTTP errors are encountered. Error with status
225
+ code 412 won't be retried as they represent OpenML generic errors.
226
+
227
+ delay : float, default=1.0
228
+ Number of seconds between retries.
229
+
230
+ Returns
231
+ -------
232
+ json_data : json
233
+ the json result from the OpenML server if the call was successful.
234
+ An exception otherwise.
235
+ """
236
+
237
+ @_retry_with_clean_cache(url, data_home=data_home)
238
+ def _load_json():
239
+ with closing(
240
+ _open_openml_url(url, data_home, n_retries=n_retries, delay=delay)
241
+ ) as response:
242
+ return json.loads(response.read().decode("utf-8"))
243
+
244
+ try:
245
+ return _load_json()
246
+ except HTTPError as error:
247
+ # 412 is an OpenML specific error code, indicating a generic error
248
+ # (e.g., data not found)
249
+ if error.code != 412:
250
+ raise error
251
+
252
+ # 412 error, not in except for nicer traceback
253
+ raise OpenMLError(error_message)
254
+
255
+
256
+ def _get_data_info_by_name(
257
+ name: str,
258
+ version: Union[int, str],
259
+ data_home: Optional[str],
260
+ n_retries: int = 3,
261
+ delay: float = 1.0,
262
+ ):
263
+ """
264
+ Utilizes the openml dataset listing api to find a dataset by
265
+ name/version
266
+ OpenML api function:
267
+ https://www.openml.org/api_docs#!/data/get_data_list_data_name_data_name
268
+
269
+ Parameters
270
+ ----------
271
+ name : str
272
+ name of the dataset
273
+
274
+ version : int or str
275
+ If version is an integer, the exact name/version will be obtained from
276
+ OpenML. If version is a string (value: "active") it will take the first
277
+ version from OpenML that is annotated as active. Any other string
278
+ values except "active" are treated as integer.
279
+
280
+ data_home : str or None
281
+ Location to cache the response. None if no cache is required.
282
+
283
+ n_retries : int, default=3
284
+ Number of retries when HTTP errors are encountered. Error with status
285
+ code 412 won't be retried as they represent OpenML generic errors.
286
+
287
+ delay : float, default=1.0
288
+ Number of seconds between retries.
289
+
290
+ Returns
291
+ -------
292
+ first_dataset : json
293
+ json representation of the first dataset object that adhired to the
294
+ search criteria
295
+
296
+ """
297
+ if version == "active":
298
+ # situation in which we return the oldest active version
299
+ url = _SEARCH_NAME.format(name) + "/status/active/"
300
+ error_msg = "No active dataset {} found.".format(name)
301
+ json_data = _get_json_content_from_openml_api(
302
+ url,
303
+ error_msg,
304
+ data_home=data_home,
305
+ n_retries=n_retries,
306
+ delay=delay,
307
+ )
308
+ res = json_data["data"]["dataset"]
309
+ if len(res) > 1:
310
+ first_version = version = res[0]["version"]
311
+ warning_msg = (
312
+ "Multiple active versions of the dataset matching the name"
313
+ f" {name} exist. Versions may be fundamentally different, "
314
+ f"returning version {first_version}. "
315
+ "Available versions:\n"
316
+ )
317
+ for r in res:
318
+ warning_msg += f"- version {r['version']}, status: {r['status']}\n"
319
+ warning_msg += (
320
+ f" url: https://www.openml.org/search?type=data&id={r['did']}\n"
321
+ )
322
+ warn(warning_msg)
323
+ return res[0]
324
+
325
+ # an integer version has been provided
326
+ url = (_SEARCH_NAME + "/data_version/{}").format(name, version)
327
+ try:
328
+ json_data = _get_json_content_from_openml_api(
329
+ url,
330
+ error_message=None,
331
+ data_home=data_home,
332
+ n_retries=n_retries,
333
+ delay=delay,
334
+ )
335
+ except OpenMLError:
336
+ # we can do this in 1 function call if OpenML does not require the
337
+ # specification of the dataset status (i.e., return datasets with a
338
+ # given name / version regardless of active, deactivated, etc. )
339
+ # TODO: feature request OpenML.
340
+ url += "/status/deactivated"
341
+ error_msg = "Dataset {} with version {} not found.".format(name, version)
342
+ json_data = _get_json_content_from_openml_api(
343
+ url,
344
+ error_msg,
345
+ data_home=data_home,
346
+ n_retries=n_retries,
347
+ delay=delay,
348
+ )
349
+
350
+ return json_data["data"]["dataset"][0]
351
+
352
+
353
+ def _get_data_description_by_id(
354
+ data_id: int,
355
+ data_home: Optional[str],
356
+ n_retries: int = 3,
357
+ delay: float = 1.0,
358
+ ) -> Dict[str, Any]:
359
+ # OpenML API function: https://www.openml.org/api_docs#!/data/get_data_id
360
+ url = _DATA_INFO.format(data_id)
361
+ error_message = "Dataset with data_id {} not found.".format(data_id)
362
+ json_data = _get_json_content_from_openml_api(
363
+ url,
364
+ error_message,
365
+ data_home=data_home,
366
+ n_retries=n_retries,
367
+ delay=delay,
368
+ )
369
+ return json_data["data_set_description"]
370
+
371
+
372
+ def _get_data_features(
373
+ data_id: int,
374
+ data_home: Optional[str],
375
+ n_retries: int = 3,
376
+ delay: float = 1.0,
377
+ ) -> OpenmlFeaturesType:
378
+ # OpenML function:
379
+ # https://www.openml.org/api_docs#!/data/get_data_features_id
380
+ url = _DATA_FEATURES.format(data_id)
381
+ error_message = "Dataset with data_id {} not found.".format(data_id)
382
+ json_data = _get_json_content_from_openml_api(
383
+ url,
384
+ error_message,
385
+ data_home=data_home,
386
+ n_retries=n_retries,
387
+ delay=delay,
388
+ )
389
+ return json_data["data_features"]["feature"]
390
+
391
+
392
+ def _get_data_qualities(
393
+ data_id: int,
394
+ data_home: Optional[str],
395
+ n_retries: int = 3,
396
+ delay: float = 1.0,
397
+ ) -> OpenmlQualitiesType:
398
+ # OpenML API function:
399
+ # https://www.openml.org/api_docs#!/data/get_data_qualities_id
400
+ url = _DATA_QUALITIES.format(data_id)
401
+ error_message = "Dataset with data_id {} not found.".format(data_id)
402
+ json_data = _get_json_content_from_openml_api(
403
+ url,
404
+ error_message,
405
+ data_home=data_home,
406
+ n_retries=n_retries,
407
+ delay=delay,
408
+ )
409
+ # the qualities might not be available, but we still try to process
410
+ # the data
411
+ return json_data.get("data_qualities", {}).get("quality", [])
412
+
413
+
414
+ def _get_num_samples(data_qualities: OpenmlQualitiesType) -> int:
415
+ """Get the number of samples from data qualities.
416
+
417
+ Parameters
418
+ ----------
419
+ data_qualities : list of dict
420
+ Used to retrieve the number of instances (samples) in the dataset.
421
+
422
+ Returns
423
+ -------
424
+ n_samples : int
425
+ The number of samples in the dataset or -1 if data qualities are
426
+ unavailable.
427
+ """
428
+ # If the data qualities are unavailable, we return -1
429
+ default_n_samples = -1
430
+
431
+ qualities = {d["name"]: d["value"] for d in data_qualities}
432
+ return int(float(qualities.get("NumberOfInstances", default_n_samples)))
433
+
434
+
435
+ def _load_arff_response(
436
+ url: str,
437
+ data_home: Optional[str],
438
+ parser: str,
439
+ output_type: str,
440
+ openml_columns_info: dict,
441
+ feature_names_to_select: List[str],
442
+ target_names_to_select: List[str],
443
+ shape: Optional[Tuple[int, int]],
444
+ md5_checksum: str,
445
+ n_retries: int = 3,
446
+ delay: float = 1.0,
447
+ read_csv_kwargs: Optional[Dict] = None,
448
+ ):
449
+ """Load the ARFF data associated with the OpenML URL.
450
+
451
+ In addition of loading the data, this function will also check the
452
+ integrity of the downloaded file from OpenML using MD5 checksum.
453
+
454
+ Parameters
455
+ ----------
456
+ url : str
457
+ The URL of the ARFF file on OpenML.
458
+
459
+ data_home : str
460
+ The location where to cache the data.
461
+
462
+ parser : {"liac-arff", "pandas"}
463
+ The parser used to parse the ARFF file.
464
+
465
+ output_type : {"numpy", "pandas", "sparse"}
466
+ The type of the arrays that will be returned. The possibilities are:
467
+
468
+ - `"numpy"`: both `X` and `y` will be NumPy arrays;
469
+ - `"sparse"`: `X` will be sparse matrix and `y` will be a NumPy array;
470
+ - `"pandas"`: `X` will be a pandas DataFrame and `y` will be either a
471
+ pandas Series or DataFrame.
472
+
473
+ openml_columns_info : dict
474
+ The information provided by OpenML regarding the columns of the ARFF
475
+ file.
476
+
477
+ feature_names_to_select : list of str
478
+ The list of the features to be selected.
479
+
480
+ target_names_to_select : list of str
481
+ The list of the target variables to be selected.
482
+
483
+ shape : tuple or None
484
+ With `parser="liac-arff"`, when using a generator to load the data,
485
+ one needs to provide the shape of the data beforehand.
486
+
487
+ md5_checksum : str
488
+ The MD5 checksum provided by OpenML to check the data integrity.
489
+
490
+ n_retries : int, default=3
491
+ The number of times to retry downloading the data if it fails.
492
+
493
+ delay : float, default=1.0
494
+ The delay between two consecutive downloads in seconds.
495
+
496
+ read_csv_kwargs : dict, default=None
497
+ Keyword arguments to pass to `pandas.read_csv` when using the pandas parser.
498
+ It allows to overwrite the default options.
499
+
500
+ .. versionadded:: 1.3
501
+
502
+ Returns
503
+ -------
504
+ X : {ndarray, sparse matrix, dataframe}
505
+ The data matrix.
506
+
507
+ y : {ndarray, dataframe, series}
508
+ The target.
509
+
510
+ frame : dataframe or None
511
+ A dataframe containing both `X` and `y`. `None` if
512
+ `output_array_type != "pandas"`.
513
+
514
+ categories : list of str or None
515
+ The names of the features that are categorical. `None` if
516
+ `output_array_type == "pandas"`.
517
+ """
518
+ gzip_file = _open_openml_url(url, data_home, n_retries=n_retries, delay=delay)
519
+ with closing(gzip_file):
520
+ md5 = hashlib.md5()
521
+ for chunk in iter(lambda: gzip_file.read(4096), b""):
522
+ md5.update(chunk)
523
+ actual_md5_checksum = md5.hexdigest()
524
+
525
+ if actual_md5_checksum != md5_checksum:
526
+ raise ValueError(
527
+ f"md5 checksum of local file for {url} does not match description: "
528
+ f"expected: {md5_checksum} but got {actual_md5_checksum}. "
529
+ "Downloaded file could have been modified / corrupted, clean cache "
530
+ "and retry..."
531
+ )
532
+
533
+ def _open_url_and_load_gzip_file(url, data_home, n_retries, delay, arff_params):
534
+ gzip_file = _open_openml_url(url, data_home, n_retries=n_retries, delay=delay)
535
+ with closing(gzip_file):
536
+ return load_arff_from_gzip_file(gzip_file, **arff_params)
537
+
538
+ arff_params: Dict = dict(
539
+ parser=parser,
540
+ output_type=output_type,
541
+ openml_columns_info=openml_columns_info,
542
+ feature_names_to_select=feature_names_to_select,
543
+ target_names_to_select=target_names_to_select,
544
+ shape=shape,
545
+ read_csv_kwargs=read_csv_kwargs or {},
546
+ )
547
+ try:
548
+ X, y, frame, categories = _open_url_and_load_gzip_file(
549
+ url, data_home, n_retries, delay, arff_params
550
+ )
551
+ except Exception as exc:
552
+ if parser != "pandas":
553
+ raise
554
+
555
+ from pandas.errors import ParserError
556
+
557
+ if not isinstance(exc, ParserError):
558
+ raise
559
+
560
+ # A parsing error could come from providing the wrong quotechar
561
+ # to pandas. By default, we use a double quote. Thus, we retry
562
+ # with a single quote before to raise the error.
563
+ arff_params["read_csv_kwargs"].update(quotechar="'")
564
+ X, y, frame, categories = _open_url_and_load_gzip_file(
565
+ url, data_home, n_retries, delay, arff_params
566
+ )
567
+
568
+ return X, y, frame, categories
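For readers unfamiliar with the chunked checksum pattern used at the top of this function, here is a minimal standalone sketch of the same idea; the file path and expected digest below are hypothetical placeholders, not values used by scikit-learn.

    import hashlib

    def verify_md5(path, expected_md5, chunk_size=4096):
        # Read the file in fixed-size chunks so that large downloads never
        # need to be held in memory all at once.
        md5 = hashlib.md5()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                md5.update(chunk)
        actual = md5.hexdigest()
        if actual != expected_md5:
            raise ValueError(
                f"md5 checksum mismatch: expected {expected_md5}, got {actual}"
            )

    # Hypothetical usage:
    # verify_md5("/tmp/dataset.arff.gz", "d41d8cd98f00b204e9800998ecf8427e")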
569
+
570
+
571
+ def _download_data_to_bunch(
572
+ url: str,
573
+ sparse: bool,
574
+ data_home: Optional[str],
575
+ *,
576
+ as_frame: bool,
577
+ openml_columns_info: List[dict],
578
+ data_columns: List[str],
579
+ target_columns: List[str],
580
+ shape: Optional[Tuple[int, int]],
581
+ md5_checksum: str,
582
+ n_retries: int = 3,
583
+ delay: float = 1.0,
584
+ parser: str,
585
+ read_csv_kwargs: Optional[Dict] = None,
586
+ ):
587
+ """Download ARFF data, load it into a specific container and create a Bunch.
588
+
589
+ This function has a mechanism to retry/cache/clean the data.
590
+
591
+ Parameters
592
+ ----------
593
+ url : str
594
+ The URL of the ARFF file on OpenML.
595
+
596
+ sparse : bool
597
+ Whether the dataset is expected to use the sparse ARFF format.
598
+
599
+ data_home : str
600
+ The location where to cache the data.
601
+
602
+ as_frame : bool
603
+ Whether or not to return the data into a pandas DataFrame.
604
+
605
+ openml_columns_info : list of dict
606
+ The information regarding the columns provided by OpenML for the
607
+ ARFF dataset. The information is stored as a list of dictionaries.
608
+
609
+ data_columns : list of str
610
+ The list of the features to be selected.
611
+
612
+ target_columns : list of str
613
+ The list of the target variables to be selected.
614
+
615
+ shape : tuple or None
616
+ With `parser="liac-arff"`, when using a generator to load the data,
617
+ one needs to provide the shape of the data beforehand.
618
+
619
+ md5_checksum : str
620
+ The MD5 checksum provided by OpenML to check the data integrity.
621
+
622
+ n_retries : int, default=3
623
+ Number of retries when HTTP errors are encountered. Errors with status
624
+ code 412 won't be retried as they represent OpenML generic errors.
625
+
626
+ delay : float, default=1.0
627
+ Number of seconds between retries.
628
+
629
+ parser : {"liac-arff", "pandas"}
630
+ The parser used to parse the ARFF file.
631
+
632
+ read_csv_kwargs : dict, default=None
633
+ Keyword arguments to pass to `pandas.read_csv` when using the pandas parser.
634
+ It allows overwriting the default options.
635
+
636
+ .. versionadded:: 1.3
637
+
638
+ Returns
639
+ -------
640
+ data : :class:`~sklearn.utils.Bunch`
641
+ Dictionary-like object, with the following attributes.
642
+
643
+ X : {ndarray, sparse matrix, dataframe}
644
+ The data matrix.
645
+ y : {ndarray, dataframe, series}
646
+ The target.
647
+ frame : dataframe or None
648
+ A dataframe containing both `X` and `y`. `None` if
649
+ `output_array_type != "pandas"`.
650
+ categories : list of str or None
651
+ The names of the features that are categorical. `None` if
652
+ `output_array_type == "pandas"`.
653
+ """
654
+ # Prepare which columns and data types should be returned for the X and y
655
+ features_dict = {feature["name"]: feature for feature in openml_columns_info}
656
+
657
+ if sparse:
658
+ output_type = "sparse"
659
+ elif as_frame:
660
+ output_type = "pandas"
661
+ else:
662
+ output_type = "numpy"
663
+
664
+ # XXX: target columns should all be categorical or all numeric
665
+ _verify_target_data_type(features_dict, target_columns)
666
+ for name in target_columns:
667
+ column_info = features_dict[name]
668
+ n_missing_values = int(column_info["number_of_missing_values"])
669
+ if n_missing_values > 0:
670
+ raise ValueError(
671
+ f"Target column '{column_info['name']}' has {n_missing_values} missing "
672
+ "values. Missing values are not supported for target columns."
673
+ )
674
+
675
+ no_retry_exception = None
676
+ if parser == "pandas":
677
+ # If we get a ParserError with pandas, then we don't want to retry and we raise
678
+ # early.
679
+ from pandas.errors import ParserError
680
+
681
+ no_retry_exception = ParserError
682
+
683
+ X, y, frame, categories = _retry_with_clean_cache(
684
+ url, data_home, no_retry_exception
685
+ )(_load_arff_response)(
686
+ url,
687
+ data_home,
688
+ parser=parser,
689
+ output_type=output_type,
690
+ openml_columns_info=features_dict,
691
+ feature_names_to_select=data_columns,
692
+ target_names_to_select=target_columns,
693
+ shape=shape,
694
+ md5_checksum=md5_checksum,
695
+ n_retries=n_retries,
696
+ delay=delay,
697
+ read_csv_kwargs=read_csv_kwargs,
698
+ )
699
+
700
+ return Bunch(
701
+ data=X,
702
+ target=y,
703
+ frame=frame,
704
+ categories=categories,
705
+ feature_names=data_columns,
706
+ target_names=target_columns,
707
+ )
708
+
709
+
710
+ def _verify_target_data_type(features_dict, target_columns):
711
+ # verifies the data type of the y array in case there are multiple targets
712
+ # (throws an error if these targets do not comply with sklearn support)
713
+ if not isinstance(target_columns, list):
714
+ raise ValueError("target_column should be list, got: %s" % type(target_columns))
715
+ found_types = set()
716
+ for target_column in target_columns:
717
+ if target_column not in features_dict:
718
+ raise KeyError(f"Could not find target_column='{target_column}'")
719
+ if features_dict[target_column]["data_type"] == "numeric":
720
+ found_types.add(np.float64)
721
+ else:
722
+ found_types.add(object)
723
+
724
+ # note: we compare to a string, not boolean
725
+ if features_dict[target_column]["is_ignore"] == "true":
726
+ warn(f"target_column='{target_column}' has flag is_ignore.")
727
+ if features_dict[target_column]["is_row_identifier"] == "true":
728
+ warn(f"target_column='{target_column}' has flag is_row_identifier.")
729
+ if len(found_types) > 1:
730
+ raise ValueError(
731
+ "Can only handle homogeneous multi-target datasets, "
732
+ "i.e., all targets are either numeric or "
733
+ "categorical."
734
+ )
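As a quick, hedged illustration of the homogeneity rule above, the following sketch builds a made-up `features_dict` (the column names and metadata values are invented, and the helper is assumed to be importable from this module) and shows which target selections pass the check:

    # Hypothetical OpenML-style column metadata; only the keys read by the check matter.
    features = {
        "age": {"data_type": "numeric", "is_ignore": "false", "is_row_identifier": "false"},
        "height": {"data_type": "numeric", "is_ignore": "false", "is_row_identifier": "false"},
        "class": {"data_type": "nominal", "is_ignore": "false", "is_row_identifier": "false"},
    }

    _verify_target_data_type(features, ["age", "height"])  # all numeric: passes silently

    try:
        _verify_target_data_type(features, ["age", "class"])  # numeric + categorical
    except ValueError as err:
        print(err)  # "Can only handle homogeneous multi-target datasets, ..."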
735
+
736
+
737
+ def _valid_data_column_names(features_list, target_columns):
738
+ # logic for determining which columns can be learned on. Note that, per the
739
+ # OpenML guide, columns that have the `is_row_identifier` or
740
+ # `is_ignore` flag cannot be learned on. Target columns are also
741
+ # excluded.
742
+ valid_data_column_names = []
743
+ for feature in features_list:
744
+ if (
745
+ feature["name"] not in target_columns
746
+ and feature["is_ignore"] != "true"
747
+ and feature["is_row_identifier"] != "true"
748
+ ):
749
+ valid_data_column_names.append(feature["name"])
750
+ return valid_data_column_names
751
+
752
+
753
+ @validate_params(
754
+ {
755
+ "name": [str, None],
756
+ "version": [Interval(Integral, 1, None, closed="left"), StrOptions({"active"})],
757
+ "data_id": [Interval(Integral, 1, None, closed="left"), None],
758
+ "data_home": [str, os.PathLike, None],
759
+ "target_column": [str, list, None],
760
+ "cache": [bool],
761
+ "return_X_y": [bool],
762
+ "as_frame": [bool, StrOptions({"auto"})],
763
+ "n_retries": [Interval(Integral, 1, None, closed="left")],
764
+ "delay": [Interval(Real, 0, None, closed="right")],
765
+ "parser": [
766
+ StrOptions({"auto", "pandas", "liac-arff"}),
767
+ ],
768
+ "read_csv_kwargs": [dict, None],
769
+ },
770
+ prefer_skip_nested_validation=True,
771
+ )
772
+ def fetch_openml(
773
+ name: Optional[str] = None,
774
+ *,
775
+ version: Union[str, int] = "active",
776
+ data_id: Optional[int] = None,
777
+ data_home: Optional[Union[str, os.PathLike]] = None,
778
+ target_column: Optional[Union[str, List]] = "default-target",
779
+ cache: bool = True,
780
+ return_X_y: bool = False,
781
+ as_frame: Union[str, bool] = "auto",
782
+ n_retries: int = 3,
783
+ delay: float = 1.0,
784
+ parser: str = "auto",
785
+ read_csv_kwargs: Optional[Dict] = None,
786
+ ):
787
+ """Fetch dataset from openml by name or dataset id.
788
+
789
+ Datasets are uniquely identified by either an integer ID or by a
790
+ combination of name and version (i.e. there might be multiple
791
+ versions of the 'iris' dataset). Please give either name or data_id
792
+ (not both). In case a name is given, a version can also be
793
+ provided.
794
+
795
+ Read more in the :ref:`User Guide <openml>`.
796
+
797
+ .. versionadded:: 0.20
798
+
799
+ .. note:: EXPERIMENTAL
800
+
801
+ The API is experimental (particularly the return value structure),
802
+ and might have small backward-incompatible changes without notice
803
+ or warning in future releases.
804
+
805
+ Parameters
806
+ ----------
807
+ name : str, default=None
808
+ String identifier of the dataset. Note that OpenML can have multiple
809
+ datasets with the same name.
810
+
811
+ version : int or 'active', default='active'
812
+ Version of the dataset. Can only be provided if also ``name`` is given.
813
+ If 'active' the oldest version that's still active is used. Since
814
+ there may be more than one active version of a dataset, and those
815
+ versions may fundamentally be different from one another, setting an
816
+ exact version is highly recommended.
817
+
818
+ data_id : int, default=None
819
+ OpenML ID of the dataset. The most specific way of retrieving a
820
+ dataset. If data_id is not given, name (and potential version) are
821
+ used to obtain a dataset.
822
+
823
+ data_home : str or path-like, default=None
824
+ Specify another download and cache folder for the data sets. By default
825
+ all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
826
+
827
+ target_column : str, list or None, default='default-target'
828
+ Specify the column name in the data to use as target. If
829
+ 'default-target', the standard target column as stored on the server
830
+ is used. If ``None``, all columns are returned as data and the
831
+ target is ``None``. If list (of strings), all columns with these names
832
+ are returned as multi-target (Note: not all scikit-learn classifiers
833
+ can handle all types of multi-output combinations).
834
+
835
+ cache : bool, default=True
836
+ Whether to cache the downloaded datasets into `data_home`.
837
+
838
+ return_X_y : bool, default=False
839
+ If True, returns ``(data, target)`` instead of a Bunch object. See
840
+ below for more information about the `data` and `target` objects.
841
+
842
+ as_frame : bool or 'auto', default='auto'
843
+ If True, the data is a pandas DataFrame including columns with
844
+ appropriate dtypes (numeric, string or categorical). The target is
845
+ a pandas DataFrame or Series depending on the number of target_columns.
846
+ The Bunch will contain a ``frame`` attribute with the target and the
847
+ data. If ``return_X_y`` is True, then ``(data, target)`` will be pandas
848
+ DataFrames or Series as described above.
849
+
850
+ If `as_frame` is 'auto', the data and target will be converted to
851
+ DataFrame or Series as if `as_frame` is set to True, unless the dataset
852
+ is stored in sparse format.
853
+
854
+ If `as_frame` is False, the data and target will be NumPy arrays and
855
+ the `data` will only contain numerical values when `parser="liac-arff"`
856
+ where the categories are provided in the attribute `categories` of the
857
+ `Bunch` instance. When `parser="pandas"`, no ordinal encoding is made.
858
+
859
+ .. versionchanged:: 0.24
860
+ The default value of `as_frame` changed from `False` to `'auto'`
861
+ in 0.24.
862
+
863
+ n_retries : int, default=3
864
+ Number of retries when HTTP errors or network timeouts are encountered.
865
+ Errors with status code 412 won't be retried as they represent OpenML
866
+ generic errors.
867
+
868
+ delay : float, default=1.0
869
+ Number of seconds between retries.
870
+
871
+ parser : {"auto", "pandas", "liac-arff"}, default="auto"
872
+ Parser used to load the ARFF file. Two parsers are implemented:
873
+
874
+ - `"pandas"`: this is the most efficient parser. However, it requires
875
+ pandas to be installed and can only open dense datasets.
876
+ - `"liac-arff"`: this is a pure Python ARFF parser that is much less
877
+ memory- and CPU-efficient. It deals with sparse ARFF datasets.
878
+
879
+ If `"auto"`, the parser is chosen automatically such that `"liac-arff"`
880
+ is selected for sparse ARFF datasets, otherwise `"pandas"` is selected.
881
+
882
+ .. versionadded:: 1.2
883
+ .. versionchanged:: 1.4
884
+ The default value of `parser` changes from `"liac-arff"` to
885
+ `"auto"`.
886
+
887
+ read_csv_kwargs : dict, default=None
888
+ Keyword arguments passed to :func:`pandas.read_csv` when loading the data
889
+ from an ARFF file when using the pandas parser. It allows
890
+ overwriting some default parameters.
891
+
892
+ .. versionadded:: 1.3
893
+
894
+ Returns
895
+ -------
896
+ data : :class:`~sklearn.utils.Bunch`
897
+ Dictionary-like object, with the following attributes.
898
+
899
+ data : np.array, scipy.sparse.csr_matrix of floats, or pandas DataFrame
900
+ The feature matrix. Categorical features are encoded as ordinals.
901
+ target : np.array, pandas Series or DataFrame
902
+ The regression target or classification labels, if applicable.
903
+ Dtype is float if numeric, and object if categorical. If
904
+ ``as_frame`` is True, ``target`` is a pandas object.
905
+ DESCR : str
906
+ The full description of the dataset.
907
+ feature_names : list
908
+ The names of the dataset columns.
909
+ target_names: list
910
+ The names of the target columns.
911
+
912
+ .. versionadded:: 0.22
913
+
914
+ categories : dict or None
915
+ Maps each categorical feature name to a list of values, such
916
+ that the value encoded as i is the i-th in the list. If ``as_frame``
917
+ is True, this is None.
918
+ details : dict
919
+ More metadata from OpenML.
920
+ frame : pandas DataFrame
921
+ Only present when `as_frame=True`. DataFrame with ``data`` and
922
+ ``target``.
923
+
924
+ (data, target) : tuple if ``return_X_y`` is True
925
+
926
+ .. note:: EXPERIMENTAL
927
+
928
+ This interface is **experimental** and subsequent releases may
929
+ change attributes without notice (although there should only be
930
+ minor changes to ``data`` and ``target``).
931
+
932
+ Missing values in the 'data' are represented as NaN's. Missing values
933
+ in 'target' are represented as NaN's (numerical target) or None
934
+ (categorical target).
935
+
936
+ Notes
937
+ -----
938
+ The `"pandas"` and `"liac-arff"` parsers can lead to different data types
939
+ in the output. The notable differences are the following:
940
+
941
+ - The `"liac-arff"` parser always encodes categorical features as `str` objects.
942
+ In contrast, the `"pandas"` parser infers the type while
943
+ reading, and numerical categories will be cast to integers whenever
944
+ possible.
945
+ - The `"liac-arff"` parser uses float64 to encode numerical features
946
+ tagged as 'REAL' and 'NUMERICAL' in the metadata. The `"pandas"`
947
+ parser instead infers whether these numerical features correspond
948
+ to integers and uses pandas' Integer extension dtype.
949
+ - In particular, classification datasets with integer categories are
950
+ typically loaded as such `(0, 1, ...)` with the `"pandas"` parser while
951
+ `"liac-arff"` will force the use of string encoded class labels such as
952
+ `"0"`, `"1"` and so on.
953
+ - The `"pandas"` parser will not strip single quotes - i.e. `'` - from
954
+ string columns. For instance, a string `'my string'` will be kept as is
955
+ while the `"liac-arff"` parser will strip the single quotes. For
956
+ categorical columns, the single quotes are stripped from the values.
957
+
958
+ In addition, when `as_frame=False` is used, the `"liac-arff"` parser
959
+ returns ordinally encoded data where the categories are provided in the
960
+ attribute `categories` of the `Bunch` instance. Instead, `"pandas"` returns
961
+ a NumPy array where the categories are not encoded.
962
+
963
+ Examples
964
+ --------
965
+ >>> from sklearn.datasets import fetch_openml
966
+ >>> adult = fetch_openml("adult", version=2) # doctest: +SKIP
967
+ >>> adult.frame.info() # doctest: +SKIP
968
+ <class 'pandas.core.frame.DataFrame'>
969
+ RangeIndex: 48842 entries, 0 to 48841
970
+ Data columns (total 15 columns):
971
+ # Column Non-Null Count Dtype
972
+ --- ------ -------------- -----
973
+ 0 age 48842 non-null int64
974
+ 1 workclass 46043 non-null category
975
+ 2 fnlwgt 48842 non-null int64
976
+ 3 education 48842 non-null category
977
+ 4 education-num 48842 non-null int64
978
+ 5 marital-status 48842 non-null category
979
+ 6 occupation 46033 non-null category
980
+ 7 relationship 48842 non-null category
981
+ 8 race 48842 non-null category
982
+ 9 sex 48842 non-null category
983
+ 10 capital-gain 48842 non-null int64
984
+ 11 capital-loss 48842 non-null int64
985
+ 12 hours-per-week 48842 non-null int64
986
+ 13 native-country 47985 non-null category
987
+ 14 class 48842 non-null category
988
+ dtypes: category(9), int64(6)
989
+ memory usage: 2.7 MB
990
+ """
991
+ if cache is False:
992
+ # no caching will be applied
993
+ data_home = None
994
+ else:
995
+ data_home = get_data_home(data_home=data_home)
996
+ data_home = join(str(data_home), "openml")
997
+
998
+ # check valid function arguments. data_id XOR (name, version) should be
999
+ # provided
1000
+ if name is not None:
1001
+ # OpenML is case-insensitive, but the caching mechanism is not
1002
+ # convert all data names (str) to lower case
1003
+ name = name.lower()
1004
+ if data_id is not None:
1005
+ raise ValueError(
1006
+ "Dataset data_id={} and name={} passed, but you can only "
1007
+ "specify a numeric data_id or a name, not "
1008
+ "both.".format(data_id, name)
1009
+ )
1010
+ data_info = _get_data_info_by_name(
1011
+ name, version, data_home, n_retries=n_retries, delay=delay
1012
+ )
1013
+ data_id = data_info["did"]
1014
+ elif data_id is not None:
1015
+ # from the previous if statement, it is given that name is None
1016
+ if version != "active":
1017
+ raise ValueError(
1018
+ "Dataset data_id={} and version={} passed, but you can only "
1019
+ "specify a numeric data_id or a version, not "
1020
+ "both.".format(data_id, version)
1021
+ )
1022
+ else:
1023
+ raise ValueError(
1024
+ "Neither name nor data_id are provided. Please provide name or data_id."
1025
+ )
1026
+
1027
+ data_description = _get_data_description_by_id(data_id, data_home)
1028
+ if data_description["status"] != "active":
1029
+ warn(
1030
+ "Version {} of dataset {} is inactive, meaning that issues have "
1031
+ "been found in the dataset. Try using a newer version from "
1032
+ "this URL: {}".format(
1033
+ data_description["version"],
1034
+ data_description["name"],
1035
+ data_description["url"],
1036
+ )
1037
+ )
1038
+ if "error" in data_description:
1039
+ warn(
1040
+ "OpenML registered a problem with the dataset. It might be "
1041
+ "unusable. Error: {}".format(data_description["error"])
1042
+ )
1043
+ if "warning" in data_description:
1044
+ warn(
1045
+ "OpenML raised a warning on the dataset. It might be "
1046
+ "unusable. Warning: {}".format(data_description["warning"])
1047
+ )
1048
+
1049
+ return_sparse = data_description["format"].lower() == "sparse_arff"
1050
+ as_frame = not return_sparse if as_frame == "auto" else as_frame
1051
+ if parser == "auto":
1052
+ parser_ = "liac-arff" if return_sparse else "pandas"
1053
+ else:
1054
+ parser_ = parser
1055
+
1056
+ if parser_ == "pandas":
1057
+ try:
1058
+ check_pandas_support("`fetch_openml`")
1059
+ except ImportError as exc:
1060
+ if as_frame:
1061
+ err_msg = (
1062
+ "Returning pandas objects requires pandas to be installed. "
1063
+ "Alternatively, explicitly set `as_frame=False` and "
1064
+ "`parser='liac-arff'`."
1065
+ )
1066
+ else:
1067
+ err_msg = (
1068
+ f"Using `parser={parser!r}` with dense data requires pandas to be "
1069
+ "installed. Alternatively, explicitly set `parser='liac-arff'`."
1070
+ )
1071
+ raise ImportError(err_msg) from exc
1072
+
1073
+ if return_sparse:
1074
+ if as_frame:
1075
+ raise ValueError(
1076
+ "Sparse ARFF datasets cannot be loaded with as_frame=True. "
1077
+ "Use as_frame=False or as_frame='auto' instead."
1078
+ )
1079
+ if parser_ == "pandas":
1080
+ raise ValueError(
1081
+ f"Sparse ARFF datasets cannot be loaded with parser={parser!r}. "
1082
+ "Use parser='liac-arff' or parser='auto' instead."
1083
+ )
1084
+
1085
+ # download data features, meta-info about column types
1086
+ features_list = _get_data_features(data_id, data_home)
1087
+
1088
+ if not as_frame:
1089
+ for feature in features_list:
1090
+ if "true" in (feature["is_ignore"], feature["is_row_identifier"]):
1091
+ continue
1092
+ if feature["data_type"] == "string":
1093
+ raise ValueError(
1094
+ "STRING attributes are not supported for "
1095
+ "array representation. Try as_frame=True"
1096
+ )
1097
+
1098
+ if target_column == "default-target":
1099
+ # determines the default target based on the data feature results
1100
+ # (which is currently more reliable than the data description;
1101
+ # see issue: https://github.com/openml/OpenML/issues/768)
1102
+ target_columns = [
1103
+ feature["name"]
1104
+ for feature in features_list
1105
+ if feature["is_target"] == "true"
1106
+ ]
1107
+ elif isinstance(target_column, str):
1108
+ # for code-simplicity, make target_column by default a list
1109
+ target_columns = [target_column]
1110
+ elif target_column is None:
1111
+ target_columns = []
1112
+ else:
1113
+ # target_column already is of type list
1114
+ target_columns = target_column
1115
+ data_columns = _valid_data_column_names(features_list, target_columns)
1116
+
1117
+ shape: Optional[Tuple[int, int]]
1118
+ # determine arff encoding to return
1119
+ if not return_sparse:
1120
+ # The shape must include the ignored features to keep the right indexes
1121
+ # during the arff data conversion.
1122
+ data_qualities = _get_data_qualities(data_id, data_home)
1123
+ shape = _get_num_samples(data_qualities), len(features_list)
1124
+ else:
1125
+ shape = None
1126
+
1127
+ # obtain the data
1128
+ url = _DATA_FILE.format(data_description["file_id"])
1129
+ bunch = _download_data_to_bunch(
1130
+ url,
1131
+ return_sparse,
1132
+ data_home,
1133
+ as_frame=bool(as_frame),
1134
+ openml_columns_info=features_list,
1135
+ shape=shape,
1136
+ target_columns=target_columns,
1137
+ data_columns=data_columns,
1138
+ md5_checksum=data_description["md5_checksum"],
1139
+ n_retries=n_retries,
1140
+ delay=delay,
1141
+ parser=parser_,
1142
+ read_csv_kwargs=read_csv_kwargs,
1143
+ )
1144
+
1145
+ if return_X_y:
1146
+ return bunch.data, bunch.target
1147
+
1148
+ description = "{}\n\nDownloaded from openml.org.".format(
1149
+ data_description.pop("description")
1150
+ )
1151
+
1152
+ bunch.update(
1153
+ DESCR=description,
1154
+ details=data_description,
1155
+ url="https://www.openml.org/d/{}".format(data_id),
1156
+ )
1157
+
1158
+ return bunch
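To complement the docstring example above, here is a short, hedged usage sketch; it assumes network access and uses OpenML id 1590 (the "adult" dataset) purely for illustration.

    from sklearn.datasets import fetch_openml

    # Fetch by id with an explicit parser and frame output.
    adult = fetch_openml(data_id=1590, as_frame=True, parser="pandas")
    X, y = adult.data, adult.target
    print(X.shape, y.dtype)

    # The same call can return only the (data, target) pair.
    X, y = fetch_openml(data_id=1590, as_frame=True, parser="pandas", return_X_y=True)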
venv/lib/python3.10/site-packages/sklearn/datasets/_twenty_newsgroups.py ADDED
@@ -0,0 +1,561 @@
1
+ """Caching loader for the 20 newsgroups text classification dataset.
2
+
3
+
4
+ The description of the dataset is available on the official website at:
5
+
6
+ http://people.csail.mit.edu/jrennie/20Newsgroups/
7
+
8
+ Quoting the introduction:
9
+
10
+ The 20 Newsgroups data set is a collection of approximately 20,000
11
+ newsgroup documents, partitioned (nearly) evenly across 20 different
12
+ newsgroups. To the best of my knowledge, it was originally collected
13
+ by Ken Lang, probably for his Newsweeder: Learning to filter netnews
14
+ paper, though he does not explicitly mention this collection. The 20
15
+ newsgroups collection has become a popular data set for experiments
16
+ in text applications of machine learning techniques, such as text
17
+ classification and text clustering.
18
+
19
+ This dataset loader will download the recommended "by date" variant of the
20
+ dataset, which features a point-in-time split between the train and
21
+ test sets. The compressed dataset size is around 14 MB. Once
22
+ uncompressed, the train set is 52 MB and the test set is 34 MB.
23
+ """
24
+ # Copyright (c) 2011 Olivier Grisel <[email protected]>
25
+ # License: BSD 3 clause
26
+
27
+ import codecs
28
+ import logging
29
+ import os
30
+ import pickle
31
+ import re
32
+ import shutil
33
+ import tarfile
34
+ from contextlib import suppress
35
+
36
+ import joblib
37
+ import numpy as np
38
+ import scipy.sparse as sp
39
+
40
+ from .. import preprocessing
41
+ from ..feature_extraction.text import CountVectorizer
42
+ from ..utils import Bunch, check_random_state
43
+ from ..utils._param_validation import StrOptions, validate_params
44
+ from . import get_data_home, load_files
45
+ from ._base import (
46
+ RemoteFileMetadata,
47
+ _convert_data_dataframe,
48
+ _fetch_remote,
49
+ _pkl_filepath,
50
+ load_descr,
51
+ )
52
+
53
+ logger = logging.getLogger(__name__)
54
+
55
+ # The original data can be found at:
56
+ # https://people.csail.mit.edu/jrennie/20Newsgroups/20news-bydate.tar.gz
57
+ ARCHIVE = RemoteFileMetadata(
58
+ filename="20news-bydate.tar.gz",
59
+ url="https://ndownloader.figshare.com/files/5975967",
60
+ checksum="8f1b2514ca22a5ade8fbb9cfa5727df95fa587f4c87b786e15c759fa66d95610",
61
+ )
62
+
63
+ CACHE_NAME = "20news-bydate.pkz"
64
+ TRAIN_FOLDER = "20news-bydate-train"
65
+ TEST_FOLDER = "20news-bydate-test"
66
+
67
+
68
+ def _download_20newsgroups(target_dir, cache_path):
69
+ """Download the 20 newsgroups data and store it as a zipped pickle."""
70
+ train_path = os.path.join(target_dir, TRAIN_FOLDER)
71
+ test_path = os.path.join(target_dir, TEST_FOLDER)
72
+
73
+ os.makedirs(target_dir, exist_ok=True)
74
+
75
+ logger.info("Downloading dataset from %s (14 MB)", ARCHIVE.url)
76
+ archive_path = _fetch_remote(ARCHIVE, dirname=target_dir)
77
+
78
+ logger.debug("Decompressing %s", archive_path)
79
+ tarfile.open(archive_path, "r:gz").extractall(path=target_dir)
80
+
81
+ with suppress(FileNotFoundError):
82
+ os.remove(archive_path)
83
+
84
+ # Store a zipped pickle
85
+ cache = dict(
86
+ train=load_files(train_path, encoding="latin1"),
87
+ test=load_files(test_path, encoding="latin1"),
88
+ )
89
+ compressed_content = codecs.encode(pickle.dumps(cache), "zlib_codec")
90
+ with open(cache_path, "wb") as f:
91
+ f.write(compressed_content)
92
+
93
+ shutil.rmtree(target_dir)
94
+ return cache
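The zipped-pickle cache written above follows a simple pickle-then-zlib pattern; a minimal standalone sketch is shown below (the payload and file name are hypothetical):

    import codecs
    import pickle

    payload = {"train": ["first document", "second document"], "test": ["third document"]}

    # Write: pickle the object, then compress the bytes with the zlib codec.
    blob = codecs.encode(pickle.dumps(payload), "zlib_codec")
    with open("cache.pkz", "wb") as f:
        f.write(blob)

    # Read: decompress, then unpickle.
    with open("cache.pkz", "rb") as f:
        restored = pickle.loads(codecs.decode(f.read(), "zlib_codec"))
    assert restored == payload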
95
+
96
+
97
+ def strip_newsgroup_header(text):
98
+ """
99
+ Given text in "news" format, strip the headers by removing everything
100
+ before the first blank line.
101
+
102
+ Parameters
103
+ ----------
104
+ text : str
105
+ The text from which to remove the headers.
106
+ """
107
+ _before, _blankline, after = text.partition("\n\n")
108
+ return after
109
+
110
+
111
+ _QUOTE_RE = re.compile(
112
+ r"(writes in|writes:|wrote:|says:|said:" r"|^In article|^Quoted from|^\||^>)"
113
+ )
114
+
115
+
116
+ def strip_newsgroup_quoting(text):
117
+ """
118
+ Given text in "news" format, strip lines beginning with the quote
119
+ characters > or |, plus lines that often introduce a quoted section
120
+ (for example, because they contain the string 'writes:').
121
+
122
+ Parameters
123
+ ----------
124
+ text : str
125
+ The text from which to remove the quoted lines.
126
+ """
127
+ good_lines = [line for line in text.split("\n") if not _QUOTE_RE.search(line)]
128
+ return "\n".join(good_lines)
129
+
130
+
131
+ def strip_newsgroup_footer(text):
132
+ """
133
+ Given text in "news" format, attempt to remove a signature block.
134
+
135
+ As a rough heuristic, we assume that signatures are set apart by either
136
+ a blank line or a line made of hyphens, and that it is the last such line
137
+ in the file (disregarding blank lines at the end).
138
+
139
+ Parameters
140
+ ----------
141
+ text : str
142
+ The text from which to remove the signature block.
143
+ """
144
+ lines = text.strip().split("\n")
145
+ for line_num in range(len(lines) - 1, -1, -1):
146
+ line = lines[line_num]
147
+ if line.strip().strip("-") == "":
148
+ break
149
+
150
+ if line_num > 0:
151
+ return "\n".join(lines[:line_num])
152
+ else:
153
+ return text
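A small, hedged demonstration of the three helpers defined above on an invented post (the message text is made up; only the stripping behaviour is illustrative, and the helpers are assumed to be importable from this module):

    post = (
        "From: [email protected]\n"
        "Subject: test\n"
        "\n"
        "bob wrote:\n"
        "> earlier text\n"
        "my actual reply\n"
        "\n"
        "--\n"
        "Alice\n"
    )

    body = strip_newsgroup_header(post)    # drop everything before the first blank line
    body = strip_newsgroup_quoting(body)   # drop '>' lines and the 'wrote:' introducer
    body = strip_newsgroup_footer(body)    # drop the trailing signature block
    print(repr(body))                      # roughly 'my actual reply\n'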
154
+
155
+
156
+ @validate_params(
157
+ {
158
+ "data_home": [str, os.PathLike, None],
159
+ "subset": [StrOptions({"train", "test", "all"})],
160
+ "categories": ["array-like", None],
161
+ "shuffle": ["boolean"],
162
+ "random_state": ["random_state"],
163
+ "remove": [tuple],
164
+ "download_if_missing": ["boolean"],
165
+ "return_X_y": ["boolean"],
166
+ },
167
+ prefer_skip_nested_validation=True,
168
+ )
169
+ def fetch_20newsgroups(
170
+ *,
171
+ data_home=None,
172
+ subset="train",
173
+ categories=None,
174
+ shuffle=True,
175
+ random_state=42,
176
+ remove=(),
177
+ download_if_missing=True,
178
+ return_X_y=False,
179
+ ):
180
+ """Load the filenames and data from the 20 newsgroups dataset \
181
+ (classification).
182
+
183
+ Download it if necessary.
184
+
185
+ ================= ==========
186
+ Classes 20
187
+ Samples total 18846
188
+ Dimensionality 1
189
+ Features text
190
+ ================= ==========
191
+
192
+ Read more in the :ref:`User Guide <20newsgroups_dataset>`.
193
+
194
+ Parameters
195
+ ----------
196
+ data_home : str or path-like, default=None
197
+ Specify a download and cache folder for the datasets. If None,
198
+ all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
199
+
200
+ subset : {'train', 'test', 'all'}, default='train'
201
+ Select the dataset to load: 'train' for the training set, 'test'
202
+ for the test set, 'all' for both, with shuffled ordering.
203
+
204
+ categories : array-like, dtype=str, default=None
205
+ If None (default), load all the categories.
206
+ If not None, list of category names to load (other categories
207
+ ignored).
208
+
209
+ shuffle : bool, default=True
210
+ Whether or not to shuffle the data: might be important for models that
211
+ make the assumption that the samples are independent and identically
212
+ distributed (i.i.d.), such as stochastic gradient descent.
213
+
214
+ random_state : int, RandomState instance or None, default=42
215
+ Determines random number generation for dataset shuffling. Pass an int
216
+ for reproducible output across multiple function calls.
217
+ See :term:`Glossary <random_state>`.
218
+
219
+ remove : tuple, default=()
220
+ May contain any subset of ('headers', 'footers', 'quotes'). Each of
221
+ these are kinds of text that will be detected and removed from the
222
+ newsgroup posts, preventing classifiers from overfitting on
223
+ metadata.
224
+
225
+ 'headers' removes newsgroup headers, 'footers' removes blocks at the
226
+ ends of posts that look like signatures, and 'quotes' removes lines
227
+ that appear to be quoting another post.
228
+
229
+ 'headers' follows an exact standard; the other filters are not always
230
+ correct.
231
+
232
+ download_if_missing : bool, default=True
233
+ If False, raise an OSError if the data is not locally available
234
+ instead of trying to download the data from the source site.
235
+
236
+ return_X_y : bool, default=False
237
+ If True, returns `(data.data, data.target)` instead of a Bunch
238
+ object.
239
+
240
+ .. versionadded:: 0.22
241
+
242
+ Returns
243
+ -------
244
+ bunch : :class:`~sklearn.utils.Bunch`
245
+ Dictionary-like object, with the following attributes.
246
+
247
+ data : list of shape (n_samples,)
248
+ The data list to learn.
249
+ target: ndarray of shape (n_samples,)
250
+ The target labels.
251
+ filenames: list of shape (n_samples,)
252
+ The path to the location of the data.
253
+ DESCR: str
254
+ The full description of the dataset.
255
+ target_names: list of shape (n_classes,)
256
+ The names of target classes.
257
+
258
+ (data, target) : tuple if `return_X_y=True`
259
+ A tuple of two objects. The first is a list of length n_samples
260
+ containing the raw text documents to learn from. The second is an
261
+ ndarray of shape (n_samples,) containing the corresponding target
262
+ labels.
263
+
264
+ .. versionadded:: 0.22
265
+ """
266
+
267
+ data_home = get_data_home(data_home=data_home)
268
+ cache_path = _pkl_filepath(data_home, CACHE_NAME)
269
+ twenty_home = os.path.join(data_home, "20news_home")
270
+ cache = None
271
+ if os.path.exists(cache_path):
272
+ try:
273
+ with open(cache_path, "rb") as f:
274
+ compressed_content = f.read()
275
+ uncompressed_content = codecs.decode(compressed_content, "zlib_codec")
276
+ cache = pickle.loads(uncompressed_content)
277
+ except Exception as e:
278
+ print(80 * "_")
279
+ print("Cache loading failed")
280
+ print(80 * "_")
281
+ print(e)
282
+
283
+ if cache is None:
284
+ if download_if_missing:
285
+ logger.info("Downloading 20news dataset. This may take a few minutes.")
286
+ cache = _download_20newsgroups(
287
+ target_dir=twenty_home, cache_path=cache_path
288
+ )
289
+ else:
290
+ raise OSError("20Newsgroups dataset not found")
291
+
292
+ if subset in ("train", "test"):
293
+ data = cache[subset]
294
+ elif subset == "all":
295
+ data_lst = list()
296
+ target = list()
297
+ filenames = list()
298
+ for subset in ("train", "test"):
299
+ data = cache[subset]
300
+ data_lst.extend(data.data)
301
+ target.extend(data.target)
302
+ filenames.extend(data.filenames)
303
+
304
+ data.data = data_lst
305
+ data.target = np.array(target)
306
+ data.filenames = np.array(filenames)
307
+
308
+ fdescr = load_descr("twenty_newsgroups.rst")
309
+
310
+ data.DESCR = fdescr
311
+
312
+ if "headers" in remove:
313
+ data.data = [strip_newsgroup_header(text) for text in data.data]
314
+ if "footers" in remove:
315
+ data.data = [strip_newsgroup_footer(text) for text in data.data]
316
+ if "quotes" in remove:
317
+ data.data = [strip_newsgroup_quoting(text) for text in data.data]
318
+
319
+ if categories is not None:
320
+ labels = [(data.target_names.index(cat), cat) for cat in categories]
321
+ # Sort the categories to have the ordering of the labels
322
+ labels.sort()
323
+ labels, categories = zip(*labels)
324
+ mask = np.isin(data.target, labels)
325
+ data.filenames = data.filenames[mask]
326
+ data.target = data.target[mask]
327
+ # searchsorted to have continuous labels
328
+ data.target = np.searchsorted(labels, data.target)
329
+ data.target_names = list(categories)
330
+ # Use an object array to shuffle: avoids memory copy
331
+ data_lst = np.array(data.data, dtype=object)
332
+ data_lst = data_lst[mask]
333
+ data.data = data_lst.tolist()
334
+
335
+ if shuffle:
336
+ random_state = check_random_state(random_state)
337
+ indices = np.arange(data.target.shape[0])
338
+ random_state.shuffle(indices)
339
+ data.filenames = data.filenames[indices]
340
+ data.target = data.target[indices]
341
+ # Use an object array to shuffle: avoids memory copy
342
+ data_lst = np.array(data.data, dtype=object)
343
+ data_lst = data_lst[indices]
344
+ data.data = data_lst.tolist()
345
+
346
+ if return_X_y:
347
+ return data.data, data.target
348
+
349
+ return data
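A brief, hedged usage sketch of the loader defined above; it assumes network access (or a populated cache) and picks two of the twenty newsgroup categories for illustration:

    from sklearn.datasets import fetch_20newsgroups

    train = fetch_20newsgroups(
        subset="train",
        categories=["rec.autos", "sci.space"],
        # Strip metadata that otherwise makes classification artificially easy.
        remove=("headers", "footers", "quotes"),
    )
    print(len(train.data), train.target_names)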
350
+
351
+
352
+ @validate_params(
353
+ {
354
+ "subset": [StrOptions({"train", "test", "all"})],
355
+ "remove": [tuple],
356
+ "data_home": [str, os.PathLike, None],
357
+ "download_if_missing": ["boolean"],
358
+ "return_X_y": ["boolean"],
359
+ "normalize": ["boolean"],
360
+ "as_frame": ["boolean"],
361
+ },
362
+ prefer_skip_nested_validation=True,
363
+ )
364
+ def fetch_20newsgroups_vectorized(
365
+ *,
366
+ subset="train",
367
+ remove=(),
368
+ data_home=None,
369
+ download_if_missing=True,
370
+ return_X_y=False,
371
+ normalize=True,
372
+ as_frame=False,
373
+ ):
374
+ """Load and vectorize the 20 newsgroups dataset (classification).
375
+
376
+ Download it if necessary.
377
+
378
+ This is a convenience function; the transformation is done using the
379
+ default settings for
380
+ :class:`~sklearn.feature_extraction.text.CountVectorizer`. For more
381
+ advanced usage (stopword filtering, n-gram extraction, etc.), combine
382
+ fetch_20newsgroups with a custom
383
+ :class:`~sklearn.feature_extraction.text.CountVectorizer`,
384
+ :class:`~sklearn.feature_extraction.text.HashingVectorizer`,
385
+ :class:`~sklearn.feature_extraction.text.TfidfTransformer` or
386
+ :class:`~sklearn.feature_extraction.text.TfidfVectorizer`.
387
+
388
+ The resulting counts are normalized using
389
+ :func:`sklearn.preprocessing.normalize` unless normalize is set to False.
390
+
391
+ ================= ==========
392
+ Classes 20
393
+ Samples total 18846
394
+ Dimensionality 130107
395
+ Features real
396
+ ================= ==========
397
+
398
+ Read more in the :ref:`User Guide <20newsgroups_dataset>`.
399
+
400
+ Parameters
401
+ ----------
402
+ subset : {'train', 'test', 'all'}, default='train'
403
+ Select the dataset to load: 'train' for the training set, 'test'
404
+ for the test set, 'all' for both, with shuffled ordering.
405
+
406
+ remove : tuple, default=()
407
+ May contain any subset of ('headers', 'footers', 'quotes'). Each of
408
+ these are kinds of text that will be detected and removed from the
409
+ newsgroup posts, preventing classifiers from overfitting on
410
+ metadata.
411
+
412
+ 'headers' removes newsgroup headers, 'footers' removes blocks at the
413
+ ends of posts that look like signatures, and 'quotes' removes lines
414
+ that appear to be quoting another post.
415
+
416
+ data_home : str or path-like, default=None
417
+ Specify a download and cache folder for the datasets. If None,
418
+ all scikit-learn data is stored in '~/scikit_learn_data' subfolders.
419
+
420
+ download_if_missing : bool, default=True
421
+ If False, raise an OSError if the data is not locally available
422
+ instead of trying to download the data from the source site.
423
+
424
+ return_X_y : bool, default=False
425
+ If True, returns ``(data.data, data.target)`` instead of a Bunch
426
+ object.
427
+
428
+ .. versionadded:: 0.20
429
+
430
+ normalize : bool, default=True
431
+ If True, normalizes each document's feature vector to unit norm using
432
+ :func:`sklearn.preprocessing.normalize`.
433
+
434
+ .. versionadded:: 0.22
435
+
436
+ as_frame : bool, default=False
437
+ If True, the data is a pandas DataFrame including columns with
438
+ appropriate dtypes (numeric, string, or categorical). The target is
439
+ a pandas DataFrame or Series depending on the number of
440
+ `target_columns`.
441
+
442
+ .. versionadded:: 0.24
443
+
444
+ Returns
445
+ -------
446
+ bunch : :class:`~sklearn.utils.Bunch`
447
+ Dictionary-like object, with the following attributes.
448
+
449
+ data: {sparse matrix, dataframe} of shape (n_samples, n_features)
450
+ The input data matrix. If ``as_frame`` is `True`, ``data`` is
451
+ a pandas DataFrame with sparse columns.
452
+ target: {ndarray, series} of shape (n_samples,)
453
+ The target labels. If ``as_frame`` is `True`, ``target`` is a
454
+ pandas Series.
455
+ target_names: list of shape (n_classes,)
456
+ The names of target classes.
457
+ DESCR: str
458
+ The full description of the dataset.
459
+ frame: dataframe of shape (n_samples, n_features + 1)
460
+ Only present when `as_frame=True`. Pandas DataFrame with ``data``
461
+ and ``target``.
462
+
463
+ .. versionadded:: 0.24
464
+
465
+ (data, target) : tuple if ``return_X_y`` is True
466
+ `data` and `target` would be of the format defined in the `Bunch`
467
+ description above.
468
+
469
+ .. versionadded:: 0.20
470
+ """
471
+ data_home = get_data_home(data_home=data_home)
472
+ filebase = "20newsgroup_vectorized"
473
+ if remove:
474
+ filebase += "remove-" + "-".join(remove)
475
+ target_file = _pkl_filepath(data_home, filebase + ".pkl")
476
+
477
+ # we shuffle but use a fixed seed for the memoization
478
+ data_train = fetch_20newsgroups(
479
+ data_home=data_home,
480
+ subset="train",
481
+ categories=None,
482
+ shuffle=True,
483
+ random_state=12,
484
+ remove=remove,
485
+ download_if_missing=download_if_missing,
486
+ )
487
+
488
+ data_test = fetch_20newsgroups(
489
+ data_home=data_home,
490
+ subset="test",
491
+ categories=None,
492
+ shuffle=True,
493
+ random_state=12,
494
+ remove=remove,
495
+ download_if_missing=download_if_missing,
496
+ )
497
+
498
+ if os.path.exists(target_file):
499
+ try:
500
+ X_train, X_test, feature_names = joblib.load(target_file)
501
+ except ValueError as e:
502
+ raise ValueError(
503
+ f"The cached dataset located in {target_file} was fetched "
504
+ "with an older scikit-learn version and it is not compatible "
505
+ "with the scikit-learn version imported. You need to "
506
+ f"manually delete the file: {target_file}."
507
+ ) from e
508
+ else:
509
+ vectorizer = CountVectorizer(dtype=np.int16)
510
+ X_train = vectorizer.fit_transform(data_train.data).tocsr()
511
+ X_test = vectorizer.transform(data_test.data).tocsr()
512
+ feature_names = vectorizer.get_feature_names_out()
513
+
514
+ joblib.dump((X_train, X_test, feature_names), target_file, compress=9)
515
+
516
+ # the data is stored as int16 for compactness
517
+ # but normalize needs floats
518
+ if normalize:
519
+ X_train = X_train.astype(np.float64)
520
+ X_test = X_test.astype(np.float64)
521
+ preprocessing.normalize(X_train, copy=False)
522
+ preprocessing.normalize(X_test, copy=False)
523
+
524
+ target_names = data_train.target_names
525
+
526
+ if subset == "train":
527
+ data = X_train
528
+ target = data_train.target
529
+ elif subset == "test":
530
+ data = X_test
531
+ target = data_test.target
532
+ elif subset == "all":
533
+ data = sp.vstack((X_train, X_test)).tocsr()
534
+ target = np.concatenate((data_train.target, data_test.target))
535
+
536
+ fdescr = load_descr("twenty_newsgroups.rst")
537
+
538
+ frame = None
539
+ target_name = ["category_class"]
540
+
541
+ if as_frame:
542
+ frame, data, target = _convert_data_dataframe(
543
+ "fetch_20newsgroups_vectorized",
544
+ data,
545
+ target,
546
+ feature_names,
547
+ target_names=target_name,
548
+ sparse_data=True,
549
+ )
550
+
551
+ if return_X_y:
552
+ return data, target
553
+
554
+ return Bunch(
555
+ data=data,
556
+ target=target,
557
+ frame=frame,
558
+ target_names=target_names,
559
+ feature_names=feature_names,
560
+ DESCR=fdescr,
561
+ )
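Finally, a hedged sketch of the vectorized loader above; it assumes network access (or a populated cache), and the printed shape depends on the vocabulary built from the training split:

    from sklearn.datasets import fetch_20newsgroups_vectorized

    # Bag-of-words counts, L2-normalized by default; X is a sparse CSR matrix.
    X, y = fetch_20newsgroups_vectorized(subset="train", return_X_y=True)
    print(X.shape, y.shape)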
venv/lib/python3.10/site-packages/sklearn/datasets/images/README.txt ADDED
@@ -0,0 +1,21 @@
1
+ Image: china.jpg
2
+ Released under a creative commons license. [1]
3
+ Attribution: Some rights reserved by danielbuechele [2]
4
+ Retrieved 21st August, 2011 from [3] by Robert Layton
5
+
6
+ [1] https://creativecommons.org/licenses/by/2.0/
7
+ [2] https://www.flickr.com/photos/danielbuechele/
8
+ [3] https://www.flickr.com/photos/danielbuechele/6061409035/sizes/z/in/photostream/
9
+
10
+
11
+ Image: flower.jpg
12
+ Released under a creative commons license. [1]
13
+ Attribution: Some rights reserved by danielbuechele [2]
14
+ Retrieved 21st August, 2011 from [3] by Robert Layton
15
+
16
+ [1] https://creativecommons.org/licenses/by/2.0/
17
+ [2] https://www.flickr.com/photos/vultilion/
18
+ [3] https://www.flickr.com/photos/vultilion/6056698931/sizes/z/in/photostream/
19
+
20
+
21
+
venv/lib/python3.10/site-packages/sklearn/datasets/images/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/sklearn/datasets/images/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (191 Bytes). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (190 Bytes). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_20news.cpython-310.pyc ADDED
Binary file (4.23 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_arff_parser.cpython-310.pyc ADDED
Binary file (4.75 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_base.cpython-310.pyc ADDED
Binary file (10.9 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_california_housing.cpython-310.pyc ADDED
Binary file (1.51 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_common.cpython-310.pyc ADDED
Binary file (3.79 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_covtype.cpython-310.pyc ADDED
Binary file (2.04 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_kddcup99.cpython-310.pyc ADDED
Binary file (2.69 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_lfw.cpython-310.pyc ADDED
Binary file (5.82 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_olivetti_faces.cpython-310.pyc ADDED
Binary file (1.11 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_openml.cpython-310.pyc ADDED
Binary file (36.5 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_rcv1.cpython-310.pyc ADDED
Binary file (1.92 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_samples_generator.cpython-310.pyc ADDED
Binary file (19.9 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/__pycache__/test_svmlight_format.cpython-310.pyc ADDED
Binary file (16 kB). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (195 Bytes). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (202 Bytes). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (207 Bytes). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1119/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1119/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (210 Bytes). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1590/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_1590/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (210 Bytes). View file
 
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_2/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/sklearn/datasets/tests/data/openml/id_2/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (207 Bytes). View file