applied-ai-018 committed on
Commit a8c23a2 · verified · 1 parent: 0367fd6

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. .gitattributes +1 -0
  2. llmeval-env/lib/python3.10/site-packages/regex/_regex.cpython-310-x86_64-linux-gnu.so +3 -0
  3. llmeval-env/lib/python3.10/site-packages/sklearn/__check_build/__init__.py +47 -0
  4. llmeval-env/lib/python3.10/site-packages/sklearn/__check_build/_check_build.cpython-310-x86_64-linux-gnu.so +0 -0
  5. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/__pycache__/__init__.cpython-310.pyc +0 -0
  6. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/__pycache__/_dict_vectorizer.cpython-310.pyc +0 -0
  7. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/__pycache__/_hash.cpython-310.pyc +0 -0
  8. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/__pycache__/image.cpython-310.pyc +0 -0
  9. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__init__.py +0 -0
  10. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/__init__.cpython-310.pyc +0 -0
  11. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/test_dict_vectorizer.cpython-310.pyc +0 -0
  12. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/test_feature_hasher.cpython-310.pyc +0 -0
  13. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/test_image.cpython-310.pyc +0 -0
  14. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/test_text.cpython-310.pyc +0 -0
  15. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/test_feature_hasher.py +160 -0
  16. llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/test_text.py +1655 -0
  17. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__init__.py +14 -0
  18. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__pycache__/__init__.cpython-310.pyc +0 -0
  19. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__pycache__/_partial_dependence.cpython-310.pyc +0 -0
  20. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__pycache__/_pd_utils.cpython-310.pyc +0 -0
  21. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__pycache__/_permutation_importance.cpython-310.pyc +0 -0
  22. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_partial_dependence.py +743 -0
  23. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_pd_utils.py +64 -0
  24. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_permutation_importance.py +317 -0
  25. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_plot/__pycache__/__init__.cpython-310.pyc +0 -0
  26. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_plot/__pycache__/decision_boundary.cpython-310.pyc +0 -0
  27. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_plot/__pycache__/partial_dependence.cpython-310.pyc +0 -0
  28. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_plot/tests/__pycache__/__init__.cpython-310.pyc +0 -0
  29. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/__init__.py +0 -0
  30. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/__pycache__/test_partial_dependence.cpython-310.pyc +0 -0
  31. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/__pycache__/test_permutation_importance.cpython-310.pyc +0 -0
  32. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/test_partial_dependence.py +958 -0
  33. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/test_pd_utils.py +47 -0
  34. llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/test_permutation_importance.py +542 -0
  35. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/__init__.cpython-310.pyc +0 -0
  36. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_arpack.cpython-310.pyc +0 -0
  37. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_array_api.cpython-310.pyc +0 -0
  38. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_available_if.cpython-310.pyc +0 -0
  39. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_bunch.cpython-310.pyc +0 -0
  40. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_encode.cpython-310.pyc +0 -0
  41. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_estimator_html_repr.cpython-310.pyc +0 -0
  42. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_joblib.cpython-310.pyc +0 -0
  43. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_mask.cpython-310.pyc +0 -0
  44. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_metadata_requests.cpython-310.pyc +0 -0
  45. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_mocking.cpython-310.pyc +0 -0
  46. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_param_validation.cpython-310.pyc +0 -0
  47. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_plotting.cpython-310.pyc +0 -0
  48. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_pprint.cpython-310.pyc +0 -0
  49. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_response.cpython-310.pyc +0 -0
  50. llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_set_output.cpython-310.pyc +0 -0
.gitattributes CHANGED
@@ -85,3 +85,4 @@ llmeval-env/lib/python3.10/site-packages/numpy.libs/libgfortran-040039e1.so.5.0.
  llmeval-env/lib/python3.10/site-packages/lxml/objectify.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
  llmeval-env/lib/python3.10/site-packages/tokenizers/tokenizers.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
  llmeval-env/lib/python3.10/site-packages/numpy.libs/libopenblas64_p-r0-0cf96a72.3.23.dev.so filter=lfs diff=lfs merge=lfs -text
+ llmeval-env/lib/python3.10/site-packages/regex/_regex.cpython-310-x86_64-linux-gnu.so filter=lfs diff=lfs merge=lfs -text
llmeval-env/lib/python3.10/site-packages/regex/_regex.cpython-310-x86_64-linux-gnu.so ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7836accb6f19aadd3c2a5066acfb2f86fcdff510bb6d3efb3832ea3f26e4cc13
+ size 2503320
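The three added lines above are a Git LFS pointer file: the binary `.so` itself is stored in LFS, and only this small `key value` text lands in the repository. As a minimal sketch (not part of this commit), such a pointer can be parsed with a few lines of Python; the pointer text below is copied from the diff above.

```python
# Parse a Git LFS pointer file (version / oid / size key-value lines)
# into a dict. Minimal sketch; real LFS clients do stricter validation.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:7836accb6f19aadd3c2a5066acfb2f86fcdff510bb6d3efb3832ea3f26e4cc13
size 2503320
"""

def parse_lfs_pointer(text):
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # "size" is the byte count of the real object held in LFS storage
    fields["size"] = int(fields["size"])
    return fields

info = parse_lfs_pointer(POINTER)
print(info["size"])                    # byte size of the .so in LFS
print(info["oid"].split(":")[0])       # hash algorithm prefix
```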
llmeval-env/lib/python3.10/site-packages/sklearn/__check_build/__init__.py ADDED
@@ -0,0 +1,47 @@
+ """ Module to give helpful messages to the user that did not
+ compile scikit-learn properly.
+ """
+ import os
+
+ INPLACE_MSG = """
+ It appears that you are importing a local scikit-learn source tree. For
+ this, you need to have an inplace install. Maybe you are in the source
+ directory and you need to try from another location."""
+
+ STANDARD_MSG = """
+ If you have used an installer, please check that it is suited for your
+ Python version, your operating system and your platform."""
+
+
+ def raise_build_error(e):
+     # Raise a comprehensible error and list the contents of the
+     # directory to help debugging on the mailing list.
+     local_dir = os.path.split(__file__)[0]
+     msg = STANDARD_MSG
+     if local_dir == "sklearn/__check_build":
+         # Picking up the local install: this will work only if the
+         # install is an 'inplace build'
+         msg = INPLACE_MSG
+     dir_content = list()
+     for i, filename in enumerate(os.listdir(local_dir)):
+         if (i + 1) % 3:
+             dir_content.append(filename.ljust(26))
+         else:
+             dir_content.append(filename + "\n")
+     raise ImportError("""%s
+ ___________________________________________________________________________
+ Contents of %s:
+ %s
+ ___________________________________________________________________________
+ It seems that scikit-learn has not been built correctly.
+
+ If you have installed scikit-learn from source, please do not forget
+ to build the package before using it: run `python setup.py install` or
+ `make` in the source directory.
+ %s""" % (e, local_dir, "".join(dir_content).strip(), msg))
+
+
+ try:
+     from ._check_build import check_build  # noqa
+ except ImportError as e:
+     raise_build_error(e)
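The directory listing in `raise_build_error` above is laid out three filenames per row via the `(i + 1) % 3` test: the first two entries of each row are padded to 26 columns, the third ends the row. A standalone sketch of that formatting idiom (hypothetical helper name, not part of the file above):

```python
def three_per_line(filenames):
    # Mirror the layout used in raise_build_error: pad the first two
    # entries of each row to 26 columns, end every third with a newline.
    parts = []
    for i, name in enumerate(filenames):
        if (i + 1) % 3:
            parts.append(name.ljust(26))
        else:
            parts.append(name + "\n")
    return "".join(parts).strip()

print(three_per_line(["a.py", "b.py", "c.py", "d.py"]))
```

With four names this yields two rows: `a.py`, `b.py`, `c.py` on the first line and `d.py` on the second.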
llmeval-env/lib/python3.10/site-packages/sklearn/__check_build/_check_build.cpython-310-x86_64-linux-gnu.so ADDED
Binary file (51.3 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (623 Bytes)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/__pycache__/_dict_vectorizer.cpython-310.pyc ADDED
Binary file (13.5 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/__pycache__/_hash.cpython-310.pyc ADDED
Binary file (7.93 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/__pycache__/image.cpython-310.pyc ADDED
Binary file (19.8 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__init__.py ADDED
File without changes
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (205 Bytes)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/test_dict_vectorizer.cpython-310.pyc ADDED
Binary file (7.91 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/test_feature_hasher.cpython-310.pyc ADDED
Binary file (5.91 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/test_image.cpython-310.pyc ADDED
Binary file (10.7 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/__pycache__/test_text.cpython-310.pyc ADDED
Binary file (38.1 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/test_feature_hasher.py ADDED
@@ -0,0 +1,160 @@
+ import numpy as np
+ import pytest
+ from numpy.testing import assert_array_equal
+
+ from sklearn.feature_extraction import FeatureHasher
+ from sklearn.feature_extraction._hashing_fast import transform as _hashing_transform
+
+
+ def test_feature_hasher_dicts():
+     feature_hasher = FeatureHasher(n_features=16)
+     assert "dict" == feature_hasher.input_type
+
+     raw_X = [{"foo": "bar", "dada": 42, "tzara": 37}, {"foo": "baz", "gaga": "string1"}]
+     X1 = FeatureHasher(n_features=16).transform(raw_X)
+     gen = (iter(d.items()) for d in raw_X)
+     X2 = FeatureHasher(n_features=16, input_type="pair").transform(gen)
+     assert_array_equal(X1.toarray(), X2.toarray())
+
+
+ def test_feature_hasher_strings():
+     # mix byte and Unicode strings; note that "foo" is a duplicate in row 0
+     raw_X = [
+         ["foo", "bar", "baz", "foo".encode("ascii")],
+         ["bar".encode("ascii"), "baz", "quux"],
+     ]
+
+     for lg_n_features in (7, 9, 11, 16, 22):
+         n_features = 2**lg_n_features
+
+         it = (x for x in raw_X)  # iterable
+
+         feature_hasher = FeatureHasher(
+             n_features=n_features, input_type="string", alternate_sign=False
+         )
+         X = feature_hasher.transform(it)
+
+         assert X.shape[0] == len(raw_X)
+         assert X.shape[1] == n_features
+
+         assert X[0].sum() == 4
+         assert X[1].sum() == 3
+
+         assert X.nnz == 6
+
+
+ @pytest.mark.parametrize(
+     "raw_X",
+     [
+         ["my_string", "another_string"],
+         (x for x in ["my_string", "another_string"]),
+     ],
+     ids=["list", "generator"],
+ )
+ def test_feature_hasher_single_string(raw_X):
+     """FeatureHasher raises error when a sample is a single string.
+
+     Non-regression test for gh-13199.
+     """
+     msg = "Samples can not be a single string"
+
+     feature_hasher = FeatureHasher(n_features=10, input_type="string")
+     with pytest.raises(ValueError, match=msg):
+         feature_hasher.transform(raw_X)
+
+
+ def test_hashing_transform_seed():
+     # check the influence of the seed when computing the hashes
+     raw_X = [
+         ["foo", "bar", "baz", "foo".encode("ascii")],
+         ["bar".encode("ascii"), "baz", "quux"],
+     ]
+
+     raw_X_ = (((f, 1) for f in x) for x in raw_X)
+     indices, indptr, _ = _hashing_transform(raw_X_, 2**7, str, False)
+
+     raw_X_ = (((f, 1) for f in x) for x in raw_X)
+     indices_0, indptr_0, _ = _hashing_transform(raw_X_, 2**7, str, False, seed=0)
+     assert_array_equal(indices, indices_0)
+     assert_array_equal(indptr, indptr_0)
+
+     raw_X_ = (((f, 1) for f in x) for x in raw_X)
+     indices_1, _, _ = _hashing_transform(raw_X_, 2**7, str, False, seed=1)
+     with pytest.raises(AssertionError):
+         assert_array_equal(indices, indices_1)
+
+
+ def test_feature_hasher_pairs():
+     raw_X = (
+         iter(d.items())
+         for d in [{"foo": 1, "bar": 2}, {"baz": 3, "quux": 4, "foo": -1}]
+     )
+     feature_hasher = FeatureHasher(n_features=16, input_type="pair")
+     x1, x2 = feature_hasher.transform(raw_X).toarray()
+     x1_nz = sorted(np.abs(x1[x1 != 0]))
+     x2_nz = sorted(np.abs(x2[x2 != 0]))
+     assert [1, 2] == x1_nz
+     assert [1, 3, 4] == x2_nz
+
+
+ def test_feature_hasher_pairs_with_string_values():
+     raw_X = (
+         iter(d.items())
+         for d in [{"foo": 1, "bar": "a"}, {"baz": "abc", "quux": 4, "foo": -1}]
+     )
+     feature_hasher = FeatureHasher(n_features=16, input_type="pair")
+     x1, x2 = feature_hasher.transform(raw_X).toarray()
+     x1_nz = sorted(np.abs(x1[x1 != 0]))
+     x2_nz = sorted(np.abs(x2[x2 != 0]))
+     assert [1, 1] == x1_nz
+     assert [1, 1, 4] == x2_nz
+
+     raw_X = (iter(d.items()) for d in [{"bax": "abc"}, {"bax": "abc"}])
+     x1, x2 = feature_hasher.transform(raw_X).toarray()
+     x1_nz = np.abs(x1[x1 != 0])
+     x2_nz = np.abs(x2[x2 != 0])
+     assert [1] == x1_nz
+     assert [1] == x2_nz
+     assert_array_equal(x1, x2)
+
+
+ def test_hash_empty_input():
+     n_features = 16
+     raw_X = [[], (), iter(range(0))]
+
+     feature_hasher = FeatureHasher(n_features=n_features, input_type="string")
+     X = feature_hasher.transform(raw_X)
+
+     assert_array_equal(X.toarray(), np.zeros((len(raw_X), n_features)))
+
+
+ def test_hasher_zeros():
+     # Assert that no zeros are materialized in the output.
+     X = FeatureHasher().transform([{"foo": 0}])
+     assert X.data.shape == (0,)
+
+
+ def test_hasher_alternate_sign():
+     X = [list("Thequickbrownfoxjumped")]
+
+     Xt = FeatureHasher(alternate_sign=True, input_type="string").fit_transform(X)
+     assert Xt.data.min() < 0 and Xt.data.max() > 0
+
+     Xt = FeatureHasher(alternate_sign=False, input_type="string").fit_transform(X)
+     assert Xt.data.min() > 0
+
+
+ def test_hash_collisions():
+     X = [list("Thequickbrownfoxjumped")]
+
+     Xt = FeatureHasher(
+         alternate_sign=True, n_features=1, input_type="string"
+     ).fit_transform(X)
+     # check that some of the hashed tokens are added
+     # with an opposite sign and cancel out
+     assert abs(Xt.data[0]) < len(X[0])
+
+     Xt = FeatureHasher(
+         alternate_sign=False, n_features=1, input_type="string"
+     ).fit_transform(X)
+     assert Xt.data[0] == len(X[0])
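The tests above exercise scikit-learn's `FeatureHasher`, which implements the hashing trick: each token is hashed into one of `n_features` buckets, optionally with an alternating sign so that collisions tend to cancel in expectation. Below is a pure-Python sketch of the same idea — not sklearn's actual implementation, which uses MurmurHash3 in Cython, so the bucket assignments differ; `hash_features` is a hypothetical helper name.

```python
import hashlib

def hash_features(tokens, n_features=16, alternate_sign=True):
    """Map string tokens to a dense count vector of size n_features
    using the hashing trick. Collisions are possible by design;
    alternate_sign makes their expected contribution zero."""
    vec = [0] * n_features
    for tok in tokens:
        # derive a deterministic 64-bit integer from the token
        h = int.from_bytes(hashlib.md5(tok.encode()).digest()[:8], "big")
        idx = h % n_features
        # take the sign from the top bit when alternate_sign is on
        sign = 1 if (not alternate_sign or (h >> 63) & 1 == 0) else -1
        vec[idx] += sign
    return vec

row = hash_features(["foo", "bar", "baz", "foo"], n_features=8,
                    alternate_sign=False)
# with alternate_sign=False every token contributes +1,
# so the entries sum to the number of tokens
print(sum(row))  # 4
```

This mirrors the invariant checked in `test_hash_collisions`: with `alternate_sign=False` the single-bucket count equals the token count, while with signing enabled collisions can partially cancel.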
llmeval-env/lib/python3.10/site-packages/sklearn/feature_extraction/tests/test_text.py ADDED
@@ -0,0 +1,1655 @@
+ import pickle
+ import re
+ import warnings
+ from collections import defaultdict
+ from collections.abc import Mapping
+ from functools import partial
+ from io import StringIO
+ from itertools import product
+
+ import numpy as np
+ import pytest
+ from numpy.testing import assert_array_almost_equal, assert_array_equal
+ from scipy import sparse
+
+ from sklearn.base import clone
+ from sklearn.feature_extraction.text import (
+     ENGLISH_STOP_WORDS,
+     CountVectorizer,
+     HashingVectorizer,
+     TfidfTransformer,
+     TfidfVectorizer,
+     strip_accents_ascii,
+     strip_accents_unicode,
+     strip_tags,
+ )
+ from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
+ from sklearn.pipeline import Pipeline
+ from sklearn.svm import LinearSVC
+ from sklearn.utils import _IS_WASM, IS_PYPY
+ from sklearn.utils._testing import (
+     assert_allclose_dense_sparse,
+     assert_almost_equal,
+     fails_if_pypy,
+     skip_if_32bit,
+ )
+ from sklearn.utils.fixes import CSC_CONTAINERS, CSR_CONTAINERS
+
+ JUNK_FOOD_DOCS = (
+     "the pizza pizza beer copyright",
+     "the pizza burger beer copyright",
+     "the the pizza beer beer copyright",
+     "the burger beer beer copyright",
+     "the coke burger coke copyright",
+     "the coke burger burger",
+ )
+
+ NOTJUNK_FOOD_DOCS = (
+     "the salad celeri copyright",
+     "the salad salad sparkling water copyright",
+     "the the celeri celeri copyright",
+     "the tomato tomato salad water",
+     "the tomato salad water copyright",
+ )
+
+ ALL_FOOD_DOCS = JUNK_FOOD_DOCS + NOTJUNK_FOOD_DOCS
+
+
+ def uppercase(s):
+     return strip_accents_unicode(s).upper()
+
+
+ def strip_eacute(s):
+     return s.replace("é", "e")
+
+
+ def split_tokenize(s):
+     return s.split()
+
+
+ def lazy_analyze(s):
+     return ["the_ultimate_feature"]
+
+
+ def test_strip_accents():
+     # check some classical latin accentuated symbols
+     a = "àáâãäåçèéêë"
+     expected = "aaaaaaceeee"
+     assert strip_accents_unicode(a) == expected
+
+     a = "ìíîïñòóôõöùúûüý"
+     expected = "iiiinooooouuuuy"
+     assert strip_accents_unicode(a) == expected
+
+     # check some arabic
+     a = "\u0625"  # alef with a hamza below: إ
+     expected = "\u0627"  # simple alef: ا
+     assert strip_accents_unicode(a) == expected
+
+     # mix letters accentuated and not
+     a = "this is à test"
+     expected = "this is a test"
+     assert strip_accents_unicode(a) == expected
+
+     # strings that are already decomposed
+     a = "o\u0308"  # o with diaeresis
+     expected = "o"
+     assert strip_accents_unicode(a) == expected
+
+     # combining marks by themselves
+     a = "\u0300\u0301\u0302\u0303"
+     expected = ""
+     assert strip_accents_unicode(a) == expected
+
+     # Multiple combining marks on one character
+     a = "o\u0308\u0304"
+     expected = "o"
+     assert strip_accents_unicode(a) == expected
+
+
+ def test_to_ascii():
+     # check some classical latin accentuated symbols
+     a = "àáâãäåçèéêë"
+     expected = "aaaaaaceeee"
+     assert strip_accents_ascii(a) == expected
+
+     a = "ìíîïñòóôõöùúûüý"
+     expected = "iiiinooooouuuuy"
+     assert strip_accents_ascii(a) == expected
+
+     # check some arabic
+     a = "\u0625"  # halef with a hamza below
+     expected = ""  # halef has no direct ascii match
+     assert strip_accents_ascii(a) == expected
+
+     # mix letters accentuated and not
+     a = "this is à test"
+     expected = "this is a test"
+     assert strip_accents_ascii(a) == expected
+
+
+ @pytest.mark.parametrize("Vectorizer", (CountVectorizer, HashingVectorizer))
+ def test_word_analyzer_unigrams(Vectorizer):
+     wa = Vectorizer(strip_accents="ascii").build_analyzer()
+     text = "J'ai mangé du kangourou ce midi, c'était pas très bon."
+     expected = [
+         "ai",
+         "mange",
+         "du",
+         "kangourou",
+         "ce",
+         "midi",
+         "etait",
+         "pas",
+         "tres",
+         "bon",
+     ]
+     assert wa(text) == expected
+
+     text = "This is a test, really.\n\n I met Harry yesterday."
+     expected = ["this", "is", "test", "really", "met", "harry", "yesterday"]
+     assert wa(text) == expected
+
+     wa = Vectorizer(input="file").build_analyzer()
+     text = StringIO("This is a test with a file-like object!")
+     expected = ["this", "is", "test", "with", "file", "like", "object"]
+     assert wa(text) == expected
+
+     # with custom preprocessor
+     wa = Vectorizer(preprocessor=uppercase).build_analyzer()
+     text = "J'ai mangé du kangourou ce midi, c'était pas très bon."
+     expected = [
+         "AI",
+         "MANGE",
+         "DU",
+         "KANGOUROU",
+         "CE",
+         "MIDI",
+         "ETAIT",
+         "PAS",
+         "TRES",
+         "BON",
+     ]
+     assert wa(text) == expected
+
+     # with custom tokenizer
+     wa = Vectorizer(tokenizer=split_tokenize, strip_accents="ascii").build_analyzer()
+     text = "J'ai mangé du kangourou ce midi, c'était pas très bon."
+     expected = [
+         "j'ai",
+         "mange",
+         "du",
+         "kangourou",
+         "ce",
+         "midi,",
+         "c'etait",
+         "pas",
+         "tres",
+         "bon.",
+     ]
+     assert wa(text) == expected
+
+
+ def test_word_analyzer_unigrams_and_bigrams():
+     wa = CountVectorizer(
+         analyzer="word", strip_accents="unicode", ngram_range=(1, 2)
+     ).build_analyzer()
+
+     text = "J'ai mangé du kangourou ce midi, c'était pas très bon."
+     expected = [
+         "ai",
+         "mange",
+         "du",
+         "kangourou",
+         "ce",
+         "midi",
+         "etait",
+         "pas",
+         "tres",
+         "bon",
+         "ai mange",
+         "mange du",
+         "du kangourou",
+         "kangourou ce",
+         "ce midi",
+         "midi etait",
+         "etait pas",
+         "pas tres",
+         "tres bon",
+     ]
+     assert wa(text) == expected
+
+
+ def test_unicode_decode_error():
+     # decode_error default to strict, so this should fail
+     # First, encode (as bytes) a unicode string.
+     text = "J'ai mangé du kangourou ce midi, c'était pas très bon."
+     text_bytes = text.encode("utf-8")
+
+     # Then let the Analyzer try to decode it as ascii. It should fail,
+     # because we have given it an incorrect encoding.
+     wa = CountVectorizer(ngram_range=(1, 2), encoding="ascii").build_analyzer()
+     with pytest.raises(UnicodeDecodeError):
+         wa(text_bytes)
+
+     ca = CountVectorizer(
+         analyzer="char", ngram_range=(3, 6), encoding="ascii"
+     ).build_analyzer()
+     with pytest.raises(UnicodeDecodeError):
+         ca(text_bytes)
+
+
+ def test_char_ngram_analyzer():
+     cnga = CountVectorizer(
+         analyzer="char", strip_accents="unicode", ngram_range=(3, 6)
+     ).build_analyzer()
+
+     text = "J'ai mangé du kangourou ce midi, c'était pas très bon"
+     expected = ["j'a", "'ai", "ai ", "i m", " ma"]
+     assert cnga(text)[:5] == expected
+     expected = ["s tres", " tres ", "tres b", "res bo", "es bon"]
+     assert cnga(text)[-5:] == expected
+
+     text = "This \n\tis a test, really.\n\n I met Harry yesterday"
+     expected = ["thi", "his", "is ", "s i", " is"]
+     assert cnga(text)[:5] == expected
+
+     expected = [" yeste", "yester", "esterd", "sterda", "terday"]
+     assert cnga(text)[-5:] == expected
+
+     cnga = CountVectorizer(
+         input="file", analyzer="char", ngram_range=(3, 6)
+     ).build_analyzer()
+     text = StringIO("This is a test with a file-like object!")
+     expected = ["thi", "his", "is ", "s i", " is"]
+     assert cnga(text)[:5] == expected
+
+
+ def test_char_wb_ngram_analyzer():
+     cnga = CountVectorizer(
+         analyzer="char_wb", strip_accents="unicode", ngram_range=(3, 6)
+     ).build_analyzer()
+
+     text = "This \n\tis a test, really.\n\n I met Harry yesterday"
+     expected = [" th", "thi", "his", "is ", " thi"]
+     assert cnga(text)[:5] == expected
+
+     expected = ["yester", "esterd", "sterda", "terday", "erday "]
+     assert cnga(text)[-5:] == expected
+
+     cnga = CountVectorizer(
+         input="file", analyzer="char_wb", ngram_range=(3, 6)
+     ).build_analyzer()
+     text = StringIO("A test with a file-like object!")
+     expected = [" a ", " te", "tes", "est", "st ", " tes"]
+     assert cnga(text)[:6] == expected
+
+
+ def test_word_ngram_analyzer():
+     cnga = CountVectorizer(
+         analyzer="word", strip_accents="unicode", ngram_range=(3, 6)
+     ).build_analyzer()
+
+     text = "This \n\tis a test, really.\n\n I met Harry yesterday"
+     expected = ["this is test", "is test really", "test really met"]
+     assert cnga(text)[:3] == expected
+
+     expected = [
+         "test really met harry yesterday",
+         "this is test really met harry",
+         "is test really met harry yesterday",
+     ]
+     assert cnga(text)[-3:] == expected
+
+     cnga_file = CountVectorizer(
+         input="file", analyzer="word", ngram_range=(3, 6)
+     ).build_analyzer()
+     file = StringIO(text)
+     assert cnga_file(file) == cnga(text)
+
+
+ def test_countvectorizer_custom_vocabulary():
+     vocab = {"pizza": 0, "beer": 1}
+     terms = set(vocab.keys())
+
+     # Try a few of the supported types.
+     for typ in [dict, list, iter, partial(defaultdict, int)]:
+         v = typ(vocab)
+         vect = CountVectorizer(vocabulary=v)
+         vect.fit(JUNK_FOOD_DOCS)
+         if isinstance(v, Mapping):
+             assert vect.vocabulary_ == vocab
+         else:
+             assert set(vect.vocabulary_) == terms
+         X = vect.transform(JUNK_FOOD_DOCS)
+         assert X.shape[1] == len(terms)
+         v = typ(vocab)
+         vect = CountVectorizer(vocabulary=v)
+         inv = vect.inverse_transform(X)
+         assert len(inv) == X.shape[0]
+
+
+ def test_countvectorizer_custom_vocabulary_pipeline():
+     what_we_like = ["pizza", "beer"]
+     pipe = Pipeline(
+         [
+             ("count", CountVectorizer(vocabulary=what_we_like)),
+             ("tfidf", TfidfTransformer()),
+         ]
+     )
+     X = pipe.fit_transform(ALL_FOOD_DOCS)
+     assert set(pipe.named_steps["count"].vocabulary_) == set(what_we_like)
+     assert X.shape[1] == len(what_we_like)
+
+
+ def test_countvectorizer_custom_vocabulary_repeated_indices():
+     vocab = {"pizza": 0, "beer": 0}
+     msg = "Vocabulary contains repeated indices"
+     with pytest.raises(ValueError, match=msg):
+         vect = CountVectorizer(vocabulary=vocab)
+         vect.fit(["pasta_siziliana"])
+
+
+ def test_countvectorizer_custom_vocabulary_gap_index():
+     vocab = {"pizza": 1, "beer": 2}
+     with pytest.raises(ValueError, match="doesn't contain index"):
+         vect = CountVectorizer(vocabulary=vocab)
+         vect.fit(["pasta_verdura"])
+
+
+ def test_countvectorizer_stop_words():
+     cv = CountVectorizer()
+     cv.set_params(stop_words="english")
+     assert cv.get_stop_words() == ENGLISH_STOP_WORDS
+     cv.set_params(stop_words="_bad_str_stop_")
+     with pytest.raises(ValueError):
+         cv.get_stop_words()
+     cv.set_params(stop_words="_bad_unicode_stop_")
+     with pytest.raises(ValueError):
+         cv.get_stop_words()
+     stoplist = ["some", "other", "words"]
+     cv.set_params(stop_words=stoplist)
+     assert cv.get_stop_words() == set(stoplist)
+
+
+ def test_countvectorizer_empty_vocabulary():
+     with pytest.raises(ValueError, match="empty vocabulary"):
+         vect = CountVectorizer(vocabulary=[])
+         vect.fit(["foo"])
+
+     with pytest.raises(ValueError, match="empty vocabulary"):
+         v = CountVectorizer(max_df=1.0, stop_words="english")
+         # fit on stopwords only
+         v.fit(["to be or not to be", "and me too", "and so do you"])
+
+
+ def test_fit_countvectorizer_twice():
+     cv = CountVectorizer()
+     X1 = cv.fit_transform(ALL_FOOD_DOCS[:5])
+     X2 = cv.fit_transform(ALL_FOOD_DOCS[5:])
+     assert X1.shape[1] != X2.shape[1]
+
+
+ def test_countvectorizer_custom_token_pattern():
+     """Check `get_feature_names_out()` when a custom token pattern is passed.
+     Non-regression test for:
+     https://github.com/scikit-learn/scikit-learn/issues/12971
+     """
+     corpus = [
+         "This is the 1st document in my corpus.",
+         "This document is the 2nd sample.",
+         "And this is the 3rd one.",
+         "Is this the 4th document?",
+     ]
+     token_pattern = r"[0-9]{1,3}(?:st|nd|rd|th)\s\b(\w{2,})\b"
+     vectorizer = CountVectorizer(token_pattern=token_pattern)
+     vectorizer.fit_transform(corpus)
+     expected = ["document", "one", "sample"]
+     feature_names_out = vectorizer.get_feature_names_out()
+     assert_array_equal(feature_names_out, expected)
+
+
+ def test_countvectorizer_custom_token_pattern_with_several_group():
+     """Check that we raise an error if token pattern capture several groups.
+     Non-regression test for:
+     https://github.com/scikit-learn/scikit-learn/issues/12971
+     """
+     corpus = [
+         "This is the 1st document in my corpus.",
+         "This document is the 2nd sample.",
+         "And this is the 3rd one.",
+         "Is this the 4th document?",
+     ]
+
+     token_pattern = r"([0-9]{1,3}(?:st|nd|rd|th))\s\b(\w{2,})\b"
+     err_msg = "More than 1 capturing group in token pattern"
+     vectorizer = CountVectorizer(token_pattern=token_pattern)
+     with pytest.raises(ValueError, match=err_msg):
+         vectorizer.fit(corpus)
+
+
+ def test_countvectorizer_uppercase_in_vocab():
+     # Check that the check for uppercase in the provided vocabulary is only done at fit
+     # time and not at transform time (#21251)
+     vocabulary = ["Sample", "Upper", "Case", "Vocabulary"]
+     message = (
+         "Upper case characters found in"
+         " vocabulary while 'lowercase'"
+         " is True. These entries will not"
+         " be matched with any documents"
+     )
+
+     vectorizer = CountVectorizer(lowercase=True, vocabulary=vocabulary)
+
+     with pytest.warns(UserWarning, match=message):
+         vectorizer.fit(vocabulary)
+
+     with warnings.catch_warnings():
448
+ warnings.simplefilter("error", UserWarning)
449
+ vectorizer.transform(vocabulary)
450
+
451
+
452
+ def test_tf_transformer_feature_names_out():
453
+ """Check get_feature_names_out for TfidfTransformer"""
454
+ X = [[1, 1, 1], [1, 1, 0], [1, 0, 0]]
455
+ tr = TfidfTransformer(smooth_idf=True, norm="l2").fit(X)
456
+
457
+ feature_names_in = ["a", "c", "b"]
458
+ feature_names_out = tr.get_feature_names_out(feature_names_in)
459
+ assert_array_equal(feature_names_in, feature_names_out)
460
+
461
+
462
+ def test_tf_idf_smoothing():
463
+ X = [[1, 1, 1], [1, 1, 0], [1, 0, 0]]
464
+ tr = TfidfTransformer(smooth_idf=True, norm="l2")
465
+ tfidf = tr.fit_transform(X).toarray()
466
+ assert (tfidf >= 0).all()
467
+
468
+ # check normalization
469
+ assert_array_almost_equal((tfidf**2).sum(axis=1), [1.0, 1.0, 1.0])
470
+
471
+ # this is robust to features with only zeros
472
+ X = [[1, 1, 0], [1, 1, 0], [1, 0, 0]]
473
+ tr = TfidfTransformer(smooth_idf=True, norm="l2")
474
+ tfidf = tr.fit_transform(X).toarray()
475
+ assert (tfidf >= 0).all()
476
+
477
+
478
+ @pytest.mark.xfail(
479
+ _IS_WASM,
480
+ reason=(
481
+ "no floating point exceptions, see"
482
+ " https://github.com/numpy/numpy/pull/21895#issuecomment-1311525881"
483
+ ),
484
+ )
485
+ def test_tfidf_no_smoothing():
486
+ X = [[1, 1, 1], [1, 1, 0], [1, 0, 0]]
487
+ tr = TfidfTransformer(smooth_idf=False, norm="l2")
488
+ tfidf = tr.fit_transform(X).toarray()
489
+ assert (tfidf >= 0).all()
490
+
491
+ # check normalization
492
+ assert_array_almost_equal((tfidf**2).sum(axis=1), [1.0, 1.0, 1.0])
493
+
494
+     # the lack of smoothing makes the IDF fragile in the presence of
+     # features with only zeros
496
+     X = [[1, 1, 0], [1, 1, 0], [1, 0, 0]]
+     tr = TfidfTransformer(smooth_idf=False, norm="l2")
+
+     in_warning_message = "divide by zero"
+     with pytest.warns(RuntimeWarning, match=in_warning_message):
+         tr.fit_transform(X).toarray()
+
+
+ def test_sublinear_tf():
+     X = [[1], [2], [3]]
+     tr = TfidfTransformer(sublinear_tf=True, use_idf=False, norm=None)
+     tfidf = tr.fit_transform(X).toarray()
+     assert tfidf[0] == 1
+     assert tfidf[1] > tfidf[0]
+     assert tfidf[2] > tfidf[1]
+     assert tfidf[1] < 2
+     assert tfidf[2] < 3
+
+
+ def test_vectorizer():
+     # raw documents as an iterator
+     train_data = iter(ALL_FOOD_DOCS[:-1])
+     test_data = [ALL_FOOD_DOCS[-1]]
+     n_train = len(ALL_FOOD_DOCS) - 1
+
+     # test without vocabulary
+     v1 = CountVectorizer(max_df=0.5)
+     counts_train = v1.fit_transform(train_data)
+     if hasattr(counts_train, "tocsr"):
+         counts_train = counts_train.tocsr()
+     assert counts_train[0, v1.vocabulary_["pizza"]] == 2
+
+     # build a vectorizer v2 with the same vocabulary as the one fitted by v1
+     v2 = CountVectorizer(vocabulary=v1.vocabulary_)
+
+     # check that the two vectorizers give the same output on the test sample
+     for v in (v1, v2):
+         counts_test = v.transform(test_data)
+         if hasattr(counts_test, "tocsr"):
+             counts_test = counts_test.tocsr()
+
+         vocabulary = v.vocabulary_
+         assert counts_test[0, vocabulary["salad"]] == 1
+         assert counts_test[0, vocabulary["tomato"]] == 1
+         assert counts_test[0, vocabulary["water"]] == 1
+
+         # stop word from the fixed list
+         assert "the" not in vocabulary
+
+         # stop word found automatically by the vectorizer DF thresholding:
+         # words that are highly frequent across the whole corpus are likely
+         # to be uninformative (either real stop words or extraction
+         # artifacts)
+         assert "copyright" not in vocabulary
+
+         # not present in the sample
+         assert counts_test[0, vocabulary["coke"]] == 0
+         assert counts_test[0, vocabulary["burger"]] == 0
+         assert counts_test[0, vocabulary["beer"]] == 0
+         assert counts_test[0, vocabulary["pizza"]] == 0
+
+     # test tf-idf
+     t1 = TfidfTransformer(norm="l1")
+     tfidf = t1.fit(counts_train).transform(counts_train).toarray()
+     assert len(t1.idf_) == len(v1.vocabulary_)
+     assert tfidf.shape == (n_train, len(v1.vocabulary_))
+
+     # test tf-idf with new data
+     tfidf_test = t1.transform(counts_test).toarray()
+     assert tfidf_test.shape == (len(test_data), len(v1.vocabulary_))
+
+     # test tf alone
+     t2 = TfidfTransformer(norm="l1", use_idf=False)
+     tf = t2.fit(counts_train).transform(counts_train).toarray()
+     assert not hasattr(t2, "idf_")
+
+     # test idf transform with unlearned idf vector
+     t3 = TfidfTransformer(use_idf=True)
+     with pytest.raises(ValueError):
+         t3.transform(counts_train)
+
+     # L1-normalized term frequencies sum to one
+     assert_array_almost_equal(np.sum(tf, axis=1), [1.0] * n_train)
+
+     # test the direct tfidf vectorizer
+     # (equivalent to term count vectorizer + tfidf transformer)
+     train_data = iter(ALL_FOOD_DOCS[:-1])
+     tv = TfidfVectorizer(norm="l1")
+
+     tv.max_df = v1.max_df
+     tfidf2 = tv.fit_transform(train_data).toarray()
+     assert not tv.fixed_vocabulary_
+     assert_array_almost_equal(tfidf, tfidf2)
+
+     # test the direct tfidf vectorizer with new data
+     tfidf_test2 = tv.transform(test_data).toarray()
+     assert_array_almost_equal(tfidf_test, tfidf_test2)
+
+     # test transform on unfitted vectorizer with empty vocabulary
+     v3 = CountVectorizer(vocabulary=None)
+     with pytest.raises(ValueError):
+         v3.transform(train_data)
+
+     # ascii preprocessor?
+     v3.set_params(strip_accents="ascii", lowercase=False)
+     processor = v3.build_preprocessor()
+     text = "J'ai mangé du kangourou ce midi, c'était pas très bon."
+     expected = strip_accents_ascii(text)
+     result = processor(text)
+     assert expected == result
+
+     # error on bad strip_accents param
+     v3.set_params(strip_accents="_gabbledegook_", preprocessor=None)
+     with pytest.raises(ValueError):
+         v3.build_preprocessor()
+
+     # error with bad analyzer type
+     v3.set_params(analyzer="_invalid_analyzer_type_")
+     with pytest.raises(ValueError):
+         v3.build_analyzer()
+
+
+ def test_tfidf_vectorizer_setters():
+     norm, use_idf, smooth_idf, sublinear_tf = "l2", False, False, False
+     tv = TfidfVectorizer(
+         norm=norm, use_idf=use_idf, smooth_idf=smooth_idf, sublinear_tf=sublinear_tf
+     )
+     tv.fit(JUNK_FOOD_DOCS)
+     assert tv._tfidf.norm == norm
+     assert tv._tfidf.use_idf == use_idf
+     assert tv._tfidf.smooth_idf == smooth_idf
+     assert tv._tfidf.sublinear_tf == sublinear_tf
+
+     # assigning value to `TfidfTransformer` should not have any effect until
+     # fitting
+     tv.norm = "l1"
+     tv.use_idf = True
+     tv.smooth_idf = True
+     tv.sublinear_tf = True
+     assert tv._tfidf.norm == norm
+     assert tv._tfidf.use_idf == use_idf
+     assert tv._tfidf.smooth_idf == smooth_idf
+     assert tv._tfidf.sublinear_tf == sublinear_tf
+
+     tv.fit(JUNK_FOOD_DOCS)
+     assert tv._tfidf.norm == tv.norm
+     assert tv._tfidf.use_idf == tv.use_idf
+     assert tv._tfidf.smooth_idf == tv.smooth_idf
+     assert tv._tfidf.sublinear_tf == tv.sublinear_tf
+
+
+ @fails_if_pypy
+ def test_hashing_vectorizer():
+     v = HashingVectorizer()
+     X = v.transform(ALL_FOOD_DOCS)
+     token_nnz = X.nnz
+     assert X.shape == (len(ALL_FOOD_DOCS), v.n_features)
+     assert X.dtype == v.dtype
+
+     # By default the hashed values receive a random sign and l2 normalization
+     # makes the feature values bounded
+     assert np.min(X.data) > -1
+     assert np.min(X.data) < 0
+     assert np.max(X.data) > 0
+     assert np.max(X.data) < 1
+
+     # Check that the rows are normalized
+     for i in range(X.shape[0]):
+         assert_almost_equal(np.linalg.norm(X[i].data, 2), 1.0)
+
+     # Check vectorization with some non-default parameters
+     v = HashingVectorizer(ngram_range=(1, 2), norm="l1")
+     X = v.transform(ALL_FOOD_DOCS)
+     assert X.shape == (len(ALL_FOOD_DOCS), v.n_features)
+     assert X.dtype == v.dtype
+
+     # ngrams generate more non zeros
+     ngrams_nnz = X.nnz
+     assert ngrams_nnz > token_nnz
+     assert ngrams_nnz < 2 * token_nnz
+
+     # makes the feature values bounded
+     assert np.min(X.data) > -1
+     assert np.max(X.data) < 1
+
+     # Check that the rows are normalized
+     for i in range(X.shape[0]):
+         assert_almost_equal(np.linalg.norm(X[i].data, 1), 1.0)
+
+
+ def test_feature_names():
+     cv = CountVectorizer(max_df=0.5)
+
+     # test for Value error on unfitted/empty vocabulary
+     with pytest.raises(ValueError):
+         cv.get_feature_names_out()
+     assert not cv.fixed_vocabulary_
+
+     # test for vocabulary learned from data
+     X = cv.fit_transform(ALL_FOOD_DOCS)
+     n_samples, n_features = X.shape
+     assert len(cv.vocabulary_) == n_features
+
+     feature_names = cv.get_feature_names_out()
+     assert isinstance(feature_names, np.ndarray)
+     assert feature_names.dtype == object
+
+     assert len(feature_names) == n_features
+     assert_array_equal(
+         [
+             "beer",
+             "burger",
+             "celeri",
+             "coke",
+             "pizza",
+             "salad",
+             "sparkling",
+             "tomato",
+             "water",
+         ],
+         feature_names,
+     )
+
+     for idx, name in enumerate(feature_names):
+         assert idx == cv.vocabulary_.get(name)
+
+     # test for custom vocabulary
+     vocab = [
+         "beer",
+         "burger",
+         "celeri",
+         "coke",
+         "pizza",
+         "salad",
+         "sparkling",
+         "tomato",
+         "water",
+     ]
+
+     cv = CountVectorizer(vocabulary=vocab)
+     feature_names = cv.get_feature_names_out()
+     assert_array_equal(
+         [
+             "beer",
+             "burger",
+             "celeri",
+             "coke",
+             "pizza",
+             "salad",
+             "sparkling",
+             "tomato",
+             "water",
+         ],
+         feature_names,
+     )
+     assert cv.fixed_vocabulary_
+
+     for idx, name in enumerate(feature_names):
+         assert idx == cv.vocabulary_.get(name)
+
+
+ @pytest.mark.parametrize("Vectorizer", (CountVectorizer, TfidfVectorizer))
+ def test_vectorizer_max_features(Vectorizer):
+     expected_vocabulary = {"burger", "beer", "salad", "pizza"}
+     expected_stop_words = {
+         "celeri",
+         "tomato",
+         "copyright",
+         "coke",
+         "sparkling",
+         "water",
+         "the",
+     }
+
+     # test bounded number of extracted features
+     vectorizer = Vectorizer(max_df=0.6, max_features=4)
+     vectorizer.fit(ALL_FOOD_DOCS)
+     assert set(vectorizer.vocabulary_) == expected_vocabulary
+     assert vectorizer.stop_words_ == expected_stop_words
+
+
+ def test_count_vectorizer_max_features():
+     # Regression test: max_features didn't work correctly in 0.14.
+
+     cv_1 = CountVectorizer(max_features=1)
+     cv_3 = CountVectorizer(max_features=3)
+     cv_None = CountVectorizer(max_features=None)
+
+     counts_1 = cv_1.fit_transform(JUNK_FOOD_DOCS).sum(axis=0)
+     counts_3 = cv_3.fit_transform(JUNK_FOOD_DOCS).sum(axis=0)
+     counts_None = cv_None.fit_transform(JUNK_FOOD_DOCS).sum(axis=0)
+
+     features_1 = cv_1.get_feature_names_out()
+     features_3 = cv_3.get_feature_names_out()
+     features_None = cv_None.get_feature_names_out()
+
+     # The most common feature is "the", with frequency 7.
+     assert 7 == counts_1.max()
+     assert 7 == counts_3.max()
+     assert 7 == counts_None.max()
+
+     # The most common feature should be the same
+     assert "the" == features_1[np.argmax(counts_1)]
+     assert "the" == features_3[np.argmax(counts_3)]
+     assert "the" == features_None[np.argmax(counts_None)]
+
+
+ def test_vectorizer_max_df():
+     test_data = ["abc", "dea", "eat"]
+     vect = CountVectorizer(analyzer="char", max_df=1.0)
+     vect.fit(test_data)
+     assert "a" in vect.vocabulary_.keys()
+     assert len(vect.vocabulary_.keys()) == 6
+     assert len(vect.stop_words_) == 0
+
+     vect.max_df = 0.5  # 0.5 * 3 documents -> max_doc_count == 1.5
+     vect.fit(test_data)
+     assert "a" not in vect.vocabulary_.keys()  # {ae} ignored
+     assert len(vect.vocabulary_.keys()) == 4  # {bcdt} remain
+     assert "a" in vect.stop_words_
+     assert len(vect.stop_words_) == 2
+
+     vect.max_df = 1
+     vect.fit(test_data)
+     assert "a" not in vect.vocabulary_.keys()  # {ae} ignored
+     assert len(vect.vocabulary_.keys()) == 4  # {bcdt} remain
+     assert "a" in vect.stop_words_
+     assert len(vect.stop_words_) == 2
+
+
+ def test_vectorizer_min_df():
+     test_data = ["abc", "dea", "eat"]
+     vect = CountVectorizer(analyzer="char", min_df=1)
+     vect.fit(test_data)
+     assert "a" in vect.vocabulary_.keys()
+     assert len(vect.vocabulary_.keys()) == 6
+     assert len(vect.stop_words_) == 0
+
+     vect.min_df = 2
+     vect.fit(test_data)
+     assert "c" not in vect.vocabulary_.keys()  # {bcdt} ignored
+     assert len(vect.vocabulary_.keys()) == 2  # {ae} remain
+     assert "c" in vect.stop_words_
+     assert len(vect.stop_words_) == 4
+
+     vect.min_df = 0.8  # 0.8 * 3 documents -> min_doc_count == 2.4
+     vect.fit(test_data)
+     assert "c" not in vect.vocabulary_.keys()  # {bcdet} ignored
+     assert len(vect.vocabulary_.keys()) == 1  # {a} remains
+     assert "c" in vect.stop_words_
+     assert len(vect.stop_words_) == 5
+
+
+ def test_count_binary_occurrences():
+     # by default multiple occurrences are counted as longs
+     test_data = ["aaabc", "abbde"]
+     vect = CountVectorizer(analyzer="char", max_df=1.0)
+     X = vect.fit_transform(test_data).toarray()
+     assert_array_equal(["a", "b", "c", "d", "e"], vect.get_feature_names_out())
+     assert_array_equal([[3, 1, 1, 0, 0], [1, 2, 0, 1, 1]], X)
+
+     # using boolean features, we can fetch the binary occurrence info
+     # instead.
+     vect = CountVectorizer(analyzer="char", max_df=1.0, binary=True)
+     X = vect.fit_transform(test_data).toarray()
+     assert_array_equal([[1, 1, 1, 0, 0], [1, 1, 0, 1, 1]], X)
+
+     # check the ability to change the dtype
+     vect = CountVectorizer(analyzer="char", max_df=1.0, binary=True, dtype=np.float32)
+     X_sparse = vect.fit_transform(test_data)
+     assert X_sparse.dtype == np.float32
+
+
+ @fails_if_pypy
+ def test_hashed_binary_occurrences():
+     # by default multiple occurrences are counted as longs
+     test_data = ["aaabc", "abbde"]
+     vect = HashingVectorizer(alternate_sign=False, analyzer="char", norm=None)
+     X = vect.transform(test_data)
+     assert np.max(X[0:1].data) == 3
+     assert np.max(X[1:2].data) == 2
+     assert X.dtype == np.float64
+
+     # using boolean features, we can fetch the binary occurrence info
+     # instead.
+     vect = HashingVectorizer(
+         analyzer="char", alternate_sign=False, binary=True, norm=None
+     )
+     X = vect.transform(test_data)
+     assert np.max(X.data) == 1
+     assert X.dtype == np.float64
+
+     # check the ability to change the dtype
+     vect = HashingVectorizer(
+         analyzer="char", alternate_sign=False, binary=True, norm=None, dtype=np.float64
+     )
+     X = vect.transform(test_data)
+     assert X.dtype == np.float64
+
+
+ @pytest.mark.parametrize("Vectorizer", (CountVectorizer, TfidfVectorizer))
+ def test_vectorizer_inverse_transform(Vectorizer):
+     # raw documents
+     data = ALL_FOOD_DOCS
+     vectorizer = Vectorizer()
+     transformed_data = vectorizer.fit_transform(data)
+     inversed_data = vectorizer.inverse_transform(transformed_data)
+     assert isinstance(inversed_data, list)
+
+     analyze = vectorizer.build_analyzer()
+     for doc, inversed_terms in zip(data, inversed_data):
+         terms = np.sort(np.unique(analyze(doc)))
+         inversed_terms = np.sort(np.unique(inversed_terms))
+         assert_array_equal(terms, inversed_terms)
+
+     assert sparse.issparse(transformed_data)
+     assert transformed_data.format == "csr"
+
+     # Test that inverse_transform also works with dense numpy arrays
+     transformed_data2 = transformed_data.toarray()
+     inversed_data2 = vectorizer.inverse_transform(transformed_data2)
+     for terms, terms2 in zip(inversed_data, inversed_data2):
+         assert_array_equal(np.sort(terms), np.sort(terms2))
+
+     # Check that inverse_transform also works on non CSR sparse data:
+     transformed_data3 = transformed_data.tocsc()
+     inversed_data3 = vectorizer.inverse_transform(transformed_data3)
+     for terms, terms3 in zip(inversed_data, inversed_data3):
+         assert_array_equal(np.sort(terms), np.sort(terms3))
+
+
+ def test_count_vectorizer_pipeline_grid_selection():
+     # raw documents
+     data = JUNK_FOOD_DOCS + NOTJUNK_FOOD_DOCS
+
+     # label junk food as -1, the others as +1
+     target = [-1] * len(JUNK_FOOD_DOCS) + [1] * len(NOTJUNK_FOOD_DOCS)
+
+     # split the dataset for model development and final evaluation
+     train_data, test_data, target_train, target_test = train_test_split(
+         data, target, test_size=0.2, random_state=0
+     )
+
+     pipeline = Pipeline([("vect", CountVectorizer()), ("svc", LinearSVC(dual="auto"))])
+
+     parameters = {
+         "vect__ngram_range": [(1, 1), (1, 2)],
+         "svc__loss": ("hinge", "squared_hinge"),
+     }
+
+     # find the best parameters for both the feature extraction and the
+     # classifier
+     grid_search = GridSearchCV(pipeline, parameters, n_jobs=1, cv=3)
+
+     # Check that the best model found by grid search is 100% correct on the
+     # held out evaluation set.
+     pred = grid_search.fit(train_data, target_train).predict(test_data)
+     assert_array_equal(pred, target_test)
+
+     # on this toy dataset all candidate models converge to 100% accuracy, so
+     # the first candidate (the unigram representation) is kept as the best
+     # estimator
+     assert grid_search.best_score_ == 1.0
+     best_vectorizer = grid_search.best_estimator_.named_steps["vect"]
+     assert best_vectorizer.ngram_range == (1, 1)
+
+
+ def test_vectorizer_pipeline_grid_selection():
+     # raw documents
+     data = JUNK_FOOD_DOCS + NOTJUNK_FOOD_DOCS
+
+     # label junk food as -1, the others as +1
+     target = [-1] * len(JUNK_FOOD_DOCS) + [1] * len(NOTJUNK_FOOD_DOCS)
+
+     # split the dataset for model development and final evaluation
+     train_data, test_data, target_train, target_test = train_test_split(
+         data, target, test_size=0.1, random_state=0
+     )
+
+     pipeline = Pipeline([("vect", TfidfVectorizer()), ("svc", LinearSVC(dual="auto"))])
+
+     parameters = {
+         "vect__ngram_range": [(1, 1), (1, 2)],
+         "vect__norm": ("l1", "l2"),
+         "svc__loss": ("hinge", "squared_hinge"),
+     }
+
+     # find the best parameters for both the feature extraction and the
+     # classifier
+     grid_search = GridSearchCV(pipeline, parameters, n_jobs=1)
+
+     # Check that the best model found by grid search is 100% correct on the
+     # held out evaluation set.
+     pred = grid_search.fit(train_data, target_train).predict(test_data)
+     assert_array_equal(pred, target_test)
+
+     # on this toy dataset all candidate models converge to 100% accuracy, so
+     # the first candidate (the unigram representation) is kept as the best
+     # estimator
+     assert grid_search.best_score_ == 1.0
+     best_vectorizer = grid_search.best_estimator_.named_steps["vect"]
+     assert best_vectorizer.ngram_range == (1, 1)
+     assert best_vectorizer.norm == "l2"
+     assert not best_vectorizer.fixed_vocabulary_
+
+
+ def test_vectorizer_pipeline_cross_validation():
+     # raw documents
+     data = JUNK_FOOD_DOCS + NOTJUNK_FOOD_DOCS
+
+     # label junk food as -1, the others as +1
+     target = [-1] * len(JUNK_FOOD_DOCS) + [1] * len(NOTJUNK_FOOD_DOCS)
+
+     pipeline = Pipeline([("vect", TfidfVectorizer()), ("svc", LinearSVC(dual="auto"))])
+
+     cv_scores = cross_val_score(pipeline, data, target, cv=3)
+     assert_array_equal(cv_scores, [1.0, 1.0, 1.0])
+
+
+ @fails_if_pypy
+ def test_vectorizer_unicode():
+     # tests that the count vectorizer works with cyrillic.
+     document = (
+         "Машинное обучение — обширный подраздел искусственного "
+         "интеллекта, изучающий методы построения алгоритмов, "
+         "способных обучаться."
+     )
+
+     vect = CountVectorizer()
+     X_counted = vect.fit_transform([document])
+     assert X_counted.shape == (1, 12)
+
+     vect = HashingVectorizer(norm=None, alternate_sign=False)
+     X_hashed = vect.transform([document])
+     assert X_hashed.shape == (1, 2**20)
+
+     # No collisions on such a small dataset
+     assert X_counted.nnz == X_hashed.nnz
+
+     # When norm is None and not alternate_sign, the tokens are counted up to
+     # collisions
+     assert_array_equal(np.sort(X_counted.data), np.sort(X_hashed.data))
+
+
+ def test_tfidf_vectorizer_with_fixed_vocabulary():
+     # non regression smoke test for inheritance issues
+     vocabulary = ["pizza", "celeri"]
+     vect = TfidfVectorizer(vocabulary=vocabulary)
+     X_1 = vect.fit_transform(ALL_FOOD_DOCS)
+     X_2 = vect.transform(ALL_FOOD_DOCS)
+     assert_array_almost_equal(X_1.toarray(), X_2.toarray())
+     assert vect.fixed_vocabulary_
+
+
+ def test_pickling_vectorizer():
+     instances = [
+         HashingVectorizer(),
+         HashingVectorizer(norm="l1"),
+         HashingVectorizer(binary=True),
+         HashingVectorizer(ngram_range=(1, 2)),
+         CountVectorizer(),
+         CountVectorizer(preprocessor=strip_tags),
+         CountVectorizer(analyzer=lazy_analyze),
+         CountVectorizer(preprocessor=strip_tags).fit(JUNK_FOOD_DOCS),
+         CountVectorizer(strip_accents=strip_eacute).fit(JUNK_FOOD_DOCS),
+         TfidfVectorizer(),
+         TfidfVectorizer(analyzer=lazy_analyze),
+         TfidfVectorizer().fit(JUNK_FOOD_DOCS),
+     ]
+
+     for orig in instances:
+         s = pickle.dumps(orig)
+         copy = pickle.loads(s)
+         assert type(copy) == orig.__class__
+         assert copy.get_params() == orig.get_params()
+         if IS_PYPY and isinstance(orig, HashingVectorizer):
+             continue
+         else:
+             assert_allclose_dense_sparse(
+                 copy.fit_transform(JUNK_FOOD_DOCS),
+                 orig.fit_transform(JUNK_FOOD_DOCS),
+             )
+
+
+ @pytest.mark.parametrize(
+     "factory",
+     [
+         CountVectorizer.build_analyzer,
+         CountVectorizer.build_preprocessor,
+         CountVectorizer.build_tokenizer,
+     ],
+ )
+ def test_pickling_built_processors(factory):
+     """Tokenizers cannot be pickled
+     https://github.com/scikit-learn/scikit-learn/issues/12833
+     """
+     vec = CountVectorizer()
+     function = factory(vec)
+     text = "J'ai mangé du kangourou ce midi, c'était pas très bon."
+     roundtripped_function = pickle.loads(pickle.dumps(function))
+     expected = function(text)
+     result = roundtripped_function(text)
+     assert result == expected
+
+
+ def test_countvectorizer_vocab_sets_when_pickling():
+     # ensure that vocabulary of type set is coerced to a list to
+     # preserve iteration ordering after deserialization
+     rng = np.random.RandomState(0)
+     vocab_words = np.array(
+         [
+             "beer",
+             "burger",
+             "celeri",
+             "coke",
+             "pizza",
+             "salad",
+             "sparkling",
+             "tomato",
+             "water",
+         ]
+     )
+     for x in range(0, 100):
+         vocab_set = set(rng.choice(vocab_words, size=5, replace=False))
+         cv = CountVectorizer(vocabulary=vocab_set)
+         unpickled_cv = pickle.loads(pickle.dumps(cv))
+         cv.fit(ALL_FOOD_DOCS)
+         unpickled_cv.fit(ALL_FOOD_DOCS)
+         assert_array_equal(
+             cv.get_feature_names_out(), unpickled_cv.get_feature_names_out()
+         )
+
+
+ def test_countvectorizer_vocab_dicts_when_pickling():
+     rng = np.random.RandomState(0)
+     vocab_words = np.array(
+         [
+             "beer",
+             "burger",
+             "celeri",
+             "coke",
+             "pizza",
+             "salad",
+             "sparkling",
+             "tomato",
+             "water",
+         ]
+     )
+     for x in range(0, 100):
+         vocab_dict = dict()
+         words = rng.choice(vocab_words, size=5, replace=False)
+         for y in range(0, 5):
+             vocab_dict[words[y]] = y
+         cv = CountVectorizer(vocabulary=vocab_dict)
+         unpickled_cv = pickle.loads(pickle.dumps(cv))
+         cv.fit(ALL_FOOD_DOCS)
+         unpickled_cv.fit(ALL_FOOD_DOCS)
+         assert_array_equal(
+             cv.get_feature_names_out(), unpickled_cv.get_feature_names_out()
+         )
+
+
+ def test_stop_words_removal():
+     # Ensure that deleting the stop_words_ attribute doesn't affect transform
+
+     fitted_vectorizers = (
+         TfidfVectorizer().fit(JUNK_FOOD_DOCS),
+         CountVectorizer(preprocessor=strip_tags).fit(JUNK_FOOD_DOCS),
+         CountVectorizer(strip_accents=strip_eacute).fit(JUNK_FOOD_DOCS),
+     )
+
+     for vect in fitted_vectorizers:
+         vect_transform = vect.transform(JUNK_FOOD_DOCS).toarray()
+
+         vect.stop_words_ = None
+         stop_None_transform = vect.transform(JUNK_FOOD_DOCS).toarray()
+
+         delattr(vect, "stop_words_")
+         stop_del_transform = vect.transform(JUNK_FOOD_DOCS).toarray()
+
+         assert_array_equal(stop_None_transform, vect_transform)
+         assert_array_equal(stop_del_transform, vect_transform)
+
+
+ def test_pickling_transformer():
+     X = CountVectorizer().fit_transform(JUNK_FOOD_DOCS)
+     orig = TfidfTransformer().fit(X)
+     s = pickle.dumps(orig)
+     copy = pickle.loads(s)
+     assert type(copy) == orig.__class__
+     assert_array_equal(copy.fit_transform(X).toarray(), orig.fit_transform(X).toarray())
+
+
+ def test_transformer_idf_setter():
+     X = CountVectorizer().fit_transform(JUNK_FOOD_DOCS)
+     orig = TfidfTransformer().fit(X)
+     copy = TfidfTransformer()
+     copy.idf_ = orig.idf_
+     assert_array_equal(copy.transform(X).toarray(), orig.transform(X).toarray())
+
+
+ def test_tfidf_vectorizer_setter():
+     orig = TfidfVectorizer(use_idf=True)
+     orig.fit(JUNK_FOOD_DOCS)
+     copy = TfidfVectorizer(vocabulary=orig.vocabulary_, use_idf=True)
+     copy.idf_ = orig.idf_
+     assert_array_equal(
+         copy.transform(JUNK_FOOD_DOCS).toarray(),
+         orig.transform(JUNK_FOOD_DOCS).toarray(),
+     )
+     # `idf_` cannot be set with `use_idf=False`
+     copy = TfidfVectorizer(vocabulary=orig.vocabulary_, use_idf=False)
+     err_msg = "`idf_` cannot be set when `user_idf=False`."
+     with pytest.raises(ValueError, match=err_msg):
+         copy.idf_ = orig.idf_
+
+
+ def test_tfidfvectorizer_invalid_idf_attr():
+     vect = TfidfVectorizer(use_idf=True)
+     vect.fit(JUNK_FOOD_DOCS)
+     copy = TfidfVectorizer(vocabulary=vect.vocabulary_, use_idf=True)
+     expected_idf_len = len(vect.idf_)
+     invalid_idf = [1.0] * (expected_idf_len + 1)
+     with pytest.raises(ValueError):
+         setattr(copy, "idf_", invalid_idf)
+
+
+ def test_non_unique_vocab():
+     vocab = ["a", "b", "c", "a", "a"]
+     vect = CountVectorizer(vocabulary=vocab)
+     with pytest.raises(ValueError):
+         vect.fit([])
+
+
+ @fails_if_pypy
+ def test_hashingvectorizer_nan_in_docs():
+     # np.nan can appear when using pandas to load text fields from a csv file
+     # with missing values.
+     message = "np.nan is an invalid document, expected byte or unicode string."
+     exception = ValueError
+
+     def func():
+         hv = HashingVectorizer()
+         hv.fit_transform(["hello world", np.nan, "hello hello"])
+
+     with pytest.raises(exception, match=message):
+         func()
+
+
+ def test_tfidfvectorizer_binary():
+     # Non-regression test: TfidfVectorizer used to ignore its "binary" param.
+     v = TfidfVectorizer(binary=True, use_idf=False, norm=None)
+     assert v.binary
+
+     X = v.fit_transform(["hello world", "hello hello"]).toarray()
+     assert_array_equal(X.ravel(), [1, 1, 1, 0])
+     X2 = v.transform(["hello world", "hello hello"]).toarray()
+     assert_array_equal(X2.ravel(), [1, 1, 1, 0])
+
+
+ def test_tfidfvectorizer_export_idf():
+     vect = TfidfVectorizer(use_idf=True)
+     vect.fit(JUNK_FOOD_DOCS)
+     assert_array_almost_equal(vect.idf_, vect._tfidf.idf_)
+
+
+ def test_vectorizer_vocab_clone():
+     vect_vocab = TfidfVectorizer(vocabulary=["the"])
+     vect_vocab_clone = clone(vect_vocab)
+     vect_vocab.fit(ALL_FOOD_DOCS)
+     vect_vocab_clone.fit(ALL_FOOD_DOCS)
+     assert vect_vocab_clone.vocabulary_ == vect_vocab.vocabulary_
+
+
+ @pytest.mark.parametrize(
+     "Vectorizer", (CountVectorizer, TfidfVectorizer, HashingVectorizer)
+ )
+ def test_vectorizer_string_object_as_input(Vectorizer):
+     message = "Iterable over raw text documents expected, string object received."
+     vec = Vectorizer()
+
+     with pytest.raises(ValueError, match=message):
+         vec.fit_transform("hello world!")
+
+     with pytest.raises(ValueError, match=message):
+         vec.fit("hello world!")
+     vec.fit(["some text", "some other text"])
+
+     with pytest.raises(ValueError, match=message):
+         vec.transform("hello world!")
+
+
+ @pytest.mark.parametrize("X_dtype", [np.float32, np.float64])
+ def test_tfidf_transformer_type(X_dtype):
+     X = sparse.rand(10, 20000, dtype=X_dtype, random_state=42)
+     X_trans = TfidfTransformer().fit_transform(X)
+     assert X_trans.dtype == X.dtype
+
+
+ @pytest.mark.parametrize(
+     "csc_container, csr_container", product(CSC_CONTAINERS, CSR_CONTAINERS)
+ )
+ def test_tfidf_transformer_sparse(csc_container, csr_container):
+     X = sparse.rand(10, 20000, dtype=np.float64, random_state=42)
+     X_csc = csc_container(X)
+     X_csr = csr_container(X)
+
+     X_trans_csc = TfidfTransformer().fit_transform(X_csc)
+     X_trans_csr = TfidfTransformer().fit_transform(X_csr)
+     assert_allclose_dense_sparse(X_trans_csc, X_trans_csr)
+     assert X_trans_csc.format == X_trans_csr.format
+
+
+ @pytest.mark.parametrize(
+     "vectorizer_dtype, output_dtype, warning_expected",
+     [
+         (np.int32, np.float64, True),
+         (np.int64, np.float64, True),
+         (np.float32, np.float32, False),
+         (np.float64, np.float64, False),
+     ],
+ )
+ def test_tfidf_vectorizer_type(vectorizer_dtype, output_dtype, warning_expected):
+     X = np.array(["numpy", "scipy", "sklearn"])
+     vectorizer = TfidfVectorizer(dtype=vectorizer_dtype)
+
+     warning_msg_match = "'dtype' should be used."
+     if warning_expected:
+         with pytest.warns(UserWarning, match=warning_msg_match):
+             X_idf = vectorizer.fit_transform(X)
+     else:
+         with warnings.catch_warnings():
+             warnings.simplefilter("error", UserWarning)
+             X_idf = vectorizer.fit_transform(X)
+     assert X_idf.dtype == output_dtype
+
+
+ @pytest.mark.parametrize(
+     "vec",
+     [
+         HashingVectorizer(ngram_range=(2, 1)),
+         CountVectorizer(ngram_range=(2, 1)),
+         TfidfVectorizer(ngram_range=(2, 1)),
+     ],
+ )
+ def test_vectorizers_invalid_ngram_range(vec):
1343
+ # vectorizers could be initialized with invalid ngram range
1344
+ # test for raising error message
1345
+ invalid_range = vec.ngram_range
1346
+ message = re.escape(
1347
+ f"Invalid value for ngram_range={invalid_range} "
1348
+ "lower boundary larger than the upper boundary."
1349
+ )
1350
+ if isinstance(vec, HashingVectorizer) and IS_PYPY:
1351
+ pytest.xfail(reason="HashingVectorizer is not supported on PyPy")
1352
+
1353
+ with pytest.raises(ValueError, match=message):
1354
+ vec.fit(["good news everyone"])
1355
+
1356
+ with pytest.raises(ValueError, match=message):
1357
+ vec.fit_transform(["good news everyone"])
1358
+
1359
+ if isinstance(vec, HashingVectorizer):
1360
+ with pytest.raises(ValueError, match=message):
1361
+ vec.transform(["good news everyone"])
1362
+
1363
+
1364
+ def _check_stop_words_consistency(estimator):
1365
+ stop_words = estimator.get_stop_words()
1366
+ tokenize = estimator.build_tokenizer()
1367
+ preprocess = estimator.build_preprocessor()
1368
+ return estimator._check_stop_words_consistency(stop_words, preprocess, tokenize)
1369
+
1370
+
1371
+ @fails_if_pypy
1372
+ def test_vectorizer_stop_words_inconsistent():
1373
+ lstr = r"\['and', 'll', 've'\]"
1374
+ message = (
1375
+ "Your stop_words may be inconsistent with your "
1376
+ "preprocessing. Tokenizing the stop words generated "
1377
+ "tokens %s not in stop_words." % lstr
1378
+ )
1379
+ for vec in [CountVectorizer(), TfidfVectorizer(), HashingVectorizer()]:
1380
+ vec.set_params(stop_words=["you've", "you", "you'll", "AND"])
1381
+ with pytest.warns(UserWarning, match=message):
1382
+ vec.fit_transform(["hello world"])
1383
+ # reset stop word validation
1384
+ del vec._stop_words_id
1385
+ assert _check_stop_words_consistency(vec) is False
1386
+
1387
+ # Only one warning per stop list
1388
+ with warnings.catch_warnings():
1389
+ warnings.simplefilter("error", UserWarning)
1390
+ vec.fit_transform(["hello world"])
1391
+ assert _check_stop_words_consistency(vec) is None
1392
+
1393
+ # Test caching of inconsistency assessment
1394
+ vec.set_params(stop_words=["you've", "you", "you'll", "blah", "AND"])
1395
+ with pytest.warns(UserWarning, match=message):
1396
+ vec.fit_transform(["hello world"])
1397
+
1398
+
1399
+ @skip_if_32bit
1400
+ @pytest.mark.parametrize("csr_container", CSR_CONTAINERS)
1401
+ def test_countvectorizer_sort_features_64bit_sparse_indices(csr_container):
1402
+ """
1403
+ Check that CountVectorizer._sort_features preserves the dtype of its sparse
1404
+ feature matrix.
1405
+
1406
+ This test is skipped on 32bit platforms, see:
1407
+ https://github.com/scikit-learn/scikit-learn/pull/11295
1408
+ for more details.
1409
+ """
1410
+
1411
+ X = csr_container((5, 5), dtype=np.int64)
1412
+
1413
+ # force indices and indptr to int64.
1414
+ INDICES_DTYPE = np.int64
1415
+ X.indices = X.indices.astype(INDICES_DTYPE)
1416
+ X.indptr = X.indptr.astype(INDICES_DTYPE)
1417
+
1418
+ vocabulary = {"scikit-learn": 0, "is": 1, "great!": 2}
1419
+
1420
+ Xs = CountVectorizer()._sort_features(X, vocabulary)
1421
+
1422
+ assert INDICES_DTYPE == Xs.indices.dtype
1423
+
1424
+
1425
+ @fails_if_pypy
1426
+ @pytest.mark.parametrize(
1427
+ "Estimator", [CountVectorizer, TfidfVectorizer, HashingVectorizer]
1428
+ )
1429
+ def test_stop_word_validation_custom_preprocessor(Estimator):
1430
+ data = [{"text": "some text"}]
1431
+
1432
+ vec = Estimator()
1433
+ assert _check_stop_words_consistency(vec) is True
1434
+
1435
+ vec = Estimator(preprocessor=lambda x: x["text"], stop_words=["and"])
1436
+ assert _check_stop_words_consistency(vec) == "error"
1437
+ # checks are cached
1438
+ assert _check_stop_words_consistency(vec) is None
1439
+ vec.fit_transform(data)
1440
+
1441
+ class CustomEstimator(Estimator):
1442
+ def build_preprocessor(self):
1443
+ return lambda x: x["text"]
1444
+
1445
+ vec = CustomEstimator(stop_words=["and"])
1446
+ assert _check_stop_words_consistency(vec) == "error"
1447
+
1448
+ vec = Estimator(
1449
+ tokenizer=lambda doc: re.compile(r"\w{1,}").findall(doc), stop_words=["and"]
1450
+ )
1451
+ assert _check_stop_words_consistency(vec) is True
1452
+
1453
+
1454
+ @pytest.mark.parametrize(
1455
+ "Estimator", [CountVectorizer, TfidfVectorizer, HashingVectorizer]
1456
+ )
1457
+ @pytest.mark.parametrize(
1458
+ "input_type, err_type, err_msg",
1459
+ [
1460
+ ("filename", FileNotFoundError, ""),
1461
+ ("file", AttributeError, "'str' object has no attribute 'read'"),
1462
+ ],
1463
+ )
1464
+ def test_callable_analyzer_error(Estimator, input_type, err_type, err_msg):
1465
+ if issubclass(Estimator, HashingVectorizer) and IS_PYPY:
1466
+ pytest.xfail("HashingVectorizer is not supported on PyPy")
1467
+ data = ["this is text, not file or filename"]
1468
+ with pytest.raises(err_type, match=err_msg):
1469
+ Estimator(analyzer=lambda x: x.split(), input=input_type).fit_transform(data)
1470
+
1471
+
1472
+ @pytest.mark.parametrize(
1473
+ "Estimator",
1474
+ [
1475
+ CountVectorizer,
1476
+ TfidfVectorizer,
1477
+ pytest.param(HashingVectorizer, marks=fails_if_pypy),
1478
+ ],
1479
+ )
1480
+ @pytest.mark.parametrize(
1481
+ "analyzer", [lambda doc: open(doc, "r"), lambda doc: doc.read()]
1482
+ )
1483
+ @pytest.mark.parametrize("input_type", ["file", "filename"])
1484
+ def test_callable_analyzer_change_behavior(Estimator, analyzer, input_type):
1485
+ data = ["this is text, not file or filename"]
1486
+ with pytest.raises((FileNotFoundError, AttributeError)):
1487
+ Estimator(analyzer=analyzer, input=input_type).fit_transform(data)
1488
+
1489
+
1490
+ @pytest.mark.parametrize(
1491
+ "Estimator", [CountVectorizer, TfidfVectorizer, HashingVectorizer]
1492
+ )
1493
+ def test_callable_analyzer_reraise_error(tmpdir, Estimator):
1494
+ # check if a custom exception from the analyzer is shown to the user
1495
+ def analyzer(doc):
1496
+ raise Exception("testing")
1497
+
1498
+ if issubclass(Estimator, HashingVectorizer) and IS_PYPY:
1499
+ pytest.xfail("HashingVectorizer is not supported on PyPy")
1500
+
1501
+ f = tmpdir.join("file.txt")
1502
+ f.write("sample content\n")
1503
+
1504
+ with pytest.raises(Exception, match="testing"):
1505
+ Estimator(analyzer=analyzer, input="file").fit_transform([f])
1506
+
1507
+
1508
+ @pytest.mark.parametrize(
1509
+ "Vectorizer", [CountVectorizer, HashingVectorizer, TfidfVectorizer]
1510
+ )
1511
+ @pytest.mark.parametrize(
1512
+ (
1513
+ "stop_words, tokenizer, preprocessor, ngram_range, token_pattern,"
1514
+ "analyzer, unused_name, ovrd_name, ovrd_msg"
1515
+ ),
1516
+ [
1517
+ (
1518
+ ["you've", "you'll"],
1519
+ None,
1520
+ None,
1521
+ (1, 1),
1522
+ None,
1523
+ "char",
1524
+ "'stop_words'",
1525
+ "'analyzer'",
1526
+ "!= 'word'",
1527
+ ),
1528
+ (
1529
+ None,
1530
+ lambda s: s.split(),
1531
+ None,
1532
+ (1, 1),
1533
+ None,
1534
+ "char",
1535
+ "'tokenizer'",
1536
+ "'analyzer'",
1537
+ "!= 'word'",
1538
+ ),
1539
+ (
1540
+ None,
1541
+ lambda s: s.split(),
1542
+ None,
1543
+ (1, 1),
1544
+ r"\w+",
1545
+ "word",
1546
+ "'token_pattern'",
1547
+ "'tokenizer'",
1548
+ "is not None",
1549
+ ),
1550
+ (
1551
+ None,
1552
+ None,
1553
+ lambda s: s.upper(),
1554
+ (1, 1),
1555
+ r"\w+",
1556
+ lambda s: s.upper(),
1557
+ "'preprocessor'",
1558
+ "'analyzer'",
1559
+ "is callable",
1560
+ ),
1561
+ (
1562
+ None,
1563
+ None,
1564
+ None,
1565
+ (1, 2),
1566
+ None,
1567
+ lambda s: s.upper(),
1568
+ "'ngram_range'",
1569
+ "'analyzer'",
1570
+ "is callable",
1571
+ ),
1572
+ (
1573
+ None,
1574
+ None,
1575
+ None,
1576
+ (1, 1),
1577
+ r"\w+",
1578
+ "char",
1579
+ "'token_pattern'",
1580
+ "'analyzer'",
1581
+ "!= 'word'",
1582
+ ),
1583
+ ],
1584
+ )
1585
+ def test_unused_parameters_warn(
1586
+ Vectorizer,
1587
+ stop_words,
1588
+ tokenizer,
1589
+ preprocessor,
1590
+ ngram_range,
1591
+ token_pattern,
1592
+ analyzer,
1593
+ unused_name,
1594
+ ovrd_name,
1595
+ ovrd_msg,
1596
+ ):
1597
+ train_data = JUNK_FOOD_DOCS
1598
+ # setting parameter and checking for corresponding warning messages
1599
+ vect = Vectorizer()
1600
+ vect.set_params(
1601
+ stop_words=stop_words,
1602
+ tokenizer=tokenizer,
1603
+ preprocessor=preprocessor,
1604
+ ngram_range=ngram_range,
1605
+ token_pattern=token_pattern,
1606
+ analyzer=analyzer,
1607
+ )
1608
+ msg = "The parameter %s will not be used since %s %s" % (
1609
+ unused_name,
1610
+ ovrd_name,
1611
+ ovrd_msg,
1612
+ )
1613
+ with pytest.warns(UserWarning, match=msg):
1614
+ vect.fit(train_data)
1615
+
1616
+
1617
+ @pytest.mark.parametrize(
1618
+ "Vectorizer, X",
1619
+ (
1620
+ (HashingVectorizer, [{"foo": 1, "bar": 2}, {"foo": 3, "baz": 1}]),
1621
+ (CountVectorizer, JUNK_FOOD_DOCS),
1622
+ ),
1623
+ )
1624
+ def test_n_features_in(Vectorizer, X):
1625
+ # For vectorizers, n_features_in_ does not make sense
1626
+ vectorizer = Vectorizer()
1627
+ assert not hasattr(vectorizer, "n_features_in_")
1628
+ vectorizer.fit(X)
1629
+ assert not hasattr(vectorizer, "n_features_in_")
1630
+
1631
+
1632
+ def test_tie_breaking_sample_order_invariance():
1633
+ # Checks the sample order invariance when setting max_features
1634
+ # non-regression test for #17939
1635
+ vec = CountVectorizer(max_features=1)
1636
+ vocab1 = vec.fit(["hello", "world"]).vocabulary_
1637
+ vocab2 = vec.fit(["world", "hello"]).vocabulary_
1638
+ assert vocab1 == vocab2
1639
+
1640
+
1641
+ @fails_if_pypy
1642
+ def test_nonnegative_hashing_vectorizer_result_indices():
1643
+ # add test for pr 19035
1644
+ hashing = HashingVectorizer(n_features=1000000, ngram_range=(2, 3))
1645
+ indices = hashing.transform(["22pcs efuture"]).indices
1646
+ assert indices[0] >= 0
1647
+
1648
+
1649
+ @pytest.mark.parametrize(
1650
+ "Estimator", [CountVectorizer, TfidfVectorizer, TfidfTransformer, HashingVectorizer]
1651
+ )
1652
+ def test_vectorizers_do_not_have_set_output(Estimator):
1653
+ """Check that vectorizers do not define set_output."""
1654
+ est = Estimator()
1655
+ assert not hasattr(est, "set_output")
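
The `test_tfidfvectorizer_binary` test above relies on an invariant that is easy to check standalone: with `use_idf=False` and `norm=None`, setting `binary=True` reduces `TfidfVectorizer` to a 0/1 term-presence encoder, so repeated terms still map to 1. A minimal sketch of the behavior the test asserts (assumes scikit-learn is installed; not part of this commit):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# binary=True with use_idf=False and norm=None turns TfidfVectorizer into a
# 0/1 indicator encoder: "hello hello" contributes 1 for "hello", not 2.
vectorizer = TfidfVectorizer(binary=True, use_idf=False, norm=None)
X = vectorizer.fit_transform(["hello world", "hello hello"]).toarray()

print(sorted(vectorizer.vocabulary_))  # ['hello', 'world']
print(X.ravel().tolist())              # [1.0, 1.0, 1.0, 0.0]
```

This matches the `assert_array_equal(X.ravel(), [1, 1, 1, 0])` check in the test: the second document contains "hello" twice but no "world", so its row is `[1, 0]`.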
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__init__.py ADDED
@@ -0,0 +1,14 @@
+ """The :mod:`sklearn.inspection` module includes tools for model inspection."""
+
+
+ from ._partial_dependence import partial_dependence
+ from ._permutation_importance import permutation_importance
+ from ._plot.decision_boundary import DecisionBoundaryDisplay
+ from ._plot.partial_dependence import PartialDependenceDisplay
+
+ __all__ = [
+     "partial_dependence",
+     "permutation_importance",
+     "PartialDependenceDisplay",
+     "DecisionBoundaryDisplay",
+ ]
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (598 Bytes). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__pycache__/_partial_dependence.cpython-310.pyc ADDED
Binary file (24.8 kB). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__pycache__/_pd_utils.cpython-310.pyc ADDED
Binary file (2.04 kB). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/__pycache__/_permutation_importance.cpython-310.pyc ADDED
Binary file (9.86 kB). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_partial_dependence.py ADDED
@@ -0,0 +1,743 @@
+ """Partial dependence plots for regression and classification models."""
+
+ # Authors: Peter Prettenhofer
+ #          Trevor Stephens
+ #          Nicolas Hug
+ # License: BSD 3 clause
+
+ from collections.abc import Iterable
+
+ import numpy as np
+ from scipy import sparse
+ from scipy.stats.mstats import mquantiles
+
+ from ..base import is_classifier, is_regressor
+ from ..ensemble import RandomForestRegressor
+ from ..ensemble._gb import BaseGradientBoosting
+ from ..ensemble._hist_gradient_boosting.gradient_boosting import (
+     BaseHistGradientBoosting,
+ )
+ from ..exceptions import NotFittedError
+ from ..tree import DecisionTreeRegressor
+ from ..utils import (
+     Bunch,
+     _determine_key_type,
+     _get_column_indices,
+     _safe_assign,
+     _safe_indexing,
+     check_array,
+     check_matplotlib_support,  # noqa
+ )
+ from ..utils._param_validation import (
+     HasMethods,
+     Integral,
+     Interval,
+     StrOptions,
+     validate_params,
+ )
+ from ..utils.extmath import cartesian
+ from ..utils.validation import _check_sample_weight, check_is_fitted
+ from ._pd_utils import _check_feature_names, _get_feature_index
+
+ __all__ = [
+     "partial_dependence",
+ ]
+
+
+ def _grid_from_X(X, percentiles, is_categorical, grid_resolution):
+     """Generate a grid of points based on the percentiles of X.
+
+     The grid is a cartesian product between the columns of ``values``. The
+     jth column of ``values`` consists of ``grid_resolution`` equally-spaced
+     points between the percentiles of the jth column of X.
+
+     If ``grid_resolution`` is bigger than the number of unique values in the
+     jth column of X, or if the feature is a categorical feature (by inspecting
+     `is_categorical`), then those unique values will be used instead.
+
+     Parameters
+     ----------
+     X : array-like of shape (n_samples, n_target_features)
+         The data.
+
+     percentiles : tuple of float
+         The percentiles which are used to construct the extreme values of
+         the grid. Must be in [0, 1].
+
+     is_categorical : list of bool
+         For each feature, tells whether it is categorical or not. If a feature
+         is categorical, then the values used will be the unique ones
+         (i.e. categories) instead of the percentiles.
+
+     grid_resolution : int
+         The number of equally spaced points to be placed on the grid for each
+         feature.
+
+     Returns
+     -------
+     grid : ndarray of shape (n_points, n_target_features)
+         A value for each feature at each point in the grid. ``n_points`` is
+         always ``<= grid_resolution ** X.shape[1]``.
+
+     values : list of 1d ndarrays
+         The values with which the grid has been created. The size of each
+         array ``values[j]`` is either ``grid_resolution``, or the number of
+         unique values in ``X[:, j]``, whichever is smaller.
+     """
+     if not isinstance(percentiles, Iterable) or len(percentiles) != 2:
+         raise ValueError("'percentiles' must be a sequence of 2 elements.")
+     if not all(0 <= x <= 1 for x in percentiles):
+         raise ValueError("'percentiles' values must be in [0, 1].")
+     if percentiles[0] >= percentiles[1]:
+         raise ValueError("percentiles[0] must be strictly less than percentiles[1].")
+
+     if grid_resolution <= 1:
+         raise ValueError("'grid_resolution' must be strictly greater than 1.")
+
+     values = []
+     # TODO: we should handle missing values (i.e. `np.nan`) specifically and store them
+     # in a different Bunch attribute.
+     for feature, is_cat in enumerate(is_categorical):
+         try:
+             uniques = np.unique(_safe_indexing(X, feature, axis=1))
+         except TypeError as exc:
+             # `np.unique` will fail in the presence of `np.nan` and `str` categories
+             # due to sorting. Temporarily, we raise an error explaining the problem.
+             raise ValueError(
+                 f"The column #{feature} contains mixed data types. Finding unique "
+                 "categories fail due to sorting. It usually means that the column "
+                 "contains `np.nan` values together with `str` categories. Such use "
+                 "case is not yet supported in scikit-learn."
+             ) from exc
+         if is_cat or uniques.shape[0] < grid_resolution:
+             # Use the unique values either because:
+             # - the feature has fewer unique values than the grid resolution
+             # - the feature is categorical
+             axis = uniques
+         else:
+             # create axis based on percentiles and grid resolution
+             emp_percentiles = mquantiles(
+                 _safe_indexing(X, feature, axis=1), prob=percentiles, axis=0
+             )
+             if np.allclose(emp_percentiles[0], emp_percentiles[1]):
+                 raise ValueError(
+                     "percentiles are too close to each other, "
+                     "unable to build the grid. Please choose percentiles "
+                     "that are further apart."
+                 )
+             axis = np.linspace(
+                 emp_percentiles[0],
+                 emp_percentiles[1],
+                 num=grid_resolution,
+                 endpoint=True,
+             )
+         values.append(axis)
+
+     return cartesian(values), values
+
+
+ def _partial_dependence_recursion(est, grid, features):
+     """Calculate partial dependence via the recursion method.
+
+     The recursion method is in particular enabled for tree-based estimators.
+
+     For each `grid` value, a weighted tree traversal is performed: if a split node
+     involves an input feature of interest, the corresponding left or right branch
+     is followed; otherwise both branches are followed, each branch being weighted
+     by the fraction of training samples that entered that branch. Finally, the
+     partial dependence is given by a weighted average of all the visited leaves'
+     values.
+
+     This method is more efficient in terms of speed than the `'brute'` method
+     (:func:`~sklearn.inspection._partial_dependence._partial_dependence_brute`).
+     However, here, the partial dependence computation is done explicitly with the
+     `X` used during training of `est`.
+
+     Parameters
+     ----------
+     est : BaseEstimator
+         A fitted estimator object implementing :term:`predict` or
+         :term:`decision_function`. Multioutput-multiclass classifiers are not
+         supported. Note that `'recursion'` is only supported for some tree-based
+         estimators (namely
+         :class:`~sklearn.ensemble.GradientBoostingClassifier`,
+         :class:`~sklearn.ensemble.GradientBoostingRegressor`,
+         :class:`~sklearn.ensemble.HistGradientBoostingClassifier`,
+         :class:`~sklearn.ensemble.HistGradientBoostingRegressor`,
+         :class:`~sklearn.tree.DecisionTreeRegressor`,
+         :class:`~sklearn.ensemble.RandomForestRegressor`,
+         ).
+
+     grid : array-like of shape (n_points, n_target_features)
+         The grid of feature values for which the partial dependence is calculated.
+         Note that `n_points` is the number of points in the grid and `n_target_features`
+         is the number of features you are doing partial dependence at.
+
+     features : array-like of {int, str}
+         The feature (e.g. `[0]`) or pair of interacting features
+         (e.g. `[(0, 1)]`) for which the partial dependency should be computed.
+
+     Returns
+     -------
+     averaged_predictions : array-like of shape (n_targets, n_points)
+         The averaged predictions for the given `grid` of features values.
+         Note that `n_targets` is the number of targets (e.g. 1 for binary
+         classification, `n_tasks` for multi-output regression, and `n_classes` for
+         multiclass classification) and `n_points` is the number of points in the `grid`.
+     """
+     averaged_predictions = est._compute_partial_dependence_recursion(grid, features)
+     if averaged_predictions.ndim == 1:
+         # reshape to (1, n_points) for consistency with
+         # _partial_dependence_brute
+         averaged_predictions = averaged_predictions.reshape(1, -1)
+
+     return averaged_predictions
+
+
+ def _partial_dependence_brute(
+     est, grid, features, X, response_method, sample_weight=None
+ ):
+     """Calculate partial dependence via the brute force method.
+
+     The brute method explicitly averages the predictions of an estimator over a
+     grid of feature values.
+
+     For each `grid` value, all the samples from `X` have their variables of
+     interest replaced by that specific `grid` value. The predictions are then made
+     and averaged across the samples.
+
+     This method is slower than the `'recursion'`
+     (:func:`~sklearn.inspection._partial_dependence._partial_dependence_recursion`)
+     version for estimators with this second option. However, with the `'brute'`
+     force method, the average will be done with the given `X` and not the `X`
+     used during training, as it is done in the `'recursion'` version. Therefore
+     the average can always accept `sample_weight` (even when the estimator was
+     fitted without).
+
+     Parameters
+     ----------
+     est : BaseEstimator
+         A fitted estimator object implementing :term:`predict`,
+         :term:`predict_proba`, or :term:`decision_function`.
+         Multioutput-multiclass classifiers are not supported.
+
+     grid : array-like of shape (n_points, n_target_features)
+         The grid of feature values for which the partial dependence is calculated.
+         Note that `n_points` is the number of points in the grid and `n_target_features`
+         is the number of features you are doing partial dependence at.
+
+     features : array-like of {int, str}
+         The feature (e.g. `[0]`) or pair of interacting features
+         (e.g. `[(0, 1)]`) for which the partial dependency should be computed.
+
+     X : array-like of shape (n_samples, n_features)
+         `X` is used to generate values for the complement features. That is, for
+         each value in `grid`, the method will average the prediction of each
+         sample from `X` having that grid value for `features`.
+
+     response_method : {'auto', 'predict_proba', 'decision_function'}, \
+             default='auto'
+         Specifies whether to use :term:`predict_proba` or
+         :term:`decision_function` as the target response. For regressors
+         this parameter is ignored and the response is always the output of
+         :term:`predict`. By default, :term:`predict_proba` is tried first
+         and we revert to :term:`decision_function` if it doesn't exist.
+
+     sample_weight : array-like of shape (n_samples,), default=None
+         Sample weights are used to calculate weighted means when averaging the
+         model output. If `None`, then samples are equally weighted. Note that
+         `sample_weight` does not change the individual predictions.
+
+     Returns
+     -------
+     averaged_predictions : array-like of shape (n_targets, n_points)
+         The averaged predictions for the given `grid` of features values.
+         Note that `n_targets` is the number of targets (e.g. 1 for binary
+         classification, `n_tasks` for multi-output regression, and `n_classes` for
+         multiclass classification) and `n_points` is the number of points in the `grid`.
+
+     predictions : array-like
+         The predictions for the given `grid` of features values over the samples
+         from `X`. For non-multioutput regression and binary classification the
+         shape is `(n_instances, n_points)` and for multi-output regression and
+         multiclass classification the shape is `(n_targets, n_instances, n_points)`,
+         where `n_targets` is the number of targets (`n_tasks` for multi-output
+         regression, and `n_classes` for multiclass classification), `n_instances`
+         is the number of instances in `X`, and `n_points` is the number of points
+         in the `grid`.
+     """
+     predictions = []
+     averaged_predictions = []
+
+     # define the prediction_method (predict, predict_proba, decision_function).
+     if is_regressor(est):
+         prediction_method = est.predict
+     else:
+         predict_proba = getattr(est, "predict_proba", None)
+         decision_function = getattr(est, "decision_function", None)
+         if response_method == "auto":
+             # try predict_proba, then decision_function if it doesn't exist
+             prediction_method = predict_proba or decision_function
+         else:
+             prediction_method = (
+                 predict_proba
+                 if response_method == "predict_proba"
+                 else decision_function
+             )
+         if prediction_method is None:
+             if response_method == "auto":
+                 raise ValueError(
+                     "The estimator has no predict_proba and no "
+                     "decision_function method."
+                 )
+             elif response_method == "predict_proba":
+                 raise ValueError("The estimator has no predict_proba method.")
+             else:
+                 raise ValueError("The estimator has no decision_function method.")
+
+     X_eval = X.copy()
+     for new_values in grid:
+         for i, variable in enumerate(features):
+             _safe_assign(X_eval, new_values[i], column_indexer=variable)
+
+         try:
+             # Note: predictions is of shape
+             # (n_points,) for non-multioutput regressors
+             # (n_points, n_tasks) for multioutput regressors
+             # (n_points, 1) for the regressors in cross_decomposition (I think)
+             # (n_points, 2) for binary classification
+             # (n_points, n_classes) for multiclass classification
+             pred = prediction_method(X_eval)
+
+             predictions.append(pred)
+             # average over samples
+             averaged_predictions.append(np.average(pred, axis=0, weights=sample_weight))
+         except NotFittedError as e:
+             raise ValueError("'estimator' parameter must be a fitted estimator") from e
+
+     n_samples = X.shape[0]
+
+     # reshape to (n_targets, n_instances, n_points) where n_targets is:
+     # - 1 for non-multioutput regression and binary classification (shape is
+     #   already correct in those cases)
+     # - n_tasks for multi-output regression
+     # - n_classes for multiclass classification.
+     predictions = np.array(predictions).T
+     if is_regressor(est) and predictions.ndim == 2:
+         # non-multioutput regression, shape is (n_instances, n_points,)
+         predictions = predictions.reshape(n_samples, -1)
+     elif is_classifier(est) and predictions.shape[0] == 2:
+         # Binary classification, shape is (2, n_instances, n_points).
+         # we output the effect of **positive** class
+         predictions = predictions[1]
+         predictions = predictions.reshape(n_samples, -1)
+
+     # reshape averaged_predictions to (n_targets, n_points) where n_targets is:
+     # - 1 for non-multioutput regression and binary classification (shape is
+     #   already correct in those cases)
+     # - n_tasks for multi-output regression
+     # - n_classes for multiclass classification.
+     averaged_predictions = np.array(averaged_predictions).T
+     if is_regressor(est) and averaged_predictions.ndim == 1:
+         # non-multioutput regression, shape is (n_points,)
+         averaged_predictions = averaged_predictions.reshape(1, -1)
+     elif is_classifier(est) and averaged_predictions.shape[0] == 2:
+         # Binary classification, shape is (2, n_points).
+         # we output the effect of **positive** class
+         averaged_predictions = averaged_predictions[1]
+         averaged_predictions = averaged_predictions.reshape(1, -1)
+
+     return averaged_predictions, predictions
+
+
+ @validate_params(
+     {
+         "estimator": [
+             HasMethods(["fit", "predict"]),
+             HasMethods(["fit", "predict_proba"]),
+             HasMethods(["fit", "decision_function"]),
+         ],
+         "X": ["array-like", "sparse matrix"],
+         "features": ["array-like", Integral, str],
+         "sample_weight": ["array-like", None],
+         "categorical_features": ["array-like", None],
+         "feature_names": ["array-like", None],
+         "response_method": [StrOptions({"auto", "predict_proba", "decision_function"})],
+         "percentiles": [tuple],
+         "grid_resolution": [Interval(Integral, 1, None, closed="left")],
+         "method": [StrOptions({"auto", "recursion", "brute"})],
+         "kind": [StrOptions({"average", "individual", "both"})],
+     },
+     prefer_skip_nested_validation=True,
+ )
+ def partial_dependence(
+     estimator,
+     X,
+     features,
+     *,
+     sample_weight=None,
+     categorical_features=None,
+     feature_names=None,
+     response_method="auto",
+     percentiles=(0.05, 0.95),
+     grid_resolution=100,
+     method="auto",
+     kind="average",
+ ):
+     """Partial dependence of ``features``.
+
+     Partial dependence of a feature (or a set of features) corresponds to
+     the average response of an estimator for each possible value of the
+     feature.
+
+     Read more in the :ref:`User Guide <partial_dependence>`.
+
+     .. warning::
+
+         For :class:`~sklearn.ensemble.GradientBoostingClassifier` and
+         :class:`~sklearn.ensemble.GradientBoostingRegressor`, the
+         `'recursion'` method (used by default) will not account for the `init`
+         predictor of the boosting process. In practice, this will produce
+         the same values as `'brute'` up to a constant offset in the target
+         response, provided that `init` is a constant estimator (which is the
+         default). However, if `init` is not a constant estimator, the
+         partial dependence values are incorrect for `'recursion'` because the
+         offset will be sample-dependent. It is preferable to use the `'brute'`
+         method. Note that this only applies to
+         :class:`~sklearn.ensemble.GradientBoostingClassifier` and
+         :class:`~sklearn.ensemble.GradientBoostingRegressor`, not to
+         :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
+         :class:`~sklearn.ensemble.HistGradientBoostingRegressor`.
+
+     Parameters
+     ----------
+     estimator : BaseEstimator
+         A fitted estimator object implementing :term:`predict`,
+         :term:`predict_proba`, or :term:`decision_function`.
+         Multioutput-multiclass classifiers are not supported.
+
+     X : {array-like, sparse matrix or dataframe} of shape (n_samples, n_features)
+         ``X`` is used to generate a grid of values for the target
+         ``features`` (where the partial dependence will be evaluated), and
+         also to generate values for the complement features when the
+         `method` is 'brute'.
+
+     features : array-like of {int, str, bool} or int or str
+         The feature (e.g. `[0]`) or pair of interacting features
+         (e.g. `[(0, 1)]`) for which the partial dependency should be computed.
+
+     sample_weight : array-like of shape (n_samples,), default=None
+         Sample weights are used to calculate weighted means when averaging the
+         model output. If `None`, then samples are equally weighted. If
+         `sample_weight` is not `None`, then `method` will be set to `'brute'`.
+         Note that `sample_weight` is ignored for `kind='individual'`.
+
+         .. versionadded:: 1.3
+
+     categorical_features : array-like of shape (n_features,) or shape \
+             (n_categorical_features,), dtype={bool, int, str}, default=None
+         Indicates the categorical features.
+
+         - `None`: no feature will be considered categorical;
+         - boolean array-like: boolean mask of shape `(n_features,)`
+           indicating which features are categorical. Thus, this array has
+           the same shape as `X.shape[1]`;
+         - integer or string array-like: integer indices or strings
+           indicating categorical features.
447
+
448
+ .. versionadded:: 1.2
449
+
450
+ feature_names : array-like of shape (n_features,), dtype=str, default=None
451
+ Name of each feature; `feature_names[i]` holds the name of the feature
452
+ with index `i`.
453
+ By default, the name of the feature corresponds to their numerical
454
+ index for NumPy array and their column name for pandas dataframe.
455
+
456
+ .. versionadded:: 1.2
457
+
458
+ response_method : {'auto', 'predict_proba', 'decision_function'}, \
459
+ default='auto'
460
+ Specifies whether to use :term:`predict_proba` or
461
+ :term:`decision_function` as the target response. For regressors
462
+ this parameter is ignored and the response is always the output of
463
+ :term:`predict`. By default, :term:`predict_proba` is tried first
464
+ and we revert to :term:`decision_function` if it doesn't exist. If
465
+ ``method`` is 'recursion', the response is always the output of
466
+ :term:`decision_function`.
467
+
468
+ percentiles : tuple of float, default=(0.05, 0.95)
469
+ The lower and upper percentile used to create the extreme values
470
+ for the grid. Must be in [0, 1].
471
+
472
+ grid_resolution : int, default=100
473
+ The number of equally spaced points on the grid, for each target
474
+ feature.
475
+
476
+ method : {'auto', 'recursion', 'brute'}, default='auto'
477
+ The method used to calculate the averaged predictions:
478
+
479
+ - `'recursion'` is only supported for some tree-based estimators
480
+ (namely
481
+ :class:`~sklearn.ensemble.GradientBoostingClassifier`,
482
+ :class:`~sklearn.ensemble.GradientBoostingRegressor`,
483
+ :class:`~sklearn.ensemble.HistGradientBoostingClassifier`,
484
+ :class:`~sklearn.ensemble.HistGradientBoostingRegressor`,
485
+ :class:`~sklearn.tree.DecisionTreeRegressor`,
486
+ :class:`~sklearn.ensemble.RandomForestRegressor`,
487
+ ) when `kind='average'`.
488
+ This is more efficient in terms of speed.
489
+ With this method, the target response of a
490
+ classifier is always the decision function, not the predicted
491
+ probabilities. Since the `'recursion'` method implicitly computes
492
+ the average of the Individual Conditional Expectation (ICE) by
493
+ design, it is not compatible with ICE and thus `kind` must be
494
+ `'average'`.
495
+
496
+ - `'brute'` is supported for any estimator, but is more
497
+ computationally intensive.
498
+
499
+ - `'auto'`: the `'recursion'` is used for estimators that support it,
500
+ and `'brute'` is used otherwise. If `sample_weight` is not `None`,
501
+ then `'brute'` is used regardless of the estimator.
502
+
503
+ Please see :ref:`this note <pdp_method_differences>` for
504
+ differences between the `'brute'` and `'recursion'` method.
505
+
506
+ kind : {'average', 'individual', 'both'}, default='average'
507
+ Whether to return the partial dependence averaged across all the
508
+ samples in the dataset or one value per sample or both.
509
+ See Returns below.
510
+
511
+ Note that the fast `method='recursion'` option is only available for
512
+ `kind='average'` and `sample_weights=None`. Computing individual
513
+ dependencies and doing weighted averages requires using the slower
514
+ `method='brute'`.
515
+
516
+ .. versionadded:: 0.24
517
+
518
+ Returns
519
+ -------
520
+ predictions : :class:`~sklearn.utils.Bunch`
521
+ Dictionary-like object, with the following attributes.
522
+
523
+ individual : ndarray of shape (n_outputs, n_instances, \
524
+ len(values[0]), len(values[1]), ...)
525
+ The predictions for all the points in the grid for all
526
+ samples in X. This is also known as Individual
527
+ Conditional Expectation (ICE).
528
+ Only available when `kind='individual'` or `kind='both'`.
529
+
530
+ average : ndarray of shape (n_outputs, len(values[0]), \
531
+ len(values[1]), ...)
532
+ The predictions for all the points in the grid, averaged
533
+ over all samples in X (or over the training data if
534
+ `method` is 'recursion').
535
+ Only available when `kind='average'` or `kind='both'`.
536
+
537
+ values : seq of 1d ndarrays
538
+ The values with which the grid has been created.
539
+
540
+ .. deprecated:: 1.3
541
+ The key `values` has been deprecated in 1.3 and will be removed
542
+ in 1.5 in favor of `grid_values`. See `grid_values` for details
543
+ about the `values` attribute.
544
+
545
+ grid_values : seq of 1d ndarrays
546
+ The values with which the grid has been created. The generated
547
+ grid is a cartesian product of the arrays in `grid_values` where
548
+ `len(grid_values) == len(features)`. The size of each array
549
+ `grid_values[j]` is either `grid_resolution`, or the number of
550
+ unique values in `X[:, j]`, whichever is smaller.
551
+
552
+ .. versionadded:: 1.3
553
+
554
+ `n_outputs` corresponds to the number of classes in a multi-class
555
+ setting, or to the number of tasks for multi-output regression.
556
+ For classical regression and binary classification `n_outputs==1`.
557
+ `n_values_feature_j` corresponds to the size `grid_values[j]`.
558
+
559
+ See Also
560
+ --------
561
+ PartialDependenceDisplay.from_estimator : Plot Partial Dependence.
562
+ PartialDependenceDisplay : Partial Dependence visualization.
563
+
564
+ Examples
565
+ --------
566
+ >>> X = [[0, 0, 2], [1, 0, 0]]
567
+ >>> y = [0, 1]
568
+ >>> from sklearn.ensemble import GradientBoostingClassifier
569
+ >>> gb = GradientBoostingClassifier(random_state=0).fit(X, y)
570
+ >>> partial_dependence(gb, features=[0], X=X, percentiles=(0, 1),
571
+ ... grid_resolution=2) # doctest: +SKIP
572
+ (array([[-4.52..., 4.52...]]), [array([ 0., 1.])])
573
+ """
574
+ check_is_fitted(estimator)
575
+
576
+ if not (is_classifier(estimator) or is_regressor(estimator)):
577
+ raise ValueError("'estimator' must be a fitted regressor or classifier.")
578
+
579
+ if is_classifier(estimator) and isinstance(estimator.classes_[0], np.ndarray):
580
+ raise ValueError("Multiclass-multioutput estimators are not supported")
581
+
582
+ # Use check_array only on lists and other non-array-likes / sparse. Do not
583
+ # convert DataFrame into a NumPy array.
584
+ if not (hasattr(X, "__array__") or sparse.issparse(X)):
585
+ X = check_array(X, force_all_finite="allow-nan", dtype=object)
586
+
587
+ if is_regressor(estimator) and response_method != "auto":
588
+ raise ValueError(
589
+ "The response_method parameter is ignored for regressors and "
590
+ "must be 'auto'."
591
+ )
592
+
593
+ if kind != "average":
594
+ if method == "recursion":
595
+ raise ValueError(
596
+ "The 'recursion' method only applies when 'kind' is set to 'average'"
597
+ )
598
+ method = "brute"
599
+
600
+ if method == "recursion" and sample_weight is not None:
601
+ raise ValueError(
602
+ "The 'recursion' method can only be applied when sample_weight is None."
603
+ )
604
+
605
+ if method == "auto":
606
+ if sample_weight is not None:
607
+ method = "brute"
608
+ elif isinstance(estimator, BaseGradientBoosting) and estimator.init is None:
609
+ method = "recursion"
610
+ elif isinstance(
611
+ estimator,
612
+ (BaseHistGradientBoosting, DecisionTreeRegressor, RandomForestRegressor),
613
+ ):
614
+ method = "recursion"
615
+ else:
616
+ method = "brute"
617
+
618
+ if method == "recursion":
619
+ if not isinstance(
620
+ estimator,
621
+ (
622
+ BaseGradientBoosting,
623
+ BaseHistGradientBoosting,
624
+ DecisionTreeRegressor,
625
+ RandomForestRegressor,
626
+ ),
627
+ ):
628
+ supported_classes_recursion = (
629
+ "GradientBoostingClassifier",
630
+ "GradientBoostingRegressor",
631
+ "HistGradientBoostingClassifier",
632
+ "HistGradientBoostingRegressor",
633
+ "HistGradientBoostingRegressor",
634
+ "DecisionTreeRegressor",
635
+ "RandomForestRegressor",
636
+ )
637
+ raise ValueError(
638
+ "Only the following estimators support the 'recursion' "
639
+ "method: {}. Try using method='brute'.".format(
640
+ ", ".join(supported_classes_recursion)
641
+ )
642
+ )
643
+ if response_method == "auto":
644
+ response_method = "decision_function"
645
+
646
+ if response_method != "decision_function":
647
+ raise ValueError(
648
+ "With the 'recursion' method, the response_method must be "
649
+ "'decision_function'. Got {}.".format(response_method)
650
+ )
651
+
652
+ if sample_weight is not None:
653
+ sample_weight = _check_sample_weight(sample_weight, X)
654
+
655
+ if _determine_key_type(features, accept_slice=False) == "int":
656
+ # _get_column_indices() supports negative indexing. Here, we limit
657
+ # the indexing to be positive. The upper bound will be checked
658
+ # by _get_column_indices()
659
+ if np.any(np.less(features, 0)):
660
+ raise ValueError("all features must be in [0, {}]".format(X.shape[1] - 1))
661
+
662
+ features_indices = np.asarray(
663
+ _get_column_indices(X, features), dtype=np.int32, order="C"
664
+ ).ravel()
665
+
666
+ feature_names = _check_feature_names(X, feature_names)
667
+
668
+ n_features = X.shape[1]
669
+ if categorical_features is None:
670
+ is_categorical = [False] * len(features_indices)
671
+ else:
672
+ categorical_features = np.asarray(categorical_features)
673
+ if categorical_features.dtype.kind == "b":
674
+ # categorical features provided as a list of boolean
675
+ if categorical_features.size != n_features:
676
+ raise ValueError(
677
+ "When `categorical_features` is a boolean array-like, "
678
+ "the array should be of shape (n_features,). Got "
679
+ f"{categorical_features.size} elements while `X` contains "
680
+ f"{n_features} features."
681
+ )
682
+ is_categorical = [categorical_features[idx] for idx in features_indices]
683
+ elif categorical_features.dtype.kind in ("i", "O", "U"):
684
+ # categorical features provided as a list of indices or feature names
685
+ categorical_features_idx = [
686
+ _get_feature_index(cat, feature_names=feature_names)
687
+ for cat in categorical_features
688
+ ]
689
+ is_categorical = [
690
+ idx in categorical_features_idx for idx in features_indices
691
+ ]
692
+ else:
693
+ raise ValueError(
694
+ "Expected `categorical_features` to be an array-like of boolean,"
695
+ f" integer, or string. Got {categorical_features.dtype} instead."
696
+ )
697
+
698
+ grid, values = _grid_from_X(
699
+ _safe_indexing(X, features_indices, axis=1),
700
+ percentiles,
701
+ is_categorical,
702
+ grid_resolution,
703
+ )
704
+
705
+ if method == "brute":
706
+ averaged_predictions, predictions = _partial_dependence_brute(
707
+ estimator, grid, features_indices, X, response_method, sample_weight
708
+ )
709
+
710
+ # reshape predictions to
711
+ # (n_outputs, n_instances, n_values_feature_0, n_values_feature_1, ...)
712
+ predictions = predictions.reshape(
713
+ -1, X.shape[0], *[val.shape[0] for val in values]
714
+ )
715
+ else:
716
+ averaged_predictions = _partial_dependence_recursion(
717
+ estimator, grid, features_indices
718
+ )
719
+
720
+ # reshape averaged_predictions to
721
+ # (n_outputs, n_values_feature_0, n_values_feature_1, ...)
722
+ averaged_predictions = averaged_predictions.reshape(
723
+ -1, *[val.shape[0] for val in values]
724
+ )
725
+ pdp_results = Bunch()
726
+
727
+ msg = (
728
+ "Key: 'values', is deprecated in 1.3 and will be removed in 1.5. "
729
+ "Please use 'grid_values' instead."
730
+ )
731
+ pdp_results._set_deprecated(
732
+ values, new_key="grid_values", deprecated_key="values", warning_message=msg
733
+ )
734
+
735
+ if kind == "average":
736
+ pdp_results["average"] = averaged_predictions
737
+ elif kind == "individual":
738
+ pdp_results["individual"] = predictions
739
+ else: # kind='both'
740
+ pdp_results["average"] = averaged_predictions
741
+ pdp_results["individual"] = predictions
742
+
743
+ return pdp_results
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_pd_utils.py ADDED
@@ -0,0 +1,64 @@
+ def _check_feature_names(X, feature_names=None):
+     """Check feature names.
+
+     Parameters
+     ----------
+     X : array-like of shape (n_samples, n_features)
+         Input data.
+
+     feature_names : None or array-like of shape (n_names,), dtype=str
+         Feature names to check or `None`.
+
+     Returns
+     -------
+     feature_names : list of str
+         Feature names validated. If `feature_names` is `None`, then a list of
+         feature names is provided, i.e. the column names of a pandas dataframe
+         or a generic list of feature names (e.g. `["x0", "x1", ...]`) for a
+         NumPy array.
+     """
+     if feature_names is None:
+         if hasattr(X, "columns") and hasattr(X.columns, "tolist"):
+             # get the column names for a pandas dataframe
+             feature_names = X.columns.tolist()
+         else:
+             # define a list of numbered indices for a numpy array
+             feature_names = [f"x{i}" for i in range(X.shape[1])]
+     elif hasattr(feature_names, "tolist"):
+         # convert numpy array or pandas index to a list
+         feature_names = feature_names.tolist()
+     if len(set(feature_names)) != len(feature_names):
+         raise ValueError("feature_names should not contain duplicates.")
+
+     return feature_names
+
+
+ def _get_feature_index(fx, feature_names=None):
+     """Get feature index.
+
+     Parameters
+     ----------
+     fx : int or str
+         Feature index or name.
+
+     feature_names : list of str, default=None
+         All feature names from which to search the indices.
+
+     Returns
+     -------
+     idx : int
+         Feature index.
+     """
+     if isinstance(fx, str):
+         if feature_names is None:
+             raise ValueError(
+                 f"Cannot plot partial dependence for feature {fx!r} since "
+                 "the list of feature names was not provided, neither as "
+                 "column names of a pandas data-frame nor via the feature_names "
+                 "parameter."
+             )
+         try:
+             return feature_names.index(fx)
+         except ValueError as e:
+             raise ValueError(f"Feature {fx!r} not in feature_names") from e
+     return fx
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_permutation_importance.py ADDED
@@ -0,0 +1,317 @@
+ """Permutation importance for estimators."""
+
+ import numbers
+
+ import numpy as np
+
+ from ..ensemble._bagging import _generate_indices
+ from ..metrics import check_scoring, get_scorer_names
+ from ..metrics._scorer import _check_multimetric_scoring, _MultimetricScorer
+ from ..model_selection._validation import _aggregate_score_dicts
+ from ..utils import Bunch, _safe_indexing, check_array, check_random_state
+ from ..utils._param_validation import (
+     HasMethods,
+     Integral,
+     Interval,
+     RealNotInt,
+     StrOptions,
+     validate_params,
+ )
+ from ..utils.parallel import Parallel, delayed
+
+
+ def _weights_scorer(scorer, estimator, X, y, sample_weight):
+     if sample_weight is not None:
+         return scorer(estimator, X, y, sample_weight=sample_weight)
+     return scorer(estimator, X, y)
+
+
+ def _calculate_permutation_scores(
+     estimator,
+     X,
+     y,
+     sample_weight,
+     col_idx,
+     random_state,
+     n_repeats,
+     scorer,
+     max_samples,
+ ):
+     """Calculate score when `col_idx` is permuted."""
+     random_state = check_random_state(random_state)
+
+     # Work on a copy of X to ensure thread-safety in case of threading based
+     # parallelism. Furthermore, making a copy is also useful when the joblib
+     # backend is 'loky' (default) or the old 'multiprocessing': in those cases,
+     # if X is large it will automatically be backed by a readonly memory map
+     # (memmap). X.copy() on the other hand is always guaranteed to return a
+     # writable data-structure whose columns can be shuffled inplace.
+     if max_samples < X.shape[0]:
+         row_indices = _generate_indices(
+             random_state=random_state,
+             bootstrap=False,
+             n_population=X.shape[0],
+             n_samples=max_samples,
+         )
+         X_permuted = _safe_indexing(X, row_indices, axis=0)
+         y = _safe_indexing(y, row_indices, axis=0)
+         if sample_weight is not None:
+             sample_weight = _safe_indexing(sample_weight, row_indices, axis=0)
+     else:
+         X_permuted = X.copy()
+
+     scores = []
+     shuffling_idx = np.arange(X_permuted.shape[0])
+     for _ in range(n_repeats):
+         random_state.shuffle(shuffling_idx)
+         if hasattr(X_permuted, "iloc"):
+             col = X_permuted.iloc[shuffling_idx, col_idx]
+             col.index = X_permuted.index
+             X_permuted[X_permuted.columns[col_idx]] = col
+         else:
+             X_permuted[:, col_idx] = X_permuted[shuffling_idx, col_idx]
+         scores.append(_weights_scorer(scorer, estimator, X_permuted, y, sample_weight))
+
+     if isinstance(scores[0], dict):
+         scores = _aggregate_score_dicts(scores)
+     else:
+         scores = np.array(scores)
+
+     return scores
+
+
+ def _create_importances_bunch(baseline_score, permuted_score):
+     """Compute the importances as the decrease in score.
+
+     Parameters
+     ----------
+     baseline_score : ndarray of shape (n_features,)
+         The baseline score without permutation.
+     permuted_score : ndarray of shape (n_features, n_repeats)
+         The permuted scores for the `n` repetitions.
+
+     Returns
+     -------
+     importances : :class:`~sklearn.utils.Bunch`
+         Dictionary-like object, with the following attributes.
+         importances_mean : ndarray, shape (n_features, )
+             Mean of feature importance over `n_repeats`.
+         importances_std : ndarray, shape (n_features, )
+             Standard deviation over `n_repeats`.
+         importances : ndarray, shape (n_features, n_repeats)
+             Raw permutation importance scores.
+     """
+     importances = baseline_score - permuted_score
+     return Bunch(
+         importances_mean=np.mean(importances, axis=1),
+         importances_std=np.std(importances, axis=1),
+         importances=importances,
+     )
+
+
+ @validate_params(
+     {
+         "estimator": [HasMethods(["fit"])],
+         "X": ["array-like"],
+         "y": ["array-like", None],
+         "scoring": [
+             StrOptions(set(get_scorer_names())),
+             callable,
+             list,
+             tuple,
+             dict,
+             None,
+         ],
+         "n_repeats": [Interval(Integral, 1, None, closed="left")],
+         "n_jobs": [Integral, None],
+         "random_state": ["random_state"],
+         "sample_weight": ["array-like", None],
+         "max_samples": [
+             Interval(Integral, 1, None, closed="left"),
+             Interval(RealNotInt, 0, 1, closed="right"),
+         ],
+     },
+     prefer_skip_nested_validation=True,
+ )
+ def permutation_importance(
+     estimator,
+     X,
+     y,
+     *,
+     scoring=None,
+     n_repeats=5,
+     n_jobs=None,
+     random_state=None,
+     sample_weight=None,
+     max_samples=1.0,
+ ):
+     """Permutation importance for feature evaluation [BRE]_.
+
+     The :term:`estimator` is required to be a fitted estimator. `X` can be the
+     data set used to train the estimator or a hold-out set. The permutation
+     importance of a feature is calculated as follows. First, a baseline metric,
+     defined by :term:`scoring`, is evaluated on a (potentially different)
+     dataset defined by `X`. Next, a feature column from the validation set
+     is permuted and the metric is evaluated again. The permutation importance
+     is defined to be the difference between the baseline metric and the metric
+     from permuting the feature column.
+
+     Read more in the :ref:`User Guide <permutation_importance>`.
+
+     Parameters
+     ----------
+     estimator : object
+         An estimator that has already been :term:`fitted` and is compatible
+         with :term:`scorer`.
+
+     X : ndarray or DataFrame, shape (n_samples, n_features)
+         Data on which permutation importance will be computed.
+
+     y : array-like or None, shape (n_samples, ) or (n_samples, n_classes)
+         Targets for supervised or `None` for unsupervised.
+
+     scoring : str, callable, list, tuple, or dict, default=None
+         Scorer to use.
+         If `scoring` represents a single score, one can use:
+
+         - a single string (see :ref:`scoring_parameter`);
+         - a callable (see :ref:`scoring`) that returns a single value.
+
+         If `scoring` represents multiple scores, one can use:
+
+         - a list or tuple of unique strings;
+         - a callable returning a dictionary where the keys are the metric
+           names and the values are the metric scores;
+         - a dictionary with metric names as keys and callables as values.
+
+         Passing multiple scores to `scoring` is more efficient than calling
+         `permutation_importance` for each of the scores as it reuses
+         predictions to avoid redundant computation.
+
+         If None, the estimator's default scorer is used.
+
+     n_repeats : int, default=5
+         Number of times to permute a feature.
+
+     n_jobs : int or None, default=None
+         Number of jobs to run in parallel. The computation is done by computing
+         the permutation score for each column, parallelized over the columns.
+         `None` means 1 unless in a :obj:`joblib.parallel_backend` context.
+         `-1` means using all processors. See :term:`Glossary <n_jobs>`
+         for more details.
+
+     random_state : int, RandomState instance, default=None
+         Pseudo-random number generator to control the permutations of each
+         feature.
+         Pass an int to get reproducible results across function calls.
+         See :term:`Glossary <random_state>`.
+
+     sample_weight : array-like of shape (n_samples,), default=None
+         Sample weights used in scoring.
+
+         .. versionadded:: 0.24
+
+     max_samples : int or float, default=1.0
+         The number of samples to draw from X to compute feature importance
+         in each repeat (without replacement).
+
+         - If int, then draw `max_samples` samples.
+         - If float, then draw `max_samples * X.shape[0]` samples.
+         - If `max_samples` is equal to `1.0` or `X.shape[0]`, all samples
+           will be used.
+
+         While using this option may provide less accurate importance estimates,
+         it keeps the method tractable when evaluating feature importance on
+         large datasets. In combination with `n_repeats`, this allows controlling
+         the computational speed vs statistical accuracy trade-off of this method.
+
+         .. versionadded:: 1.0
+
+     Returns
+     -------
+     result : :class:`~sklearn.utils.Bunch` or dict of such instances
+         Dictionary-like object, with the following attributes.
+
+         importances_mean : ndarray of shape (n_features, )
+             Mean of feature importance over `n_repeats`.
+         importances_std : ndarray of shape (n_features, )
+             Standard deviation over `n_repeats`.
+         importances : ndarray of shape (n_features, n_repeats)
+             Raw permutation importance scores.
+
+         If there are multiple scoring metrics in the scoring parameter
+         `result` is a dict with scorer names as keys (e.g. 'roc_auc') and
+         `Bunch` objects like above as values.
+
+     References
+     ----------
+     .. [BRE] :doi:`L. Breiman, "Random Forests", Machine Learning, 45(1), 5-32,
+        2001. <10.1023/A:1010933404324>`
+
+     Examples
+     --------
+     >>> from sklearn.linear_model import LogisticRegression
+     >>> from sklearn.inspection import permutation_importance
+     >>> X = [[1, 9, 9],[1, 9, 9],[1, 9, 9],
+     ...      [0, 9, 9],[0, 9, 9],[0, 9, 9]]
+     >>> y = [1, 1, 1, 0, 0, 0]
+     >>> clf = LogisticRegression().fit(X, y)
+     >>> result = permutation_importance(clf, X, y, n_repeats=10,
+     ...                                 random_state=0)
+     >>> result.importances_mean
+     array([0.4666..., 0. , 0. ])
+     >>> result.importances_std
+     array([0.2211..., 0. , 0. ])
+     """
+     if not hasattr(X, "iloc"):
+         X = check_array(X, force_all_finite="allow-nan", dtype=None)
+
+     # Precompute random seed from the random state to be used
+     # to get a fresh independent RandomState instance for each
+     # parallel call to _calculate_permutation_scores, irrespective of
+     # the fact that variables are shared or not depending on the active
+     # joblib backend (sequential, thread-based or process-based).
+     random_state = check_random_state(random_state)
+     random_seed = random_state.randint(np.iinfo(np.int32).max + 1)
+
+     if not isinstance(max_samples, numbers.Integral):
+         max_samples = int(max_samples * X.shape[0])
+     elif max_samples > X.shape[0]:
+         raise ValueError("max_samples must be <= n_samples")
+
+     if callable(scoring):
+         scorer = scoring
+     elif scoring is None or isinstance(scoring, str):
+         scorer = check_scoring(estimator, scoring=scoring)
+     else:
+         scorers_dict = _check_multimetric_scoring(estimator, scoring)
+         scorer = _MultimetricScorer(scorers=scorers_dict)
+
+     baseline_score = _weights_scorer(scorer, estimator, X, y, sample_weight)
+
+     scores = Parallel(n_jobs=n_jobs)(
+         delayed(_calculate_permutation_scores)(
+             estimator,
+             X,
+             y,
+             sample_weight,
+             col_idx,
+             random_seed,
+             n_repeats,
+             scorer,
+             max_samples,
+         )
+         for col_idx in range(X.shape[1])
+     )
+
+     if isinstance(baseline_score, dict):
+         return {
+             name: _create_importances_bunch(
+                 baseline_score[name],
+                 # unpack the permuted scores
+                 np.array([scores[col_idx][name] for col_idx in range(X.shape[1])]),
+             )
+             for name in baseline_score
+         }
+     else:
+         return _create_importances_bunch(baseline_score, np.array(scores))
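A short end-to-end sketch of the function above (dataset and hyperparameters are arbitrary; only the result shapes are checked, since the importance values themselves depend on the fitted model):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(
    n_samples=200, n_features=4, n_informative=2, random_state=0
)
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

# Each of the 4 columns is permuted 5 times; the importance is the
# drop in the default score (accuracy here) relative to the baseline.
result = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
print(result.importances.shape)       # (4, 5)
print(result.importances_mean.shape)  # (4,)
```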
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_plot/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (197 Bytes). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_plot/__pycache__/decision_boundary.cpython-310.pyc ADDED
Binary file (12.9 kB). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_plot/__pycache__/partial_dependence.cpython-310.pyc ADDED
Binary file (45.9 kB). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/_plot/tests/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (203 Bytes). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/__init__.py ADDED
File without changes
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/__pycache__/test_partial_dependence.cpython-310.pyc ADDED
Binary file (22.4 kB). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/__pycache__/test_permutation_importance.cpython-310.pyc ADDED
Binary file (11.3 kB). View file
 
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/test_partial_dependence.py ADDED
@@ -0,0 +1,958 @@
+ """
+ Testing for the partial dependence module.
+ """
+ import warnings
+
+ import numpy as np
+ import pytest
+
+ import sklearn
+ from sklearn.base import BaseEstimator, ClassifierMixin, clone, is_regressor
+ from sklearn.cluster import KMeans
+ from sklearn.compose import make_column_transformer
+ from sklearn.datasets import load_iris, make_classification, make_regression
+ from sklearn.dummy import DummyClassifier
+ from sklearn.ensemble import (
+     GradientBoostingClassifier,
+     GradientBoostingRegressor,
+     HistGradientBoostingClassifier,
+     HistGradientBoostingRegressor,
+     RandomForestRegressor,
+ )
+ from sklearn.exceptions import NotFittedError
+ from sklearn.inspection import partial_dependence
+ from sklearn.inspection._partial_dependence import (
+     _grid_from_X,
+     _partial_dependence_brute,
+     _partial_dependence_recursion,
+ )
+ from sklearn.linear_model import LinearRegression, LogisticRegression, MultiTaskLasso
+ from sklearn.metrics import r2_score
+ from sklearn.pipeline import make_pipeline
+ from sklearn.preprocessing import (
+     PolynomialFeatures,
+     RobustScaler,
+     StandardScaler,
+     scale,
+ )
+ from sklearn.tree import DecisionTreeRegressor
+ from sklearn.tree.tests.test_tree import assert_is_subtree
+ from sklearn.utils import _IS_32BIT
+ from sklearn.utils._testing import assert_allclose, assert_array_equal
+ from sklearn.utils.validation import check_random_state
+
+ # toy sample
+ X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
+ y = [-1, -1, -1, 1, 1, 1]
+
+
+ # (X, y), n_targets <-- as expected in the output of partial_dep()
+ binary_classification_data = (make_classification(n_samples=50, random_state=0), 1)
+ multiclass_classification_data = (
+     make_classification(
+         n_samples=50, n_classes=3, n_clusters_per_class=1, random_state=0
+     ),
+     3,
+ )
+ regression_data = (make_regression(n_samples=50, random_state=0), 1)
+ multioutput_regression_data = (
+     make_regression(n_samples=50, n_targets=2, random_state=0),
+     2,
+ )
+
+ # iris
+ iris = load_iris()
+
+
+ @pytest.mark.parametrize(
+     "Estimator, method, data",
+     [
+         (GradientBoostingClassifier, "auto", binary_classification_data),
+         (GradientBoostingClassifier, "auto", multiclass_classification_data),
+         (GradientBoostingClassifier, "brute", binary_classification_data),
+         (GradientBoostingClassifier, "brute", multiclass_classification_data),
+         (GradientBoostingRegressor, "auto", regression_data),
+         (GradientBoostingRegressor, "brute", regression_data),
+         (DecisionTreeRegressor, "brute", regression_data),
+         (LinearRegression, "brute", regression_data),
78
+ (LinearRegression, "brute", multioutput_regression_data),
79
+ (LogisticRegression, "brute", binary_classification_data),
80
+ (LogisticRegression, "brute", multiclass_classification_data),
81
+ (MultiTaskLasso, "brute", multioutput_regression_data),
82
+ ],
83
+ )
84
+ @pytest.mark.parametrize("grid_resolution", (5, 10))
85
+ @pytest.mark.parametrize("features", ([1], [1, 2]))
86
+ @pytest.mark.parametrize("kind", ("average", "individual", "both"))
87
+ def test_output_shape(Estimator, method, data, grid_resolution, features, kind):
88
+ # Check that partial_dependence has consistent output shape for different
89
+ # kinds of estimators:
90
+ # - classifiers with binary and multiclass settings
91
+ # - regressors
92
+ # - multi-task regressors
93
+
94
+ est = Estimator()
95
+ if hasattr(est, "n_estimators"):
96
+ est.set_params(n_estimators=2) # speed-up computations
97
+
98
+ # n_target corresponds to the number of classes (1 for binary classif) or
99
+ # the number of tasks / outputs in multi task settings. It's equal to 1 for
100
+ # classical regression_data.
101
+ (X, y), n_targets = data
102
+ n_instances = X.shape[0]
103
+
104
+ est.fit(X, y)
105
+ result = partial_dependence(
106
+ est,
107
+ X=X,
108
+ features=features,
109
+ method=method,
110
+ kind=kind,
111
+ grid_resolution=grid_resolution,
112
+ )
113
+ pdp, axes = result, result["grid_values"]
114
+
115
+ expected_pdp_shape = (n_targets, *[grid_resolution for _ in range(len(features))])
116
+ expected_ice_shape = (
117
+ n_targets,
118
+ n_instances,
119
+ *[grid_resolution for _ in range(len(features))],
120
+ )
121
+ if kind == "average":
122
+ assert pdp.average.shape == expected_pdp_shape
123
+ elif kind == "individual":
124
+ assert pdp.individual.shape == expected_ice_shape
125
+ else: # 'both'
126
+ assert pdp.average.shape == expected_pdp_shape
127
+ assert pdp.individual.shape == expected_ice_shape
128
+
129
+ expected_axes_shape = (len(features), grid_resolution)
130
+ assert axes is not None
131
+ assert np.asarray(axes).shape == expected_axes_shape
132
+
133
+
134
+ def test_grid_from_X():
135
+ # tests for _grid_from_X: sanity check for output, and for shapes.
136
+
137
+ # Make sure that the grid is a cartesian product of the input (it will use
138
+ # the unique values instead of the percentiles)
139
+ percentiles = (0.05, 0.95)
140
+ grid_resolution = 100
141
+ is_categorical = [False, False]
142
+ X = np.asarray([[1, 2], [3, 4]])
143
+ grid, axes = _grid_from_X(X, percentiles, is_categorical, grid_resolution)
144
+ assert_array_equal(grid, [[1, 2], [1, 4], [3, 2], [3, 4]])
145
+ assert_array_equal(axes, X.T)
146
+
147
+ # test shapes of returned objects depending on the number of unique values
148
+ # for a feature.
149
+ rng = np.random.RandomState(0)
150
+ grid_resolution = 15
151
+
152
+ # n_unique_values > grid_resolution
153
+ X = rng.normal(size=(20, 2))
154
+ grid, axes = _grid_from_X(
155
+ X, percentiles, is_categorical, grid_resolution=grid_resolution
156
+ )
157
+ assert grid.shape == (grid_resolution * grid_resolution, X.shape[1])
158
+ assert np.asarray(axes).shape == (2, grid_resolution)
159
+
160
+ # n_unique_values < grid_resolution, will use actual values
161
+ n_unique_values = 12
162
+ X[n_unique_values - 1 :, 0] = 12345
163
+ rng.shuffle(X) # just to make sure the order is irrelevant
164
+ grid, axes = _grid_from_X(
165
+ X, percentiles, is_categorical, grid_resolution=grid_resolution
166
+ )
167
+ assert grid.shape == (n_unique_values * grid_resolution, X.shape[1])
168
+ # axes is a list of arrays of different shapes
169
+ assert axes[0].shape == (n_unique_values,)
170
+ assert axes[1].shape == (grid_resolution,)
171
+
172
+
173
+ @pytest.mark.parametrize(
174
+ "grid_resolution",
175
+ [
176
+ 2, # since n_categories > 2, we should not use quantiles resampling
177
+ 100,
178
+ ],
179
+ )
180
+ def test_grid_from_X_with_categorical(grid_resolution):
181
+ """Check that `_grid_from_X` always sample from categories and does not
182
+ depend from the percentiles.
183
+ """
184
+ pd = pytest.importorskip("pandas")
185
+ percentiles = (0.05, 0.95)
186
+ is_categorical = [True]
187
+ X = pd.DataFrame({"cat_feature": ["A", "B", "C", "A", "B", "D", "E"]})
188
+ grid, axes = _grid_from_X(
189
+ X, percentiles, is_categorical, grid_resolution=grid_resolution
190
+ )
191
+ assert grid.shape == (5, X.shape[1])
192
+ assert axes[0].shape == (5,)
193
+
194
+
195
+ @pytest.mark.parametrize("grid_resolution", [3, 100])
196
+ def test_grid_from_X_heterogeneous_type(grid_resolution):
197
+ """Check that `_grid_from_X` always sample from categories and does not
198
+ depend from the percentiles.
199
+ """
200
+ pd = pytest.importorskip("pandas")
201
+ percentiles = (0.05, 0.95)
202
+ is_categorical = [True, False]
203
+ X = pd.DataFrame(
204
+ {
205
+ "cat": ["A", "B", "C", "A", "B", "D", "E", "A", "B", "D"],
206
+ "num": [1, 1, 1, 2, 5, 6, 6, 6, 6, 8],
207
+ }
208
+ )
209
+ nunique = X.nunique()
210
+
211
+ grid, axes = _grid_from_X(
212
+ X, percentiles, is_categorical, grid_resolution=grid_resolution
213
+ )
214
+ if grid_resolution == 3:
215
+ assert grid.shape == (15, 2)
216
+ assert axes[0].shape[0] == nunique["num"]
217
+ assert axes[1].shape[0] == grid_resolution
218
+ else:
219
+ assert grid.shape == (25, 2)
220
+ assert axes[0].shape[0] == nunique["cat"]
221
+ assert axes[1].shape[0] == nunique["cat"]
222
+
223
+
224
+ @pytest.mark.parametrize(
225
+ "grid_resolution, percentiles, err_msg",
226
+ [
227
+ (2, (0, 0.0001), "percentiles are too close"),
228
+ (100, (1, 2, 3, 4), "'percentiles' must be a sequence of 2 elements"),
229
+ (100, 12345, "'percentiles' must be a sequence of 2 elements"),
230
+ (100, (-1, 0.95), r"'percentiles' values must be in \[0, 1\]"),
231
+ (100, (0.05, 2), r"'percentiles' values must be in \[0, 1\]"),
232
+ (100, (0.9, 0.1), r"percentiles\[0\] must be strictly less than"),
233
+ (1, (0.05, 0.95), "'grid_resolution' must be strictly greater than 1"),
234
+ ],
235
+ )
236
+ def test_grid_from_X_error(grid_resolution, percentiles, err_msg):
237
+ X = np.asarray([[1, 2], [3, 4]])
238
+ is_categorical = [False]
239
+ with pytest.raises(ValueError, match=err_msg):
240
+ _grid_from_X(X, percentiles, is_categorical, grid_resolution)
241
+
242
+
243
+ @pytest.mark.parametrize("target_feature", range(5))
244
+ @pytest.mark.parametrize(
245
+ "est, method",
246
+ [
247
+ (LinearRegression(), "brute"),
248
+ (GradientBoostingRegressor(random_state=0), "brute"),
249
+ (GradientBoostingRegressor(random_state=0), "recursion"),
250
+ (HistGradientBoostingRegressor(random_state=0), "brute"),
251
+ (HistGradientBoostingRegressor(random_state=0), "recursion"),
252
+ ],
253
+ )
254
+ def test_partial_dependence_helpers(est, method, target_feature):
255
+ # Check that what is returned by _partial_dependence_brute or
256
+ # _partial_dependence_recursion is equivalent to manually setting a target
257
+ # feature to a given value, and computing the average prediction over all
258
+ # samples.
259
+ # This also checks that the brute and recursion methods give the same
260
+ # output.
261
+ # Note that even on the trainset, the brute and the recursion methods
262
+ # aren't always strictly equivalent, in particular when the slow method
263
+ # generates unrealistic samples that have low mass in the joint
264
+ # distribution of the input features, and when some of the features are
265
+ # dependent. Hence the high tolerance on the checks.
266
+
267
+ X, y = make_regression(random_state=0, n_features=5, n_informative=5)
268
+ # The 'init' estimator for GBDT (here the average prediction) isn't taken
269
+ # into account with the recursion method, for technical reasons. We set
270
+ # the mean to 0 to that this 'bug' doesn't have any effect.
271
+ y = y - y.mean()
272
+ est.fit(X, y)
273
+
274
+ # target feature will be set to .5 and then to 123
275
+ features = np.array([target_feature], dtype=np.int32)
276
+ grid = np.array([[0.5], [123]])
277
+
278
+ if method == "brute":
279
+ pdp, predictions = _partial_dependence_brute(
280
+ est, grid, features, X, response_method="auto"
281
+ )
282
+ else:
283
+ pdp = _partial_dependence_recursion(est, grid, features)
284
+
285
+ mean_predictions = []
286
+ for val in (0.5, 123):
287
+ X_ = X.copy()
288
+ X_[:, target_feature] = val
289
+ mean_predictions.append(est.predict(X_).mean())
290
+
291
+ pdp = pdp[0] # (shape is (1, 2) so make it (2,))
292
+
293
+ # allow for greater margin for error with recursion method
294
+ rtol = 1e-1 if method == "recursion" else 1e-3
295
+ assert np.allclose(pdp, mean_predictions, rtol=rtol)
296
+
297
+
298
+ @pytest.mark.parametrize("seed", range(1))
299
+ def test_recursion_decision_tree_vs_forest_and_gbdt(seed):
300
+ # Make sure that the recursion method gives the same results on a
301
+ # DecisionTreeRegressor and a GradientBoostingRegressor or a
302
+ # RandomForestRegressor with 1 tree and equivalent parameters.
303
+
304
+ rng = np.random.RandomState(seed)
305
+
306
+ # Purely random dataset to avoid correlated features
307
+ n_samples = 1000
308
+ n_features = 5
309
+ X = rng.randn(n_samples, n_features)
310
+ y = rng.randn(n_samples) * 10
311
+
312
+ # The 'init' estimator for GBDT (here the average prediction) isn't taken
313
+ # into account with the recursion method, for technical reasons. We set
314
+ # the mean to 0 to that this 'bug' doesn't have any effect.
315
+ y = y - y.mean()
316
+
317
+ # set max_depth not too high to avoid splits with same gain but different
318
+ # features
319
+ max_depth = 5
320
+
321
+ tree_seed = 0
322
+ forest = RandomForestRegressor(
323
+ n_estimators=1,
324
+ max_features=None,
325
+ bootstrap=False,
326
+ max_depth=max_depth,
327
+ random_state=tree_seed,
328
+ )
329
+ # The forest will use ensemble.base._set_random_states to set the
330
+ # random_state of the tree sub-estimator. We simulate this here to have
331
+ # equivalent estimators.
332
+ equiv_random_state = check_random_state(tree_seed).randint(np.iinfo(np.int32).max)
333
+ gbdt = GradientBoostingRegressor(
334
+ n_estimators=1,
335
+ learning_rate=1,
336
+ criterion="squared_error",
337
+ max_depth=max_depth,
338
+ random_state=equiv_random_state,
339
+ )
340
+ tree = DecisionTreeRegressor(max_depth=max_depth, random_state=equiv_random_state)
341
+
342
+ forest.fit(X, y)
343
+ gbdt.fit(X, y)
344
+ tree.fit(X, y)
345
+
346
+ # sanity check: if the trees aren't the same, the PD values won't be equal
347
+ try:
348
+ assert_is_subtree(tree.tree_, gbdt[0, 0].tree_)
349
+ assert_is_subtree(tree.tree_, forest[0].tree_)
350
+ except AssertionError:
351
+ # For some reason the trees aren't exactly equal on 32bits, so the PDs
352
+ # cannot be equal either. See
353
+ # https://github.com/scikit-learn/scikit-learn/issues/8853
354
+ assert _IS_32BIT, "this should only fail on 32 bit platforms"
355
+ return
356
+
357
+ grid = rng.randn(50).reshape(-1, 1)
358
+ for f in range(n_features):
359
+ features = np.array([f], dtype=np.int32)
360
+
361
+ pdp_forest = _partial_dependence_recursion(forest, grid, features)
362
+ pdp_gbdt = _partial_dependence_recursion(gbdt, grid, features)
363
+ pdp_tree = _partial_dependence_recursion(tree, grid, features)
364
+
365
+ np.testing.assert_allclose(pdp_gbdt, pdp_tree)
366
+ np.testing.assert_allclose(pdp_forest, pdp_tree)
367
+
368
+
369
+ @pytest.mark.parametrize(
370
+ "est",
371
+ (
372
+ GradientBoostingClassifier(random_state=0),
373
+ HistGradientBoostingClassifier(random_state=0),
374
+ ),
375
+ )
376
+ @pytest.mark.parametrize("target_feature", (0, 1, 2, 3, 4, 5))
377
+ def test_recursion_decision_function(est, target_feature):
378
+ # Make sure the recursion method (implicitly uses decision_function) has
379
+ # the same result as using brute method with
380
+ # response_method=decision_function
381
+
382
+ X, y = make_classification(n_classes=2, n_clusters_per_class=1, random_state=1)
383
+ assert np.mean(y) == 0.5 # make sure the init estimator predicts 0 anyway
384
+
385
+ est.fit(X, y)
386
+
387
+ preds_1 = partial_dependence(
388
+ est,
389
+ X,
390
+ [target_feature],
391
+ response_method="decision_function",
392
+ method="recursion",
393
+ kind="average",
394
+ )
395
+ preds_2 = partial_dependence(
396
+ est,
397
+ X,
398
+ [target_feature],
399
+ response_method="decision_function",
400
+ method="brute",
401
+ kind="average",
402
+ )
403
+
404
+ assert_allclose(preds_1["average"], preds_2["average"], atol=1e-7)
405
+
406
+
407
+ @pytest.mark.parametrize(
408
+ "est",
409
+ (
410
+ LinearRegression(),
411
+ GradientBoostingRegressor(random_state=0),
412
+ HistGradientBoostingRegressor(
413
+ random_state=0, min_samples_leaf=1, max_leaf_nodes=None, max_iter=1
414
+ ),
415
+ DecisionTreeRegressor(random_state=0),
416
+ ),
417
+ )
418
+ @pytest.mark.parametrize("power", (1, 2))
419
+ def test_partial_dependence_easy_target(est, power):
420
+ # If the target y only depends on one feature in an obvious way (linear or
421
+ # quadratic) then the partial dependence for that feature should reflect
422
+ # it.
423
+ # We here fit a linear regression_data model (with polynomial features if
424
+ # needed) and compute r_squared to check that the partial dependence
425
+ # correctly reflects the target.
426
+
427
+ rng = np.random.RandomState(0)
428
+ n_samples = 200
429
+ target_variable = 2
430
+ X = rng.normal(size=(n_samples, 5))
431
+ y = X[:, target_variable] ** power
432
+
433
+ est.fit(X, y)
434
+
435
+ pdp = partial_dependence(
436
+ est, features=[target_variable], X=X, grid_resolution=1000, kind="average"
437
+ )
438
+
439
+ new_X = pdp["grid_values"][0].reshape(-1, 1)
440
+ new_y = pdp["average"][0]
441
+ # add polynomial features if needed
442
+ new_X = PolynomialFeatures(degree=power).fit_transform(new_X)
443
+
444
+ lr = LinearRegression().fit(new_X, new_y)
445
+ r2 = r2_score(new_y, lr.predict(new_X))
446
+
447
+ assert r2 > 0.99
448
+
449
+
450
+ @pytest.mark.parametrize(
451
+ "Estimator",
452
+ (
453
+ sklearn.tree.DecisionTreeClassifier,
454
+ sklearn.tree.ExtraTreeClassifier,
455
+ sklearn.ensemble.ExtraTreesClassifier,
456
+ sklearn.neighbors.KNeighborsClassifier,
457
+ sklearn.neighbors.RadiusNeighborsClassifier,
458
+ sklearn.ensemble.RandomForestClassifier,
459
+ ),
460
+ )
461
+ def test_multiclass_multioutput(Estimator):
462
+ # Make sure error is raised for multiclass-multioutput classifiers
463
+
464
+ # make multiclass-multioutput dataset
465
+ X, y = make_classification(n_classes=3, n_clusters_per_class=1, random_state=0)
466
+ y = np.array([y, y]).T
467
+
468
+ est = Estimator()
469
+ est.fit(X, y)
470
+
471
+ with pytest.raises(
472
+ ValueError, match="Multiclass-multioutput estimators are not supported"
473
+ ):
474
+ partial_dependence(est, X, [0])
475
+
476
+
477
+ class NoPredictProbaNoDecisionFunction(ClassifierMixin, BaseEstimator):
478
+ def fit(self, X, y):
479
+ # simulate that we have some classes
480
+ self.classes_ = [0, 1]
481
+ return self
482
+
483
+
484
+ @pytest.mark.filterwarnings("ignore:A Bunch will be returned")
485
+ @pytest.mark.parametrize(
486
+ "estimator, params, err_msg",
487
+ [
488
+ (
489
+ KMeans(random_state=0, n_init="auto"),
490
+ {"features": [0]},
491
+ "'estimator' must be a fitted regressor or classifier",
492
+ ),
493
+ (
494
+ LinearRegression(),
495
+ {"features": [0], "response_method": "predict_proba"},
496
+ "The response_method parameter is ignored for regressors",
497
+ ),
498
+ (
499
+ GradientBoostingClassifier(random_state=0),
500
+ {
501
+ "features": [0],
502
+ "response_method": "predict_proba",
503
+ "method": "recursion",
504
+ },
505
+ "'recursion' method, the response_method must be 'decision_function'",
506
+ ),
507
+ (
508
+ GradientBoostingClassifier(random_state=0),
509
+ {"features": [0], "response_method": "predict_proba", "method": "auto"},
510
+ "'recursion' method, the response_method must be 'decision_function'",
511
+ ),
512
+ (
513
+ LinearRegression(),
514
+ {"features": [0], "method": "recursion", "kind": "individual"},
515
+ "The 'recursion' method only applies when 'kind' is set to 'average'",
516
+ ),
517
+ (
518
+ LinearRegression(),
519
+ {"features": [0], "method": "recursion", "kind": "both"},
520
+ "The 'recursion' method only applies when 'kind' is set to 'average'",
521
+ ),
522
+ (
523
+ LinearRegression(),
524
+ {"features": [0], "method": "recursion"},
525
+ "Only the following estimators support the 'recursion' method:",
526
+ ),
527
+ ],
528
+ )
529
+ def test_partial_dependence_error(estimator, params, err_msg):
530
+ X, y = make_classification(random_state=0)
531
+ estimator.fit(X, y)
532
+
533
+ with pytest.raises(ValueError, match=err_msg):
534
+ partial_dependence(estimator, X, **params)
535
+
536
+
537
+ @pytest.mark.parametrize(
538
+ "estimator", [LinearRegression(), GradientBoostingClassifier(random_state=0)]
539
+ )
540
+ @pytest.mark.parametrize("features", [-1, 10000])
541
+ def test_partial_dependence_unknown_feature_indices(estimator, features):
542
+ X, y = make_classification(random_state=0)
543
+ estimator.fit(X, y)
544
+
545
+ err_msg = "all features must be in"
546
+ with pytest.raises(ValueError, match=err_msg):
547
+ partial_dependence(estimator, X, [features])
548
+
549
+
550
+ @pytest.mark.parametrize(
551
+ "estimator", [LinearRegression(), GradientBoostingClassifier(random_state=0)]
552
+ )
553
+ def test_partial_dependence_unknown_feature_string(estimator):
554
+ pd = pytest.importorskip("pandas")
555
+ X, y = make_classification(random_state=0)
556
+ df = pd.DataFrame(X)
557
+ estimator.fit(df, y)
558
+
559
+ features = ["random"]
560
+ err_msg = "A given column is not a column of the dataframe"
561
+ with pytest.raises(ValueError, match=err_msg):
562
+ partial_dependence(estimator, df, features)
563
+
564
+
565
+ @pytest.mark.parametrize(
566
+ "estimator", [LinearRegression(), GradientBoostingClassifier(random_state=0)]
567
+ )
568
+ def test_partial_dependence_X_list(estimator):
569
+ # check that array-like objects are accepted
570
+ X, y = make_classification(random_state=0)
571
+ estimator.fit(X, y)
572
+ partial_dependence(estimator, list(X), [0], kind="average")
573
+
574
+
575
+ def test_warning_recursion_non_constant_init():
576
+ # make sure that passing a non-constant init parameter to a GBDT and using
577
+ # recursion method yields a warning.
578
+
579
+ gbc = GradientBoostingClassifier(init=DummyClassifier(), random_state=0)
580
+ gbc.fit(X, y)
581
+
582
+ with pytest.warns(
583
+ UserWarning, match="Using recursion method with a non-constant init predictor"
584
+ ):
585
+ partial_dependence(gbc, X, [0], method="recursion", kind="average")
586
+
587
+ with pytest.warns(
588
+ UserWarning, match="Using recursion method with a non-constant init predictor"
589
+ ):
590
+ partial_dependence(gbc, X, [0], method="recursion", kind="average")
591
+
592
+
593
+ def test_partial_dependence_sample_weight_of_fitted_estimator():
594
+ # Test near perfect correlation between partial dependence and diagonal
595
+ # when sample weights emphasize y = x predictions
596
+ # non-regression test for #13193
597
+ # TODO: extend to HistGradientBoosting once sample_weight is supported
598
+ N = 1000
599
+ rng = np.random.RandomState(123456)
600
+ mask = rng.randint(2, size=N, dtype=bool)
601
+
602
+ x = rng.rand(N)
603
+ # set y = x on mask and y = -x outside
604
+ y = x.copy()
605
+ y[~mask] = -y[~mask]
606
+ X = np.c_[mask, x]
607
+ # sample weights to emphasize data points where y = x
608
+ sample_weight = np.ones(N)
609
+ sample_weight[mask] = 1000.0
610
+
611
+ clf = GradientBoostingRegressor(n_estimators=10, random_state=1)
612
+ clf.fit(X, y, sample_weight=sample_weight)
613
+
614
+ pdp = partial_dependence(clf, X, features=[1], kind="average")
615
+
616
+ assert np.corrcoef(pdp["average"], pdp["grid_values"])[0, 1] > 0.99
617
+
618
+
619
+ def test_hist_gbdt_sw_not_supported():
620
+ # TODO: remove/fix when PDP supports HGBT with sample weights
621
+ clf = HistGradientBoostingRegressor(random_state=1)
622
+ clf.fit(X, y, sample_weight=np.ones(len(X)))
623
+
624
+ with pytest.raises(
625
+ NotImplementedError, match="does not support partial dependence"
626
+ ):
627
+ partial_dependence(clf, X, features=[1])
628
+
629
+
630
+ def test_partial_dependence_pipeline():
631
+ # check that the partial dependence support pipeline
632
+ iris = load_iris()
633
+
634
+ scaler = StandardScaler()
635
+ clf = DummyClassifier(random_state=42)
636
+ pipe = make_pipeline(scaler, clf)
637
+
638
+ clf.fit(scaler.fit_transform(iris.data), iris.target)
639
+ pipe.fit(iris.data, iris.target)
640
+
641
+ features = 0
642
+ pdp_pipe = partial_dependence(
643
+ pipe, iris.data, features=[features], grid_resolution=10, kind="average"
644
+ )
645
+ pdp_clf = partial_dependence(
646
+ clf,
647
+ scaler.transform(iris.data),
648
+ features=[features],
649
+ grid_resolution=10,
650
+ kind="average",
651
+ )
652
+ assert_allclose(pdp_pipe["average"], pdp_clf["average"])
653
+ assert_allclose(
654
+ pdp_pipe["grid_values"][0],
655
+ pdp_clf["grid_values"][0] * scaler.scale_[features] + scaler.mean_[features],
656
+ )
657
+
658
+
659
+ @pytest.mark.parametrize(
660
+ "estimator",
661
+ [
662
+ LogisticRegression(max_iter=1000, random_state=0),
663
+ GradientBoostingClassifier(random_state=0, n_estimators=5),
664
+ ],
665
+ ids=["estimator-brute", "estimator-recursion"],
666
+ )
667
+ @pytest.mark.parametrize(
668
+ "preprocessor",
669
+ [
670
+ None,
671
+ make_column_transformer(
672
+ (StandardScaler(), [iris.feature_names[i] for i in (0, 2)]),
673
+ (RobustScaler(), [iris.feature_names[i] for i in (1, 3)]),
674
+ ),
675
+ make_column_transformer(
676
+ (StandardScaler(), [iris.feature_names[i] for i in (0, 2)]),
677
+ remainder="passthrough",
678
+ ),
679
+ ],
680
+ ids=["None", "column-transformer", "column-transformer-passthrough"],
681
+ )
682
+ @pytest.mark.parametrize(
683
+ "features",
684
+ [[0, 2], [iris.feature_names[i] for i in (0, 2)]],
685
+ ids=["features-integer", "features-string"],
686
+ )
687
+ def test_partial_dependence_dataframe(estimator, preprocessor, features):
688
+ # check that the partial dependence support dataframe and pipeline
689
+ # including a column transformer
690
+ pd = pytest.importorskip("pandas")
691
+ df = pd.DataFrame(scale(iris.data), columns=iris.feature_names)
692
+
693
+ pipe = make_pipeline(preprocessor, estimator)
694
+ pipe.fit(df, iris.target)
695
+ pdp_pipe = partial_dependence(
696
+ pipe, df, features=features, grid_resolution=10, kind="average"
697
+ )
698
+
699
+ # the column transformer will reorder the column when transforming
700
+ # we mixed the index to be sure that we are computing the partial
701
+ # dependence of the right columns
702
+ if preprocessor is not None:
703
+ X_proc = clone(preprocessor).fit_transform(df)
704
+ features_clf = [0, 1]
705
+ else:
706
+ X_proc = df
707
+ features_clf = [0, 2]
708
+
709
+ clf = clone(estimator).fit(X_proc, iris.target)
710
+ pdp_clf = partial_dependence(
711
+ clf,
712
+ X_proc,
713
+ features=features_clf,
714
+ method="brute",
715
+ grid_resolution=10,
716
+ kind="average",
717
+ )
718
+
719
+ assert_allclose(pdp_pipe["average"], pdp_clf["average"])
720
+ if preprocessor is not None:
721
+ scaler = preprocessor.named_transformers_["standardscaler"]
722
+ assert_allclose(
723
+ pdp_pipe["grid_values"][1],
724
+ pdp_clf["grid_values"][1] * scaler.scale_[1] + scaler.mean_[1],
725
+ )
726
+ else:
727
+ assert_allclose(pdp_pipe["grid_values"][1], pdp_clf["grid_values"][1])
728
+
729
+
730
+ @pytest.mark.parametrize(
731
+ "features, expected_pd_shape",
732
+ [
733
+ (0, (3, 10)),
734
+ (iris.feature_names[0], (3, 10)),
735
+ ([0, 2], (3, 10, 10)),
736
+ ([iris.feature_names[i] for i in (0, 2)], (3, 10, 10)),
737
+ ([True, False, True, False], (3, 10, 10)),
738
+ ],
739
+ ids=["scalar-int", "scalar-str", "list-int", "list-str", "mask"],
740
+ )
741
+ def test_partial_dependence_feature_type(features, expected_pd_shape):
742
+ # check all possible features type supported in PDP
743
+ pd = pytest.importorskip("pandas")
744
+ df = pd.DataFrame(iris.data, columns=iris.feature_names)
745
+
746
+ preprocessor = make_column_transformer(
747
+ (StandardScaler(), [iris.feature_names[i] for i in (0, 2)]),
748
+ (RobustScaler(), [iris.feature_names[i] for i in (1, 3)]),
749
+ )
750
+ pipe = make_pipeline(
751
+ preprocessor, LogisticRegression(max_iter=1000, random_state=0)
752
+ )
753
+ pipe.fit(df, iris.target)
754
+ pdp_pipe = partial_dependence(
755
+ pipe, df, features=features, grid_resolution=10, kind="average"
756
+ )
757
+ assert pdp_pipe["average"].shape == expected_pd_shape
758
+ assert len(pdp_pipe["grid_values"]) == len(pdp_pipe["average"].shape) - 1
759
+
760
+
761
+ @pytest.mark.parametrize(
762
+ "estimator",
763
+ [
764
+ LinearRegression(),
765
+ LogisticRegression(),
766
+ GradientBoostingRegressor(),
767
+ GradientBoostingClassifier(),
768
+ ],
769
+ )
770
+ def test_partial_dependence_unfitted(estimator):
771
+ X = iris.data
772
+ preprocessor = make_column_transformer(
773
+ (StandardScaler(), [0, 2]), (RobustScaler(), [1, 3])
774
+ )
775
+ pipe = make_pipeline(preprocessor, estimator)
776
+ with pytest.raises(NotFittedError, match="is not fitted yet"):
777
+ partial_dependence(pipe, X, features=[0, 2], grid_resolution=10)
778
+ with pytest.raises(NotFittedError, match="is not fitted yet"):
779
+ partial_dependence(estimator, X, features=[0, 2], grid_resolution=10)
780
+
781
+
782
+ @pytest.mark.parametrize(
783
+ "Estimator, data",
784
+ [
785
+ (LinearRegression, multioutput_regression_data),
786
+ (LogisticRegression, binary_classification_data),
787
+ ],
788
+ )
789
+ def test_kind_average_and_average_of_individual(Estimator, data):
790
+ est = Estimator()
791
+ (X, y), n_targets = data
792
+ est.fit(X, y)
793
+
794
+ pdp_avg = partial_dependence(est, X=X, features=[1, 2], kind="average")
795
+ pdp_ind = partial_dependence(est, X=X, features=[1, 2], kind="individual")
796
+ avg_ind = np.mean(pdp_ind["individual"], axis=1)
797
+ assert_allclose(avg_ind, pdp_avg["average"])
798
+
799
+
800
+ @pytest.mark.parametrize(
801
+ "Estimator, data",
802
+ [
803
+ (LinearRegression, multioutput_regression_data),
804
+ (LogisticRegression, binary_classification_data),
805
+ ],
806
+ )
807
+ def test_partial_dependence_kind_individual_ignores_sample_weight(Estimator, data):
808
+ """Check that `sample_weight` does not have any effect on reported ICE."""
809
+ est = Estimator()
810
+ (X, y), n_targets = data
811
+ sample_weight = np.arange(X.shape[0])
812
+ est.fit(X, y)
813
+
814
+ pdp_nsw = partial_dependence(est, X=X, features=[1, 2], kind="individual")
815
+ pdp_sw = partial_dependence(
816
+ est, X=X, features=[1, 2], kind="individual", sample_weight=sample_weight
817
+ )
818
+ assert_allclose(pdp_nsw["individual"], pdp_sw["individual"])
819
+ assert_allclose(pdp_nsw["grid_values"], pdp_sw["grid_values"])
820
+
821
+
822
+ @pytest.mark.parametrize(
823
+ "estimator",
824
+ [
825
+ LinearRegression(),
826
+ LogisticRegression(),
827
+ RandomForestRegressor(),
828
+ GradientBoostingClassifier(),
829
+ ],
830
+ )
831
+ @pytest.mark.parametrize("non_null_weight_idx", [0, 1, -1])
832
+ def test_partial_dependence_non_null_weight_idx(estimator, non_null_weight_idx):
833
+ """Check that if we pass a `sample_weight` of zeros with only one index with
834
+ sample weight equals one, then the average `partial_dependence` with this
835
+ `sample_weight` is equal to the individual `partial_dependence` of the
836
+ corresponding index.
837
+ """
838
+ X, y = iris.data, iris.target
839
+ preprocessor = make_column_transformer(
840
+ (StandardScaler(), [0, 2]), (RobustScaler(), [1, 3])
841
+ )
842
+ pipe = make_pipeline(preprocessor, estimator).fit(X, y)
843
+
844
+ sample_weight = np.zeros_like(y)
845
+ sample_weight[non_null_weight_idx] = 1
846
+ pdp_sw = partial_dependence(
847
+ pipe,
848
+ X,
849
+ [2, 3],
850
+ kind="average",
+         sample_weight=sample_weight,
+         grid_resolution=10,
+     )
+     pdp_ind = partial_dependence(pipe, X, [2, 3], kind="individual", grid_resolution=10)
+     output_dim = 1 if is_regressor(pipe) else len(np.unique(y))
+     for i in range(output_dim):
+         assert_allclose(
+             pdp_ind["individual"][i][non_null_weight_idx],
+             pdp_sw["average"][i],
+         )
+
+
+ @pytest.mark.parametrize(
+     "Estimator, data",
+     [
+         (LinearRegression, multioutput_regression_data),
+         (LogisticRegression, binary_classification_data),
+     ],
+ )
+ def test_partial_dependence_equivalence_equal_sample_weight(Estimator, data):
+     """Check that `sample_weight=None` is equivalent to having equal weights."""
+
+     est = Estimator()
+     (X, y), n_targets = data
+     est.fit(X, y)
+
+     sample_weight, params = None, {"X": X, "features": [1, 2], "kind": "average"}
+     pdp_sw_none = partial_dependence(est, **params, sample_weight=sample_weight)
+     sample_weight = np.ones(len(y))
+     pdp_sw_unit = partial_dependence(est, **params, sample_weight=sample_weight)
+     assert_allclose(pdp_sw_none["average"], pdp_sw_unit["average"])
+     sample_weight = 2 * np.ones(len(y))
+     pdp_sw_doubling = partial_dependence(est, **params, sample_weight=sample_weight)
+     assert_allclose(pdp_sw_none["average"], pdp_sw_doubling["average"])
+
+
+ def test_partial_dependence_sample_weight_size_error():
+     """Check that we raise an error when the size of `sample_weight` is not
+     consistent with `X` and `y`.
+     """
+     est = LogisticRegression()
+     (X, y), n_targets = binary_classification_data
+     sample_weight = np.ones_like(y)
+     est.fit(X, y)
+
+     with pytest.raises(ValueError, match="sample_weight.shape =="):
+         partial_dependence(
+             est, X, features=[0], sample_weight=sample_weight[1:], grid_resolution=10
+         )
+
+
+ def test_partial_dependence_sample_weight_with_recursion():
+     """Check that we raise an error when `sample_weight` is provided with the
+     `"recursion"` method.
+     """
+     est = RandomForestRegressor()
+     (X, y), n_targets = regression_data
+     sample_weight = np.ones_like(y)
+     est.fit(X, y, sample_weight=sample_weight)
+
+     with pytest.raises(ValueError, match="'recursion' method can only be applied when"):
+         partial_dependence(
+             est, X, features=[0], method="recursion", sample_weight=sample_weight
+         )
+
+
+ # TODO(1.5): Remove when bunch values is deprecated in 1.5
+ def test_partial_dependence_bunch_values_deprecated():
+     """Test that a deprecation warning is raised when `values` is accessed."""
+
+     est = LogisticRegression()
+     (X, y), _ = binary_classification_data
+     est.fit(X, y)
+
+     pdp_avg = partial_dependence(est, X=X, features=[1, 2], kind="average")
+
+     msg = (
+         "Key: 'values', is deprecated in 1.3 and will be "
+         "removed in 1.5. Please use 'grid_values' instead"
+     )
+
+     with warnings.catch_warnings():
+         # Does not raise warnings with "grid_values"
+         warnings.simplefilter("error", FutureWarning)
+         grid_values = pdp_avg["grid_values"]
+
+     with pytest.warns(FutureWarning, match=msg):
+         # Warns for "values"
+         values = pdp_avg["values"]
+
+     # "values" and "grid_values" are the same object
+     assert values is grid_values
+
+
+ def test_mixed_type_categorical():
+     """Check that we raise a proper error when a column has mixed types and
+     the sorting of `np.unique` will fail."""
+     X = np.array(["A", "B", "C", np.nan], dtype=object).reshape(-1, 1)
+     y = np.array([0, 1, 0, 1])
+
+     from sklearn.preprocessing import OrdinalEncoder
+
+     clf = make_pipeline(
+         OrdinalEncoder(encoded_missing_value=-1),
+         LogisticRegression(),
+     ).fit(X, y)
+     with pytest.raises(ValueError, match="The column #0 contains mixed data types"):
+         partial_dependence(clf, X, features=[0])
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/test_pd_utils.py ADDED
@@ -0,0 +1,47 @@
+ import numpy as np
+ import pytest
+
+ from sklearn.inspection._pd_utils import _check_feature_names, _get_feature_index
+ from sklearn.utils._testing import _convert_container
+
+
+ @pytest.mark.parametrize(
+     "feature_names, array_type, expected_feature_names",
+     [
+         (None, "array", ["x0", "x1", "x2"]),
+         (None, "dataframe", ["a", "b", "c"]),
+         (np.array(["a", "b", "c"]), "array", ["a", "b", "c"]),
+     ],
+ )
+ def test_check_feature_names(feature_names, array_type, expected_feature_names):
+     X = np.random.randn(10, 3)
+     column_names = ["a", "b", "c"]
+     X = _convert_container(X, constructor_name=array_type, columns_name=column_names)
+     feature_names_validated = _check_feature_names(X, feature_names)
+     assert feature_names_validated == expected_feature_names
+
+
+ def test_check_feature_names_error():
+     X = np.random.randn(10, 3)
+     feature_names = ["a", "b", "c", "a"]
+     msg = "feature_names should not contain duplicates."
+     with pytest.raises(ValueError, match=msg):
+         _check_feature_names(X, feature_names)
+
+
+ @pytest.mark.parametrize("fx, idx", [(0, 0), (1, 1), ("a", 0), ("b", 1), ("c", 2)])
+ def test_get_feature_index(fx, idx):
+     feature_names = ["a", "b", "c"]
+     assert _get_feature_index(fx, feature_names) == idx
+
+
+ @pytest.mark.parametrize(
+     "fx, feature_names, err_msg",
+     [
+         ("a", None, "Cannot plot partial dependence for feature 'a'"),
+         ("d", ["a", "b", "c"], "Feature 'd' not in feature_names"),
+     ],
+ )
+ def test_get_feature_names_error(fx, feature_names, err_msg):
+     with pytest.raises(ValueError, match=err_msg):
+         _get_feature_index(fx, feature_names)
llmeval-env/lib/python3.10/site-packages/sklearn/inspection/tests/test_permutation_importance.py ADDED
@@ -0,0 +1,542 @@
+ import numpy as np
+ import pytest
+ from numpy.testing import assert_allclose
+
+ from sklearn.compose import ColumnTransformer
+ from sklearn.datasets import (
+     load_diabetes,
+     load_iris,
+     make_classification,
+     make_regression,
+ )
+ from sklearn.dummy import DummyClassifier
+ from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
+ from sklearn.impute import SimpleImputer
+ from sklearn.inspection import permutation_importance
+ from sklearn.linear_model import LinearRegression, LogisticRegression
+ from sklearn.metrics import (
+     get_scorer,
+     mean_squared_error,
+     r2_score,
+ )
+ from sklearn.model_selection import train_test_split
+ from sklearn.pipeline import make_pipeline
+ from sklearn.preprocessing import KBinsDiscretizer, OneHotEncoder, StandardScaler, scale
+ from sklearn.utils import parallel_backend
+ from sklearn.utils._testing import _convert_container
+
+
+ @pytest.mark.parametrize("n_jobs", [1, 2])
+ @pytest.mark.parametrize("max_samples", [0.5, 1.0])
+ @pytest.mark.parametrize("sample_weight", [None, "ones"])
+ def test_permutation_importance_correlated_feature_regression(
+     n_jobs, max_samples, sample_weight
+ ):
+     # Make sure that a feature highly correlated with the target has a
+     # higher importance
+     rng = np.random.RandomState(42)
+     n_repeats = 5
+
+     X, y = load_diabetes(return_X_y=True)
+     y_with_little_noise = (y + rng.normal(scale=0.001, size=y.shape[0])).reshape(-1, 1)
+
+     X = np.hstack([X, y_with_little_noise])
+
+     weights = np.ones_like(y) if sample_weight == "ones" else sample_weight
+     clf = RandomForestRegressor(n_estimators=10, random_state=42)
+     clf.fit(X, y)
+
+     result = permutation_importance(
+         clf,
+         X,
+         y,
+         sample_weight=weights,
+         n_repeats=n_repeats,
+         random_state=rng,
+         n_jobs=n_jobs,
+         max_samples=max_samples,
+     )
+
+     assert result.importances.shape == (X.shape[1], n_repeats)
+
+     # the feature correlated with y was added as the last column and should
+     # have the highest importance
+     assert np.all(result.importances_mean[-1] > result.importances_mean[:-1])
+
+
+ @pytest.mark.parametrize("n_jobs", [1, 2])
+ @pytest.mark.parametrize("max_samples", [0.5, 1.0])
+ def test_permutation_importance_correlated_feature_regression_pandas(
+     n_jobs, max_samples
+ ):
+     pd = pytest.importorskip("pandas")
+
+     # Make sure that a feature highly correlated with the target has a
+     # higher importance
+     rng = np.random.RandomState(42)
+     n_repeats = 5
+
+     dataset = load_iris()
+     X, y = dataset.data, dataset.target
+     y_with_little_noise = (y + rng.normal(scale=0.001, size=y.shape[0])).reshape(-1, 1)
+
+     # Adds the feature correlated with y as the last column
+     X = pd.DataFrame(X, columns=dataset.feature_names)
+     X["correlated_feature"] = y_with_little_noise
+
+     clf = RandomForestClassifier(n_estimators=10, random_state=42)
+     clf.fit(X, y)
+
+     result = permutation_importance(
+         clf,
+         X,
+         y,
+         n_repeats=n_repeats,
+         random_state=rng,
+         n_jobs=n_jobs,
+         max_samples=max_samples,
+     )
+
+     assert result.importances.shape == (X.shape[1], n_repeats)
+
+     # the feature correlated with y was added as the last column and should
+     # have the highest importance
+     assert np.all(result.importances_mean[-1] > result.importances_mean[:-1])
+
+
+ @pytest.mark.parametrize("n_jobs", [1, 2])
+ @pytest.mark.parametrize("max_samples", [0.5, 1.0])
+ def test_robustness_to_high_cardinality_noisy_feature(n_jobs, max_samples, seed=42):
+     # Permutation variable importance should not be affected by the high
+     # cardinality bias of traditional feature importances, especially when
+     # computed on a held-out test set:
+     rng = np.random.RandomState(seed)
+     n_repeats = 5
+     n_samples = 1000
+     n_classes = 5
+     n_informative_features = 2
+     n_noise_features = 1
+     n_features = n_informative_features + n_noise_features
+
+     # Generate a multiclass classification dataset and a set of informative
+     # binary features that can be used to predict some classes of y exactly
+     # while leaving some classes unexplained to make the problem harder.
+     classes = np.arange(n_classes)
+     y = rng.choice(classes, size=n_samples)
+     X = np.hstack([(y == c).reshape(-1, 1) for c in classes[:n_informative_features]])
+     X = X.astype(np.float32)
+
+     # Not all target classes are explained by the binary class indicator
+     # features:
+     assert n_informative_features < n_classes
+
+     # Add another noisy feature with high cardinality (numerical) values
+     # that can be used to overfit the training data.
+     X = np.concatenate([X, rng.randn(n_samples, n_noise_features)], axis=1)
+     assert X.shape == (n_samples, n_features)
+
+     # Split the dataset to be able to evaluate on a held-out test set. The
+     # test size should be large enough for importance measurements to be
+     # stable:
+     X_train, X_test, y_train, y_test = train_test_split(
+         X, y, test_size=0.5, random_state=rng
+     )
+     clf = RandomForestClassifier(n_estimators=5, random_state=rng)
+     clf.fit(X_train, y_train)
+
+     # Variable importances computed by impurity decrease on the tree node
+     # splits often use the noisy features in splits. This can give the
+     # misleading impression that high cardinality noisy variables are the
+     # most important:
+     tree_importances = clf.feature_importances_
+     informative_tree_importances = tree_importances[:n_informative_features]
+     noisy_tree_importances = tree_importances[n_informative_features:]
+     assert informative_tree_importances.max() < noisy_tree_importances.min()
+
+     # Let's check that permutation-based feature importances do not have this
+     # problem.
+     r = permutation_importance(
+         clf,
+         X_test,
+         y_test,
+         n_repeats=n_repeats,
+         random_state=rng,
+         n_jobs=n_jobs,
+         max_samples=max_samples,
+     )
+
+     assert r.importances.shape == (X.shape[1], n_repeats)
+
+     # Split the importances between informative and noisy features
+     informative_importances = r.importances_mean[:n_informative_features]
+     noisy_importances = r.importances_mean[n_informative_features:]
+
+     # Because we do not have a binary variable explaining each target class,
+     # the RF model will have to use the random variable to make some
+     # (overfitting) splits (as max_depth is not set). Therefore the noisy
+     # variables will be non-zero but with small values oscillating around
+     # zero:
+     assert max(np.abs(noisy_importances)) > 1e-7
+     assert noisy_importances.max() < 0.05
+
+     # The binary features correlated with y should have a higher importance
+     # than the high cardinality noisy features.
+     # The maximum test accuracy is 2 / 5 == 0.4, each informative feature
+     # contributing approximately a bit more than 0.2 of accuracy.
+     assert informative_importances.min() > 0.15
+
+
+ def test_permutation_importance_mixed_types():
+     rng = np.random.RandomState(42)
+     n_repeats = 4
+
+     # Last column is correlated with y
+     X = np.array([[1.0, 2.0, 3.0, np.nan], [2, 1, 2, 1]]).T
+     y = np.array([0, 1, 0, 1])
+
+     clf = make_pipeline(SimpleImputer(), LogisticRegression(solver="lbfgs"))
+     clf.fit(X, y)
+     result = permutation_importance(clf, X, y, n_repeats=n_repeats, random_state=rng)
+
+     assert result.importances.shape == (X.shape[1], n_repeats)
+
+     # the feature correlated with y is the last column and should
+     # have the highest importance
+     assert np.all(result.importances_mean[-1] > result.importances_mean[:-1])
+
+     # use another random state
+     rng = np.random.RandomState(0)
+     result2 = permutation_importance(clf, X, y, n_repeats=n_repeats, random_state=rng)
+     assert result2.importances.shape == (X.shape[1], n_repeats)
+
+     assert not np.allclose(result.importances, result2.importances)
+
+     # the feature correlated with y is the last column and should
+     # have the highest importance
+     assert np.all(result2.importances_mean[-1] > result2.importances_mean[:-1])
+
+
+ def test_permutation_importance_mixed_types_pandas():
+     pd = pytest.importorskip("pandas")
+     rng = np.random.RandomState(42)
+     n_repeats = 5
+
+     # Last column is correlated with y
+     X = pd.DataFrame({"col1": [1.0, 2.0, 3.0, np.nan], "col2": ["a", "b", "a", "b"]})
+     y = np.array([0, 1, 0, 1])
+
+     num_preprocess = make_pipeline(SimpleImputer(), StandardScaler())
+     preprocess = ColumnTransformer(
+         [("num", num_preprocess, ["col1"]), ("cat", OneHotEncoder(), ["col2"])]
+     )
+     clf = make_pipeline(preprocess, LogisticRegression(solver="lbfgs"))
+     clf.fit(X, y)
+
+     result = permutation_importance(clf, X, y, n_repeats=n_repeats, random_state=rng)
+
+     assert result.importances.shape == (X.shape[1], n_repeats)
+     # the feature correlated with y is the last column and should
+     # have the highest importance
+     assert np.all(result.importances_mean[-1] > result.importances_mean[:-1])
+
+
+ def test_permutation_importance_linear_regression():
+     X, y = make_regression(n_samples=500, n_features=10, random_state=0)
+
+     X = scale(X)
+     y = scale(y)
+
+     lr = LinearRegression().fit(X, y)
+
+     # this relationship can be computed in closed form
+     expected_importances = 2 * lr.coef_**2
+     results = permutation_importance(
+         lr, X, y, n_repeats=50, scoring="neg_mean_squared_error"
+     )
+     assert_allclose(
+         expected_importances, results.importances_mean, rtol=1e-1, atol=1e-6
+     )
+
+
+ @pytest.mark.parametrize("max_samples", [500, 1.0])
+ def test_permutation_importance_equivalence_sequential_parallel(max_samples):
+     # regression test to make sure that sequential and parallel calls will
+     # output the same results.
+     # Also tests that max_samples equal to the number of samples is
+     # equivalent to 1.0
+     X, y = make_regression(n_samples=500, n_features=10, random_state=0)
+     lr = LinearRegression().fit(X, y)
+
+     importance_sequential = permutation_importance(
+         lr, X, y, n_repeats=5, random_state=0, n_jobs=1, max_samples=max_samples
+     )
+
+     # First check that the problem is structured enough and that the model is
+     # complex enough to not yield trivial, constant importances:
+     imp_min = importance_sequential["importances"].min()
+     imp_max = importance_sequential["importances"].max()
+     assert imp_max - imp_min > 0.3
+
+     # Then actually check that parallelism does not impact the results,
+     # either with shared memory (threading) or with isolated memory via
+     # process-based parallelism using the default backend
+     # ('loky' or 'multiprocessing') depending on the joblib version:
+
+     # process-based parallelism (by default):
+     importance_processes = permutation_importance(
+         lr, X, y, n_repeats=5, random_state=0, n_jobs=2
+     )
+     assert_allclose(
+         importance_processes["importances"], importance_sequential["importances"]
+     )
+
+     # thread-based parallelism:
+     with parallel_backend("threading"):
+         importance_threading = permutation_importance(
+             lr, X, y, n_repeats=5, random_state=0, n_jobs=2
+         )
+     assert_allclose(
+         importance_threading["importances"], importance_sequential["importances"]
+     )
+
+
+ @pytest.mark.parametrize("n_jobs", [None, 1, 2])
+ @pytest.mark.parametrize("max_samples", [0.5, 1.0])
+ def test_permutation_importance_equivalence_array_dataframe(n_jobs, max_samples):
+     # This test checks that the column shuffling logic has the same behavior
+     # on both a dataframe and a plain numpy array.
+     pd = pytest.importorskip("pandas")
+
+     # regression test to make sure that sequential and parallel calls will
+     # output the same results.
+     X, y = make_regression(n_samples=100, n_features=5, random_state=0)
+     X_df = pd.DataFrame(X)
+
+     # Add a categorical feature that is statistically linked to y:
+     binner = KBinsDiscretizer(n_bins=3, encode="ordinal")
+     cat_column = binner.fit_transform(y.reshape(-1, 1))
+
+     # Concatenate the extra column to the numpy array: integers will be
+     # cast to float values
+     X = np.hstack([X, cat_column])
+     assert X.dtype.kind == "f"
+
+     # Insert extra column as a non-numpy-native dtype (while keeping backward
+     # compat for old pandas versions):
+     if hasattr(pd, "Categorical"):
+         cat_column = pd.Categorical(cat_column.ravel())
+     else:
+         cat_column = cat_column.ravel()
+     new_col_idx = len(X_df.columns)
+     X_df[new_col_idx] = cat_column
+     assert X_df[new_col_idx].dtype == cat_column.dtype
+
+     # Stitch an arbitrary index to the dataframe:
+     X_df.index = np.arange(len(X_df)).astype(str)
+
+     rf = RandomForestRegressor(n_estimators=5, max_depth=3, random_state=0)
+     rf.fit(X, y)
+
+     n_repeats = 3
+     importance_array = permutation_importance(
+         rf,
+         X,
+         y,
+         n_repeats=n_repeats,
+         random_state=0,
+         n_jobs=n_jobs,
+         max_samples=max_samples,
+     )
+
+     # First check that the problem is structured enough and that the model is
+     # complex enough to not yield trivial, constant importances:
+     imp_min = importance_array["importances"].min()
+     imp_max = importance_array["importances"].max()
+     assert imp_max - imp_min > 0.3
+
+     # Now check that the importances computed on the dataframe match the
+     # values of those computed on the array with the same data.
+     importance_dataframe = permutation_importance(
+         rf,
+         X_df,
+         y,
+         n_repeats=n_repeats,
+         random_state=0,
+         n_jobs=n_jobs,
+         max_samples=max_samples,
+     )
+     assert_allclose(
+         importance_array["importances"], importance_dataframe["importances"]
+     )
+
+
+ @pytest.mark.parametrize("input_type", ["array", "dataframe"])
+ def test_permutation_importance_large_memmaped_data(input_type):
+     # Smoke, non-regression test for:
+     # https://github.com/scikit-learn/scikit-learn/issues/15810
+     n_samples, n_features = int(5e4), 4
+     X, y = make_classification(
+         n_samples=n_samples, n_features=n_features, random_state=0
+     )
+     assert X.nbytes > 1e6  # trigger joblib memmapping
+
+     X = _convert_container(X, input_type)
+     clf = DummyClassifier(strategy="prior").fit(X, y)
+
+     # Actual smoke test: should not raise any error:
+     n_repeats = 5
+     r = permutation_importance(clf, X, y, n_repeats=n_repeats, n_jobs=2)
+
+     # Auxiliary check: DummyClassifier is feature independent:
+     # permuting features should not change the predictions
+     expected_importances = np.zeros((n_features, n_repeats))
+     assert_allclose(expected_importances, r.importances)
+
+
+ def test_permutation_importance_sample_weight():
+     # Creating data with 2 features and 1000 samples, where the target
+     # variable is a linear combination of the two features, such that
+     # in half of the samples the impact of feature 1 is twice the impact of
+     # feature 2, and vice versa on the other half of the samples.
+     rng = np.random.RandomState(1)
+     n_samples = 1000
+     n_features = 2
+     n_half_samples = n_samples // 2
+     x = rng.normal(0.0, 0.001, (n_samples, n_features))
+     y = np.zeros(n_samples)
+     y[:n_half_samples] = 2 * x[:n_half_samples, 0] + x[:n_half_samples, 1]
+     y[n_half_samples:] = x[n_half_samples:, 0] + 2 * x[n_half_samples:, 1]
+
+     # Fitting linear regression with perfect prediction
+     lr = LinearRegression(fit_intercept=False)
+     lr.fit(x, y)
+
+     # When all samples are weighted with the same weights, the ratio of
+     # the two feature importances should equal 1 in expectation (when using
+     # mean absolute error as the loss function).
+     pi = permutation_importance(
+         lr, x, y, random_state=1, scoring="neg_mean_absolute_error", n_repeats=200
+     )
+     x1_x2_imp_ratio_w_none = pi.importances_mean[0] / pi.importances_mean[1]
+     assert x1_x2_imp_ratio_w_none == pytest.approx(1, 0.01)
+
+     # When passing a vector of ones as the sample_weight, results should be
+     # the same as in the case that sample_weight=None.
+     w = np.ones(n_samples)
+     pi = permutation_importance(
+         lr,
+         x,
+         y,
+         random_state=1,
+         scoring="neg_mean_absolute_error",
+         n_repeats=200,
+         sample_weight=w,
+     )
+     x1_x2_imp_ratio_w_ones = pi.importances_mean[0] / pi.importances_mean[1]
+     assert x1_x2_imp_ratio_w_ones == pytest.approx(x1_x2_imp_ratio_w_none, 0.01)
+
+     # When the ratio between the weights of the first half of the samples and
+     # the second half of the samples approaches infinity, the ratio of
+     # the two feature importances should equal 2 in expectation (when using
+     # mean absolute error as the loss function).
+     w = np.hstack(
+         [np.repeat(10.0**10, n_half_samples), np.repeat(1.0, n_half_samples)]
+     )
+     lr.fit(x, y, w)
+     pi = permutation_importance(
+         lr,
+         x,
+         y,
+         random_state=1,
+         scoring="neg_mean_absolute_error",
+         n_repeats=200,
+         sample_weight=w,
+     )
+     x1_x2_imp_ratio_w = pi.importances_mean[0] / pi.importances_mean[1]
+     assert x1_x2_imp_ratio_w / x1_x2_imp_ratio_w_none == pytest.approx(2, 0.01)
+
+
+ def test_permutation_importance_no_weights_scoring_function():
+     # Creating a scorer function that does not take sample_weight
+     def my_scorer(estimator, X, y):
+         return 1
+
+     # Creating some data and an estimator for the permutation test
+     x = np.array([[1, 2], [3, 4]])
+     y = np.array([1, 2])
+     w = np.array([1, 1])
+     lr = LinearRegression()
+     lr.fit(x, y)
+
+     # test that permutation_importance does not raise an error when
+     # sample_weight is None
+     try:
+         permutation_importance(lr, x, y, random_state=1, scoring=my_scorer, n_repeats=1)
+     except TypeError:
+         pytest.fail(
+             "permutation_test raised an error when using a scorer "
+             "function that does not accept sample_weight even though "
+             "sample_weight was None"
+         )
+
+     # test that permutation_importance raises an exception when sample_weight
+     # is not None
+     with pytest.raises(TypeError):
+         permutation_importance(
+             lr, x, y, random_state=1, scoring=my_scorer, n_repeats=1, sample_weight=w
+         )
+
+
+ @pytest.mark.parametrize(
+     "list_single_scorer, multi_scorer",
+     [
+         (["r2", "neg_mean_squared_error"], ["r2", "neg_mean_squared_error"]),
+         (
+             ["r2", "neg_mean_squared_error"],
+             {
+                 "r2": get_scorer("r2"),
+                 "neg_mean_squared_error": get_scorer("neg_mean_squared_error"),
+             },
+         ),
+         (
+             ["r2", "neg_mean_squared_error"],
+             lambda estimator, X, y: {
+                 "r2": r2_score(y, estimator.predict(X)),
+                 "neg_mean_squared_error": -mean_squared_error(y, estimator.predict(X)),
+             },
+         ),
+     ],
+ )
+ def test_permutation_importance_multi_metric(list_single_scorer, multi_scorer):
+     # Test permutation importance when scoring contains multiple scorers
+
+     # Creating some data and an estimator for the permutation test
+     x, y = make_regression(n_samples=500, n_features=10, random_state=0)
+     lr = LinearRegression().fit(x, y)
+
+     multi_importance = permutation_importance(
+         lr, x, y, random_state=1, scoring=multi_scorer, n_repeats=2
+     )
+     assert set(multi_importance.keys()) == set(list_single_scorer)
+
+     for scorer in list_single_scorer:
+         multi_result = multi_importance[scorer]
+         single_result = permutation_importance(
+             lr, x, y, random_state=1, scoring=scorer, n_repeats=2
+         )
+
+         assert_allclose(multi_result.importances, single_result.importances)
+
+
+ def test_permutation_importance_max_samples_error():
+     """Check that a proper error message is raised when `max_samples` is not
+     set to a valid input value.
+     """
+     X = np.array([(1.0, 2.0, 3.0, 4.0)]).T
+     y = np.array([0, 1, 0, 1])
+
+     clf = LogisticRegression()
+     clf.fit(X, y)
+
+     err_msg = r"max_samples must be <= n_samples"
+
+     with pytest.raises(ValueError, match=err_msg):
+         permutation_importance(clf, X, y, max_samples=5)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (34.3 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_arpack.cpython-310.pyc ADDED
Binary file (1.34 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_array_api.cpython-310.pyc ADDED
Binary file (15.9 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_available_if.cpython-310.pyc ADDED
Binary file (3.18 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_bunch.cpython-310.pyc ADDED
Binary file (2.18 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_encode.cpython-310.pyc ADDED
Binary file (10.2 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_estimator_html_repr.cpython-310.pyc ADDED
Binary file (15 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_joblib.cpython-310.pyc ADDED
Binary file (674 Bytes)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_mask.cpython-310.pyc ADDED
Binary file (1.67 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_metadata_requests.cpython-310.pyc ADDED
Binary file (47.5 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_mocking.cpython-310.pyc ADDED
Binary file (13.5 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_param_validation.cpython-310.pyc ADDED
Binary file (28.5 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_plotting.cpython-310.pyc ADDED
Binary file (3.5 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_pprint.cpython-310.pyc ADDED
Binary file (11.5 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_response.cpython-310.pyc ADDED
Binary file (10 kB)
llmeval-env/lib/python3.10/site-packages/sklearn/utils/__pycache__/_set_output.cpython-310.pyc ADDED
Binary file (12.7 kB)