applied-ai-018 committed on
Commit ac4b7e0 · verified · Parent: 0e472e8

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. ckpts/llama-3b/global_step100/bf16_zero_pp_rank_167_mp_rank_02_optim_states.pt +3 -0
  2. ckpts/llama-3b/global_step100/bf16_zero_pp_rank_18_mp_rank_03_optim_states.pt +3 -0
  3. ckpts/llama-3b/global_step100/bf16_zero_pp_rank_30_mp_rank_03_optim_states.pt +3 -0
  4. venv/lib/python3.10/site-packages/pandas/__pycache__/__init__.cpython-310.pyc +0 -0
  5. venv/lib/python3.10/site-packages/pandas/__pycache__/_typing.cpython-310.pyc +0 -0
  6. venv/lib/python3.10/site-packages/pandas/__pycache__/_version.cpython-310.pyc +0 -0
  7. venv/lib/python3.10/site-packages/pandas/__pycache__/_version_meson.cpython-310.pyc +0 -0
  8. venv/lib/python3.10/site-packages/pandas/__pycache__/conftest.cpython-310.pyc +0 -0
  9. venv/lib/python3.10/site-packages/pandas/__pycache__/testing.cpython-310.pyc +0 -0
  10. venv/lib/python3.10/site-packages/pandas/_testing/__init__.py +638 -0
  11. venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/__init__.cpython-310.pyc +0 -0
  12. venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/_hypothesis.cpython-310.pyc +0 -0
  13. venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/_io.cpython-310.pyc +0 -0
  14. venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/_warnings.cpython-310.pyc +0 -0
  15. venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/asserters.cpython-310.pyc +0 -0
  16. venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/compat.cpython-310.pyc +0 -0
  17. venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/contexts.cpython-310.pyc +0 -0
  18. venv/lib/python3.10/site-packages/pandas/_testing/_hypothesis.py +93 -0
  19. venv/lib/python3.10/site-packages/pandas/_testing/_io.py +170 -0
  20. venv/lib/python3.10/site-packages/pandas/_testing/_warnings.py +232 -0
  21. venv/lib/python3.10/site-packages/pandas/_testing/asserters.py +1435 -0
  22. venv/lib/python3.10/site-packages/pandas/_testing/compat.py +29 -0
  23. venv/lib/python3.10/site-packages/pandas/_testing/contexts.py +257 -0
  24. venv/lib/python3.10/site-packages/pandas/errors/__init__.py +850 -0
  25. venv/lib/python3.10/site-packages/pandas/errors/__pycache__/__init__.cpython-310.pyc +0 -0
  26. venv/lib/python3.10/site-packages/pandas/plotting/__init__.py +98 -0
  27. venv/lib/python3.10/site-packages/pandas/plotting/__pycache__/_misc.cpython-310.pyc +0 -0
  28. venv/lib/python3.10/site-packages/pandas/plotting/_core.py +1946 -0
  29. venv/lib/python3.10/site-packages/pandas/plotting/_misc.py +688 -0
  30. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__init__.py +0 -0
  31. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/__init__.cpython-310.pyc +0 -0
  32. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_array.cpython-310.pyc +0 -0
  33. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_astype.cpython-310.pyc +0 -0
  34. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_chained_assignment_deprecation.cpython-310.pyc +0 -0
  35. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_clip.cpython-310.pyc +0 -0
  36. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_constructors.cpython-310.pyc +0 -0
  37. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_core_functionalities.cpython-310.pyc +0 -0
  38. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_functions.cpython-310.pyc +0 -0
  39. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_indexing.cpython-310.pyc +0 -0
  40. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_internals.cpython-310.pyc +0 -0
  41. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_interp_fillna.cpython-310.pyc +0 -0
  42. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_methods.cpython-310.pyc +0 -0
  43. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_replace.cpython-310.pyc +0 -0
  44. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_setitem.cpython-310.pyc +0 -0
  45. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_util.cpython-310.pyc +0 -0
  46. venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/util.cpython-310.pyc +0 -0
  47. venv/lib/python3.10/site-packages/pandas/tests/copy_view/index/__init__.py +0 -0
  48. venv/lib/python3.10/site-packages/pandas/tests/copy_view/index/__pycache__/__init__.cpython-310.pyc +0 -0
  49. venv/lib/python3.10/site-packages/pandas/tests/copy_view/index/__pycache__/test_datetimeindex.cpython-310.pyc +0 -0
  50. venv/lib/python3.10/site-packages/pandas/tests/copy_view/index/__pycache__/test_index.cpython-310.pyc +0 -0
ckpts/llama-3b/global_step100/bf16_zero_pp_rank_167_mp_rank_02_optim_states.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7fc987ed520695167ddebae29a47f4077bd142603f44de3aef4b6846c2c0f090
+size 41830404
ckpts/llama-3b/global_step100/bf16_zero_pp_rank_18_mp_rank_03_optim_states.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb4911bc865348ac9aab9a6d571cd277d8807896df02eacc888508e6820b868f
+size 41830394
ckpts/llama-3b/global_step100/bf16_zero_pp_rank_30_mp_rank_03_optim_states.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4dfbc7fcce9283c5da58af6007fe5fe07d9acd8554edd6f22e23bd8242f5013
+size 6291456
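The three checkpoint diffs above add Git LFS pointer files, not the optimizer tensors themselves: each pointer records only the spec version, the payload's SHA-256 object id, and its byte size, while the real `.pt` blob is stored out of band. As a hedged illustration (the `parse_lfs_pointer` helper below is not part of this commit), such a pointer can be parsed as:

```python
def parse_lfs_pointer(text: str) -> dict[str, str]:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", separated by a single space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Sample pointer, copied from the first diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:7fc987ed520695167ddebae29a47f4077bd142603f44de3aef4b6846c2c0f090
size 41830404
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # byte size of the out-of-band .pt payload
```

The `size` field is what the diff summaries report, which is why each of these "+3 -0" diffs is tiny even though the referenced checkpoints are tens of megabytes.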
venv/lib/python3.10/site-packages/pandas/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (6.96 kB)

venv/lib/python3.10/site-packages/pandas/__pycache__/_typing.cpython-310.pyc ADDED
Binary file (11.6 kB)

venv/lib/python3.10/site-packages/pandas/__pycache__/_version.cpython-310.pyc ADDED
Binary file (14.5 kB)

venv/lib/python3.10/site-packages/pandas/__pycache__/_version_meson.cpython-310.pyc ADDED
Binary file (266 Bytes)

venv/lib/python3.10/site-packages/pandas/__pycache__/conftest.cpython-310.pyc ADDED
Binary file (45.8 kB)

venv/lib/python3.10/site-packages/pandas/__pycache__/testing.cpython-310.pyc ADDED
Binary file (422 Bytes)
venv/lib/python3.10/site-packages/pandas/_testing/__init__.py ADDED
@@ -0,0 +1,638 @@
+from __future__ import annotations
+
+from decimal import Decimal
+import operator
+import os
+from sys import byteorder
+from typing import (
+    TYPE_CHECKING,
+    Callable,
+    ContextManager,
+    cast,
+)
+import warnings
+
+import numpy as np
+
+from pandas._config.localization import (
+    can_set_locale,
+    get_locales,
+    set_locale,
+)
+
+from pandas.compat import pa_version_under10p1
+
+from pandas.core.dtypes.common import is_string_dtype
+
+import pandas as pd
+from pandas import (
+    ArrowDtype,
+    DataFrame,
+    Index,
+    MultiIndex,
+    RangeIndex,
+    Series,
+)
+from pandas._testing._io import (
+    round_trip_localpath,
+    round_trip_pathlib,
+    round_trip_pickle,
+    write_to_compressed,
+)
+from pandas._testing._warnings import (
+    assert_produces_warning,
+    maybe_produces_warning,
+)
+from pandas._testing.asserters import (
+    assert_almost_equal,
+    assert_attr_equal,
+    assert_categorical_equal,
+    assert_class_equal,
+    assert_contains_all,
+    assert_copy,
+    assert_datetime_array_equal,
+    assert_dict_equal,
+    assert_equal,
+    assert_extension_array_equal,
+    assert_frame_equal,
+    assert_index_equal,
+    assert_indexing_slices_equivalent,
+    assert_interval_array_equal,
+    assert_is_sorted,
+    assert_is_valid_plot_return_object,
+    assert_metadata_equivalent,
+    assert_numpy_array_equal,
+    assert_period_array_equal,
+    assert_series_equal,
+    assert_sp_array_equal,
+    assert_timedelta_array_equal,
+    raise_assert_detail,
+)
+from pandas._testing.compat import (
+    get_dtype,
+    get_obj,
+)
+from pandas._testing.contexts import (
+    assert_cow_warning,
+    decompress_file,
+    ensure_clean,
+    raises_chained_assignment_error,
+    set_timezone,
+    use_numexpr,
+    with_csv_dialect,
+)
+from pandas.core.arrays import (
+    BaseMaskedArray,
+    ExtensionArray,
+    NumpyExtensionArray,
+)
+from pandas.core.arrays._mixins import NDArrayBackedExtensionArray
+from pandas.core.construction import extract_array
+
+if TYPE_CHECKING:
+    from pandas._typing import (
+        Dtype,
+        NpDtype,
+    )
+
+    from pandas.core.arrays import ArrowExtensionArray
+
+UNSIGNED_INT_NUMPY_DTYPES: list[NpDtype] = ["uint8", "uint16", "uint32", "uint64"]
+UNSIGNED_INT_EA_DTYPES: list[Dtype] = ["UInt8", "UInt16", "UInt32", "UInt64"]
+SIGNED_INT_NUMPY_DTYPES: list[NpDtype] = [int, "int8", "int16", "int32", "int64"]
+SIGNED_INT_EA_DTYPES: list[Dtype] = ["Int8", "Int16", "Int32", "Int64"]
+ALL_INT_NUMPY_DTYPES = UNSIGNED_INT_NUMPY_DTYPES + SIGNED_INT_NUMPY_DTYPES
+ALL_INT_EA_DTYPES = UNSIGNED_INT_EA_DTYPES + SIGNED_INT_EA_DTYPES
+ALL_INT_DTYPES: list[Dtype] = [*ALL_INT_NUMPY_DTYPES, *ALL_INT_EA_DTYPES]
+
+FLOAT_NUMPY_DTYPES: list[NpDtype] = [float, "float32", "float64"]
+FLOAT_EA_DTYPES: list[Dtype] = ["Float32", "Float64"]
+ALL_FLOAT_DTYPES: list[Dtype] = [*FLOAT_NUMPY_DTYPES, *FLOAT_EA_DTYPES]
+
+COMPLEX_DTYPES: list[Dtype] = [complex, "complex64", "complex128"]
+STRING_DTYPES: list[Dtype] = [str, "str", "U"]
+
+DATETIME64_DTYPES: list[Dtype] = ["datetime64[ns]", "M8[ns]"]
+TIMEDELTA64_DTYPES: list[Dtype] = ["timedelta64[ns]", "m8[ns]"]
+
+BOOL_DTYPES: list[Dtype] = [bool, "bool"]
+BYTES_DTYPES: list[Dtype] = [bytes, "bytes"]
+OBJECT_DTYPES: list[Dtype] = [object, "object"]
+
+ALL_REAL_NUMPY_DTYPES = FLOAT_NUMPY_DTYPES + ALL_INT_NUMPY_DTYPES
+ALL_REAL_EXTENSION_DTYPES = FLOAT_EA_DTYPES + ALL_INT_EA_DTYPES
+ALL_REAL_DTYPES: list[Dtype] = [*ALL_REAL_NUMPY_DTYPES, *ALL_REAL_EXTENSION_DTYPES]
+ALL_NUMERIC_DTYPES: list[Dtype] = [*ALL_REAL_DTYPES, *COMPLEX_DTYPES]
+
+ALL_NUMPY_DTYPES = (
+    ALL_REAL_NUMPY_DTYPES
+    + COMPLEX_DTYPES
+    + STRING_DTYPES
+    + DATETIME64_DTYPES
+    + TIMEDELTA64_DTYPES
+    + BOOL_DTYPES
+    + OBJECT_DTYPES
+    + BYTES_DTYPES
+)
+
+NARROW_NP_DTYPES = [
+    np.float16,
+    np.float32,
+    np.int8,
+    np.int16,
+    np.int32,
+    np.uint8,
+    np.uint16,
+    np.uint32,
+]
+
+PYTHON_DATA_TYPES = [
+    str,
+    int,
+    float,
+    complex,
+    list,
+    tuple,
+    range,
+    dict,
+    set,
+    frozenset,
+    bool,
+    bytes,
+    bytearray,
+    memoryview,
+]
+
+ENDIAN = {"little": "<", "big": ">"}[byteorder]
+
+NULL_OBJECTS = [None, np.nan, pd.NaT, float("nan"), pd.NA, Decimal("NaN")]
+NP_NAT_OBJECTS = [
+    cls("NaT", unit)
+    for cls in [np.datetime64, np.timedelta64]
+    for unit in [
+        "Y",
+        "M",
+        "W",
+        "D",
+        "h",
+        "m",
+        "s",
+        "ms",
+        "us",
+        "ns",
+        "ps",
+        "fs",
+        "as",
+    ]
+]
+
+if not pa_version_under10p1:
+    import pyarrow as pa
+
+    UNSIGNED_INT_PYARROW_DTYPES = [pa.uint8(), pa.uint16(), pa.uint32(), pa.uint64()]
+    SIGNED_INT_PYARROW_DTYPES = [pa.int8(), pa.int16(), pa.int32(), pa.int64()]
+    ALL_INT_PYARROW_DTYPES = UNSIGNED_INT_PYARROW_DTYPES + SIGNED_INT_PYARROW_DTYPES
+    ALL_INT_PYARROW_DTYPES_STR_REPR = [
+        str(ArrowDtype(typ)) for typ in ALL_INT_PYARROW_DTYPES
+    ]
+
+    # pa.float16 doesn't seem supported
+    # https://github.com/apache/arrow/blob/master/python/pyarrow/src/arrow/python/helpers.cc#L86
+    FLOAT_PYARROW_DTYPES = [pa.float32(), pa.float64()]
+    FLOAT_PYARROW_DTYPES_STR_REPR = [
+        str(ArrowDtype(typ)) for typ in FLOAT_PYARROW_DTYPES
+    ]
+    DECIMAL_PYARROW_DTYPES = [pa.decimal128(7, 3)]
+    STRING_PYARROW_DTYPES = [pa.string()]
+    BINARY_PYARROW_DTYPES = [pa.binary()]
+
+    TIME_PYARROW_DTYPES = [
+        pa.time32("s"),
+        pa.time32("ms"),
+        pa.time64("us"),
+        pa.time64("ns"),
+    ]
+    DATE_PYARROW_DTYPES = [pa.date32(), pa.date64()]
+    DATETIME_PYARROW_DTYPES = [
+        pa.timestamp(unit=unit, tz=tz)
+        for unit in ["s", "ms", "us", "ns"]
+        for tz in [None, "UTC", "US/Pacific", "US/Eastern"]
+    ]
+    TIMEDELTA_PYARROW_DTYPES = [pa.duration(unit) for unit in ["s", "ms", "us", "ns"]]
+
+    BOOL_PYARROW_DTYPES = [pa.bool_()]
+
+    # TODO: Add container like pyarrow types:
+    #  https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
+    ALL_PYARROW_DTYPES = (
+        ALL_INT_PYARROW_DTYPES
+        + FLOAT_PYARROW_DTYPES
+        + DECIMAL_PYARROW_DTYPES
+        + STRING_PYARROW_DTYPES
+        + BINARY_PYARROW_DTYPES
+        + TIME_PYARROW_DTYPES
+        + DATE_PYARROW_DTYPES
+        + DATETIME_PYARROW_DTYPES
+        + TIMEDELTA_PYARROW_DTYPES
+        + BOOL_PYARROW_DTYPES
+    )
+    ALL_REAL_PYARROW_DTYPES_STR_REPR = (
+        ALL_INT_PYARROW_DTYPES_STR_REPR + FLOAT_PYARROW_DTYPES_STR_REPR
+    )
+else:
+    FLOAT_PYARROW_DTYPES_STR_REPR = []
+    ALL_INT_PYARROW_DTYPES_STR_REPR = []
+    ALL_PYARROW_DTYPES = []
+    ALL_REAL_PYARROW_DTYPES_STR_REPR = []
+
+ALL_REAL_NULLABLE_DTYPES = (
+    FLOAT_NUMPY_DTYPES + ALL_REAL_EXTENSION_DTYPES + ALL_REAL_PYARROW_DTYPES_STR_REPR
+)
+
+arithmetic_dunder_methods = [
+    "__add__",
+    "__radd__",
+    "__sub__",
+    "__rsub__",
+    "__mul__",
+    "__rmul__",
+    "__floordiv__",
+    "__rfloordiv__",
+    "__truediv__",
+    "__rtruediv__",
+    "__pow__",
+    "__rpow__",
+    "__mod__",
+    "__rmod__",
+]
+
+comparison_dunder_methods = ["__eq__", "__ne__", "__le__", "__lt__", "__ge__", "__gt__"]
+
+
+# -----------------------------------------------------------------------------
+# Comparators
+
+
+def box_expected(expected, box_cls, transpose: bool = True):
+    """
+    Helper function to wrap the expected output of a test in a given box_class.
+
+    Parameters
+    ----------
+    expected : np.ndarray, Index, Series
+    box_cls : {Index, Series, DataFrame}
+
+    Returns
+    -------
+    subclass of box_cls
+    """
+    if box_cls is pd.array:
+        if isinstance(expected, RangeIndex):
+            # pd.array would return an IntegerArray
+            expected = NumpyExtensionArray(np.asarray(expected._values))
+        else:
+            expected = pd.array(expected, copy=False)
+    elif box_cls is Index:
+        with warnings.catch_warnings():
+            warnings.filterwarnings("ignore", "Dtype inference", category=FutureWarning)
+            expected = Index(expected)
+    elif box_cls is Series:
+        with warnings.catch_warnings():
+            warnings.filterwarnings("ignore", "Dtype inference", category=FutureWarning)
+            expected = Series(expected)
+    elif box_cls is DataFrame:
+        with warnings.catch_warnings():
+            warnings.filterwarnings("ignore", "Dtype inference", category=FutureWarning)
+            expected = Series(expected).to_frame()
+        if transpose:
+            # for vector operations, we need a DataFrame to be a single-row,
+            #  not a single-column, in order to operate against non-DataFrame
+            #  vectors of the same length. But convert to two rows to avoid
+            #  single-row special cases in datetime arithmetic
+            expected = expected.T
+            expected = pd.concat([expected] * 2, ignore_index=True)
+    elif box_cls is np.ndarray or box_cls is np.array:
+        expected = np.array(expected)
+    elif box_cls is to_array:
+        expected = to_array(expected)
+    else:
+        raise NotImplementedError(box_cls)
+    return expected
+
+
+def to_array(obj):
+    """
+    Similar to pd.array, but does not cast numpy dtypes to nullable dtypes.
+    """
+    # temporary implementation until we get pd.array in place
+    dtype = getattr(obj, "dtype", None)
+
+    if dtype is None:
+        return np.asarray(obj)
+
+    return extract_array(obj, extract_numpy=True)
+
+
+class SubclassedSeries(Series):
+    _metadata = ["testattr", "name"]
+
+    @property
+    def _constructor(self):
+        # For testing, those properties return a generic callable, and not
+        # the actual class. In this case that is equivalent, but it is to
+        # ensure we don't rely on the property returning a class
+        # See https://github.com/pandas-dev/pandas/pull/46018 and
+        # https://github.com/pandas-dev/pandas/issues/32638 and linked issues
+        return lambda *args, **kwargs: SubclassedSeries(*args, **kwargs)
+
+    @property
+    def _constructor_expanddim(self):
+        return lambda *args, **kwargs: SubclassedDataFrame(*args, **kwargs)
+
+
+class SubclassedDataFrame(DataFrame):
+    _metadata = ["testattr"]
+
+    @property
+    def _constructor(self):
+        return lambda *args, **kwargs: SubclassedDataFrame(*args, **kwargs)
+
+    @property
+    def _constructor_sliced(self):
+        return lambda *args, **kwargs: SubclassedSeries(*args, **kwargs)
+
+
+def convert_rows_list_to_csv_str(rows_list: list[str]) -> str:
+    """
+    Convert list of CSV rows to single CSV-formatted string for current OS.
+
+    This method is used for creating expected value of to_csv() method.
+
+    Parameters
+    ----------
+    rows_list : List[str]
+        Each element represents the row of csv.
+
+    Returns
+    -------
+    str
+        Expected output of to_csv() in current OS.
+    """
+    sep = os.linesep
+    return sep.join(rows_list) + sep
+
+
+def external_error_raised(expected_exception: type[Exception]) -> ContextManager:
+    """
+    Helper function to mark pytest.raises that have an external error message.
+
+    Parameters
+    ----------
+    expected_exception : Exception
+        Expected error to raise.
+
+    Returns
+    -------
+    Callable
+        Regular `pytest.raises` function with `match` equal to `None`.
+    """
+    import pytest
+
+    return pytest.raises(expected_exception, match=None)
+
+
+cython_table = pd.core.common._cython_table.items()
+
+
+def get_cython_table_params(ndframe, func_names_and_expected):
+    """
+    Combine frame, functions from com._cython_table
+    keys and expected result.
+
+    Parameters
+    ----------
+    ndframe : DataFrame or Series
+    func_names_and_expected : Sequence of two items
+        The first item is a name of a NDFrame method ('sum', 'prod') etc.
+        The second item is the expected return value.
+
+    Returns
+    -------
+    list
+        List of three items (DataFrame, function, expected result)
+    """
+    results = []
+    for func_name, expected in func_names_and_expected:
+        results.append((ndframe, func_name, expected))
+        results += [
+            (ndframe, func, expected)
+            for func, name in cython_table
+            if name == func_name
+        ]
+    return results
+
+
+def get_op_from_name(op_name: str) -> Callable:
+    """
+    The operator function for a given op name.
+
+    Parameters
+    ----------
+    op_name : str
+        The op name, in form of "add" or "__add__".
+
+    Returns
+    -------
+    function
+        A function performing the operation.
+    """
+    short_opname = op_name.strip("_")
+    try:
+        op = getattr(operator, short_opname)
+    except AttributeError:
+        # Assume it is the reverse operator
+        rop = getattr(operator, short_opname[1:])
+        op = lambda x, y: rop(y, x)
+
+    return op
+
+
+# -----------------------------------------------------------------------------
+# Indexing test helpers
+
+
+def getitem(x):
+    return x
+
+
+def setitem(x):
+    return x
+
+
+def loc(x):
+    return x.loc
+
+
+def iloc(x):
+    return x.iloc
+
+
+def at(x):
+    return x.at
+
+
+def iat(x):
+    return x.iat
+
+
+# -----------------------------------------------------------------------------
+
+_UNITS = ["s", "ms", "us", "ns"]
+
+
+def get_finest_unit(left: str, right: str):
+    """
+    Find the higher of two datetime64 units.
+    """
+    if _UNITS.index(left) >= _UNITS.index(right):
+        return left
+    return right
+
+
+def shares_memory(left, right) -> bool:
+    """
+    Pandas-compat for np.shares_memory.
+    """
+    if isinstance(left, np.ndarray) and isinstance(right, np.ndarray):
+        return np.shares_memory(left, right)
+    elif isinstance(left, np.ndarray):
+        # Call with reversed args to get to unpacking logic below.
+        return shares_memory(right, left)
+
+    if isinstance(left, RangeIndex):
+        return False
+    if isinstance(left, MultiIndex):
+        return shares_memory(left._codes, right)
+    if isinstance(left, (Index, Series)):
+        return shares_memory(left._values, right)
+
+    if isinstance(left, NDArrayBackedExtensionArray):
+        return shares_memory(left._ndarray, right)
+    if isinstance(left, pd.core.arrays.SparseArray):
+        return shares_memory(left.sp_values, right)
+    if isinstance(left, pd.core.arrays.IntervalArray):
+        return shares_memory(left._left, right) or shares_memory(left._right, right)
+
+    if (
+        isinstance(left, ExtensionArray)
+        and is_string_dtype(left.dtype)
+        and left.dtype.storage in ("pyarrow", "pyarrow_numpy")  # type: ignore[attr-defined]
+    ):
+        # https://github.com/pandas-dev/pandas/pull/43930#discussion_r736862669
+        left = cast("ArrowExtensionArray", left)
+        if (
+            isinstance(right, ExtensionArray)
+            and is_string_dtype(right.dtype)
+            and right.dtype.storage in ("pyarrow", "pyarrow_numpy")  # type: ignore[attr-defined]
+        ):
+            right = cast("ArrowExtensionArray", right)
+            left_pa_data = left._pa_array
+            right_pa_data = right._pa_array
+            left_buf1 = left_pa_data.chunk(0).buffers()[1]
+            right_buf1 = right_pa_data.chunk(0).buffers()[1]
+            return left_buf1 == right_buf1
+
+    if isinstance(left, BaseMaskedArray) and isinstance(right, BaseMaskedArray):
+        # By convention, we'll say these share memory if they share *either*
+        #  the _data or the _mask
+        return np.shares_memory(left._data, right._data) or np.shares_memory(
+            left._mask, right._mask
+        )
+
+    if isinstance(left, DataFrame) and len(left._mgr.arrays) == 1:
+        arr = left._mgr.arrays[0]
+        return shares_memory(arr, right)
+
+    raise NotImplementedError(type(left), type(right))
+
+
+__all__ = [
+    "ALL_INT_EA_DTYPES",
+    "ALL_INT_NUMPY_DTYPES",
+    "ALL_NUMPY_DTYPES",
+    "ALL_REAL_NUMPY_DTYPES",
+    "assert_almost_equal",
+    "assert_attr_equal",
+    "assert_categorical_equal",
+    "assert_class_equal",
+    "assert_contains_all",
+    "assert_copy",
+    "assert_datetime_array_equal",
+    "assert_dict_equal",
+    "assert_equal",
+    "assert_extension_array_equal",
+    "assert_frame_equal",
+    "assert_index_equal",
+    "assert_indexing_slices_equivalent",
+    "assert_interval_array_equal",
+    "assert_is_sorted",
+    "assert_is_valid_plot_return_object",
+    "assert_metadata_equivalent",
+    "assert_numpy_array_equal",
+    "assert_period_array_equal",
+    "assert_produces_warning",
+    "assert_series_equal",
+    "assert_sp_array_equal",
+    "assert_timedelta_array_equal",
+    "assert_cow_warning",
+    "at",
+    "BOOL_DTYPES",
+    "box_expected",
+    "BYTES_DTYPES",
+    "can_set_locale",
+    "COMPLEX_DTYPES",
+    "convert_rows_list_to_csv_str",
+    "DATETIME64_DTYPES",
+    "decompress_file",
+    "ENDIAN",
+    "ensure_clean",
+    "external_error_raised",
+    "FLOAT_EA_DTYPES",
+    "FLOAT_NUMPY_DTYPES",
+    "get_cython_table_params",
+    "get_dtype",
+    "getitem",
+    "get_locales",
+    "get_finest_unit",
+    "get_obj",
+    "get_op_from_name",
+    "iat",
+    "iloc",
+    "loc",
+    "maybe_produces_warning",
+    "NARROW_NP_DTYPES",
+    "NP_NAT_OBJECTS",
+    "NULL_OBJECTS",
+    "OBJECT_DTYPES",
+    "raise_assert_detail",
+    "raises_chained_assignment_error",
+    "round_trip_localpath",
+    "round_trip_pathlib",
+    "round_trip_pickle",
+    "setitem",
+    "set_locale",
+    "set_timezone",
+    "shares_memory",
+    "SIGNED_INT_EA_DTYPES",
+    "SIGNED_INT_NUMPY_DTYPES",
+    "STRING_DTYPES",
+    "SubclassedDataFrame",
+    "SubclassedSeries",
+    "TIMEDELTA64_DTYPES",
+    "to_array",
+    "UNSIGNED_INT_EA_DTYPES",
+    "UNSIGNED_INT_NUMPY_DTYPES",
+    "use_numexpr",
+    "with_csv_dialect",
+    "write_to_compressed",
+]
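The `_testing/__init__.py` added above gathers the dtype constants and assertion helpers into one namespace. As a rough usage sketch (not part of this commit; assumes a pandas installation where `pandas._testing` is importable, as in the vendored venv here):

```python
import pandas as pd
import pandas._testing as tm

left = pd.DataFrame({"a": [1, 2, 3]})
right = left.copy()

# Returns None on equality, raises AssertionError with a detailed
# diff message on any mismatch of values, dtypes, or index.
tm.assert_frame_equal(left, right)

# The dtype lists and ENDIAN are plain module-level constants,
# typically used to parametrize tests across dtypes.
print(tm.ENDIAN)  # "<" on little-endian platforms, ">" on big-endian
```

Note that `pandas._testing` is a private module; the stable public subset of these helpers (e.g. `assert_frame_equal`) is also exposed as `pandas.testing`.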
venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (14.2 kB)

venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/_hypothesis.cpython-310.pyc ADDED
Binary file (1.77 kB)

venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/_io.cpython-310.pyc ADDED
Binary file (4.39 kB)

venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/_warnings.cpython-310.pyc ADDED
Binary file (6.51 kB)

venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/asserters.cpython-310.pyc ADDED
Binary file (32.9 kB)

venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/compat.cpython-310.pyc ADDED
Binary file (953 Bytes)

venv/lib/python3.10/site-packages/pandas/_testing/__pycache__/contexts.cpython-310.pyc ADDED
Binary file (6.25 kB)
venv/lib/python3.10/site-packages/pandas/_testing/_hypothesis.py ADDED
@@ -0,0 +1,93 @@
+"""
+Hypothesis data generator helpers.
+"""
+from datetime import datetime
+
+from hypothesis import strategies as st
+from hypothesis.extra.dateutil import timezones as dateutil_timezones
+from hypothesis.extra.pytz import timezones as pytz_timezones
+
+from pandas.compat import is_platform_windows
+
+import pandas as pd
+
+from pandas.tseries.offsets import (
+    BMonthBegin,
+    BMonthEnd,
+    BQuarterBegin,
+    BQuarterEnd,
+    BYearBegin,
+    BYearEnd,
+    MonthBegin,
+    MonthEnd,
+    QuarterBegin,
+    QuarterEnd,
+    YearBegin,
+    YearEnd,
+)
+
+OPTIONAL_INTS = st.lists(st.one_of(st.integers(), st.none()), max_size=10, min_size=3)
+
+OPTIONAL_FLOATS = st.lists(st.one_of(st.floats(), st.none()), max_size=10, min_size=3)
+
+OPTIONAL_TEXT = st.lists(st.one_of(st.none(), st.text()), max_size=10, min_size=3)
+
+OPTIONAL_DICTS = st.lists(
+    st.one_of(st.none(), st.dictionaries(st.text(), st.integers())),
+    max_size=10,
+    min_size=3,
+)
+
+OPTIONAL_LISTS = st.lists(
+    st.one_of(st.none(), st.lists(st.text(), max_size=10, min_size=3)),
+    max_size=10,
+    min_size=3,
+)
+
+OPTIONAL_ONE_OF_ALL = st.one_of(
+    OPTIONAL_DICTS, OPTIONAL_FLOATS, OPTIONAL_INTS, OPTIONAL_LISTS, OPTIONAL_TEXT
+)
+
+if is_platform_windows():
+    DATETIME_NO_TZ = st.datetimes(min_value=datetime(1900, 1, 1))
+else:
+    DATETIME_NO_TZ = st.datetimes()
+
+DATETIME_JAN_1_1900_OPTIONAL_TZ = st.datetimes(
+    min_value=pd.Timestamp(
+        1900, 1, 1
+    ).to_pydatetime(),  # pyright: ignore[reportGeneralTypeIssues]
+    max_value=pd.Timestamp(
+        1900, 1, 1
+    ).to_pydatetime(),  # pyright: ignore[reportGeneralTypeIssues]
+    timezones=st.one_of(st.none(), dateutil_timezones(), pytz_timezones()),
+)
+
+DATETIME_IN_PD_TIMESTAMP_RANGE_NO_TZ = st.datetimes(
+    min_value=pd.Timestamp.min.to_pydatetime(warn=False),
+    max_value=pd.Timestamp.max.to_pydatetime(warn=False),
+)
+
+INT_NEG_999_TO_POS_999 = st.integers(-999, 999)
+
+# The strategy for each type is registered in conftest.py, as they don't carry
+# enough runtime information (e.g. type hints) to infer how to build them.
+YQM_OFFSET = st.one_of(
+    *map(
+        st.from_type,
+        [
+            MonthBegin,
+            MonthEnd,
+            BMonthBegin,
+            BMonthEnd,
+            QuarterBegin,
+            QuarterEnd,
+            BQuarterBegin,
+            BQuarterEnd,
+            YearBegin,
+            YearEnd,
+            BYearBegin,
+            BYearEnd,
+        ],
+    )
+)
venv/lib/python3.10/site-packages/pandas/_testing/_io.py ADDED
@@ -0,0 +1,170 @@
+ from __future__ import annotations
+
+ import gzip
+ import io
+ import pathlib
+ import tarfile
+ from typing import (
+     TYPE_CHECKING,
+     Any,
+     Callable,
+ )
+ import uuid
+ import zipfile
+
+ from pandas.compat import (
+     get_bz2_file,
+     get_lzma_file,
+ )
+ from pandas.compat._optional import import_optional_dependency
+
+ import pandas as pd
+ from pandas._testing.contexts import ensure_clean
+
+ if TYPE_CHECKING:
+     from pandas._typing import (
+         FilePath,
+         ReadPickleBuffer,
+     )
+
+     from pandas import (
+         DataFrame,
+         Series,
+     )
+
+ # ------------------------------------------------------------------
+ # File-IO
+
+
+ def round_trip_pickle(
+     obj: Any, path: FilePath | ReadPickleBuffer | None = None
+ ) -> DataFrame | Series:
+     """
+     Pickle an object and then read it again.
+
+     Parameters
+     ----------
+     obj : any object
+         The object to pickle and then re-read.
+     path : str, path object or file-like object, default None
+         The path where the pickled object is written and then read.
+
+     Returns
+     -------
+     pandas object
+         The original object that was pickled and then re-read.
+     """
+     _path = path
+     if _path is None:
+         _path = f"__{uuid.uuid4()}__.pickle"
+     with ensure_clean(_path) as temp_path:
+         pd.to_pickle(obj, temp_path)
+         return pd.read_pickle(temp_path)
+
+
+ def round_trip_pathlib(writer, reader, path: str | None = None):
+     """
+     Write an object to file specified by a pathlib.Path and read it back.
+
+     Parameters
+     ----------
+     writer : callable bound to pandas object
+         IO writing function (e.g. DataFrame.to_csv)
+     reader : callable
+         IO reading function (e.g. pd.read_csv)
+     path : str, default None
+         The path where the object is written and then read.
+
+     Returns
+     -------
+     pandas object
+         The original object that was serialized and then re-read.
+     """
+     Path = pathlib.Path
+     if path is None:
+         path = "___pathlib___"
+     with ensure_clean(path) as path:
+         writer(Path(path))  # type: ignore[arg-type]
+         obj = reader(Path(path))  # type: ignore[arg-type]
+     return obj
+
+
+ def round_trip_localpath(writer, reader, path: str | None = None):
+     """
+     Write an object to file specified by a py.path LocalPath and read it back.
+
+     Parameters
+     ----------
+     writer : callable bound to pandas object
+         IO writing function (e.g. DataFrame.to_csv)
+     reader : callable
+         IO reading function (e.g. pd.read_csv)
+     path : str, default None
+         The path where the object is written and then read.
+
+     Returns
+     -------
+     pandas object
+         The original object that was serialized and then re-read.
+     """
+     import pytest
+
+     LocalPath = pytest.importorskip("py.path").local
+     if path is None:
+         path = "___localpath___"
+     with ensure_clean(path) as path:
+         writer(LocalPath(path))
+         obj = reader(LocalPath(path))
+     return obj
+
+
+ def write_to_compressed(compression, path, data, dest: str = "test") -> None:
+     """
+     Write data to a compressed file.
+
+     Parameters
+     ----------
+     compression : {'gzip', 'bz2', 'zip', 'xz', 'zstd'}
+         The compression type to use.
+     path : str
+         The file path to write the data.
+     data : str
+         The data to write.
+     dest : str, default "test"
+         The destination file (for ZIP only).
+
+     Raises
+     ------
+     ValueError : An invalid compression value was passed in.
+     """
+     args: tuple[Any, ...] = (data,)
+     mode = "wb"
+     method = "write"
+     compress_method: Callable
+
+     if compression == "zip":
+         compress_method = zipfile.ZipFile
+         mode = "w"
+         args = (dest, data)
+         method = "writestr"
+     elif compression == "tar":
+         compress_method = tarfile.TarFile
+         mode = "w"
+         file = tarfile.TarInfo(name=dest)
+         bytes = io.BytesIO(data)
+         file.size = len(data)
+         args = (file, bytes)
+         method = "addfile"
+     elif compression == "gzip":
+         compress_method = gzip.GzipFile
+     elif compression == "bz2":
+         compress_method = get_bz2_file()
+     elif compression == "zstd":
+         compress_method = import_optional_dependency("zstandard").open
+     elif compression == "xz":
+         compress_method = get_lzma_file()
+     else:
+         raise ValueError(f"Unrecognized compression type: {compression}")
+
+     with compress_method(path, mode=mode) as f:
+         getattr(f, method)(*args)
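`write_to_compressed` dispatches on the compression name, then makes one uniform `getattr(f, method)(*args)` call. A pared-down sketch of that idea for the gzip branch only (an illustration, not the pandas function):

```python
import gzip
import os
import tempfile

# Sketch of the dispatch pattern for gzip only (assumption: gzip branch
# semantics as in the file above): open the compressed handle, then call
# the chosen write method by name, mirroring `getattr(f, method)(*args)`.
def write_gzip(path: str, data: bytes) -> None:
    with gzip.GzipFile(path, mode="wb") as f:
        getattr(f, "write")(data)  # same shape as getattr(f, method)(*args)

path = os.path.join(tempfile.mkdtemp(), "test.gz")
write_gzip(path, b"pandas test data")
with gzip.open(path, "rb") as f:
    print(f.read())  # b'pandas test data'
```

The name-based dispatch is what lets one closing `with` block serve formats whose writer methods differ (`write`, `writestr`, `addfile`).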
venv/lib/python3.10/site-packages/pandas/_testing/_warnings.py ADDED
@@ -0,0 +1,232 @@
+ from __future__ import annotations
+
+ from contextlib import (
+     contextmanager,
+     nullcontext,
+ )
+ import inspect
+ import re
+ import sys
+ from typing import (
+     TYPE_CHECKING,
+     Literal,
+     cast,
+ )
+ import warnings
+
+ from pandas.compat import PY311
+
+ if TYPE_CHECKING:
+     from collections.abc import (
+         Generator,
+         Sequence,
+     )
+
+
+ @contextmanager
+ def assert_produces_warning(
+     expected_warning: type[Warning] | bool | tuple[type[Warning], ...] | None = Warning,
+     filter_level: Literal[
+         "error", "ignore", "always", "default", "module", "once"
+     ] = "always",
+     check_stacklevel: bool = True,
+     raise_on_extra_warnings: bool = True,
+     match: str | None = None,
+ ) -> Generator[list[warnings.WarningMessage], None, None]:
+     """
+     Context manager for running code expected to either raise a specific warning,
+     multiple specific warnings, or not raise any warnings. Verifies that the code
+     raises the expected warning(s), and that it does not raise any other unexpected
+     warnings. It is basically a wrapper around ``warnings.catch_warnings``.
+
+     Parameters
+     ----------
+     expected_warning : {Warning, False, tuple[Warning, ...], None}, default Warning
+         The type of Exception raised. ``exceptions.Warning`` is the base
+         class for all warnings. To raise multiple types of exceptions,
+         pass them as a tuple. To check that no warning is returned,
+         specify ``False`` or ``None``.
+     filter_level : str or None, default "always"
+         Specifies whether warnings are ignored, displayed, or turned
+         into errors.
+         Valid values are:
+
+         * "error" - turns matching warnings into exceptions
+         * "ignore" - discard the warning
+         * "always" - always emit a warning
+         * "default" - print the warning the first time it is generated
+           from each location
+         * "module" - print the warning the first time it is generated
+           from each module
+         * "once" - print the warning the first time it is generated
+
+     check_stacklevel : bool, default True
+         If True, displays the line that called the function containing
+         the warning to show where the function is called. Otherwise, the
+         line that implements the function is displayed.
+     raise_on_extra_warnings : bool, default True
+         Whether extra warnings not of the type `expected_warning` should
+         cause the test to fail.
+     match : str, optional
+         Match warning message.
+
+     Examples
+     --------
+     >>> import warnings
+     >>> with assert_produces_warning():
+     ...     warnings.warn(UserWarning())
+     ...
+     >>> with assert_produces_warning(False):
+     ...     warnings.warn(RuntimeWarning())
+     ...
+     Traceback (most recent call last):
+         ...
+     AssertionError: Caused unexpected warning(s): ['RuntimeWarning'].
+     >>> with assert_produces_warning(UserWarning):
+     ...     warnings.warn(RuntimeWarning())
+     Traceback (most recent call last):
+         ...
+     AssertionError: Did not see expected warning of class 'UserWarning'.
+
+     .. warning:: This is *not* thread-safe.
+     """
+     __tracebackhide__ = True
+
+     with warnings.catch_warnings(record=True) as w:
+         warnings.simplefilter(filter_level)
+         try:
+             yield w
+         finally:
+             if expected_warning:
+                 expected_warning = cast(type[Warning], expected_warning)
+                 _assert_caught_expected_warning(
+                     caught_warnings=w,
+                     expected_warning=expected_warning,
+                     match=match,
+                     check_stacklevel=check_stacklevel,
+                 )
+             if raise_on_extra_warnings:
+                 _assert_caught_no_extra_warnings(
+                     caught_warnings=w,
+                     expected_warning=expected_warning,
+                 )
+
+
+ def maybe_produces_warning(warning: type[Warning], condition: bool, **kwargs):
+     """
+     Return a context manager that possibly checks a warning based on the condition.
+     """
+     if condition:
+         return assert_produces_warning(warning, **kwargs)
+     else:
+         return nullcontext()
+
+
+ def _assert_caught_expected_warning(
+     *,
+     caught_warnings: Sequence[warnings.WarningMessage],
+     expected_warning: type[Warning],
+     match: str | None,
+     check_stacklevel: bool,
+ ) -> None:
+     """Assert that there was the expected warning among the caught warnings."""
+     saw_warning = False
+     matched_message = False
+     unmatched_messages = []
+
+     for actual_warning in caught_warnings:
+         if issubclass(actual_warning.category, expected_warning):
+             saw_warning = True
+
+             if check_stacklevel:
+                 _assert_raised_with_correct_stacklevel(actual_warning)
+
+             if match is not None:
+                 if re.search(match, str(actual_warning.message)):
+                     matched_message = True
+                 else:
+                     unmatched_messages.append(actual_warning.message)
+
+     if not saw_warning:
+         raise AssertionError(
+             f"Did not see expected warning of class "
+             f"{repr(expected_warning.__name__)}"
+         )
+
+     if match and not matched_message:
+         raise AssertionError(
+             f"Did not see warning {repr(expected_warning.__name__)} "
+             f"matching '{match}'. The emitted warning messages are "
+             f"{unmatched_messages}"
+         )
+
+
+ def _assert_caught_no_extra_warnings(
+     *,
+     caught_warnings: Sequence[warnings.WarningMessage],
+     expected_warning: type[Warning] | bool | tuple[type[Warning], ...] | None,
+ ) -> None:
+     """Assert that no extra warnings apart from the expected ones are caught."""
+     extra_warnings = []
+
+     for actual_warning in caught_warnings:
+         if _is_unexpected_warning(actual_warning, expected_warning):
+             # GH#38630 pytest.filterwarnings does not suppress these.
+             if actual_warning.category == ResourceWarning:
+                 # GH 44732: Don't make the CI flaky by filtering SSL-related
+                 # ResourceWarning from dependencies
+                 if "unclosed <ssl.SSLSocket" in str(actual_warning.message):
+                     continue
+                 # GH 44844: Matplotlib leaves font files open during the entire process
+                 # upon import. Don't make CI flaky if ResourceWarning raised
+                 # due to these open files.
+                 if any("matplotlib" in mod for mod in sys.modules):
+                     continue
+             if PY311 and actual_warning.category == EncodingWarning:
+                 # EncodingWarnings are checked in the CI
+                 # pyproject.toml errors on EncodingWarnings in pandas
+                 # Ignore EncodingWarnings from other libraries
+                 continue
+             extra_warnings.append(
+                 (
+                     actual_warning.category.__name__,
+                     actual_warning.message,
+                     actual_warning.filename,
+                     actual_warning.lineno,
+                 )
+             )
+
+     if extra_warnings:
+         raise AssertionError(f"Caused unexpected warning(s): {repr(extra_warnings)}")
+
+
+ def _is_unexpected_warning(
+     actual_warning: warnings.WarningMessage,
+     expected_warning: type[Warning] | bool | tuple[type[Warning], ...] | None,
+ ) -> bool:
+     """Check if the actual warning issued is unexpected."""
+     if actual_warning and not expected_warning:
+         return True
+     expected_warning = cast(type[Warning], expected_warning)
+     return bool(not issubclass(actual_warning.category, expected_warning))
+
+
+ def _assert_raised_with_correct_stacklevel(
+     actual_warning: warnings.WarningMessage,
+ ) -> None:
+     # https://stackoverflow.com/questions/17407119/python-inspect-stack-is-slow
+     frame = inspect.currentframe()
+     for _ in range(4):
+         frame = frame.f_back  # type: ignore[union-attr]
+     try:
+         caller_filename = inspect.getfile(frame)  # type: ignore[arg-type]
+     finally:
+         # See note in
+         # https://docs.python.org/3/library/inspect.html#inspect.Traceback
+         del frame
+     msg = (
+         "Warning not set with correct stacklevel. "
+         f"File where warning is raised: {actual_warning.filename} != "
+         f"{caller_filename}. Warning message: {actual_warning.message}"
+     )
+     assert actual_warning.filename == caller_filename, msg
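The heart of `assert_produces_warning` is recording everything via `warnings.catch_warnings(record=True)` and checking the list on exit. A minimal self-contained sketch of that core (not the pandas implementation, which additionally handles filter levels, stacklevel, `match`, and extra-warning checks):

```python
import warnings
from contextlib import contextmanager

# Minimal sketch of the recording-and-checking pattern: capture all warnings
# while the block runs, then assert the expected category appeared.
@contextmanager
def expect_warning(category):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        yield caught
    if not any(issubclass(w.category, category) for w in caught):
        raise AssertionError(
            f"Did not see expected warning of class {category.__name__!r}"
        )

with expect_warning(UserWarning) as caught:
    warnings.warn("deprecated soon", UserWarning)
print(len(caught))  # 1
```

Because the check runs after the `yield` resumes, the assertion fires as the `with` block exits, which is why the real helper is careful to do its verification inside a `finally`.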
venv/lib/python3.10/site-packages/pandas/_testing/asserters.py ADDED
@@ -0,0 +1,1435 @@
+ from __future__ import annotations
+
+ import operator
+ from typing import (
+     TYPE_CHECKING,
+     Literal,
+     NoReturn,
+     cast,
+ )
+
+ import numpy as np
+
+ from pandas._libs import lib
+ from pandas._libs.missing import is_matching_na
+ from pandas._libs.sparse import SparseIndex
+ import pandas._libs.testing as _testing
+ from pandas._libs.tslibs.np_datetime import compare_mismatched_resolutions
+
+ from pandas.core.dtypes.common import (
+     is_bool,
+     is_float_dtype,
+     is_integer_dtype,
+     is_number,
+     is_numeric_dtype,
+     needs_i8_conversion,
+ )
+ from pandas.core.dtypes.dtypes import (
+     CategoricalDtype,
+     DatetimeTZDtype,
+     ExtensionDtype,
+     NumpyEADtype,
+ )
+ from pandas.core.dtypes.missing import array_equivalent
+
+ import pandas as pd
+ from pandas import (
+     Categorical,
+     DataFrame,
+     DatetimeIndex,
+     Index,
+     IntervalDtype,
+     IntervalIndex,
+     MultiIndex,
+     PeriodIndex,
+     RangeIndex,
+     Series,
+     TimedeltaIndex,
+ )
+ from pandas.core.arrays import (
+     DatetimeArray,
+     ExtensionArray,
+     IntervalArray,
+     PeriodArray,
+     TimedeltaArray,
+ )
+ from pandas.core.arrays.datetimelike import DatetimeLikeArrayMixin
+ from pandas.core.arrays.string_ import StringDtype
+ from pandas.core.indexes.api import safe_sort_index
+
+ from pandas.io.formats.printing import pprint_thing
+
+ if TYPE_CHECKING:
+     from pandas._typing import DtypeObj
+
+
+ def assert_almost_equal(
+     left,
+     right,
+     check_dtype: bool | Literal["equiv"] = "equiv",
+     rtol: float = 1.0e-5,
+     atol: float = 1.0e-8,
+     **kwargs,
+ ) -> None:
+     """
+     Check that the left and right objects are approximately equal.
+
+     By approximately equal, we refer to objects that are numbers or that
+     contain numbers which may be equivalent to specific levels of precision.
+
+     Parameters
+     ----------
+     left : object
+     right : object
+     check_dtype : bool or {'equiv'}, default 'equiv'
+         Check dtype if both a and b are the same type. If 'equiv' is passed in,
+         then `RangeIndex` and `Index` with int64 dtype are also considered
+         equivalent when doing type checking.
+     rtol : float, default 1e-5
+         Relative tolerance.
+     atol : float, default 1e-8
+         Absolute tolerance.
+     """
+     if isinstance(left, Index):
+         assert_index_equal(
+             left,
+             right,
+             check_exact=False,
+             exact=check_dtype,
+             rtol=rtol,
+             atol=atol,
+             **kwargs,
+         )
+
+     elif isinstance(left, Series):
+         assert_series_equal(
+             left,
+             right,
+             check_exact=False,
+             check_dtype=check_dtype,
+             rtol=rtol,
+             atol=atol,
+             **kwargs,
+         )
+
+     elif isinstance(left, DataFrame):
+         assert_frame_equal(
+             left,
+             right,
+             check_exact=False,
+             check_dtype=check_dtype,
+             rtol=rtol,
+             atol=atol,
+             **kwargs,
+         )
+
+     else:
+         # Other sequences.
+         if check_dtype:
+             if is_number(left) and is_number(right):
+                 # Do not compare numeric classes, like np.float64 and float.
+                 pass
+             elif is_bool(left) and is_bool(right):
+                 # Do not compare bool classes, like np.bool_ and bool.
+                 pass
+             else:
+                 if isinstance(left, np.ndarray) or isinstance(right, np.ndarray):
+                     obj = "numpy array"
+                 else:
+                     obj = "Input"
+                 assert_class_equal(left, right, obj=obj)
+
+         # if we have "equiv", this becomes True
+         _testing.assert_almost_equal(
+             left, right, check_dtype=bool(check_dtype), rtol=rtol, atol=atol, **kwargs
+         )
+
+
+ def _check_isinstance(left, right, cls) -> None:
+     """
+     Helper method for our assert_* methods that ensures that
+     the two objects being compared have the right type before
+     proceeding with the comparison.
+
+     Parameters
+     ----------
+     left : The first object being compared.
+     right : The second object being compared.
+     cls : The class type to check against.
+
+     Raises
+     ------
+     AssertionError : Either `left` or `right` is not an instance of `cls`.
+     """
+     cls_name = cls.__name__
+
+     if not isinstance(left, cls):
+         raise AssertionError(
+             f"{cls_name} Expected type {cls}, found {type(left)} instead"
+         )
+     if not isinstance(right, cls):
+         raise AssertionError(
+             f"{cls_name} Expected type {cls}, found {type(right)} instead"
+         )
+
+
+ def assert_dict_equal(left, right, compare_keys: bool = True) -> None:
+     _check_isinstance(left, right, dict)
+     _testing.assert_dict_equal(left, right, compare_keys=compare_keys)
+
+
+ def assert_index_equal(
+     left: Index,
+     right: Index,
+     exact: bool | str = "equiv",
+     check_names: bool = True,
+     check_exact: bool = True,
+     check_categorical: bool = True,
+     check_order: bool = True,
+     rtol: float = 1.0e-5,
+     atol: float = 1.0e-8,
+     obj: str = "Index",
+ ) -> None:
+     """
+     Check that left and right Index are equal.
+
+     Parameters
+     ----------
+     left : Index
+     right : Index
+     exact : bool or {'equiv'}, default 'equiv'
+         Whether to check the Index class, dtype and inferred_type
+         are identical. If 'equiv', then RangeIndex can be substituted for
+         Index with an int64 dtype as well.
+     check_names : bool, default True
+         Whether to check the names attribute.
+     check_exact : bool, default True
+         Whether to compare numbers exactly.
+     check_categorical : bool, default True
+         Whether to compare internal Categorical exactly.
+     check_order : bool, default True
+         Whether to compare the order of index entries as well as their values.
+         If True, both indexes must contain the same elements, in the same order.
+         If False, both indexes must contain the same elements, but in any order.
+     rtol : float, default 1e-5
+         Relative tolerance. Only used when check_exact is False.
+     atol : float, default 1e-8
+         Absolute tolerance. Only used when check_exact is False.
+     obj : str, default 'Index'
+         Specify object name being compared, internally used to show appropriate
+         assertion message.
+
+     Examples
+     --------
+     >>> from pandas import testing as tm
+     >>> a = pd.Index([1, 2, 3])
+     >>> b = pd.Index([1, 2, 3])
+     >>> tm.assert_index_equal(a, b)
+     """
+     __tracebackhide__ = True
+
+     def _check_types(left, right, obj: str = "Index") -> None:
+         if not exact:
+             return
+
+         assert_class_equal(left, right, exact=exact, obj=obj)
+         assert_attr_equal("inferred_type", left, right, obj=obj)
+
+         # Skip exact dtype checking when `check_categorical` is False
+         if isinstance(left.dtype, CategoricalDtype) and isinstance(
+             right.dtype, CategoricalDtype
+         ):
+             if check_categorical:
+                 assert_attr_equal("dtype", left, right, obj=obj)
+                 assert_index_equal(left.categories, right.categories, exact=exact)
+             return
+
+         assert_attr_equal("dtype", left, right, obj=obj)
+
+     # instance validation
+     _check_isinstance(left, right, Index)
+
+     # class / dtype comparison
+     _check_types(left, right, obj=obj)
+
+     # level comparison
+     if left.nlevels != right.nlevels:
+         msg1 = f"{obj} levels are different"
+         msg2 = f"{left.nlevels}, {left}"
+         msg3 = f"{right.nlevels}, {right}"
+         raise_assert_detail(obj, msg1, msg2, msg3)
+
+     # length comparison
+     if len(left) != len(right):
+         msg1 = f"{obj} lengths are different"
+         msg2 = f"{len(left)}, {left}"
+         msg3 = f"{len(right)}, {right}"
+         raise_assert_detail(obj, msg1, msg2, msg3)
+
+     # If order doesn't matter then sort the index entries
+     if not check_order:
+         left = safe_sort_index(left)
+         right = safe_sort_index(right)
+
+     # MultiIndex special comparison for friendlier error messages
+     if isinstance(left, MultiIndex):
+         right = cast(MultiIndex, right)
+
+         for level in range(left.nlevels):
+             lobj = f"MultiIndex level [{level}]"
+             try:
+                 # try comparison on levels/codes to avoid densifying MultiIndex
+                 assert_index_equal(
+                     left.levels[level],
+                     right.levels[level],
+                     exact=exact,
+                     check_names=check_names,
+                     check_exact=check_exact,
+                     check_categorical=check_categorical,
+                     rtol=rtol,
+                     atol=atol,
+                     obj=lobj,
+                 )
+                 assert_numpy_array_equal(left.codes[level], right.codes[level])
+             except AssertionError:
+                 llevel = left.get_level_values(level)
+                 rlevel = right.get_level_values(level)
+
+                 assert_index_equal(
+                     llevel,
+                     rlevel,
+                     exact=exact,
+                     check_names=check_names,
+                     check_exact=check_exact,
+                     check_categorical=check_categorical,
+                     rtol=rtol,
+                     atol=atol,
+                     obj=lobj,
+                 )
+             # get_level_values may change dtype
+             _check_types(left.levels[level], right.levels[level], obj=obj)
+
+     # skip exact index checking when `check_categorical` is False
+     elif check_exact and check_categorical:
+         if not left.equals(right):
+             mismatch = left._values != right._values
+
+             if not isinstance(mismatch, np.ndarray):
+                 mismatch = cast("ExtensionArray", mismatch).fillna(True)
+
+             diff = np.sum(mismatch.astype(int)) * 100.0 / len(left)
+             msg = f"{obj} values are different ({np.round(diff, 5)} %)"
+             raise_assert_detail(obj, msg, left, right)
+     else:
+         # if we have "equiv", this becomes True
+         exact_bool = bool(exact)
+         _testing.assert_almost_equal(
+             left.values,
+             right.values,
+             rtol=rtol,
+             atol=atol,
+             check_dtype=exact_bool,
+             obj=obj,
+             lobj=left,
+             robj=right,
+         )
+
+     # metadata comparison
+     if check_names:
+         assert_attr_equal("names", left, right, obj=obj)
+     if isinstance(left, PeriodIndex) or isinstance(right, PeriodIndex):
+         assert_attr_equal("dtype", left, right, obj=obj)
+     if isinstance(left, IntervalIndex) or isinstance(right, IntervalIndex):
+         assert_interval_array_equal(left._values, right._values)
+
+     if check_categorical:
+         if isinstance(left.dtype, CategoricalDtype) or isinstance(
+             right.dtype, CategoricalDtype
+         ):
+             assert_categorical_equal(left._values, right._values, obj=f"{obj} category")
+
+
+ def assert_class_equal(
+     left, right, exact: bool | str = True, obj: str = "Input"
+ ) -> None:
+     """
+     Checks classes are equal.
+     """
+     __tracebackhide__ = True
+
+     def repr_class(x):
+         if isinstance(x, Index):
+             # return Index as it is to include values in the error message
+             return x
+
+         return type(x).__name__
+
+     def is_class_equiv(idx: Index) -> bool:
+         """Classes that are a RangeIndex (sub-)instance or exactly an `Index`.
+
+         This only checks class equivalence. There is a separate check that the
+         dtype is int64.
+         """
+         return type(idx) is Index or isinstance(idx, RangeIndex)
+
+     if type(left) == type(right):
+         return
+
+     if exact == "equiv":
+         if is_class_equiv(left) and is_class_equiv(right):
+             return
+
+     msg = f"{obj} classes are different"
+     raise_assert_detail(obj, msg, repr_class(left), repr_class(right))
+
+
+ def assert_attr_equal(attr: str, left, right, obj: str = "Attributes") -> None:
+     """
+     Check attributes are equal. Both objects must have attribute.
+
+     Parameters
+     ----------
+     attr : str
+         Attribute name being compared.
+     left : object
+     right : object
+     obj : str, default 'Attributes'
+         Specify object name being compared, internally used to show appropriate
+         assertion message
+     """
+     __tracebackhide__ = True
+
+     left_attr = getattr(left, attr)
+     right_attr = getattr(right, attr)
+
+     if left_attr is right_attr or is_matching_na(left_attr, right_attr):
+         # e.g. both np.nan, both NaT, both pd.NA, ...
+         return None
+
+     try:
+         result = left_attr == right_attr
+     except TypeError:
+         # datetimetz on rhs may raise TypeError
+         result = False
+     if (left_attr is pd.NA) ^ (right_attr is pd.NA):
+         result = False
+     elif not isinstance(result, bool):
+         result = result.all()
+
+     if not result:
+         msg = f'Attribute "{attr}" are different'
+         raise_assert_detail(obj, msg, left_attr, right_attr)
+     return None
+
+
+ def assert_is_valid_plot_return_object(objs) -> None:
+     from matplotlib.artist import Artist
+     from matplotlib.axes import Axes
+
+     if isinstance(objs, (Series, np.ndarray)):
+         if isinstance(objs, Series):
+             objs = objs._values
+         for el in objs.ravel():
+             msg = (
+                 "one of 'objs' is not a matplotlib Axes instance, "
+                 f"type encountered {repr(type(el).__name__)}"
+             )
+             assert isinstance(el, (Axes, dict)), msg
+     else:
+         msg = (
+             "objs is neither an ndarray of Artist instances nor a single "
+             "Artist instance, tuple, or dict, 'objs' is a "
+             f"{repr(type(objs).__name__)}"
+         )
+         assert isinstance(objs, (Artist, tuple, dict)), msg
+
+
+ def assert_is_sorted(seq) -> None:
+     """Assert that the sequence is sorted."""
+     if isinstance(seq, (Index, Series)):
+         seq = seq.values
+     # sorting does not change precisions
+     if isinstance(seq, np.ndarray):
+         assert_numpy_array_equal(seq, np.sort(np.array(seq)))
+     else:
+         assert_extension_array_equal(seq, seq[seq.argsort()])
+
+
+ def assert_categorical_equal(
+     left,
+     right,
+     check_dtype: bool = True,
+     check_category_order: bool = True,
+     obj: str = "Categorical",
+ ) -> None:
+     """
+     Test that Categoricals are equivalent.
+
+     Parameters
+     ----------
+     left : Categorical
+     right : Categorical
+     check_dtype : bool, default True
+         Check that integer dtype of the codes are the same.
+     check_category_order : bool, default True
+         Whether the order of the categories should be compared, which
+         implies identical integer codes. If False, only the resulting
+         values are compared. The ordered attribute is
+         checked regardless.
+     obj : str, default 'Categorical'
+         Specify object name being compared, internally used to show appropriate
+         assertion message.
+     """
+     _check_isinstance(left, right, Categorical)
+
+     exact: bool | str
+     if isinstance(left.categories, RangeIndex) or isinstance(
+         right.categories, RangeIndex
+     ):
+         exact = "equiv"
+     else:
+         # We still want to require exact matches for Index
+         exact = True
+
+     if check_category_order:
+         assert_index_equal(
+             left.categories, right.categories, obj=f"{obj}.categories", exact=exact
+         )
+         assert_numpy_array_equal(
+             left.codes, right.codes, check_dtype=check_dtype, obj=f"{obj}.codes"
+         )
+     else:
+         try:
+             lc = left.categories.sort_values()
+             rc = right.categories.sort_values()
+         except TypeError:
+             # e.g. '<' not supported between instances of 'int' and 'str'
+             lc, rc = left.categories, right.categories
+         assert_index_equal(lc, rc, obj=f"{obj}.categories", exact=exact)
+         assert_index_equal(
+             left.categories.take(left.codes),
+             right.categories.take(right.codes),
+             obj=f"{obj}.values",
+             exact=exact,
+         )
+
+     assert_attr_equal("ordered", left, right, obj=obj)
+
+
+ def assert_interval_array_equal(
+     left, right, exact: bool | Literal["equiv"] = "equiv", obj: str = "IntervalArray"
+ ) -> None:
+     """
+     Test that two IntervalArrays are equivalent.
+
+     Parameters
+     ----------
+     left, right : IntervalArray
+         The IntervalArrays to compare.
+     exact : bool or {'equiv'}, default 'equiv'
+         Whether to check the Index class, dtype and inferred_type
+         are identical. If 'equiv', then RangeIndex can be substituted for
+         Index with an int64 dtype as well.
+     obj : str, default 'IntervalArray'
+         Specify object name being compared, internally used to show appropriate
+         assertion message
+     """
+     _check_isinstance(left, right, IntervalArray)
+
+     kwargs = {}
+     if left._left.dtype.kind in "mM":
+         # We have a DatetimeArray or TimedeltaArray
+         kwargs["check_freq"] = False
+
+     assert_equal(left._left, right._left, obj=f"{obj}.left", **kwargs)
+     assert_equal(left._right, right._right, obj=f"{obj}.right", **kwargs)
+
+     assert_attr_equal("closed", left, right, obj=obj)
+
+
+ def assert_period_array_equal(left, right, obj: str = "PeriodArray") -> None:
+     _check_isinstance(left, right, PeriodArray)
+
+     assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
+     assert_attr_equal("dtype", left, right, obj=obj)
+
+
+ def assert_datetime_array_equal(
+     left, right, obj: str = "DatetimeArray", check_freq: bool = True
+ ) -> None:
+     __tracebackhide__ = True
+     _check_isinstance(left, right, DatetimeArray)
+
+     assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
+     if check_freq:
+         assert_attr_equal("freq", left, right, obj=obj)
+     assert_attr_equal("tz", left, right, obj=obj)
+
+
+ def assert_timedelta_array_equal(
+     left, right, obj: str = "TimedeltaArray", check_freq: bool = True
+ ) -> None:
+     __tracebackhide__ = True
+     _check_isinstance(left, right, TimedeltaArray)
+     assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
+     if check_freq:
+         assert_attr_equal("freq", left, right, obj=obj)
+
+
+ def raise_assert_detail(
+     obj, message, left, right, diff=None, first_diff=None, index_values=None
+ ) -> NoReturn:
+     __tracebackhide__ = True
+
+     msg = f"""{obj} are different
+
+ {message}"""
+
+     if isinstance(index_values, Index):
+         index_values = np.asarray(index_values)
+
+     if isinstance(index_values, np.ndarray):
+         msg += f"\n[index]: {pprint_thing(index_values)}"
+
+     if isinstance(left, np.ndarray):
+         left = pprint_thing(left)
+     elif isinstance(left, (CategoricalDtype, NumpyEADtype, StringDtype)):
+         left = repr(left)
+
+     if isinstance(right, np.ndarray):
+         right = pprint_thing(right)
+     elif isinstance(right, (CategoricalDtype, NumpyEADtype, StringDtype)):
+         right = repr(right)
+
+     msg += f"""
+ [left]: {left}
+ [right]: {right}"""
+
+     if diff is not None:
+         msg += f"\n[diff]: {diff}"
+
+     if first_diff is not None:
+         msg += f"\n{first_diff}"
+
+     raise AssertionError(msg)
615
+
616
+
617
+ def assert_numpy_array_equal(
618
+ left,
619
+ right,
620
+ strict_nan: bool = False,
621
+ check_dtype: bool | Literal["equiv"] = True,
622
+ err_msg=None,
623
+ check_same=None,
624
+ obj: str = "numpy array",
625
+ index_values=None,
626
+ ) -> None:
627
+ """
628
+ Check that two numpy arrays are equivalent.
629
+
630
+ Parameters
631
+ ----------
632
+ left, right : numpy.ndarray or iterable
633
+ The two arrays to be compared.
634
+ strict_nan : bool, default False
635
+ If True, consider NaN and None to be different.
636
+ check_dtype : bool, default True
637
+ Check dtype if both a and b are np.ndarray.
638
+ err_msg : str, default None
639
+ If provided, used as assertion message.
640
+ check_same : None|'copy'|'same', default None
641
+ Ensure left and right refer/do not refer to the same memory area.
642
+ obj : str, default 'numpy array'
643
+ Specify object name being compared, internally used to show appropriate
644
+ assertion message.
645
+ index_values : Index | numpy.ndarray, default None
646
+ optional index (shared by both left and right), used in output.
647
+ """
648
+ __tracebackhide__ = True
649
+
650
+ # instance validation
651
+ # Show a detailed error message when classes are different
652
+ assert_class_equal(left, right, obj=obj)
653
+ # both classes must be an np.ndarray
654
+ _check_isinstance(left, right, np.ndarray)
655
+
656
+ def _get_base(obj):
657
+ return obj.base if getattr(obj, "base", None) is not None else obj
658
+
659
+ left_base = _get_base(left)
660
+ right_base = _get_base(right)
661
+
662
+ if check_same == "same":
663
+ if left_base is not right_base:
664
+ raise AssertionError(f"{repr(left_base)} is not {repr(right_base)}")
665
+ elif check_same == "copy":
666
+ if left_base is right_base:
667
+ raise AssertionError(f"{repr(left_base)} is {repr(right_base)}")
668
+
669
+ def _raise(left, right, err_msg) -> NoReturn:
670
+ if err_msg is None:
671
+ if left.shape != right.shape:
672
+ raise_assert_detail(
673
+ obj, f"{obj} shapes are different", left.shape, right.shape
674
+ )
675
+
676
+ diff = 0
677
+ for left_arr, right_arr in zip(left, right):
678
+ # count up differences
679
+ if not array_equivalent(left_arr, right_arr, strict_nan=strict_nan):
680
+ diff += 1
681
+
682
+ diff = diff * 100.0 / left.size
683
+ msg = f"{obj} values are different ({np.round(diff, 5)} %)"
684
+ raise_assert_detail(obj, msg, left, right, index_values=index_values)
685
+
686
+ raise AssertionError(err_msg)
687
+
688
+ # compare shape and values
689
+ if not array_equivalent(left, right, strict_nan=strict_nan):
690
+ _raise(left, right, err_msg)
691
+
692
+ if check_dtype:
693
+ if isinstance(left, np.ndarray) and isinstance(right, np.ndarray):
694
+ assert_attr_equal("dtype", left, right, obj=obj)
695
+
696
+
697
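The `check_same` parameter distinguishes views from copies via the arrays' `base` attribute, as the `_get_base` helper above shows. A small sketch of the behavior (assumed usage, not from the source):

```python
import numpy as np
import pandas._testing as tm

a = np.array([1, 2, 3])
b = a.copy()
tm.assert_numpy_array_equal(a, b)                     # values and dtype match
tm.assert_numpy_array_equal(a, b, check_same="copy")  # distinct memory, as required

# A slice is a view sharing `a` as its base, so check_same="copy" raises.
view = a[:]
try:
    tm.assert_numpy_array_equal(a, view, check_same="copy")
except AssertionError:
    print("view shares its base with a")
```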
+ def assert_extension_array_equal(
698
+ left,
699
+ right,
700
+ check_dtype: bool | Literal["equiv"] = True,
701
+ index_values=None,
702
+ check_exact: bool | lib.NoDefault = lib.no_default,
703
+ rtol: float | lib.NoDefault = lib.no_default,
704
+ atol: float | lib.NoDefault = lib.no_default,
705
+ obj: str = "ExtensionArray",
706
+ ) -> None:
707
+ """
708
+ Check that left and right ExtensionArrays are equal.
709
+
710
+ Parameters
711
+ ----------
712
+ left, right : ExtensionArray
713
+ The two arrays to compare.
714
+ check_dtype : bool, default True
715
+ Whether to check if the ExtensionArray dtypes are identical.
716
+ index_values : Index | numpy.ndarray, default None
717
+ Optional index (shared by both left and right), used in output.
718
+ check_exact : bool, default False
719
+ Whether to compare numbers exactly.
720
+
721
+ .. versionchanged:: 2.2.0
722
+
723
+ Defaults to True for integer dtypes if none of
724
+ ``check_exact``, ``rtol`` and ``atol`` are specified.
725
+ rtol : float, default 1e-5
726
+ Relative tolerance. Only used when check_exact is False.
727
+ atol : float, default 1e-8
728
+ Absolute tolerance. Only used when check_exact is False.
729
+ obj : str, default 'ExtensionArray'
730
+ Specify object name being compared, internally used to show appropriate
731
+ assertion message.
732
+
733
+ .. versionadded:: 2.0.0
734
+
735
+ Notes
736
+ -----
737
+ Missing values are checked separately from valid values.
738
+ A mask of missing values is computed for each and checked to match.
739
+ The remaining all-valid values are cast to object dtype and checked.
740
+
741
+ Examples
742
+ --------
743
+ >>> from pandas import testing as tm
744
+ >>> a = pd.Series([1, 2, 3, 4])
745
+ >>> b, c = a.array, a.array
746
+ >>> tm.assert_extension_array_equal(b, c)
747
+ """
748
+ if (
749
+ check_exact is lib.no_default
750
+ and rtol is lib.no_default
751
+ and atol is lib.no_default
752
+ ):
753
+ check_exact = (
754
+ is_numeric_dtype(left.dtype)
755
+ and not is_float_dtype(left.dtype)
756
+ or is_numeric_dtype(right.dtype)
757
+ and not is_float_dtype(right.dtype)
758
+ )
759
+ elif check_exact is lib.no_default:
760
+ check_exact = False
761
+
762
+ rtol = rtol if rtol is not lib.no_default else 1.0e-5
763
+ atol = atol if atol is not lib.no_default else 1.0e-8
764
+
765
+ assert isinstance(left, ExtensionArray), "left is not an ExtensionArray"
766
+ assert isinstance(right, ExtensionArray), "right is not an ExtensionArray"
767
+ if check_dtype:
768
+ assert_attr_equal("dtype", left, right, obj=f"Attributes of {obj}")
769
+
770
+ if (
771
+ isinstance(left, DatetimeLikeArrayMixin)
772
+ and isinstance(right, DatetimeLikeArrayMixin)
773
+ and type(right) == type(left)
774
+ ):
775
+ # GH 52449
776
+ if not check_dtype and left.dtype.kind in "mM":
777
+ if not isinstance(left.dtype, np.dtype):
778
+ l_unit = cast(DatetimeTZDtype, left.dtype).unit
779
+ else:
780
+ l_unit = np.datetime_data(left.dtype)[0]
781
+ if not isinstance(right.dtype, np.dtype):
782
+ r_unit = cast(DatetimeTZDtype, right.dtype).unit
783
+ else:
784
+ r_unit = np.datetime_data(right.dtype)[0]
785
+ if (
786
+ l_unit != r_unit
787
+ and compare_mismatched_resolutions(
788
+ left._ndarray, right._ndarray, operator.eq
789
+ ).all()
790
+ ):
791
+ return
792
+ # Avoid slow object-dtype comparisons
793
+ # np.asarray for case where we have a np.MaskedArray
794
+ assert_numpy_array_equal(
795
+ np.asarray(left.asi8),
796
+ np.asarray(right.asi8),
797
+ index_values=index_values,
798
+ obj=obj,
799
+ )
800
+ return
801
+
802
+ left_na = np.asarray(left.isna())
803
+ right_na = np.asarray(right.isna())
804
+ assert_numpy_array_equal(
805
+ left_na, right_na, obj=f"{obj} NA mask", index_values=index_values
806
+ )
807
+
808
+ left_valid = left[~left_na].to_numpy(dtype=object)
809
+ right_valid = right[~right_na].to_numpy(dtype=object)
810
+ if check_exact:
811
+ assert_numpy_array_equal(
812
+ left_valid, right_valid, obj=obj, index_values=index_values
813
+ )
814
+ else:
815
+ _testing.assert_almost_equal(
816
+ left_valid,
817
+ right_valid,
818
+ check_dtype=bool(check_dtype),
819
+ rtol=rtol,
820
+ atol=atol,
821
+ obj=obj,
822
+ index_values=index_values,
823
+ )
824
+
825
+
826
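The 2.2.0 default described in the docstring means float dtypes are compared approximately while other numeric dtypes are compared exactly when none of `check_exact`, `rtol`, `atol` are given. A sketch using nullable dtypes as an illustration (assumed usage):

```python
import pandas as pd
import pandas._testing as tm

# Float dtypes default to approximate comparison (rtol=1e-5, atol=1e-8)...
fa = pd.array([1.0, 2.0], dtype="Float64")
fb = pd.array([1.0, 2.0 + 1e-9], dtype="Float64")
tm.assert_extension_array_equal(fa, fb)

# ...while non-float numeric dtypes are compared exactly by default.
ia = pd.array([1, 2], dtype="Int64")
tm.assert_extension_array_equal(ia, pd.array([1, 2], dtype="Int64"))
```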
+ # This could be refactored to use the NDFrame.equals method
827
+ def assert_series_equal(
828
+ left,
829
+ right,
830
+ check_dtype: bool | Literal["equiv"] = True,
831
+ check_index_type: bool | Literal["equiv"] = "equiv",
832
+ check_series_type: bool = True,
833
+ check_names: bool = True,
834
+ check_exact: bool | lib.NoDefault = lib.no_default,
835
+ check_datetimelike_compat: bool = False,
836
+ check_categorical: bool = True,
837
+ check_category_order: bool = True,
838
+ check_freq: bool = True,
839
+ check_flags: bool = True,
840
+ rtol: float | lib.NoDefault = lib.no_default,
841
+ atol: float | lib.NoDefault = lib.no_default,
842
+ obj: str = "Series",
843
+ *,
844
+ check_index: bool = True,
845
+ check_like: bool = False,
846
+ ) -> None:
847
+ """
848
+ Check that left and right Series are equal.
849
+
850
+ Parameters
851
+ ----------
852
+ left : Series
853
+ right : Series
854
+ check_dtype : bool, default True
855
+ Whether to check the Series dtype is identical.
856
+ check_index_type : bool or {'equiv'}, default 'equiv'
857
+ Whether to check the Index class, dtype and inferred_type
858
+ are identical.
859
+ check_series_type : bool, default True
860
+ Whether to check the Series class is identical.
861
+ check_names : bool, default True
862
+ Whether to check the Series and Index names attribute.
863
+ check_exact : bool, default False
864
+ Whether to compare numbers exactly.
865
+
866
+ .. versionchanged:: 2.2.0
867
+
868
+ Defaults to True for integer dtypes if none of
869
+ ``check_exact``, ``rtol`` and ``atol`` are specified.
870
+ check_datetimelike_compat : bool, default False
871
+ Compare datetime-like which is comparable ignoring dtype.
872
+ check_categorical : bool, default True
873
+ Whether to compare internal Categorical exactly.
874
+ check_category_order : bool, default True
875
+ Whether to compare category order of internal Categoricals.
876
+ check_freq : bool, default True
877
+ Whether to check the `freq` attribute on a DatetimeIndex or TimedeltaIndex.
878
+ check_flags : bool, default True
879
+ Whether to check the `flags` attribute.
880
+ rtol : float, default 1e-5
881
+ Relative tolerance. Only used when check_exact is False.
882
+ atol : float, default 1e-8
883
+ Absolute tolerance. Only used when check_exact is False.
884
+ obj : str, default 'Series'
885
+ Specify object name being compared, internally used to show appropriate
886
+ assertion message.
887
+ check_index : bool, default True
888
+ Whether to check index equivalence. If False, then compare only values.
889
+
890
+ .. versionadded:: 1.3.0
891
+ check_like : bool, default False
892
+ If True, ignore the order of the index. Must be False if check_index is False.
893
+ Note: same labels must be with the same data.
894
+
895
+ .. versionadded:: 1.5.0
896
+
897
+ Examples
898
+ --------
899
+ >>> from pandas import testing as tm
900
+ >>> a = pd.Series([1, 2, 3, 4])
901
+ >>> b = pd.Series([1, 2, 3, 4])
902
+ >>> tm.assert_series_equal(a, b)
903
+ """
904
+ __tracebackhide__ = True
905
+ check_exact_index = False if check_exact is lib.no_default else check_exact
906
+ if (
907
+ check_exact is lib.no_default
908
+ and rtol is lib.no_default
909
+ and atol is lib.no_default
910
+ ):
911
+ check_exact = (
912
+ is_numeric_dtype(left.dtype)
913
+ and not is_float_dtype(left.dtype)
914
+ or is_numeric_dtype(right.dtype)
915
+ and not is_float_dtype(right.dtype)
916
+ )
917
+ elif check_exact is lib.no_default:
918
+ check_exact = False
919
+
920
+ rtol = rtol if rtol is not lib.no_default else 1.0e-5
921
+ atol = atol if atol is not lib.no_default else 1.0e-8
922
+
923
+ if not check_index and check_like:
924
+ raise ValueError("check_like must be False if check_index is False")
925
+
926
+ # instance validation
927
+ _check_isinstance(left, right, Series)
928
+
929
+ if check_series_type:
930
+ assert_class_equal(left, right, obj=obj)
931
+
932
+ # length comparison
933
+ if len(left) != len(right):
934
+ msg1 = f"{len(left)}, {left.index}"
935
+ msg2 = f"{len(right)}, {right.index}"
936
+ raise_assert_detail(obj, "Series length are different", msg1, msg2)
937
+
938
+ if check_flags:
939
+ assert left.flags == right.flags, f"{repr(left.flags)} != {repr(right.flags)}"
940
+
941
+ if check_index:
942
+ # GH #38183
943
+ assert_index_equal(
944
+ left.index,
945
+ right.index,
946
+ exact=check_index_type,
947
+ check_names=check_names,
948
+ check_exact=check_exact_index,
949
+ check_categorical=check_categorical,
950
+ check_order=not check_like,
951
+ rtol=rtol,
952
+ atol=atol,
953
+ obj=f"{obj}.index",
954
+ )
955
+
956
+ if check_like:
957
+ left = left.reindex_like(right)
958
+
959
+ if check_freq and isinstance(left.index, (DatetimeIndex, TimedeltaIndex)):
960
+ lidx = left.index
961
+ ridx = right.index
962
+ assert lidx.freq == ridx.freq, (lidx.freq, ridx.freq)
963
+
964
+ if check_dtype:
965
+ # We want to skip exact dtype checking when `check_categorical`
966
+ # is False. We'll still raise if only one is a `Categorical`,
967
+ # regardless of `check_categorical`
968
+ if (
969
+ isinstance(left.dtype, CategoricalDtype)
970
+ and isinstance(right.dtype, CategoricalDtype)
971
+ and not check_categorical
972
+ ):
973
+ pass
974
+ else:
975
+ assert_attr_equal("dtype", left, right, obj=f"Attributes of {obj}")
976
+ if check_exact:
977
+ left_values = left._values
978
+ right_values = right._values
979
+ # Only check exact if dtype is numeric
980
+ if isinstance(left_values, ExtensionArray) and isinstance(
981
+ right_values, ExtensionArray
982
+ ):
983
+ assert_extension_array_equal(
984
+ left_values,
985
+ right_values,
986
+ check_dtype=check_dtype,
987
+ index_values=left.index,
988
+ obj=str(obj),
989
+ )
990
+ else:
991
+ # convert any ExtensionArray to NumPy; if dtypes differed,
+ # check_dtype would already have raised above
992
+ lv, rv = left_values, right_values
993
+ if isinstance(left_values, ExtensionArray):
994
+ lv = left_values.to_numpy()
995
+ if isinstance(right_values, ExtensionArray):
996
+ rv = right_values.to_numpy()
997
+ assert_numpy_array_equal(
998
+ lv,
999
+ rv,
1000
+ check_dtype=check_dtype,
1001
+ obj=str(obj),
1002
+ index_values=left.index,
1003
+ )
1004
+ elif check_datetimelike_compat and (
1005
+ needs_i8_conversion(left.dtype) or needs_i8_conversion(right.dtype)
1006
+ ):
1007
+ # we want to check only if we have compat dtypes
1008
+ # e.g. integer and M|m are NOT compat, but we can simply check
1009
+ # the values in that case
1010
+
1011
+ # datetimelike may have different objects (e.g. datetime.datetime
1012
+ # vs Timestamp) but will compare equal
1013
+ if not Index(left._values).equals(Index(right._values)):
1014
+ msg = (
1015
+ f"[datetimelike_compat=True] {left._values} "
1016
+ f"is not equal to {right._values}."
1017
+ )
1018
+ raise AssertionError(msg)
1019
+ elif isinstance(left.dtype, IntervalDtype) and isinstance(
1020
+ right.dtype, IntervalDtype
1021
+ ):
1022
+ assert_interval_array_equal(left.array, right.array)
1023
+ elif isinstance(left.dtype, CategoricalDtype) or isinstance(
1024
+ right.dtype, CategoricalDtype
1025
+ ):
1026
+ _testing.assert_almost_equal(
1027
+ left._values,
1028
+ right._values,
1029
+ rtol=rtol,
1030
+ atol=atol,
1031
+ check_dtype=bool(check_dtype),
1032
+ obj=str(obj),
1033
+ index_values=left.index,
1034
+ )
1035
+ elif isinstance(left.dtype, ExtensionDtype) and isinstance(
1036
+ right.dtype, ExtensionDtype
1037
+ ):
1038
+ assert_extension_array_equal(
1039
+ left._values,
1040
+ right._values,
1041
+ rtol=rtol,
1042
+ atol=atol,
1043
+ check_dtype=check_dtype,
1044
+ index_values=left.index,
1045
+ obj=str(obj),
1046
+ )
1047
+ elif is_extension_array_dtype_and_needs_i8_conversion(
1048
+ left.dtype, right.dtype
1049
+ ) or is_extension_array_dtype_and_needs_i8_conversion(right.dtype, left.dtype):
1050
+ assert_extension_array_equal(
1051
+ left._values,
1052
+ right._values,
1053
+ check_dtype=check_dtype,
1054
+ index_values=left.index,
1055
+ obj=str(obj),
1056
+ )
1057
+ elif needs_i8_conversion(left.dtype) and needs_i8_conversion(right.dtype):
1058
+ # DatetimeArray or TimedeltaArray
1059
+ assert_extension_array_equal(
1060
+ left._values,
1061
+ right._values,
1062
+ check_dtype=check_dtype,
1063
+ index_values=left.index,
1064
+ obj=str(obj),
1065
+ )
1066
+ else:
1067
+ _testing.assert_almost_equal(
1068
+ left._values,
1069
+ right._values,
1070
+ rtol=rtol,
1071
+ atol=atol,
1072
+ check_dtype=bool(check_dtype),
1073
+ obj=str(obj),
1074
+ index_values=left.index,
1075
+ )
1076
+
1077
+ # metadata comparison
1078
+ if check_names:
1079
+ assert_attr_equal("name", left, right, obj=obj)
1080
+
1081
+ if check_categorical:
1082
+ if isinstance(left.dtype, CategoricalDtype) or isinstance(
1083
+ right.dtype, CategoricalDtype
1084
+ ):
1085
+ assert_categorical_equal(
1086
+ left._values,
1087
+ right._values,
1088
+ obj=f"{obj} category",
1089
+ check_category_order=check_category_order,
1090
+ )
1091
+
1092
+
1093
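The `check_like` path above reindexes `left` like `right` after the unordered index comparison, so Series with the same label-to-value mapping but different ordering compare equal. A usage sketch (assumed, not from the source):

```python
import pandas as pd
import pandas._testing as tm

a = pd.Series([1, 2, 3], index=["x", "y", "z"])
b = pd.Series([3, 1, 2], index=["z", "x", "y"])

# Same label -> value mapping, different order: passes only with check_like=True.
tm.assert_series_equal(a, b, check_like=True)
try:
    tm.assert_series_equal(a, b)
except AssertionError:
    print("order matters unless check_like=True")
```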
+ # This could be refactored to use the NDFrame.equals method
1094
+ def assert_frame_equal(
1095
+ left,
1096
+ right,
1097
+ check_dtype: bool | Literal["equiv"] = True,
1098
+ check_index_type: bool | Literal["equiv"] = "equiv",
1099
+ check_column_type: bool | Literal["equiv"] = "equiv",
1100
+ check_frame_type: bool = True,
1101
+ check_names: bool = True,
1102
+ by_blocks: bool = False,
1103
+ check_exact: bool | lib.NoDefault = lib.no_default,
1104
+ check_datetimelike_compat: bool = False,
1105
+ check_categorical: bool = True,
1106
+ check_like: bool = False,
1107
+ check_freq: bool = True,
1108
+ check_flags: bool = True,
1109
+ rtol: float | lib.NoDefault = lib.no_default,
1110
+ atol: float | lib.NoDefault = lib.no_default,
1111
+ obj: str = "DataFrame",
1112
+ ) -> None:
1113
+ """
1114
+ Check that left and right DataFrame are equal.
1115
+
1116
+ This function is intended to compare two DataFrames and output any
1117
+ differences. It is mostly intended for use in unit tests.
1118
+ Additional parameters allow varying the strictness of the
1119
+ equality checks performed.
1120
+
1121
+ Parameters
1122
+ ----------
1123
+ left : DataFrame
1124
+ First DataFrame to compare.
1125
+ right : DataFrame
1126
+ Second DataFrame to compare.
1127
+ check_dtype : bool, default True
1128
+ Whether to check the DataFrame dtype is identical.
1129
+ check_index_type : bool or {'equiv'}, default 'equiv'
1130
+ Whether to check the Index class, dtype and inferred_type
1131
+ are identical.
1132
+ check_column_type : bool or {'equiv'}, default 'equiv'
1133
+ Whether to check the columns class, dtype and inferred_type
1134
+ are identical. Is passed as the ``exact`` argument of
1135
+ :func:`assert_index_equal`.
1136
+ check_frame_type : bool, default True
1137
+ Whether to check the DataFrame class is identical.
1138
+ check_names : bool, default True
1139
+ Whether to check that the `names` attribute for both the `index`
1140
+ and `column` attributes of the DataFrame is identical.
1141
+ by_blocks : bool, default False
1142
+ Specify how to compare internal data. If False, compare by columns.
1143
+ If True, compare by blocks.
1144
+ check_exact : bool, default False
1145
+ Whether to compare numbers exactly.
1146
+
1147
+ .. versionchanged:: 2.2.0
1148
+
1149
+ Defaults to True for integer dtypes if none of
1150
+ ``check_exact``, ``rtol`` and ``atol`` are specified.
1151
+ check_datetimelike_compat : bool, default False
1152
+ Compare datetime-like which is comparable ignoring dtype.
1153
+ check_categorical : bool, default True
1154
+ Whether to compare internal Categorical exactly.
1155
+ check_like : bool, default False
1156
+ If True, ignore the order of index & columns.
1157
+ Note: index labels must match their respective rows
1158
+ (same as in columns) - same labels must be with the same data.
1159
+ check_freq : bool, default True
1160
+ Whether to check the `freq` attribute on a DatetimeIndex or TimedeltaIndex.
1161
+ check_flags : bool, default True
1162
+ Whether to check the `flags` attribute.
1163
+ rtol : float, default 1e-5
1164
+ Relative tolerance. Only used when check_exact is False.
1165
+ atol : float, default 1e-8
1166
+ Absolute tolerance. Only used when check_exact is False.
1167
+ obj : str, default 'DataFrame'
1168
+ Specify object name being compared, internally used to show appropriate
1169
+ assertion message.
1170
+
1171
+ See Also
1172
+ --------
1173
+ assert_series_equal : Equivalent method for asserting Series equality.
1174
+ DataFrame.equals : Check DataFrame equality.
1175
+
1176
+ Examples
1177
+ --------
1178
+ This example shows comparing two DataFrames that are equal
1179
+ but with columns of differing dtypes.
1180
+
1181
+ >>> from pandas.testing import assert_frame_equal
1182
+ >>> df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
1183
+ >>> df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
1184
+
1185
+ df1 equals itself.
1186
+
1187
+ >>> assert_frame_equal(df1, df1)
1188
+
1189
+ df1 differs from df2 as column 'b' is of a different type.
1190
+
1191
+ >>> assert_frame_equal(df1, df2)
1192
+ Traceback (most recent call last):
1193
+ ...
1194
+ AssertionError: Attributes of DataFrame.iloc[:, 1] (column name="b") are different
1195
+
1196
+ Attribute "dtype" are different
1197
+ [left]: int64
1198
+ [right]: float64
1199
+
1200
+ Ignore differing dtypes in columns with check_dtype.
1201
+
1202
+ >>> assert_frame_equal(df1, df2, check_dtype=False)
1203
+ """
1204
+ __tracebackhide__ = True
1205
+ _rtol = rtol if rtol is not lib.no_default else 1.0e-5
1206
+ _atol = atol if atol is not lib.no_default else 1.0e-8
1207
+ _check_exact = check_exact if check_exact is not lib.no_default else False
1208
+
1209
+ # instance validation
1210
+ _check_isinstance(left, right, DataFrame)
1211
+
1212
+ if check_frame_type:
1213
+ assert isinstance(left, type(right))
1214
+ # assert_class_equal(left, right, obj=obj)
1215
+
1216
+ # shape comparison
1217
+ if left.shape != right.shape:
1218
+ raise_assert_detail(
1219
+ obj, f"{obj} shape mismatch", f"{repr(left.shape)}", f"{repr(right.shape)}"
1220
+ )
1221
+
1222
+ if check_flags:
1223
+ assert left.flags == right.flags, f"{repr(left.flags)} != {repr(right.flags)}"
1224
+
1225
+ # index comparison
1226
+ assert_index_equal(
1227
+ left.index,
1228
+ right.index,
1229
+ exact=check_index_type,
1230
+ check_names=check_names,
1231
+ check_exact=_check_exact,
1232
+ check_categorical=check_categorical,
1233
+ check_order=not check_like,
1234
+ rtol=_rtol,
1235
+ atol=_atol,
1236
+ obj=f"{obj}.index",
1237
+ )
1238
+
1239
+ # column comparison
1240
+ assert_index_equal(
1241
+ left.columns,
1242
+ right.columns,
1243
+ exact=check_column_type,
1244
+ check_names=check_names,
1245
+ check_exact=_check_exact,
1246
+ check_categorical=check_categorical,
1247
+ check_order=not check_like,
1248
+ rtol=_rtol,
1249
+ atol=_atol,
1250
+ obj=f"{obj}.columns",
1251
+ )
1252
+
1253
+ if check_like:
1254
+ left = left.reindex_like(right)
1255
+
1256
+ # compare by blocks
1257
+ if by_blocks:
1258
+ rblocks = right._to_dict_of_blocks()
1259
+ lblocks = left._to_dict_of_blocks()
1260
+ for dtype in list(set(list(lblocks.keys()) + list(rblocks.keys()))):
1261
+ assert dtype in lblocks
1262
+ assert dtype in rblocks
1263
+ assert_frame_equal(
1264
+ lblocks[dtype], rblocks[dtype], check_dtype=check_dtype, obj=obj
1265
+ )
1266
+
1267
+ # compare by columns
1268
+ else:
1269
+ for i, col in enumerate(left.columns):
1270
+ # We have already checked that columns match, so we can do
1271
+ # fast location-based lookups
1272
+ lcol = left._ixs(i, axis=1)
1273
+ rcol = right._ixs(i, axis=1)
1274
+
1275
+ # GH #38183
1276
+ # use check_index=False, because we do not want to run
1277
+ # assert_index_equal for each column,
1278
+ # as we already checked it for the whole dataframe before.
1279
+ assert_series_equal(
1280
+ lcol,
1281
+ rcol,
1282
+ check_dtype=check_dtype,
1283
+ check_index_type=check_index_type,
1284
+ check_exact=check_exact,
1285
+ check_names=check_names,
1286
+ check_datetimelike_compat=check_datetimelike_compat,
1287
+ check_categorical=check_categorical,
1288
+ check_freq=check_freq,
1289
+ obj=f'{obj}.iloc[:, {i}] (column name="{col}")',
1290
+ rtol=rtol,
1291
+ atol=atol,
1292
+ check_index=False,
1293
+ check_flags=False,
1294
+ )
1295
+
1296
+
1297
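For DataFrames, `check_like=True` applies the same unordered comparison to both axes: labels may appear in any order, but each label must still map to the same data. A sketch of the column-order case (assumed usage):

```python
import pandas as pd
import pandas._testing as tm

df1 = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})
df2 = df1[["b", "a"]]  # same data, columns reordered

# check_like=True ignores row/column order; labels must still match data.
tm.assert_frame_equal(df1, df2, check_like=True)
```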
+ def assert_equal(left, right, **kwargs) -> None:
1298
+ """
1299
+ Wrapper for tm.assert_*_equal to dispatch to the appropriate test function.
1300
+
1301
+ Parameters
1302
+ ----------
1303
+ left, right : Index, Series, DataFrame, ExtensionArray, or np.ndarray
1304
+ The two items to be compared.
1305
+ **kwargs
1306
+ All keyword arguments are passed through to the underlying assert method.
1307
+ """
1308
+ __tracebackhide__ = True
1309
+
1310
+ if isinstance(left, Index):
1311
+ assert_index_equal(left, right, **kwargs)
1312
+ if isinstance(left, (DatetimeIndex, TimedeltaIndex)):
1313
+ assert left.freq == right.freq, (left.freq, right.freq)
1314
+ elif isinstance(left, Series):
1315
+ assert_series_equal(left, right, **kwargs)
1316
+ elif isinstance(left, DataFrame):
1317
+ assert_frame_equal(left, right, **kwargs)
1318
+ elif isinstance(left, IntervalArray):
1319
+ assert_interval_array_equal(left, right, **kwargs)
1320
+ elif isinstance(left, PeriodArray):
1321
+ assert_period_array_equal(left, right, **kwargs)
1322
+ elif isinstance(left, DatetimeArray):
1323
+ assert_datetime_array_equal(left, right, **kwargs)
1324
+ elif isinstance(left, TimedeltaArray):
1325
+ assert_timedelta_array_equal(left, right, **kwargs)
1326
+ elif isinstance(left, ExtensionArray):
1327
+ assert_extension_array_equal(left, right, **kwargs)
1328
+ elif isinstance(left, np.ndarray):
1329
+ assert_numpy_array_equal(left, right, **kwargs)
1330
+ elif isinstance(left, str):
1331
+ assert kwargs == {}
1332
+ assert left == right
1333
+ else:
1334
+ assert kwargs == {}
1335
+ assert_almost_equal(left, right)
1336
+
1337
+
1338
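`assert_equal` is a single entry point that dispatches on the type of `left`, falling back to `assert_almost_equal` for scalars. An illustrative sketch (assumed usage):

```python
import numpy as np
import pandas as pd
import pandas._testing as tm

tm.assert_equal(pd.Index([1, 2]), pd.Index([1, 2]))          # -> assert_index_equal
tm.assert_equal(pd.Series([1.0]), pd.Series([1.0]))          # -> assert_series_equal
tm.assert_equal(np.array([1, 2]), np.array([1, 2]))          # -> assert_numpy_array_equal
tm.assert_equal("text", "text")                              # plain == for strings
```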
+ def assert_sp_array_equal(left, right) -> None:
1339
+ """
1340
+ Check that the left and right SparseArray are equal.
1341
+
1342
+ Parameters
1343
+ ----------
1344
+ left : SparseArray
1345
+ right : SparseArray
1346
+ """
1347
+ _check_isinstance(left, right, pd.arrays.SparseArray)
1348
+
1349
+ assert_numpy_array_equal(left.sp_values, right.sp_values)
1350
+
1351
+ # SparseIndex comparison
1352
+ assert isinstance(left.sp_index, SparseIndex)
1353
+ assert isinstance(right.sp_index, SparseIndex)
1354
+
1355
+ left_index = left.sp_index
1356
+ right_index = right.sp_index
1357
+
1358
+ if not left_index.equals(right_index):
1359
+ raise_assert_detail(
1360
+ "SparseArray.index", "index are not equal", left_index, right_index
1361
+ )
1362
+ else:
1363
+ # indices are equal; nothing further to check here
1364
+ pass
1365
+
1366
+ assert_attr_equal("fill_value", left, right)
1367
+ assert_attr_equal("dtype", left, right)
1368
+ assert_numpy_array_equal(left.to_dense(), right.to_dense())
1369
+
1370
+
1371
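A minimal sketch of the sparse comparison (assumed usage): equal stored values, sparse index, fill value, dtype, and densified values all have to match.

```python
import pandas as pd
import pandas._testing as tm

a = pd.arrays.SparseArray([0, 0, 1, 2], fill_value=0)
b = pd.arrays.SparseArray([0, 0, 1, 2], fill_value=0)
tm.assert_sp_array_equal(a, b)  # passes silently

# A different fill_value changes which values are stored, so this raises.
c = pd.arrays.SparseArray([0, 0, 1, 2], fill_value=1)
try:
    tm.assert_sp_array_equal(a, c)
except AssertionError:
    print("fill_value mismatch detected")
```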
+ def assert_contains_all(iterable, dic) -> None:
1372
+ for k in iterable:
1373
+ assert k in dic, f"Did not contain item: {repr(k)}"
1374
+
1375
+
1376
+ def assert_copy(iter1, iter2, **eql_kwargs) -> None:
1377
+ """
1378
+ iter1, iter2: iterables that produce elements
1379
+ comparable with assert_almost_equal
1380
+
1381
+ Checks that the elements are equal, but not
1382
+ the same object. (Does not check that items
1383
+ in sequences are also not the same object)
1384
+ """
1385
+ for elem1, elem2 in zip(iter1, iter2):
1386
+ assert_almost_equal(elem1, elem2, **eql_kwargs)
1387
+ msg = (
1388
+ f"Expected object {repr(type(elem1))} and object {repr(type(elem2))} to be "
1389
+ "different objects, but they were the same object."
1390
+ )
1391
+ assert elem1 is not elem2, msg
1392
+
1393
+
1394
+ def is_extension_array_dtype_and_needs_i8_conversion(
1395
+ left_dtype: DtypeObj, right_dtype: DtypeObj
1396
+ ) -> bool:
1397
+ """
1398
+ Checks that we have the combination of an ExtensionArray dtype and
1399
+ a dtype that should be converted to int64
1400
+
1401
+ Returns
1402
+ -------
1403
+ bool
1404
+
1405
+ Related to issue #37609
1406
+ """
1407
+ return isinstance(left_dtype, ExtensionDtype) and needs_i8_conversion(right_dtype)
1408
+
1409
+
1410
+ def assert_indexing_slices_equivalent(ser: Series, l_slc: slice, i_slc: slice) -> None:
1411
+ """
1412
+ Check that ser.iloc[i_slc] matches ser.loc[l_slc] and, if applicable,
1413
+ ser[l_slc].
1414
+ """
1415
+ expected = ser.iloc[i_slc]
1416
+
1417
+ assert_series_equal(ser.loc[l_slc], expected)
1418
+
1419
+ if not is_integer_dtype(ser.index):
1420
+ # For integer indices, .loc and plain getitem are position-based.
1421
+ assert_series_equal(ser[l_slc], expected)
1422
+
1423
+
1424
+ def assert_metadata_equivalent(
1425
+ left: DataFrame | Series, right: DataFrame | Series | None = None
1426
+ ) -> None:
1427
+ """
1428
+ Check that ._metadata attributes are equivalent.
1429
+ """
1430
+ for attr in left._metadata:
1431
+ val = getattr(left, attr, None)
1432
+ if right is None:
1433
+ assert val is None
1434
+ else:
1435
+ assert val == getattr(right, attr, None)
venv/lib/python3.10/site-packages/pandas/_testing/compat.py ADDED
@@ -0,0 +1,29 @@
1
+ """
2
+ Helpers for sharing tests between DataFrame/Series
3
+ """
4
+ from __future__ import annotations
5
+
6
+ from typing import TYPE_CHECKING
7
+
8
+ from pandas import DataFrame
9
+
10
+ if TYPE_CHECKING:
11
+ from pandas._typing import DtypeObj
12
+
13
+
14
+ def get_dtype(obj) -> DtypeObj:
15
+ if isinstance(obj, DataFrame):
16
+ # Note: we are assuming only one column
17
+ return obj.dtypes.iat[0]
18
+ else:
19
+ return obj.dtype
20
+
21
+
22
+ def get_obj(df: DataFrame, klass):
23
+ """
24
+ For sharing tests using frame_or_series, either return the DataFrame
25
+ unchanged or return its first column as a Series.
26
+ """
27
+ if klass is DataFrame:
28
+ return df
29
+ return df._ixs(0, axis=1)
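These two helpers let a parametrized test run against either a DataFrame or its first column. A usage sketch (assumed, not from the source):

```python
import pandas as pd
from pandas._testing.compat import get_dtype, get_obj

df = pd.DataFrame({"a": [1, 2, 3]})

# Resolve the object under test for a frame_or_series-style fixture.
ser = get_obj(df, pd.Series)       # first column as a Series
same = get_obj(df, pd.DataFrame)   # the frame itself, unchanged

# get_dtype gives a uniform dtype accessor for both shapes.
assert get_dtype(df) == get_dtype(ser)
```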
venv/lib/python3.10/site-packages/pandas/_testing/contexts.py ADDED
@@ -0,0 +1,257 @@
+ from __future__ import annotations
+
+ from contextlib import contextmanager
+ import os
+ from pathlib import Path
+ import tempfile
+ from typing import (
+     IO,
+     TYPE_CHECKING,
+     Any,
+ )
+ import uuid
+
+ from pandas._config import using_copy_on_write
+
+ from pandas.compat import PYPY
+ from pandas.errors import ChainedAssignmentError
+
+ from pandas import set_option
+
+ from pandas.io.common import get_handle
+
+ if TYPE_CHECKING:
+     from collections.abc import Generator
+
+     from pandas._typing import (
+         BaseBuffer,
+         CompressionOptions,
+         FilePath,
+     )
+
+
+ @contextmanager
+ def decompress_file(
+     path: FilePath | BaseBuffer, compression: CompressionOptions
+ ) -> Generator[IO[bytes], None, None]:
+     """
+     Open a compressed file and return a file object.
+
+     Parameters
+     ----------
+     path : str
+         The path where the file is read from.
+
+     compression : {'gzip', 'bz2', 'zip', 'xz', 'zstd', None}
+         Name of the decompression to use
+
+     Returns
+     -------
+     file object
+     """
+     with get_handle(path, "rb", compression=compression, is_text=False) as handle:
+         yield handle.handle
+
+
+ @contextmanager
+ def set_timezone(tz: str) -> Generator[None, None, None]:
+     """
+     Context manager for temporarily setting a timezone.
+
+     Parameters
+     ----------
+     tz : str
+         A string representing a valid timezone.
+
+     Examples
+     --------
+     >>> from datetime import datetime
+     >>> from dateutil.tz import tzlocal
+     >>> tzlocal().tzname(datetime(2021, 1, 1))  # doctest: +SKIP
+     'IST'
+
+     >>> with set_timezone('US/Eastern'):
+     ...     tzlocal().tzname(datetime(2021, 1, 1))
+     ...
+     'EST'
+     """
+     import time
+
+     def setTZ(tz) -> None:
+         if tz is None:
+             try:
+                 del os.environ["TZ"]
+             except KeyError:
+                 pass
+         else:
+             os.environ["TZ"] = tz
+         time.tzset()
+
+     orig_tz = os.environ.get("TZ")
+     setTZ(tz)
+     try:
+         yield
+     finally:
+         setTZ(orig_tz)
+
+
+ @contextmanager
+ def ensure_clean(
+     filename=None, return_filelike: bool = False, **kwargs: Any
+ ) -> Generator[Any, None, None]:
+     """
+     Gets a temporary path and agrees to remove on close.
+
+     This implementation does not use tempfile.mkstemp to avoid having a file handle.
+     If the code using the returned path wants to delete the file itself, windows
+     requires that no program has a file handle to it.
+
+     Parameters
+     ----------
+     filename : str (optional)
+         suffix of the created file.
+     return_filelike : bool (default False)
+         if True, returns a file-like which is *always* cleaned. Necessary for
+         savefig and other functions which want to append extensions.
+     **kwargs
+         Additional keywords are passed to open().
+
+     """
+     folder = Path(tempfile.gettempdir())
+
+     if filename is None:
+         filename = ""
+     filename = str(uuid.uuid4()) + filename
+     path = folder / filename
+
+     path.touch()
+
+     handle_or_str: str | IO = str(path)
+     encoding = kwargs.pop("encoding", None)
+     if return_filelike:
+         kwargs.setdefault("mode", "w+b")
+         if encoding is None and "b" not in kwargs["mode"]:
+             encoding = "utf-8"
+         handle_or_str = open(path, encoding=encoding, **kwargs)
+
+     try:
+         yield handle_or_str
+     finally:
+         if not isinstance(handle_or_str, str):
+             handle_or_str.close()
+         if path.is_file():
+             path.unlink()
+
+
+ @contextmanager
+ def with_csv_dialect(name: str, **kwargs) -> Generator[None, None, None]:
+     """
+     Context manager to temporarily register a CSV dialect for parsing CSV.
+
+     Parameters
+     ----------
+     name : str
+         The name of the dialect.
+     kwargs : mapping
+         The parameters for the dialect.
+
+     Raises
+     ------
+     ValueError : the name of the dialect conflicts with a builtin one.
+
+     See Also
+     --------
+     csv : Python's CSV library.
+     """
+     import csv
+
+     _BUILTIN_DIALECTS = {"excel", "excel-tab", "unix"}
+
+     if name in _BUILTIN_DIALECTS:
+         raise ValueError("Cannot override builtin dialect.")
+
+     csv.register_dialect(name, **kwargs)
+     try:
+         yield
+     finally:
+         csv.unregister_dialect(name)
+
+
+ @contextmanager
+ def use_numexpr(use, min_elements=None) -> Generator[None, None, None]:
+     from pandas.core.computation import expressions as expr
+
+     if min_elements is None:
+         min_elements = expr._MIN_ELEMENTS
+
+     olduse = expr.USE_NUMEXPR
+     oldmin = expr._MIN_ELEMENTS
+     set_option("compute.use_numexpr", use)
+     expr._MIN_ELEMENTS = min_elements
+     try:
+         yield
+     finally:
+         expr._MIN_ELEMENTS = oldmin
+         set_option("compute.use_numexpr", olduse)
+
+
+ def raises_chained_assignment_error(warn=True, extra_warnings=(), extra_match=()):
+     from pandas._testing import assert_produces_warning
+
+     if not warn:
+         from contextlib import nullcontext
+
+         return nullcontext()
+
+     if PYPY and not extra_warnings:
+         from contextlib import nullcontext
+
+         return nullcontext()
+     elif PYPY and extra_warnings:
+         return assert_produces_warning(
+             extra_warnings,
+             match="|".join(extra_match),
+         )
+     else:
+         if using_copy_on_write():
+             warning = ChainedAssignmentError
+             match = (
+                 "A value is trying to be set on a copy of a DataFrame or Series "
+                 "through chained assignment"
+             )
+         else:
+             warning = FutureWarning  # type: ignore[assignment]
+             # TODO update match
+             match = "ChainedAssignmentError"
+         if extra_warnings:
+             warning = (warning, *extra_warnings)  # type: ignore[assignment]
+         return assert_produces_warning(
+             warning,
+             match="|".join((match, *extra_match)),
+         )
+
+
+ def assert_cow_warning(warn=True, match=None, **kwargs):
+     """
+     Assert that a warning is raised in the CoW warning mode.
+
+     Parameters
+     ----------
+     warn : bool, default True
+         By default, check that a warning is raised. Can be turned off by passing False.
+     match : str
+         The warning message to match against, if different from the default.
+     kwargs
+         Passed through to assert_produces_warning
+     """
+     from pandas._testing import assert_produces_warning
+
+     if not warn:
+         from contextlib import nullcontext
+
+         return nullcontext()
+
+     if not match:
+         match = "Setting a value on a view"
+
+     return assert_produces_warning(FutureWarning, match=match, **kwargs)
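`with_csv_dialect` follows the usual register/cleanup-around-`yield` pattern for context managers. A stdlib-only sketch of the same idea (`temp_dialect` is a hypothetical name, mirroring the helper above):

```python
import csv
from contextlib import contextmanager


@contextmanager
def temp_dialect(name, **kwargs):
    # Refuse to shadow Python's builtin dialects, then register the
    # dialect only for the duration of the with-block.
    if name in {"excel", "excel-tab", "unix"}:
        raise ValueError("Cannot override builtin dialect.")
    csv.register_dialect(name, **kwargs)
    try:
        yield
    finally:
        csv.unregister_dialect(name)


with temp_dialect("pipes", delimiter="|"):
    rows = list(csv.reader(["a|b|c"], dialect="pipes"))

# The dialect is unregistered once the block exits, even on error
registered_after = "pipes" in csv.list_dialects()
```

The `try`/`finally` around `yield` is what guarantees cleanup runs even when the body of the `with` block raises.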
venv/lib/python3.10/site-packages/pandas/errors/__init__.py ADDED
@@ -0,0 +1,850 @@
1
+ """
2
+ Expose public exceptions & warnings
3
+ """
4
+ from __future__ import annotations
5
+
6
+ import ctypes
7
+
8
+ from pandas._config.config import OptionError
9
+
10
+ from pandas._libs.tslibs import (
11
+ OutOfBoundsDatetime,
12
+ OutOfBoundsTimedelta,
13
+ )
14
+
15
+ from pandas.util.version import InvalidVersion
16
+
17
+
18
+ class IntCastingNaNError(ValueError):
19
+ """
20
+ Exception raised when converting (``astype``) an array with NaN to an integer type.
21
+
22
+ Examples
23
+ --------
24
+ >>> pd.DataFrame(np.array([[1, np.nan], [2, 3]]), dtype="i8")
25
+ Traceback (most recent call last):
26
+ IntCastingNaNError: Cannot convert non-finite values (NA or inf) to integer
27
+ """
28
+
29
+
30
+ class NullFrequencyError(ValueError):
31
+ """
32
+ Exception raised when a ``freq`` cannot be null.
33
+
34
+ Particularly ``DatetimeIndex.shift``, ``TimedeltaIndex.shift``,
35
+ ``PeriodIndex.shift``.
36
+
37
+ Examples
38
+ --------
39
+ >>> df = pd.DatetimeIndex(["2011-01-01 10:00", "2011-01-01"], freq=None)
40
+ >>> df.shift(2)
41
+ Traceback (most recent call last):
42
+ NullFrequencyError: Cannot shift with no freq
43
+ """
44
+
45
+
46
+ class PerformanceWarning(Warning):
47
+ """
48
+ Warning raised when there is a possible performance impact.
49
+
50
+ Examples
51
+ --------
52
+ >>> df = pd.DataFrame({"jim": [0, 0, 1, 1],
53
+ ... "joe": ["x", "x", "z", "y"],
54
+ ... "jolie": [1, 2, 3, 4]})
55
+ >>> df = df.set_index(["jim", "joe"])
56
+ >>> df
57
+ jolie
58
+ jim joe
59
+ 0 x 1
60
+ x 2
61
+ 1 z 3
62
+ y 4
63
+ >>> df.loc[(1, 'z')] # doctest: +SKIP
64
+ # PerformanceWarning: indexing past lexsort depth may impact performance.
65
+ df.loc[(1, 'z')]
66
+ jolie
67
+ jim joe
68
+ 1 z 3
69
+ """
70
+
71
+
72
+ class UnsupportedFunctionCall(ValueError):
73
+ """
74
+ Exception raised when attempting to call a unsupported numpy function.
75
+
76
+ For example, ``np.cumsum(groupby_object)``.
77
+
78
+ Examples
79
+ --------
80
+ >>> df = pd.DataFrame({"A": [0, 0, 1, 1],
81
+ ... "B": ["x", "x", "z", "y"],
82
+ ... "C": [1, 2, 3, 4]}
83
+ ... )
84
+ >>> np.cumsum(df.groupby(["A"]))
85
+ Traceback (most recent call last):
86
+ UnsupportedFunctionCall: numpy operations are not valid with groupby.
87
+ Use .groupby(...).cumsum() instead
88
+ """
89
+
90
+
91
+ class UnsortedIndexError(KeyError):
92
+ """
93
+ Error raised when slicing a MultiIndex which has not been lexsorted.
94
+
95
+ Subclass of `KeyError`.
96
+
97
+ Examples
98
+ --------
99
+ >>> df = pd.DataFrame({"cat": [0, 0, 1, 1],
100
+ ... "color": ["white", "white", "brown", "black"],
101
+ ... "lives": [4, 4, 3, 7]},
102
+ ... )
103
+ >>> df = df.set_index(["cat", "color"])
104
+ >>> df
105
+ lives
106
+ cat color
107
+ 0 white 4
108
+ white 4
109
+ 1 brown 3
110
+ black 7
111
+ >>> df.loc[(0, "black"):(1, "white")]
112
+ Traceback (most recent call last):
113
+ UnsortedIndexError: 'Key length (2) was greater
114
+ than MultiIndex lexsort depth (1)'
115
+ """
116
+
117
+
118
+ class ParserError(ValueError):
119
+ """
120
+ Exception that is raised by an error encountered in parsing file contents.
121
+
122
+ This is a generic error raised for errors encountered when functions like
123
+ `read_csv` or `read_html` are parsing contents of a file.
124
+
125
+ See Also
126
+ --------
127
+ read_csv : Read CSV (comma-separated) file into a DataFrame.
128
+ read_html : Read HTML table into a DataFrame.
129
+
130
+ Examples
131
+ --------
132
+ >>> data = '''a,b,c
133
+ ... cat,foo,bar
134
+ ... dog,foo,"baz'''
135
+ >>> from io import StringIO
136
+ >>> pd.read_csv(StringIO(data), skipfooter=1, engine='python')
137
+ Traceback (most recent call last):
138
+ ParserError: ',' expected after '"'. Error could possibly be due
139
+ to parsing errors in the skipped footer rows
140
+ """
141
+
142
+
143
+ class DtypeWarning(Warning):
144
+ """
145
+ Warning raised when reading different dtypes in a column from a file.
146
+
147
+ Raised for a dtype incompatibility. This can happen whenever `read_csv`
148
+ or `read_table` encounter non-uniform dtypes in a column(s) of a given
149
+ CSV file.
150
+
151
+ See Also
152
+ --------
153
+ read_csv : Read CSV (comma-separated) file into a DataFrame.
154
+ read_table : Read general delimited file into a DataFrame.
155
+
156
+ Notes
157
+ -----
158
+ This warning is issued when dealing with larger files because the dtype
159
+ checking happens per chunk read.
160
+
161
+ Despite the warning, the CSV file is read with mixed types in a single
162
+ column which will be an object type. See the examples below to better
163
+ understand this issue.
164
+
165
+ Examples
166
+ --------
167
+ This example creates and reads a large CSV file with a column that contains
168
+ `int` and `str`.
169
+
170
+ >>> df = pd.DataFrame({'a': (['1'] * 100000 + ['X'] * 100000 +
171
+ ... ['1'] * 100000),
172
+ ... 'b': ['b'] * 300000}) # doctest: +SKIP
173
+ >>> df.to_csv('test.csv', index=False) # doctest: +SKIP
174
+ >>> df2 = pd.read_csv('test.csv') # doctest: +SKIP
175
+ ... # DtypeWarning: Columns (0) have mixed types
176
+
177
+ Important to notice that ``df2`` will contain both `str` and `int` for the
178
+ same input, '1'.
179
+
180
+ >>> df2.iloc[262140, 0] # doctest: +SKIP
181
+ '1'
182
+ >>> type(df2.iloc[262140, 0]) # doctest: +SKIP
183
+ <class 'str'>
184
+ >>> df2.iloc[262150, 0] # doctest: +SKIP
185
+ 1
186
+ >>> type(df2.iloc[262150, 0]) # doctest: +SKIP
187
+ <class 'int'>
188
+
189
+ One way to solve this issue is using the `dtype` parameter in the
190
+ `read_csv` and `read_table` functions to explicit the conversion:
191
+
192
+ >>> df2 = pd.read_csv('test.csv', sep=',', dtype={'a': str}) # doctest: +SKIP
193
+
194
+ No warning was issued.
195
+ """
196
+
197
+
198
+ class EmptyDataError(ValueError):
199
+ """
200
+ Exception raised in ``pd.read_csv`` when empty data or header is encountered.
201
+
202
+ Examples
203
+ --------
204
+ >>> from io import StringIO
205
+ >>> empty = StringIO()
206
+ >>> pd.read_csv(empty)
207
+ Traceback (most recent call last):
208
+ EmptyDataError: No columns to parse from file
209
+ """
210
+
211
+
212
+ class ParserWarning(Warning):
213
+ """
214
+ Warning raised when reading a file that doesn't use the default 'c' parser.
215
+
216
+ Raised by `pd.read_csv` and `pd.read_table` when it is necessary to change
217
+ parsers, generally from the default 'c' parser to 'python'.
218
+
219
+ It happens due to a lack of support or functionality for parsing a
220
+ particular attribute of a CSV file with the requested engine.
221
+
222
+ Currently, 'c' unsupported options include the following parameters:
223
+
224
+ 1. `sep` other than a single character (e.g. regex separators)
225
+ 2. `skipfooter` higher than 0
226
+ 3. `sep=None` with `delim_whitespace=False`
227
+
228
+ The warning can be avoided by adding `engine='python'` as a parameter in
229
+ `pd.read_csv` and `pd.read_table` methods.
230
+
231
+ See Also
232
+ --------
233
+ pd.read_csv : Read CSV (comma-separated) file into DataFrame.
234
+ pd.read_table : Read general delimited file into DataFrame.
235
+
236
+ Examples
237
+ --------
238
+ Using a `sep` in `pd.read_csv` other than a single character:
239
+
240
+ >>> import io
241
+ >>> csv = '''a;b;c
242
+ ... 1;1,8
243
+ ... 1;2,1'''
244
+ >>> df = pd.read_csv(io.StringIO(csv), sep='[;,]') # doctest: +SKIP
245
+ ... # ParserWarning: Falling back to the 'python' engine...
246
+
247
+ Adding `engine='python'` to `pd.read_csv` removes the Warning:
248
+
249
+ >>> df = pd.read_csv(io.StringIO(csv), sep='[;,]', engine='python')
250
+ """
251
+
252
+
253
+ class MergeError(ValueError):
254
+ """
255
+ Exception raised when merging data.
256
+
257
+ Subclass of ``ValueError``.
258
+
259
+ Examples
260
+ --------
261
+ >>> left = pd.DataFrame({"a": ["a", "b", "b", "d"],
262
+ ... "b": ["cat", "dog", "weasel", "horse"]},
263
+ ... index=range(4))
264
+ >>> right = pd.DataFrame({"a": ["a", "b", "c", "d"],
265
+ ... "c": ["meow", "bark", "chirp", "nay"]},
266
+ ... index=range(4)).set_index("a")
267
+ >>> left.join(right, on="a", validate="one_to_one",)
268
+ Traceback (most recent call last):
269
+ MergeError: Merge keys are not unique in left dataset; not a one-to-one merge
270
+ """
271
+
272
+
273
+ class AbstractMethodError(NotImplementedError):
274
+ """
275
+ Raise this error instead of NotImplementedError for abstract methods.
276
+
277
+ Examples
278
+ --------
279
+ >>> class Foo:
280
+ ... @classmethod
281
+ ... def classmethod(cls):
282
+ ... raise pd.errors.AbstractMethodError(cls, methodtype="classmethod")
283
+ ... def method(self):
284
+ ... raise pd.errors.AbstractMethodError(self)
285
+ >>> test = Foo.classmethod()
286
+ Traceback (most recent call last):
287
+ AbstractMethodError: This classmethod must be defined in the concrete class Foo
288
+
289
+ >>> test2 = Foo().method()
290
+ Traceback (most recent call last):
291
+ AbstractMethodError: This classmethod must be defined in the concrete class Foo
292
+ """
293
+
294
+ def __init__(self, class_instance, methodtype: str = "method") -> None:
295
+ types = {"method", "classmethod", "staticmethod", "property"}
296
+ if methodtype not in types:
297
+ raise ValueError(
298
+ f"methodtype must be one of {methodtype}, got {types} instead."
299
+ )
300
+ self.methodtype = methodtype
301
+ self.class_instance = class_instance
302
+
303
+ def __str__(self) -> str:
304
+ if self.methodtype == "classmethod":
305
+ name = self.class_instance.__name__
306
+ else:
307
+ name = type(self.class_instance).__name__
308
+ return f"This {self.methodtype} must be defined in the concrete class {name}"
309
+
310
+
311
+ class NumbaUtilError(Exception):
312
+ """
313
+ Error raised for unsupported Numba engine routines.
314
+
315
+ Examples
316
+ --------
317
+ >>> df = pd.DataFrame({"key": ["a", "a", "b", "b"], "data": [1, 2, 3, 4]},
318
+ ... columns=["key", "data"])
319
+ >>> def incorrect_function(x):
320
+ ... return sum(x) * 2.7
321
+ >>> df.groupby("key").agg(incorrect_function, engine="numba")
322
+ Traceback (most recent call last):
323
+ NumbaUtilError: The first 2 arguments to incorrect_function
324
+ must be ['values', 'index']
325
+ """
326
+
327
+
328
+ class DuplicateLabelError(ValueError):
329
+ """
330
+ Error raised when an operation would introduce duplicate labels.
331
+
332
+ Examples
333
+ --------
334
+ >>> s = pd.Series([0, 1, 2], index=['a', 'b', 'c']).set_flags(
335
+ ... allows_duplicate_labels=False
336
+ ... )
337
+ >>> s.reindex(['a', 'a', 'b'])
338
+ Traceback (most recent call last):
339
+ ...
340
+ DuplicateLabelError: Index has duplicates.
341
+ positions
342
+ label
343
+ a [0, 1]
344
+ """
345
+
346
+
347
+ class InvalidIndexError(Exception):
348
+ """
349
+ Exception raised when attempting to use an invalid index key.
350
+
351
+ Examples
352
+ --------
353
+ >>> idx = pd.MultiIndex.from_product([["x", "y"], [0, 1]])
354
+ >>> df = pd.DataFrame([[1, 1, 2, 2],
355
+ ... [3, 3, 4, 4]], columns=idx)
356
+ >>> df
357
+ x y
358
+ 0 1 0 1
359
+ 0 1 1 2 2
360
+ 1 3 3 4 4
361
+ >>> df[:, 0]
362
+ Traceback (most recent call last):
363
+ InvalidIndexError: (slice(None, None, None), 0)
364
+ """
365
+
366
+
367
+ class DataError(Exception):
368
+ """
369
+ Exceptionn raised when performing an operation on non-numerical data.
370
+
371
+ For example, calling ``ohlc`` on a non-numerical column or a function
372
+ on a rolling window.
373
+
374
+ Examples
375
+ --------
376
+ >>> ser = pd.Series(['a', 'b', 'c'])
377
+ >>> ser.rolling(2).sum()
378
+ Traceback (most recent call last):
379
+ DataError: No numeric types to aggregate
380
+ """
381
+
382
+
383
+ class SpecificationError(Exception):
384
+ """
385
+ Exception raised by ``agg`` when the functions are ill-specified.
386
+
387
+ The exception raised in two scenarios.
388
+
389
+ The first way is calling ``agg`` on a
390
+ Dataframe or Series using a nested renamer (dict-of-dict).
391
+
392
+ The second way is calling ``agg`` on a Dataframe with duplicated functions
393
+ names without assigning column name.
394
+
395
+ Examples
396
+ --------
397
+ >>> df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
398
+ ... 'B': range(5),
399
+ ... 'C': range(5)})
400
+ >>> df.groupby('A').B.agg({'foo': 'count'}) # doctest: +SKIP
401
+ ... # SpecificationError: nested renamer is not supported
402
+
403
+ >>> df.groupby('A').agg({'B': {'foo': ['sum', 'max']}}) # doctest: +SKIP
404
+ ... # SpecificationError: nested renamer is not supported
405
+
406
+ >>> df.groupby('A').agg(['min', 'min']) # doctest: +SKIP
407
+ ... # SpecificationError: nested renamer is not supported
408
+ """
409
+
410
+
411
+ class SettingWithCopyError(ValueError):
412
+ """
413
+ Exception raised when trying to set on a copied slice from a ``DataFrame``.
414
+
415
+ The ``mode.chained_assignment`` needs to be set to set to 'raise.' This can
416
+ happen unintentionally when chained indexing.
417
+
418
+ For more information on evaluation order,
419
+ see :ref:`the user guide<indexing.evaluation_order>`.
420
+
421
+ For more information on view vs. copy,
422
+ see :ref:`the user guide<indexing.view_versus_copy>`.
423
+
424
+ Examples
425
+ --------
426
+ >>> pd.options.mode.chained_assignment = 'raise'
427
+ >>> df = pd.DataFrame({'A': [1, 1, 1, 2, 2]}, columns=['A'])
428
+ >>> df.loc[0:3]['A'] = 'a' # doctest: +SKIP
429
+ ... # SettingWithCopyError: A value is trying to be set on a copy of a...
430
+ """
431
+
432
+
433
+ class SettingWithCopyWarning(Warning):
434
+ """
435
+ Warning raised when trying to set on a copied slice from a ``DataFrame``.
436
+
437
+ The ``mode.chained_assignment`` needs to be set to set to 'warn.'
438
+ 'Warn' is the default option. This can happen unintentionally when
439
+ chained indexing.
440
+
441
+ For more information on evaluation order,
442
+ see :ref:`the user guide<indexing.evaluation_order>`.
443
+
444
+ For more information on view vs. copy,
445
+ see :ref:`the user guide<indexing.view_versus_copy>`.
446
+
447
+ Examples
448
+ --------
449
+ >>> df = pd.DataFrame({'A': [1, 1, 1, 2, 2]}, columns=['A'])
450
+ >>> df.loc[0:3]['A'] = 'a' # doctest: +SKIP
451
+ ... # SettingWithCopyWarning: A value is trying to be set on a copy of a...
452
+ """
453
+
454
+
455
+ class ChainedAssignmentError(Warning):
456
+ """
457
+ Warning raised when trying to set using chained assignment.
458
+
459
+ When the ``mode.copy_on_write`` option is enabled, chained assignment can
460
+ never work. In such a situation, we are always setting into a temporary
461
+ object that is the result of an indexing operation (getitem), which under
462
+ Copy-on-Write always behaves as a copy. Thus, assigning through a chain
463
+ can never update the original Series or DataFrame.
464
+
465
+ For more information on view vs. copy,
466
+ see :ref:`the user guide<indexing.view_versus_copy>`.
467
+
468
+ Examples
469
+ --------
470
+ >>> pd.options.mode.copy_on_write = True
471
+ >>> df = pd.DataFrame({'A': [1, 1, 1, 2, 2]}, columns=['A'])
472
+ >>> df["A"][0:3] = 10 # doctest: +SKIP
473
+ ... # ChainedAssignmentError: ...
474
+ >>> pd.options.mode.copy_on_write = False
475
+ """
476
+
477
+
478
+ _chained_assignment_msg = (
479
+ "A value is trying to be set on a copy of a DataFrame or Series "
480
+ "through chained assignment.\n"
481
+ "When using the Copy-on-Write mode, such chained assignment never works "
482
+ "to update the original DataFrame or Series, because the intermediate "
483
+ "object on which we are setting values always behaves as a copy.\n\n"
484
+ "Try using '.loc[row_indexer, col_indexer] = value' instead, to perform "
485
+ "the assignment in a single step.\n\n"
486
+ "See the caveats in the documentation: "
487
+ "https://pandas.pydata.org/pandas-docs/stable/user_guide/"
488
+ "indexing.html#returning-a-view-versus-a-copy"
489
+ )
490
+
491
+
492
+ _chained_assignment_method_msg = (
493
+ "A value is trying to be set on a copy of a DataFrame or Series "
494
+ "through chained assignment using an inplace method.\n"
495
+ "When using the Copy-on-Write mode, such inplace method never works "
496
+ "to update the original DataFrame or Series, because the intermediate "
497
+ "object on which we are setting values always behaves as a copy.\n\n"
498
+ "For example, when doing 'df[col].method(value, inplace=True)', try "
499
+ "using 'df.method({col: value}, inplace=True)' instead, to perform "
500
+ "the operation inplace on the original object.\n\n"
501
+ )
502
+
503
+
504
+ _chained_assignment_warning_msg = (
505
+ "ChainedAssignmentError: behaviour will change in pandas 3.0!\n"
506
+ "You are setting values through chained assignment. Currently this works "
507
+ "in certain cases, but when using Copy-on-Write (which will become the "
508
+ "default behaviour in pandas 3.0) this will never work to update the "
509
+ "original DataFrame or Series, because the intermediate object on which "
510
+ "we are setting values will behave as a copy.\n"
511
+ "A typical example is when you are setting values in a column of a "
512
+ "DataFrame, like:\n\n"
513
+ 'df["col"][row_indexer] = value\n\n'
514
+ 'Use `df.loc[row_indexer, "col"] = values` instead, to perform the '
515
+ "assignment in a single step and ensure this keeps updating the original `df`.\n\n"
516
+ "See the caveats in the documentation: "
517
+ "https://pandas.pydata.org/pandas-docs/stable/user_guide/"
518
+ "indexing.html#returning-a-view-versus-a-copy\n"
519
+ )
520
+
521
+
522
+ _chained_assignment_warning_method_msg = (
523
+ "A value is trying to be set on a copy of a DataFrame or Series "
524
+ "through chained assignment using an inplace method.\n"
525
+ "The behavior will change in pandas 3.0. This inplace method will "
526
+ "never work because the intermediate object on which we are setting "
527
+ "values always behaves as a copy.\n\n"
528
+ "For example, when doing 'df[col].method(value, inplace=True)', try "
529
+ "using 'df.method({col: value}, inplace=True)' or "
530
+ "df[col] = df[col].method(value) instead, to perform "
531
+ "the operation inplace on the original object.\n\n"
532
+ )
533
+
534
+
535
+ def _check_cacher(obj):
536
+ # This is a mess, selection paths that return a view set the _cacher attribute
537
+ # on the Series; most of them also set _item_cache which adds 1 to our relevant
538
+ # reference count, but iloc does not, so we have to check if we are actually
539
+ # in the item cache
540
+ if hasattr(obj, "_cacher"):
541
+ parent = obj._cacher[1]()
542
+ # parent could be dead
543
+ if parent is None:
544
+ return False
545
+ if hasattr(parent, "_item_cache"):
546
+ if obj._cacher[0] in parent._item_cache:
547
+ # Check if we are actually the item from item_cache, iloc creates a
548
+ # new object
549
+ return obj is parent._item_cache[obj._cacher[0]]
550
+ return False
551
+
552
+
553
+ class NumExprClobberingError(NameError):
554
+ """
555
+ Exception raised when trying to use a built-in numexpr name as a variable name.
556
+
557
+ ``eval`` or ``query`` will throw the error if the engine is set
558
+ to 'numexpr'. 'numexpr' is the default engine value for these methods if the
559
+ numexpr package is installed.
560
+
561
+ Examples
562
+ --------
563
+ >>> df = pd.DataFrame({'abs': [1, 1, 1]})
564
+ >>> df.query("abs > 2") # doctest: +SKIP
565
+ ... # NumExprClobberingError: Variables in expression "(abs) > (2)" overlap...
566
+ >>> sin, a = 1, 2
567
+ >>> pd.eval("sin + a", engine='numexpr') # doctest: +SKIP
568
+ ... # NumExprClobberingError: Variables in expression "(sin) + (a)" overlap...
569
+ """
570
+
571
+
572
+ class UndefinedVariableError(NameError):
573
+ """
574
+ Exception raised by ``query`` or ``eval`` when using an undefined variable name.
575
+
576
+ It will also specify whether the undefined variable is local or not.
577
+
578
+ Examples
579
+ --------
580
+ >>> df = pd.DataFrame({'A': [1, 1, 1]})
581
+ >>> df.query("A > x") # doctest: +SKIP
582
+ ... # UndefinedVariableError: name 'x' is not defined
583
+ >>> df.query("A > @y") # doctest: +SKIP
584
+ ... # UndefinedVariableError: local variable 'y' is not defined
585
+ >>> pd.eval('x + 1') # doctest: +SKIP
586
+ ... # UndefinedVariableError: name 'x' is not defined
587
+ """
588
+
589
+ def __init__(self, name: str, is_local: bool | None = None) -> None:
590
+ base_msg = f"{repr(name)} is not defined"
591
+ if is_local:
592
+ msg = f"local variable {base_msg}"
593
+ else:
594
+ msg = f"name {base_msg}"
595
+ super().__init__(msg)
596
+
597
+
598
+ class IndexingError(Exception):
599
+ """
600
+ Exception is raised when trying to index and there is a mismatch in dimensions.
601
+
602
+ Examples
603
+ --------
604
+ >>> df = pd.DataFrame({'A': [1, 1, 1]})
605
+ >>> df.loc[..., ..., 'A'] # doctest: +SKIP
606
+ ... # IndexingError: indexer may only contain one '...' entry
607
+ >>> df = pd.DataFrame({'A': [1, 1, 1]})
608
+ >>> df.loc[1, ..., ...] # doctest: +SKIP
609
+ ... # IndexingError: Too many indexers
610
+ >>> df[pd.Series([True], dtype=bool)] # doctest: +SKIP
611
+ ... # IndexingError: Unalignable boolean Series provided as indexer...
612
+ >>> s = pd.Series(range(2),
613
+ ... index = pd.MultiIndex.from_product([["a", "b"], ["c"]]))
614
+ >>> s.loc["a", "c", "d"] # doctest: +SKIP
615
+ ... # IndexingError: Too many indexers
616
+ """
617
+
618
+
619
+ class PyperclipException(RuntimeError):
620
+ """
621
+ Exception raised when clipboard functionality is unsupported.
622
+
623
+ Raised by ``to_clipboard()`` and ``read_clipboard()``.
624
+ """
625
+
626
+
627
+ class PyperclipWindowsException(PyperclipException):
628
+ """
629
+ Exception raised when clipboard functionality is unsupported by Windows.
630
+
631
+ Access to the clipboard handle would be denied due to some other
632
+ window process is accessing it.
633
+ """
634
+
635
+ def __init__(self, message: str) -> None:
636
+ # attr only exists on Windows, so typing fails on other platforms
637
+ message += f" ({ctypes.WinError()})" # type: ignore[attr-defined]
638
+ super().__init__(message)
639
+
640
+
641
+ class CSSWarning(UserWarning):
642
+ """
643
+ Warning is raised when converting css styling fails.
644
+
645
+ This can be due to the styling not having an equivalent value or because the
646
+ styling isn't properly formatted.
647
+
648
+ Examples
649
+ --------
650
+ >>> df = pd.DataFrame({'A': [1, 1, 1]})
651
+ >>> df.style.applymap(
652
+ ... lambda x: 'background-color: blueGreenRed;'
653
+ ... ).to_excel('styled.xlsx') # doctest: +SKIP
654
+ CSSWarning: Unhandled color format: 'blueGreenRed'
655
+ >>> df.style.applymap(
656
+ ... lambda x: 'border: 1px solid red red;'
657
+ ... ).to_excel('styled.xlsx') # doctest: +SKIP
658
+ CSSWarning: Unhandled color format: 'blueGreenRed'
659
+ """
+
+
+class PossibleDataLossError(Exception):
+    """
+    Exception raised when trying to open an HDFStore file that is already open.
+
+    Examples
+    --------
+    >>> store = pd.HDFStore('my-store', 'a')  # doctest: +SKIP
+    >>> store.open("w")  # doctest: +SKIP
+    ... # PossibleDataLossError: Re-opening the file [my-store] with mode [a]...
+    """
+
+
+class ClosedFileError(Exception):
+    """
+    Exception is raised when trying to perform an operation on a closed HDFStore file.
+
+    Examples
+    --------
+    >>> store = pd.HDFStore('my-store', 'a')  # doctest: +SKIP
+    >>> store.close()  # doctest: +SKIP
+    >>> store.keys()  # doctest: +SKIP
+    ... # ClosedFileError: my-store file is not open!
+    """
+
+
+class IncompatibilityWarning(Warning):
+    """
+    Warning raised when trying to use where criteria on an incompatible HDF5 file.
+    """
+
+
+class AttributeConflictWarning(Warning):
+    """
+    Warning raised when index attributes conflict when using HDFStore.
+
+    Occurs when attempting to append an index with a different
+    name than the existing index on an HDFStore or attempting to append an index with a
+    different frequency than the existing index on an HDFStore.
+
+    Examples
+    --------
+    >>> idx1 = pd.Index(['a', 'b'], name='name1')
+    >>> df1 = pd.DataFrame([[1, 2], [3, 4]], index=idx1)
+    >>> df1.to_hdf('file', 'data', 'w', append=True)  # doctest: +SKIP
+    >>> idx2 = pd.Index(['c', 'd'], name='name2')
+    >>> df2 = pd.DataFrame([[5, 6], [7, 8]], index=idx2)
+    >>> df2.to_hdf('file', 'data', 'a', append=True)  # doctest: +SKIP
+    AttributeConflictWarning: the [index_name] attribute of the existing index is
+    [name1] which conflicts with the new [name2]...
+    """
+
+
+class DatabaseError(OSError):
+    """
+    Error is raised when executing sql with bad syntax or sql that throws an error.
+
+    Examples
+    --------
+    >>> from sqlite3 import connect
+    >>> conn = connect(':memory:')
+    >>> pd.read_sql('select * test', conn)  # doctest: +SKIP
+    ... # DatabaseError: Execution failed on sql 'test': near "test": syntax error
+    """
+
+
+class PossiblePrecisionLoss(Warning):
+    """
+    Warning raised by to_stata on a column with a value outside the int64 range.
+
+    When a column value falls outside or at the limit of the int64 range, the
+    column is converted to a float64 dtype, which can lose precision.
+
+    Examples
+    --------
+    >>> df = pd.DataFrame({"s": pd.Series([1, 2**53], dtype=np.int64)})
+    >>> df.to_stata('test')  # doctest: +SKIP
+    ... # PossiblePrecisionLoss: Column converted from int64 to float64...
+    """
+
+
+class ValueLabelTypeMismatch(Warning):
+    """
+    Warning raised by to_stata on a category column that contains non-string values.
+
+    Examples
+    --------
+    >>> df = pd.DataFrame({"categories": pd.Series(["a", 2], dtype="category")})
+    >>> df.to_stata('test')  # doctest: +SKIP
+    ... # ValueLabelTypeMismatch: Stata value labels (pandas categories) must be str...
+    """
+
+
+class InvalidColumnName(Warning):
+    """
+    Warning raised by to_stata when a column contains a non-valid Stata name.
+
+    Because the column name is not a valid Stata variable name, it needs to be
+    converted.
+
+    Examples
+    --------
+    >>> df = pd.DataFrame({"0categories": pd.Series([2, 2])})
+    >>> df.to_stata('test')  # doctest: +SKIP
+    ... # InvalidColumnName: Not all pandas column names were valid Stata variable...
+    """
+
+
+class CategoricalConversionWarning(Warning):
+    """
+    Warning is raised when reading a partially labeled Stata file using an iterator.
+
+    Examples
+    --------
+    >>> from pandas.io.stata import StataReader
+    >>> with StataReader('dta_file', chunksize=2) as reader:  # doctest: +SKIP
+    ...     for i, block in enumerate(reader):
+    ...         print(i, block)
+    ... # CategoricalConversionWarning: One or more series with value labels...
+    """
+
+
+class LossySetitemError(Exception):
+    """
+    Raised when trying to do a __setitem__ on an np.ndarray that is not lossless.
+
+    Notes
+    -----
+    This is an internal error.
+    """
+
+
+class NoBufferPresent(Exception):
+    """
+    Exception is raised in _get_data_buffer to signal that there is no requested buffer.
+    """
+
+
+class InvalidComparison(Exception):
+    """
+    Exception is raised by _validate_comparison_value to indicate an invalid comparison.
+
+    Notes
+    -----
+    This is an internal error.
+    """
+
+
+__all__ = [
+    "AbstractMethodError",
+    "AttributeConflictWarning",
+    "CategoricalConversionWarning",
+    "ClosedFileError",
+    "CSSWarning",
+    "DatabaseError",
+    "DataError",
+    "DtypeWarning",
+    "DuplicateLabelError",
+    "EmptyDataError",
+    "IncompatibilityWarning",
+    "IntCastingNaNError",
+    "InvalidColumnName",
+    "InvalidComparison",
+    "InvalidIndexError",
+    "InvalidVersion",
+    "IndexingError",
+    "LossySetitemError",
+    "MergeError",
+    "NoBufferPresent",
+    "NullFrequencyError",
+    "NumbaUtilError",
+    "NumExprClobberingError",
+    "OptionError",
+    "OutOfBoundsDatetime",
+    "OutOfBoundsTimedelta",
+    "ParserError",
+    "ParserWarning",
+    "PerformanceWarning",
+    "PossibleDataLossError",
+    "PossiblePrecisionLoss",
+    "PyperclipException",
+    "PyperclipWindowsException",
+    "SettingWithCopyError",
+    "SettingWithCopyWarning",
+    "SpecificationError",
+    "UndefinedVariableError",
+    "UnsortedIndexError",
+    "UnsupportedFunctionCall",
+    "ValueLabelTypeMismatch",
+]
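The classes above are plain `Warning` descendants, so the standard library's `warnings` filter machinery applies to them unchanged. A minimal sketch of escalating one category to a hard error, using a locally defined stand-in class (so it runs without pandas installed) that mirrors `pandas.errors.PossiblePrecisionLoss`:

```python
import warnings


# Stand-in mirroring pandas.errors.PossiblePrecisionLoss; in real code you
# would import the class from pandas.errors instead of defining it.
class PossiblePrecisionLoss(Warning):
    pass


def convert_column():
    # A library warns like this before silently converting int64 -> float64.
    warnings.warn("Column converted from int64 to float64", PossiblePrecisionLoss)


# Escalate just this warning category to an exception so the data issue
# fails loudly instead of passing silently.
with warnings.catch_warnings():
    warnings.simplefilter("error", category=PossiblePrecisionLoss)
    try:
        convert_column()
        raised = False
    except PossiblePrecisionLoss:
        raised = True

print(raised)  # → True
```

The same pattern works with `"ignore"` to suppress a single category while leaving all other warnings visible.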
venv/lib/python3.10/site-packages/pandas/errors/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (29 kB).
 
venv/lib/python3.10/site-packages/pandas/plotting/__init__.py ADDED
@@ -0,0 +1,98 @@
+"""
+Plotting public API.
+
+Authors of third-party plotting backends should implement a module with a
+public ``plot(data, kind, **kwargs)``. The parameter `data` will contain
+the data structure and can be a `Series` or a `DataFrame`. For example,
+for ``df.plot()`` the parameter `data` will contain the DataFrame `df`.
+In some cases, the data structure is transformed before being sent to
+the backend (see PlotAccessor.__call__ in pandas/plotting/_core.py for
+the exact transformations).
+
+The parameter `kind` will be one of:
+
+- line
+- bar
+- barh
+- box
+- hist
+- kde
+- area
+- pie
+- scatter
+- hexbin
+
+See the pandas API reference for documentation on each kind of plot.
+
+Any other keyword argument is currently assumed to be backend specific,
+but some parameters may be unified and added to the signature in the
+future (e.g. `title` which should be useful for any backend).
+
+Currently, all the Matplotlib functions in pandas are accessed through
+the selected backend. For example, `pandas.plotting.boxplot` (equivalent
+to `DataFrame.boxplot`) is also accessed in the selected backend. This
+is expected to change, and the exact API is under discussion. But with
+the current version, backends are expected to implement the following functions:
+
+- plot (described above, used for `Series.plot` and `DataFrame.plot`)
+- hist_series and hist_frame (for `Series.hist` and `DataFrame.hist`)
+- boxplot (`pandas.plotting.boxplot(df)` equivalent to `DataFrame.boxplot`)
+- boxplot_frame and boxplot_frame_groupby
+- register and deregister (register converters for the tick formats)
+- Plots not called as `Series` and `DataFrame` methods:
+  - table
+  - andrews_curves
+  - autocorrelation_plot
+  - bootstrap_plot
+  - lag_plot
+  - parallel_coordinates
+  - radviz
+  - scatter_matrix
+
+Use the code in pandas/plotting/_matplotlib.py and
+https://github.com/pyviz/hvplot as a reference on how to write a backend.
+
+For the discussion about the API see
+https://github.com/pandas-dev/pandas/issues/26747.
+"""
+from pandas.plotting._core import (
+    PlotAccessor,
+    boxplot,
+    boxplot_frame,
+    boxplot_frame_groupby,
+    hist_frame,
+    hist_series,
+)
+from pandas.plotting._misc import (
+    andrews_curves,
+    autocorrelation_plot,
+    bootstrap_plot,
+    deregister as deregister_matplotlib_converters,
+    lag_plot,
+    parallel_coordinates,
+    plot_params,
+    radviz,
+    register as register_matplotlib_converters,
+    scatter_matrix,
+    table,
+)
+
+__all__ = [
+    "PlotAccessor",
+    "boxplot",
+    "boxplot_frame",
+    "boxplot_frame_groupby",
+    "hist_frame",
+    "hist_series",
+    "scatter_matrix",
+    "radviz",
+    "andrews_curves",
+    "bootstrap_plot",
+    "parallel_coordinates",
+    "lag_plot",
+    "autocorrelation_plot",
+    "table",
+    "plot_params",
+    "register_matplotlib_converters",
+    "deregister_matplotlib_converters",
+]
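The module docstring above specifies the contract a third-party backend must satisfy: a module exposing `plot(data, kind, **kwargs)`. A toy backend body sketching that entry point is shown below; everything except the required function name and signature is hypothetical, and a real backend would render with its own plotting library rather than return strings:

```python
def plot(data, kind, **kwargs):
    """Entry point required of a pandas plotting backend (simplified sketch)."""
    # Dispatch on the plot kind; only two of the documented kinds are
    # handled here, purely for illustration.
    handlers = {
        "line": lambda d: f"line plot of {len(d)} points",
        "bar": lambda d: f"bar plot of {len(d)} points",
    }
    try:
        return handlers[kind](data)
    except KeyError:
        raise ValueError(f"plot kind {kind!r} not supported by this backend")


print(plot([1, 2, 3], "line"))  # → line plot of 3 points
```

With a real backend, `pd.set_option("plotting.backend", "<module name>")` would route every `df.plot(kind=...)` call through this function.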
venv/lib/python3.10/site-packages/pandas/plotting/__pycache__/_misc.cpython-310.pyc ADDED
Binary file (21.2 kB).
 
venv/lib/python3.10/site-packages/pandas/plotting/_core.py ADDED
@@ -0,0 +1,1946 @@
+from __future__ import annotations
+
+import importlib
+from typing import (
+    TYPE_CHECKING,
+    Callable,
+    Literal,
+)
+
+from pandas._config import get_option
+
+from pandas.util._decorators import (
+    Appender,
+    Substitution,
+)
+
+from pandas.core.dtypes.common import (
+    is_integer,
+    is_list_like,
+)
+from pandas.core.dtypes.generic import (
+    ABCDataFrame,
+    ABCSeries,
+)
+
+from pandas.core.base import PandasObject
+
+if TYPE_CHECKING:
+    from collections.abc import (
+        Hashable,
+        Sequence,
+    )
+    import types
+
+    from matplotlib.axes import Axes
+    import numpy as np
+
+    from pandas._typing import IndexLabel
+
+    from pandas import (
+        DataFrame,
+        Series,
+    )
+    from pandas.core.groupby.generic import DataFrameGroupBy
+
+
+def hist_series(
+    self: Series,
+    by=None,
+    ax=None,
+    grid: bool = True,
+    xlabelsize: int | None = None,
+    xrot: float | None = None,
+    ylabelsize: int | None = None,
+    yrot: float | None = None,
+    figsize: tuple[int, int] | None = None,
+    bins: int | Sequence[int] = 10,
+    backend: str | None = None,
+    legend: bool = False,
+    **kwargs,
+):
+    """
+    Draw histogram of the input series using matplotlib.
+
+    Parameters
+    ----------
+    by : object, optional
+        If passed, then used to form histograms for separate groups.
+    ax : matplotlib axis object
+        If not passed, uses gca().
+    grid : bool, default True
+        Whether to show axis grid lines.
+    xlabelsize : int, default None
+        If specified changes the x-axis label size.
+    xrot : float, default None
+        Rotation of x axis labels.
+    ylabelsize : int, default None
+        If specified changes the y-axis label size.
+    yrot : float, default None
+        Rotation of y axis labels.
+    figsize : tuple, default None
+        Figure size in inches by default.
+    bins : int or sequence, default 10
+        Number of histogram bins to be used. If an integer is given, bins + 1
+        bin edges are calculated and returned. If bins is a sequence, gives
+        bin edges, including left edge of first bin and right edge of last
+        bin. In this case, bins is returned unmodified.
+    backend : str, default None
+        Backend to use instead of the backend specified in the option
+        ``plotting.backend``. For instance, 'matplotlib'. Alternatively, to
+        specify the ``plotting.backend`` for the whole session, set
+        ``pd.options.plotting.backend``.
+    legend : bool, default False
+        Whether to show the legend.
+
+    **kwargs
+        To be passed to the actual plotting function.
+
+    Returns
+    -------
+    matplotlib.AxesSubplot
+        A histogram plot.
+
+    See Also
+    --------
+    matplotlib.axes.Axes.hist : Plot a histogram using matplotlib.
+
+    Examples
+    --------
+    For Series:
+
+    .. plot::
+        :context: close-figs
+
+        >>> lst = ['a', 'a', 'a', 'b', 'b', 'b']
+        >>> ser = pd.Series([1, 2, 2, 4, 6, 6], index=lst)
+        >>> hist = ser.hist()
+
+    For Groupby:
+
+    .. plot::
+        :context: close-figs
+
+        >>> lst = ['a', 'a', 'a', 'b', 'b', 'b']
+        >>> ser = pd.Series([1, 2, 2, 4, 6, 6], index=lst)
+        >>> hist = ser.groupby(level=0).hist()
+    """
+    plot_backend = _get_plot_backend(backend)
+    return plot_backend.hist_series(
+        self,
+        by=by,
+        ax=ax,
+        grid=grid,
+        xlabelsize=xlabelsize,
+        xrot=xrot,
+        ylabelsize=ylabelsize,
+        yrot=yrot,
+        figsize=figsize,
+        bins=bins,
+        legend=legend,
+        **kwargs,
+    )
+
+
+def hist_frame(
+    data: DataFrame,
+    column: IndexLabel | None = None,
+    by=None,
+    grid: bool = True,
+    xlabelsize: int | None = None,
+    xrot: float | None = None,
+    ylabelsize: int | None = None,
+    yrot: float | None = None,
+    ax=None,
+    sharex: bool = False,
+    sharey: bool = False,
+    figsize: tuple[int, int] | None = None,
+    layout: tuple[int, int] | None = None,
+    bins: int | Sequence[int] = 10,
+    backend: str | None = None,
+    legend: bool = False,
+    **kwargs,
+):
+    """
+    Make a histogram of the DataFrame's columns.
+
+    A `histogram`_ is a representation of the distribution of data.
+    This function calls :meth:`matplotlib.pyplot.hist`, on each series in
+    the DataFrame, resulting in one histogram per column.
+
+    .. _histogram: https://en.wikipedia.org/wiki/Histogram
+
+    Parameters
+    ----------
+    data : DataFrame
+        The pandas object holding the data.
+    column : str or sequence, optional
+        If passed, will be used to limit data to a subset of columns.
+    by : object, optional
+        If passed, then used to form histograms for separate groups.
+    grid : bool, default True
+        Whether to show axis grid lines.
+    xlabelsize : int, default None
+        If specified changes the x-axis label size.
+    xrot : float, default None
+        Rotation of x axis labels. For example, a value of 90 displays the
+        x labels rotated 90 degrees clockwise.
+    ylabelsize : int, default None
+        If specified changes the y-axis label size.
+    yrot : float, default None
+        Rotation of y axis labels. For example, a value of 90 displays the
+        y labels rotated 90 degrees clockwise.
+    ax : Matplotlib axes object, default None
+        The axes to plot the histogram on.
+    sharex : bool, default True if ax is None else False
+        In case subplots=True, share x axis and set some x axis labels to
+        invisible; defaults to True if ax is None otherwise False if an ax
+        is passed in.
+        Note that passing in both an ax and sharex=True will alter all x axis
+        labels for all subplots in a figure.
+    sharey : bool, default False
+        In case subplots=True, share y axis and set some y axis labels to
+        invisible.
+    figsize : tuple, optional
+        The size in inches of the figure to create. Uses the value in
+        `matplotlib.rcParams` by default.
+    layout : tuple, optional
+        Tuple of (rows, columns) for the layout of the histograms.
+    bins : int or sequence, default 10
+        Number of histogram bins to be used. If an integer is given, bins + 1
+        bin edges are calculated and returned. If bins is a sequence, gives
+        bin edges, including left edge of first bin and right edge of last
+        bin. In this case, bins is returned unmodified.
+
+    backend : str, default None
+        Backend to use instead of the backend specified in the option
+        ``plotting.backend``. For instance, 'matplotlib'. Alternatively, to
+        specify the ``plotting.backend`` for the whole session, set
+        ``pd.options.plotting.backend``.
+
+    legend : bool, default False
+        Whether to show the legend.
+
+    **kwargs
+        All other plotting keyword arguments to be passed to
+        :meth:`matplotlib.pyplot.hist`.
+
+    Returns
+    -------
+    matplotlib.AxesSubplot or numpy.ndarray of them
+
+    See Also
+    --------
+    matplotlib.pyplot.hist : Plot a histogram using matplotlib.
+
+    Examples
+    --------
+    This example draws a histogram based on the length and width of
+    some animals, displayed in three bins
+
+    .. plot::
+        :context: close-figs
+
+        >>> data = {'length': [1.5, 0.5, 1.2, 0.9, 3],
+        ...         'width': [0.7, 0.2, 0.15, 0.2, 1.1]}
+        >>> index = ['pig', 'rabbit', 'duck', 'chicken', 'horse']
+        >>> df = pd.DataFrame(data, index=index)
+        >>> hist = df.hist(bins=3)
+    """
+    plot_backend = _get_plot_backend(backend)
+    return plot_backend.hist_frame(
+        data,
+        column=column,
+        by=by,
+        grid=grid,
+        xlabelsize=xlabelsize,
+        xrot=xrot,
+        ylabelsize=ylabelsize,
+        yrot=yrot,
+        ax=ax,
+        sharex=sharex,
+        sharey=sharey,
+        figsize=figsize,
+        layout=layout,
+        legend=legend,
+        bins=bins,
+        **kwargs,
+    )
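The `bins` semantics documented above (an integer `n` yields `n + 1` evenly spaced edges over the data range, while a sequence is used as the edges directly) can be sketched without any plotting backend; `bin_edges` below is a hypothetical helper written only to illustrate the rule:

```python
def bin_edges(data, bins=10):
    # An integer `bins` produces bins + 1 evenly spaced edges spanning the
    # data range, including the left edge of the first bin and the right
    # edge of the last bin, as the docstring above describes.
    lo, hi = min(data), max(data)
    step = (hi - lo) / bins
    return [lo + i * step for i in range(bins + 1)]


# The 'length' column of the docstring example, with bins=3 -> 4 edges.
edges = bin_edges([1.5, 0.5, 1.2, 0.9, 3.0], bins=3)
print(edges)  # 4 edges spanning 0.5 .. 3.0
```

A sequence passed as `bins` would bypass this computation entirely and be handed to matplotlib unmodified.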
+
+
+_boxplot_doc = """
+Make a box plot from DataFrame columns.
+
+Make a box-and-whisker plot from DataFrame columns, optionally grouped
+by some other columns. A box plot is a method for graphically depicting
+groups of numerical data through their quartiles.
+The box extends from the Q1 to Q3 quartile values of the data,
+with a line at the median (Q2). The whiskers extend from the edges
+of box to show the range of the data. By default, they extend no more than
+`1.5 * IQR (IQR = Q3 - Q1)` from the edges of the box, ending at the farthest
+data point within that interval. Outliers are plotted as separate dots.
+
+For further details see
+Wikipedia's entry for `boxplot <https://en.wikipedia.org/wiki/Box_plot>`_.
+
+Parameters
+----------
+%(data)s\
+column : str or list of str, optional
+    Column name or list of names, or vector.
+    Can be any valid input to :meth:`pandas.DataFrame.groupby`.
+by : str or array-like, optional
+    Column in the DataFrame to :meth:`pandas.DataFrame.groupby`.
+    One box-plot will be done per value of columns in `by`.
+ax : object of class matplotlib.axes.Axes, optional
+    The matplotlib axes to be used by boxplot.
+fontsize : float or str
+    Tick label font size in points or as a string (e.g., `large`).
+rot : float, default 0
+    The rotation angle of labels (in degrees)
+    with respect to the screen coordinate system.
+grid : bool, default True
+    Setting this to True will show the grid.
+figsize : A tuple (width, height) in inches
+    The size of the figure to create in matplotlib.
+layout : tuple (rows, columns), optional
+    For example, (3, 5) will display the subplots
+    using 3 rows and 5 columns, starting from the top-left.
+return_type : {'axes', 'dict', 'both'} or None, default 'axes'
+    The kind of object to return. The default is ``axes``.
+
+    * 'axes' returns the matplotlib axes the boxplot is drawn on.
+    * 'dict' returns a dictionary whose values are the matplotlib
+      Lines of the boxplot.
+    * 'both' returns a namedtuple with the axes and dict.
+    * when grouping with ``by``, a Series mapping columns to
+      ``return_type`` is returned.
+
+    If ``return_type`` is `None`, a NumPy array
+    of axes with the same shape as ``layout`` is returned.
+%(backend)s\
+
+**kwargs
+    All other plotting keyword arguments to be passed to
+    :func:`matplotlib.pyplot.boxplot`.
+
+Returns
+-------
+result
+    See Notes.
+
+See Also
+--------
+pandas.Series.plot.hist: Make a histogram.
+matplotlib.pyplot.boxplot : Matplotlib equivalent plot.
+
+Notes
+-----
+The return type depends on the `return_type` parameter:
+
+* 'axes' : object of class matplotlib.axes.Axes
+* 'dict' : dict of matplotlib.lines.Line2D objects
+* 'both' : a namedtuple with structure (ax, lines)
+
+For data grouped with ``by``, return a Series of the above or a numpy
+array:
+
+* :class:`~pandas.Series`
+* :class:`~numpy.array` (for ``return_type = None``)
+
+Use ``return_type='dict'`` when you want to tweak the appearance
+of the lines after plotting. In this case a dict containing the Lines
+making up the boxes, caps, fliers, medians, and whiskers is returned.
+
+Examples
+--------
+
+Boxplots can be created for every column in the dataframe
+by ``df.boxplot()`` or indicating the columns to be used:
+
+.. plot::
+    :context: close-figs
+
+    >>> np.random.seed(1234)
+    >>> df = pd.DataFrame(np.random.randn(10, 4),
+    ...                   columns=['Col1', 'Col2', 'Col3', 'Col4'])
+    >>> boxplot = df.boxplot(column=['Col1', 'Col2', 'Col3'])  # doctest: +SKIP
+
+Boxplots of variables distributions grouped by the values of a third
+variable can be created using the option ``by``. For instance:
+
+.. plot::
+    :context: close-figs
+
+    >>> df = pd.DataFrame(np.random.randn(10, 2),
+    ...                   columns=['Col1', 'Col2'])
+    >>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
+    ...                      'B', 'B', 'B', 'B', 'B'])
+    >>> boxplot = df.boxplot(by='X')
+
+A list of strings (i.e. ``['X', 'Y']``) can be passed to boxplot
+in order to group the data by combination of the variables in the x-axis:
+
+.. plot::
+    :context: close-figs
+
+    >>> df = pd.DataFrame(np.random.randn(10, 3),
+    ...                   columns=['Col1', 'Col2', 'Col3'])
+    >>> df['X'] = pd.Series(['A', 'A', 'A', 'A', 'A',
+    ...                      'B', 'B', 'B', 'B', 'B'])
+    >>> df['Y'] = pd.Series(['A', 'B', 'A', 'B', 'A',
+    ...                      'B', 'A', 'B', 'A', 'B'])
+    >>> boxplot = df.boxplot(column=['Col1', 'Col2'], by=['X', 'Y'])
+
+The layout of boxplot can be adjusted giving a tuple to ``layout``:
+
+.. plot::
+    :context: close-figs
+
+    >>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
+    ...                      layout=(2, 1))
+
+Additional formatting can be done to the boxplot, like suppressing the grid
+(``grid=False``), rotating the labels in the x-axis (i.e. ``rot=45``)
+or changing the fontsize (i.e. ``fontsize=15``):
+
+.. plot::
+    :context: close-figs
+
+    >>> boxplot = df.boxplot(grid=False, rot=45, fontsize=15)  # doctest: +SKIP
+
+The parameter ``return_type`` can be used to select the type of element
+returned by `boxplot`. When ``return_type='axes'`` is selected,
+the matplotlib axes on which the boxplot is drawn are returned:
+
+    >>> boxplot = df.boxplot(column=['Col1', 'Col2'], return_type='axes')
+    >>> type(boxplot)
+    <class 'matplotlib.axes._axes.Axes'>
+
+When grouping with ``by``, a Series mapping columns to ``return_type``
+is returned:
+
+    >>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
+    ...                      return_type='axes')
+    >>> type(boxplot)
+    <class 'pandas.core.series.Series'>
+
+If ``return_type`` is `None`, a NumPy array of axes with the same shape
+as ``layout`` is returned:
+
+    >>> boxplot = df.boxplot(column=['Col1', 'Col2'], by='X',
+    ...                      return_type=None)
+    >>> type(boxplot)
+    <class 'numpy.ndarray'>
+"""
+
+_backend_doc = """\
+backend : str, default None
+    Backend to use instead of the backend specified in the option
+    ``plotting.backend``. For instance, 'matplotlib'. Alternatively, to
+    specify the ``plotting.backend`` for the whole session, set
+    ``pd.options.plotting.backend``.
+"""
+
+
+_bar_or_line_doc = """
+    Parameters
+    ----------
+    x : label or position, optional
+        Allows plotting of one column versus another. If not specified,
+        the index of the DataFrame is used.
+    y : label or position, optional
+        Allows plotting of one column versus another. If not specified,
+        all numerical columns are used.
+    color : str, array-like, or dict, optional
+        The color for each of the DataFrame's columns. Possible values are:
+
+        - A single color string referred to by name, RGB or RGBA code,
+          for instance 'red' or '#a98d19'.
+
+        - A sequence of color strings referred to by name, RGB or RGBA
+          code, which will be used for each column recursively. For
+          instance ['green','yellow'] each column's %(kind)s will be filled in
+          green or yellow, alternately. If there is only a single column to
+          be plotted, then only the first color from the color list will be
+          used.
+
+        - A dict of the form {column name : color}, so that each column will be
+          colored accordingly. For example, if your columns are called `a` and
+          `b`, then passing {'a': 'green', 'b': 'red'} will color %(kind)ss for
+          column `a` in green and %(kind)ss for column `b` in red.
+
+    **kwargs
+        Additional keyword arguments are documented in
+        :meth:`DataFrame.plot`.
+
+    Returns
+    -------
+    matplotlib.axes.Axes or np.ndarray of them
+        An ndarray is returned with one :class:`matplotlib.axes.Axes`
+        per column when ``subplots=True``.
+    """
+
+@Substitution(data="data : DataFrame\n    The data to visualize.\n", backend="")
+@Appender(_boxplot_doc)
+def boxplot(
+    data: DataFrame,
+    column: str | list[str] | None = None,
+    by: str | list[str] | None = None,
+    ax: Axes | None = None,
+    fontsize: float | str | None = None,
+    rot: int = 0,
+    grid: bool = True,
+    figsize: tuple[float, float] | None = None,
+    layout: tuple[int, int] | None = None,
+    return_type: str | None = None,
+    **kwargs,
+):
+    plot_backend = _get_plot_backend("matplotlib")
+    return plot_backend.boxplot(
+        data,
+        column=column,
+        by=by,
+        ax=ax,
+        fontsize=fontsize,
+        rot=rot,
+        grid=grid,
+        figsize=figsize,
+        layout=layout,
+        return_type=return_type,
+        **kwargs,
+    )
+
+
+@Substitution(data="", backend=_backend_doc)
+@Appender(_boxplot_doc)
+def boxplot_frame(
+    self: DataFrame,
+    column=None,
+    by=None,
+    ax=None,
+    fontsize: int | None = None,
+    rot: int = 0,
+    grid: bool = True,
+    figsize: tuple[float, float] | None = None,
+    layout=None,
+    return_type=None,
+    backend=None,
+    **kwargs,
+):
+    plot_backend = _get_plot_backend(backend)
+    return plot_backend.boxplot_frame(
+        self,
+        column=column,
+        by=by,
+        ax=ax,
+        fontsize=fontsize,
+        rot=rot,
+        grid=grid,
+        figsize=figsize,
+        layout=layout,
+        return_type=return_type,
+        **kwargs,
+    )
+
+
+def boxplot_frame_groupby(
+    grouped: DataFrameGroupBy,
+    subplots: bool = True,
+    column=None,
+    fontsize: int | None = None,
+    rot: int = 0,
+    grid: bool = True,
+    ax=None,
+    figsize: tuple[float, float] | None = None,
+    layout=None,
+    sharex: bool = False,
+    sharey: bool = True,
+    backend=None,
+    **kwargs,
+):
+    """
+    Make box plots from DataFrameGroupBy data.
+
+    Parameters
+    ----------
+    grouped : Grouped DataFrame
+    subplots : bool
+        * ``False`` - no subplots will be used
+        * ``True`` - create a subplot for each group.
+
+    column : column name or list of names, or vector
+        Can be any valid input to groupby.
+    fontsize : float or str
+    rot : label rotation angle
+    grid : Setting this to True will show the grid
+    ax : Matplotlib axis object, default None
+    figsize : A tuple (width, height) in inches
+    layout : tuple (optional)
+        The layout of the plot: (rows, columns).
+    sharex : bool, default False
+        Whether x-axes will be shared among subplots.
+    sharey : bool, default True
+        Whether y-axes will be shared among subplots.
+    backend : str, default None
+        Backend to use instead of the backend specified in the option
+        ``plotting.backend``. For instance, 'matplotlib'. Alternatively, to
+        specify the ``plotting.backend`` for the whole session, set
+        ``pd.options.plotting.backend``.
+    **kwargs
+        All other plotting keyword arguments to be passed to
+        matplotlib's boxplot function.
+
+    Returns
+    -------
+    dict of key/value = group key/DataFrame.boxplot return value
+    or DataFrame.boxplot return value in case subplots=figures=False
+
+    Examples
+    --------
+    You can create boxplots for grouped data and show them as separate subplots:
+
+    .. plot::
+        :context: close-figs
+
+        >>> import itertools
+        >>> tuples = [t for t in itertools.product(range(1000), range(4))]
+        >>> index = pd.MultiIndex.from_tuples(tuples, names=['lvl0', 'lvl1'])
+        >>> data = np.random.randn(len(index), 4)
+        >>> df = pd.DataFrame(data, columns=list('ABCD'), index=index)
+        >>> grouped = df.groupby(level='lvl1')
+        >>> grouped.boxplot(rot=45, fontsize=12, figsize=(8, 10))  # doctest: +SKIP
+
+    The ``subplots=False`` option shows the boxplots in a single figure.
+
+    .. plot::
+        :context: close-figs
+
+        >>> grouped.boxplot(subplots=False, rot=45, fontsize=12)  # doctest: +SKIP
+    """
+    plot_backend = _get_plot_backend(backend)
+    return plot_backend.boxplot_frame_groupby(
+        grouped,
+        subplots=subplots,
+        column=column,
+        fontsize=fontsize,
+        rot=rot,
+        grid=grid,
+        ax=ax,
+        figsize=figsize,
+        layout=layout,
+        sharex=sharex,
+        sharey=sharey,
+        **kwargs,
+    )
637
+
638
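Both `boxplot_frame` and `boxplot_frame_groupby` above follow the same dispatch pattern: the public function only resolves a backend module and forwards every argument to that backend's implementation. A minimal standalone sketch of that pattern (all names here are hypothetical, not the pandas internals):

```python
# Hypothetical registry-based sketch of the plotting-backend dispatch pattern.
_backends = {}


def register_backend(name, module):
    # a backend is just an object exposing the expected plotting functions
    _backends[name] = module


def _get_plot_backend_sketch(backend=None):
    # fall back to a session-wide default when no backend is given
    return _backends[backend or "default"]


class _FakeBackend:
    @staticmethod
    def boxplot_frame(data, **kwargs):
        # a real backend would draw; here we just echo what we received
        return ("boxplot_frame", data, kwargs)


register_backend("default", _FakeBackend)


def boxplot_frame_sketch(data, backend=None, **kwargs):
    # mirror of the wrapper above: resolve the backend, forward everything
    plot_backend = _get_plot_backend_sketch(backend)
    return plot_backend.boxplot_frame(data, **kwargs)
```

This keeps the public API stable while letting third-party backends plug in via a name.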

class PlotAccessor(PandasObject):
    """
    Make plots of Series or DataFrame.

    Uses the backend specified by the
    option ``plotting.backend``. By default, matplotlib is used.

    Parameters
    ----------
    data : Series or DataFrame
        The object for which the method is called.
    x : label or position, default None
        Only used if data is a DataFrame.
    y : label, position or list of label, positions, default None
        Allows plotting of one column versus another. Only used if data is a
        DataFrame.
    kind : str
        The kind of plot to produce:

        - 'line' : line plot (default)
        - 'bar' : vertical bar plot
        - 'barh' : horizontal bar plot
        - 'hist' : histogram
        - 'box' : boxplot
        - 'kde' : Kernel Density Estimation plot
        - 'density' : same as 'kde'
        - 'area' : area plot
        - 'pie' : pie plot
        - 'scatter' : scatter plot (DataFrame only)
        - 'hexbin' : hexbin plot (DataFrame only)
    ax : matplotlib axes object, default None
        An axes of the current figure.
    subplots : bool or sequence of iterables, default False
        Whether to group columns into subplots:

        - ``False`` : No subplots will be used
        - ``True`` : Make separate subplots for each column.
        - sequence of iterables of column labels: Create a subplot for each
          group of columns. For example `[('a', 'c'), ('b', 'd')]` will
          create 2 subplots: one with columns 'a' and 'c', and one
          with columns 'b' and 'd'. Remaining columns that aren't specified
          will be plotted in additional subplots (one per column).

        .. versionadded:: 1.5.0

    sharex : bool, default True if ax is None else False
        In case ``subplots=True``, share x axis and set some x axis labels
        to invisible; defaults to True if ax is None, otherwise False if
        an ax is passed in. Be aware that passing in both an ax and
        ``sharex=True`` will alter all x axis labels for all axes in a figure.
    sharey : bool, default False
        In case ``subplots=True``, share y axis and set some y axis labels to invisible.
    layout : tuple, optional
        (rows, columns) for the layout of subplots.
    figsize : a tuple (width, height) in inches
        Size of a figure object.
    use_index : bool, default True
        Use index as ticks for x axis.
    title : str or list
        Title to use for the plot. If a string is passed, print the string
        at the top of the figure. If a list is passed and `subplots` is
        True, print each item in the list above the corresponding subplot.
    grid : bool, default None (matlab style default)
        Axis grid lines.
    legend : bool or {'reverse'}
        Place legend on axis subplots.
    style : list or dict
        The matplotlib line style per column.
    logx : bool or 'sym', default False
        Use log scaling or symlog scaling on x axis.

    logy : bool or 'sym', default False
        Use log scaling or symlog scaling on y axis.

    loglog : bool or 'sym', default False
        Use log scaling or symlog scaling on both x and y axes.

    xticks : sequence
        Values to use for the xticks.
    yticks : sequence
        Values to use for the yticks.
    xlim : 2-tuple/list
        Set the x limits of the current axes.
    ylim : 2-tuple/list
        Set the y limits of the current axes.
    xlabel : label, optional
        Name to use for the xlabel on x-axis. Default uses index name as xlabel, or the
        x-column name for planar plots.

        .. versionchanged:: 2.0.0

            Now applicable to histograms.

    ylabel : label, optional
        Name to use for the ylabel on y-axis. Default will show no ylabel, or the
        y-column name for planar plots.

        .. versionchanged:: 2.0.0

            Now applicable to histograms.

    rot : float, default None
        Rotation for ticks (xticks for vertical, yticks for horizontal
        plots).
    fontsize : float, default None
        Font size for xticks and yticks.
    colormap : str or matplotlib colormap object, default None
        Colormap to select colors from. If string, load colormap with that
        name from matplotlib.
    colorbar : bool, optional
        If True, plot colorbar (only relevant for 'scatter' and 'hexbin'
        plots).
    position : float
        Specify relative alignments for bar plot layout.
        From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5
        (center).
    table : bool, Series or DataFrame, default False
        If True, draw a table using the data in the DataFrame and the data
        will be transposed to meet matplotlib's default layout.
        If a Series or DataFrame is passed, use passed data to draw a
        table.
    yerr : DataFrame, Series, array-like, dict and str
        See :ref:`Plotting with Error Bars <visualization.errorbars>` for
        detail.
    xerr : DataFrame, Series, array-like, dict and str
        Equivalent to yerr.
    stacked : bool, default False in line and bar plots, and True in area plot
        If True, create stacked plot.
    secondary_y : bool or sequence, default False
        Whether to plot on the secondary y-axis; if a list/tuple, which
        columns to plot on the secondary y-axis.
    mark_right : bool, default True
        When using a secondary_y axis, automatically mark the column
        labels with "(right)" in the legend.
    include_bool : bool, default is False
        If True, boolean values can be plotted.
    backend : str, default None
        Backend to use instead of the backend specified in the option
        ``plotting.backend``. For instance, 'matplotlib'. Alternatively, to
        specify the ``plotting.backend`` for the whole session, set
        ``pd.options.plotting.backend``.
    **kwargs
        Options to pass to matplotlib plotting method.

    Returns
    -------
    :class:`matplotlib.axes.Axes` or numpy.ndarray of them
        If the backend is not the default matplotlib one, the return value
        will be the object returned by the backend.

    Notes
    -----
    - See matplotlib documentation online for more on this subject
    - If `kind` = 'bar' or 'barh', you can specify relative alignments
      for bar plot layout by `position` keyword.
      From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5
      (center)

    Examples
    --------
    For Series:

    .. plot::
        :context: close-figs

        >>> ser = pd.Series([1, 2, 3, 3])
        >>> plot = ser.plot(kind='hist', title="My plot")

    For DataFrame:

    .. plot::
        :context: close-figs

        >>> df = pd.DataFrame({'length': [1.5, 0.5, 1.2, 0.9, 3],
        ...                    'width': [0.7, 0.2, 0.15, 0.2, 1.1]},
        ...                   index=['pig', 'rabbit', 'duck', 'chicken', 'horse'])
        >>> plot = df.plot(title="DataFrame Plot")

    For SeriesGroupBy:

    .. plot::
        :context: close-figs

        >>> lst = [-1, -2, -3, 1, 2, 3]
        >>> ser = pd.Series([1, 2, 2, 4, 6, 6], index=lst)
        >>> plot = ser.groupby(lambda x: x > 0).plot(title="SeriesGroupBy Plot")

    For DataFrameGroupBy:

    .. plot::
        :context: close-figs

        >>> df = pd.DataFrame({"col1" : [1, 2, 3, 4],
        ...                    "col2" : ["A", "B", "A", "B"]})
        >>> plot = df.groupby("col2").plot(kind="bar", title="DataFrameGroupBy Plot")
    """

    _common_kinds = ("line", "bar", "barh", "kde", "density", "area", "hist", "box")
    _series_kinds = ("pie",)
    _dataframe_kinds = ("scatter", "hexbin")
    _kind_aliases = {"density": "kde"}
    _all_kinds = _common_kinds + _series_kinds + _dataframe_kinds

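The class attributes above drive kind resolution: `_kind_aliases` maps 'density' onto 'kde', and `_all_kinds` is the whitelist checked in `__call__`. A minimal self-contained sketch of that resolution step (the helper name `resolve_kind` is hypothetical; the tables are copied from above):

```python
# Standalone sketch of how the accessor resolves kind aliases and validates
# the requested plot kind; `resolve_kind` is an illustrative name only.
_common_kinds = ("line", "bar", "barh", "kde", "density", "area", "hist", "box")
_series_kinds = ("pie",)
_dataframe_kinds = ("scatter", "hexbin")
_kind_aliases = {"density": "kde"}
_all_kinds = _common_kinds + _series_kinds + _dataframe_kinds


def resolve_kind(kind: str) -> str:
    # alias resolution first: 'density' becomes 'kde'
    kind = _kind_aliases.get(kind, kind)
    # then validation against the whitelist of supported kinds
    if kind not in _all_kinds:
        raise ValueError(
            f"{kind} is not a valid plot kind. Valid plot kinds: {_all_kinds}"
        )
    return kind
```

Resolving before validating is what lets 'density' pass even though the matplotlib backend only implements 'kde'.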
    def __init__(self, data: Series | DataFrame) -> None:
        self._parent = data

    @staticmethod
    def _get_call_args(backend_name: str, data: Series | DataFrame, args, kwargs):
        """
        This function makes calls to this accessor `__call__` method compatible
        with the previous `SeriesPlotMethods.__call__` and
        `DataFramePlotMethods.__call__`. Those had slightly different
        signatures, since `DataFramePlotMethods` accepted `x` and `y`
        parameters.
        """
        if isinstance(data, ABCSeries):
            arg_def = [
                ("kind", "line"),
                ("ax", None),
                ("figsize", None),
                ("use_index", True),
                ("title", None),
                ("grid", None),
                ("legend", False),
                ("style", None),
                ("logx", False),
                ("logy", False),
                ("loglog", False),
                ("xticks", None),
                ("yticks", None),
                ("xlim", None),
                ("ylim", None),
                ("rot", None),
                ("fontsize", None),
                ("colormap", None),
                ("table", False),
                ("yerr", None),
                ("xerr", None),
                ("label", None),
                ("secondary_y", False),
                ("xlabel", None),
                ("ylabel", None),
            ]
        elif isinstance(data, ABCDataFrame):
            arg_def = [
                ("x", None),
                ("y", None),
                ("kind", "line"),
                ("ax", None),
                ("subplots", False),
                ("sharex", None),
                ("sharey", False),
                ("layout", None),
                ("figsize", None),
                ("use_index", True),
                ("title", None),
                ("grid", None),
                ("legend", True),
                ("style", None),
                ("logx", False),
                ("logy", False),
                ("loglog", False),
                ("xticks", None),
                ("yticks", None),
                ("xlim", None),
                ("ylim", None),
                ("rot", None),
                ("fontsize", None),
                ("colormap", None),
                ("table", False),
                ("yerr", None),
                ("xerr", None),
                ("secondary_y", False),
                ("xlabel", None),
                ("ylabel", None),
            ]
        else:
            raise TypeError(
                f"Called plot accessor for type {type(data).__name__}, "
                "expected Series or DataFrame"
            )

        if args and isinstance(data, ABCSeries):
            positional_args = str(args)[1:-1]
            keyword_args = ", ".join(
                [f"{name}={repr(value)}" for (name, _), value in zip(arg_def, args)]
            )
            msg = (
                "`Series.plot()` should not be called with positional "
                "arguments, only keyword arguments. The order of "
                "positional arguments will change in the future. "
                f"Use `Series.plot({keyword_args})` instead of "
                f"`Series.plot({positional_args})`."
            )
            raise TypeError(msg)

        pos_args = {name: value for (name, _), value in zip(arg_def, args)}
        if backend_name == "pandas.plotting._matplotlib":
            kwargs = dict(arg_def, **pos_args, **kwargs)
        else:
            kwargs = dict(pos_args, **kwargs)

        x = kwargs.pop("x", None)
        y = kwargs.pop("y", None)
        kind = kwargs.pop("kind", "line")
        return x, y, kind, kwargs

    def __call__(self, *args, **kwargs):
        plot_backend = _get_plot_backend(kwargs.pop("backend", None))

        x, y, kind, kwargs = self._get_call_args(
            plot_backend.__name__, self._parent, args, kwargs
        )

        kind = self._kind_aliases.get(kind, kind)

        # when using another backend, get out of the way
        if plot_backend.__name__ != "pandas.plotting._matplotlib":
            return plot_backend.plot(self._parent, x=x, y=y, kind=kind, **kwargs)

        if kind not in self._all_kinds:
            raise ValueError(
                f"{kind} is not a valid plot kind. "
                f"Valid plot kinds: {self._all_kinds}"
            )

        # The original data structure can be transformed before being passed to
        # the backend. For example, for a DataFrame it is common to set the
        # index as the `x` parameter, and return a Series with the parameter
        # `y` as values.
        data = self._parent.copy()

        if isinstance(data, ABCSeries):
            kwargs["reuse_plot"] = True

        if kind in self._dataframe_kinds:
            if isinstance(data, ABCDataFrame):
                return plot_backend.plot(data, x=x, y=y, kind=kind, **kwargs)
            else:
                raise ValueError(f"plot kind {kind} can only be used for data frames")
        elif kind in self._series_kinds:
            if isinstance(data, ABCDataFrame):
                if y is None and kwargs.get("subplots") is False:
                    raise ValueError(
                        f"{kind} requires either y column or 'subplots=True'"
                    )
                if y is not None:
                    if is_integer(y) and not data.columns._holds_integer():
                        y = data.columns[y]
                    # converted to series actually. copy to not modify
                    data = data[y].copy()
                    data.index.name = y
        elif isinstance(data, ABCDataFrame):
            data_cols = data.columns
            if x is not None:
                if is_integer(x) and not data.columns._holds_integer():
                    x = data_cols[x]
                elif not isinstance(data[x], ABCSeries):
                    raise ValueError("x must be a label or position")
                data = data.set_index(x)
            if y is not None:
                # check if we have y as int or list of ints
                int_ylist = is_list_like(y) and all(is_integer(c) for c in y)
                int_y_arg = is_integer(y) or int_ylist
                if int_y_arg and not data.columns._holds_integer():
                    y = data_cols[y]

                label_kw = kwargs["label"] if "label" in kwargs else False
                for kw in ["xerr", "yerr"]:
                    if kw in kwargs and (
                        isinstance(kwargs[kw], str) or is_integer(kwargs[kw])
                    ):
                        try:
                            kwargs[kw] = data[kwargs[kw]]
                        except (IndexError, KeyError, TypeError):
                            pass

                # don't overwrite
                data = data[y].copy()

                if isinstance(data, ABCSeries):
                    label_name = label_kw or y
                    data.name = label_name
                else:
                    match = is_list_like(label_kw) and len(label_kw) == len(y)
                    if label_kw and not match:
                        raise ValueError(
                            "label should be list-like and same length as y"
                        )
                    label_name = label_kw or data.columns
                    data.columns = label_name

        return plot_backend.plot(data, kind=kind, **kwargs)

    __call__.__doc__ = __doc__

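The heart of `_get_call_args` is the zip of the `(name, default)` table with `*args`, which recovers keyword arguments from positionals, plus a defaults merge that only the matplotlib backend receives. A stripped-down, standalone sketch of that mechanic (the table here is shortened and `get_call_args_sketch` is an illustrative name, not the pandas function):

```python
# Illustrative sketch of the positional-to-keyword mapping in `_get_call_args`.
# Only a few entries of the (name, default) table are kept for brevity.
arg_def = [("x", None), ("y", None), ("kind", "line"), ("ax", None)]


def get_call_args_sketch(args, kwargs, use_defaults=True):
    # zip pairs each positional value with the parameter name at that slot
    pos_args = {name: value for (name, _), value in zip(arg_def, args)}
    if use_defaults:
        # the matplotlib backend gets every default filled in explicitly
        merged = dict(arg_def, **pos_args, **kwargs)
    else:
        # third-party backends only receive what the caller actually passed
        merged = dict(pos_args, **kwargs)
    x = merged.pop("x", None)
    y = merged.pop("y", None)
    kind = merged.pop("kind", "line")
    return x, y, kind, merged
```

Because `dict(arg_def)` is built first, explicitly passed positionals and keywords override the defaults in the merge.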
    @Appender(
        """
        See Also
        --------
        matplotlib.pyplot.plot : Plot y versus x as lines and/or markers.

        Examples
        --------

        .. plot::
            :context: close-figs

            >>> s = pd.Series([1, 3, 2])
            >>> s.plot.line()  # doctest: +SKIP

        .. plot::
            :context: close-figs

            The following example shows the populations for some animals
            over the years.

            >>> df = pd.DataFrame({
            ...     'pig': [20, 18, 489, 675, 1776],
            ...     'horse': [4, 25, 281, 600, 1900]
            ... }, index=[1990, 1997, 2003, 2009, 2014])
            >>> lines = df.plot.line()

        .. plot::
            :context: close-figs

            An example with subplots, so an array of axes is returned.

            >>> axes = df.plot.line(subplots=True)
            >>> type(axes)
            <class 'numpy.ndarray'>

        .. plot::
            :context: close-figs

            Let's repeat the same example, but specifying colors for
            each column (in this case, for each animal).

            >>> axes = df.plot.line(
            ...     subplots=True, color={"pig": "pink", "horse": "#742802"}
            ... )

        .. plot::
            :context: close-figs

            The following example shows the relationship between both
            populations.

            >>> lines = df.plot.line(x='pig', y='horse')
        """
    )
    @Substitution(kind="line")
    @Appender(_bar_or_line_doc)
    def line(
        self, x: Hashable | None = None, y: Hashable | None = None, **kwargs
    ) -> PlotAccessor:
        """
        Plot Series or DataFrame as lines.

        This function is useful to plot lines using DataFrame's values
        as coordinates.
        """
        return self(kind="line", x=x, y=y, **kwargs)

    @Appender(
        """
        See Also
        --------
        DataFrame.plot.barh : Horizontal bar plot.
        DataFrame.plot : Make plots of a DataFrame.
        matplotlib.pyplot.bar : Make a bar plot with matplotlib.

        Examples
        --------
        Basic plot.

        .. plot::
            :context: close-figs

            >>> df = pd.DataFrame({'lab': ['A', 'B', 'C'], 'val': [10, 30, 20]})
            >>> ax = df.plot.bar(x='lab', y='val', rot=0)

        Plot a whole dataframe to a bar plot. Each column is assigned a
        distinct color, and each row is nested in a group along the
        horizontal axis.

        .. plot::
            :context: close-figs

            >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
            >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
            >>> index = ['snail', 'pig', 'elephant',
            ...          'rabbit', 'giraffe', 'coyote', 'horse']
            >>> df = pd.DataFrame({'speed': speed,
            ...                    'lifespan': lifespan}, index=index)
            >>> ax = df.plot.bar(rot=0)

        Plot stacked bar charts for the DataFrame

        .. plot::
            :context: close-figs

            >>> ax = df.plot.bar(stacked=True)

        Instead of nesting, the figure can be split by column with
        ``subplots=True``. In this case, a :class:`numpy.ndarray` of
        :class:`matplotlib.axes.Axes` are returned.

        .. plot::
            :context: close-figs

            >>> axes = df.plot.bar(rot=0, subplots=True)
            >>> axes[1].legend(loc=2)  # doctest: +SKIP

        If you don't like the default colours, you can specify how you'd
        like each column to be colored.

        .. plot::
            :context: close-figs

            >>> axes = df.plot.bar(
            ...     rot=0, subplots=True, color={"speed": "red", "lifespan": "green"}
            ... )
            >>> axes[1].legend(loc=2)  # doctest: +SKIP

        Plot a single column.

        .. plot::
            :context: close-figs

            >>> ax = df.plot.bar(y='speed', rot=0)

        Plot only selected categories for the DataFrame.

        .. plot::
            :context: close-figs

            >>> ax = df.plot.bar(x='lifespan', rot=0)
        """
    )
    @Substitution(kind="bar")
    @Appender(_bar_or_line_doc)
    def bar(  # pylint: disable=disallowed-name
        self, x: Hashable | None = None, y: Hashable | None = None, **kwargs
    ) -> PlotAccessor:
        """
        Vertical bar plot.

        A bar plot is a plot that presents categorical data with
        rectangular bars with lengths proportional to the values that they
        represent. A bar plot shows comparisons among discrete categories. One
        axis of the plot shows the specific categories being compared, and the
        other axis represents a measured value.
        """
        return self(kind="bar", x=x, y=y, **kwargs)

    @Appender(
        """
        See Also
        --------
        DataFrame.plot.bar : Vertical bar plot.
        DataFrame.plot : Make plots of DataFrame using matplotlib.
        matplotlib.axes.Axes.bar : Plot a vertical bar plot using matplotlib.

        Examples
        --------
        Basic example

        .. plot::
            :context: close-figs

            >>> df = pd.DataFrame({'lab': ['A', 'B', 'C'], 'val': [10, 30, 20]})
            >>> ax = df.plot.barh(x='lab', y='val')

        Plot a whole DataFrame to a horizontal bar plot

        .. plot::
            :context: close-figs

            >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
            >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
            >>> index = ['snail', 'pig', 'elephant',
            ...          'rabbit', 'giraffe', 'coyote', 'horse']
            >>> df = pd.DataFrame({'speed': speed,
            ...                    'lifespan': lifespan}, index=index)
            >>> ax = df.plot.barh()

        Plot stacked barh charts for the DataFrame

        .. plot::
            :context: close-figs

            >>> ax = df.plot.barh(stacked=True)

        We can specify colors for each column

        .. plot::
            :context: close-figs

            >>> ax = df.plot.barh(color={"speed": "red", "lifespan": "green"})

        Plot a column of the DataFrame to a horizontal bar plot

        .. plot::
            :context: close-figs

            >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
            >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
            >>> index = ['snail', 'pig', 'elephant',
            ...          'rabbit', 'giraffe', 'coyote', 'horse']
            >>> df = pd.DataFrame({'speed': speed,
            ...                    'lifespan': lifespan}, index=index)
            >>> ax = df.plot.barh(y='speed')

        Plot DataFrame versus the desired column

        .. plot::
            :context: close-figs

            >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
            >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
            >>> index = ['snail', 'pig', 'elephant',
            ...          'rabbit', 'giraffe', 'coyote', 'horse']
            >>> df = pd.DataFrame({'speed': speed,
            ...                    'lifespan': lifespan}, index=index)
            >>> ax = df.plot.barh(x='lifespan')
        """
    )
    @Substitution(kind="bar")
    @Appender(_bar_or_line_doc)
    def barh(
        self, x: Hashable | None = None, y: Hashable | None = None, **kwargs
    ) -> PlotAccessor:
        """
        Make a horizontal bar plot.

        A horizontal bar plot is a plot that presents quantitative data with
        rectangular bars with lengths proportional to the values that they
        represent. A bar plot shows comparisons among discrete categories. One
        axis of the plot shows the specific categories being compared, and the
        other axis represents a measured value.
        """
        return self(kind="barh", x=x, y=y, **kwargs)

    def box(self, by: IndexLabel | None = None, **kwargs) -> PlotAccessor:
        r"""
        Make a box plot of the DataFrame columns.

        A box plot is a method for graphically depicting groups of numerical
        data through their quartiles.
        The box extends from the Q1 to Q3 quartile values of the data,
        with a line at the median (Q2). The whiskers extend from the edges
        of the box to show the range of the data. The position of the whiskers
        is set by default to 1.5*IQR (IQR = Q3 - Q1) from the edges of the
        box. Outlier points are those past the end of the whiskers.

        For further details see Wikipedia's
        entry for `boxplot <https://en.wikipedia.org/wiki/Box_plot>`__.

        A consideration when using this chart is that the box and the whiskers
        can overlap, which is very common when plotting small sets of data.

        Parameters
        ----------
        by : str or sequence
            Column in the DataFrame to group by.

            .. versionchanged:: 1.4.0

               Previously, `by` was silently ignored and made no groupings.

        **kwargs
            Additional keywords are documented in
            :meth:`DataFrame.plot`.

        Returns
        -------
        :class:`matplotlib.axes.Axes` or numpy.ndarray of them

        See Also
        --------
        DataFrame.boxplot : Another method to draw a box plot.
        Series.plot.box : Draw a box plot from a Series object.
        matplotlib.pyplot.boxplot : Draw a box plot in matplotlib.

        Examples
        --------
        Draw a box plot from a DataFrame with four columns of randomly
        generated data.

        .. plot::
            :context: close-figs

            >>> data = np.random.randn(25, 4)
            >>> df = pd.DataFrame(data, columns=list('ABCD'))
            >>> ax = df.plot.box()

        You can also generate groupings if you specify the `by` parameter (which
        can take a column name, or a list or tuple of column names):

        .. versionchanged:: 1.4.0

        .. plot::
            :context: close-figs

            >>> age_list = [8, 10, 12, 14, 72, 74, 76, 78, 20, 25, 30, 35, 60, 85]
            >>> df = pd.DataFrame({"gender": list("MMMMMMMMFFFFFF"), "age": age_list})
            >>> ax = df.plot.box(column="age", by="gender", figsize=(10, 8))
        """
        return self(kind="box", by=by, **kwargs)

    def hist(
        self, by: IndexLabel | None = None, bins: int = 10, **kwargs
    ) -> PlotAccessor:
        """
        Draw one histogram of the DataFrame's columns.

        A histogram is a representation of the distribution of data.
        This function groups the values of all given Series in the DataFrame
        into bins and draws all bins in one :class:`matplotlib.axes.Axes`.
        This is useful when the DataFrame's Series are in a similar scale.

        Parameters
        ----------
        by : str or sequence, optional
            Column in the DataFrame to group by.

            .. versionchanged:: 1.4.0

               Previously, `by` was silently ignored and made no groupings.

        bins : int, default 10
            Number of histogram bins to be used.
        **kwargs
            Additional keyword arguments are documented in
            :meth:`DataFrame.plot`.

        Returns
        -------
        :class:`matplotlib.axes.Axes`
            Return a histogram plot.

        See Also
        --------
        DataFrame.hist : Draw histograms per DataFrame's Series.
        Series.hist : Draw a histogram with Series' data.

        Examples
        --------
        When we roll a die 6000 times, we expect to get each value around 1000
        times. But when we roll two dice and sum the result, the distribution
        is going to be quite different. A histogram illustrates those
        distributions.

        .. plot::
            :context: close-figs

            >>> df = pd.DataFrame(np.random.randint(1, 7, 6000), columns=['one'])
            >>> df['two'] = df['one'] + np.random.randint(1, 7, 6000)
            >>> ax = df.plot.hist(bins=12, alpha=0.5)

        A grouped histogram can be generated by providing the parameter `by` (which
        can be a column name, or a list of column names):

        .. plot::
            :context: close-figs

            >>> age_list = [8, 10, 12, 14, 72, 74, 76, 78, 20, 25, 30, 35, 60, 85]
            >>> df = pd.DataFrame({"gender": list("MMMMMMMMFFFFFF"), "age": age_list})
            >>> ax = df.plot.hist(column=["age"], by="gender", figsize=(10, 8))
        """
        return self(kind="hist", by=by, bins=bins, **kwargs)

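The binning described in the `hist` docstring can be illustrated without matplotlib. A simplified pure-Python sketch (hypothetical helper, not the pandas/matplotlib implementation, which uses numpy and configurable bin edges): values are grouped into `bins` equal-width intervals over the sample range and per-bin counts are returned.

```python
# Simplified sketch of equal-width histogram binning; `histogram_sketch`
# is an illustrative name, not part of pandas or matplotlib.
def histogram_sketch(values, bins=10):
    lo, hi = min(values), max(values)
    # avoid a zero-width bin when all values are identical
    width = (hi - lo) / bins or 1
    counts = [0] * bins
    for v in values:
        # the maximum value would index one past the end, so clamp it
        # into the last bin (matplotlib's convention for the final edge)
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    return counts
```

A plotting backend would then draw one rectangle per count; the counting logic itself is this simple.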
    def kde(
        self,
        bw_method: Literal["scott", "silverman"] | float | Callable | None = None,
        ind: np.ndarray | int | None = None,
        **kwargs,
    ) -> PlotAccessor:
        """
        Generate Kernel Density Estimate plot using Gaussian kernels.

        In statistics, `kernel density estimation`_ (KDE) is a non-parametric
        way to estimate the probability density function (PDF) of a random
        variable. This function uses Gaussian kernels and includes automatic
        bandwidth determination.

        .. _kernel density estimation:
            https://en.wikipedia.org/wiki/Kernel_density_estimation

        Parameters
        ----------
        bw_method : str, scalar or callable, optional
            The method used to calculate the estimator bandwidth. This can be
            'scott', 'silverman', a scalar constant or a callable.
            If None (default), 'scott' is used.
            See :class:`scipy.stats.gaussian_kde` for more information.
        ind : NumPy array or int, optional
            Evaluation points for the estimated PDF. If None (default),
            1000 equally spaced points are used. If `ind` is a NumPy array, the
            KDE is evaluated at the points passed. If `ind` is an integer,
            `ind` number of equally spaced points are used.
        **kwargs
            Additional keyword arguments are documented in
            :meth:`DataFrame.plot`.

        Returns
        -------
        matplotlib.axes.Axes or numpy.ndarray of them

        See Also
        --------
        scipy.stats.gaussian_kde : Representation of a kernel-density
            estimate using Gaussian kernels. This is the function used
            internally to estimate the PDF.

        Examples
        --------
        Given a Series of points randomly sampled from an unknown
        distribution, estimate its PDF using KDE with automatic
        bandwidth determination and plot the results, evaluating them at
        1000 equally spaced points (default):

        .. plot::
            :context: close-figs

            >>> s = pd.Series([1, 2, 2.5, 3, 3.5, 4, 5])
            >>> ax = s.plot.kde()

        A scalar bandwidth can be specified. Using a small bandwidth value can
        lead to over-fitting, while using a large bandwidth value may result
        in under-fitting:

        .. plot::
            :context: close-figs

            >>> ax = s.plot.kde(bw_method=0.3)

        .. plot::
            :context: close-figs

            >>> ax = s.plot.kde(bw_method=3)

        Finally, the `ind` parameter determines the evaluation points for the
        plot of the estimated PDF:

        .. plot::
            :context: close-figs

            >>> ax = s.plot.kde(ind=[1, 2, 3, 4, 5])

        For DataFrame, it works in the same way:

        .. plot::
            :context: close-figs

            >>> df = pd.DataFrame({
            ...     'x': [1, 2, 2.5, 3, 3.5, 4, 5],
            ...     'y': [4, 4, 4.5, 5, 5.5, 6, 6],
            ... })
            >>> ax = df.plot.kde()

        A scalar bandwidth can be specified. Using a small bandwidth value can
        lead to over-fitting, while using a large bandwidth value may result
        in under-fitting:

        .. plot::
            :context: close-figs

            >>> ax = df.plot.kde(bw_method=0.3)

        .. plot::
            :context: close-figs

            >>> ax = df.plot.kde(bw_method=3)

        Finally, the `ind` parameter determines the evaluation points for the
        plot of the estimated PDF:

        .. plot::
            :context: close-figs

            >>> ax = df.plot.kde(ind=[1, 2, 3, 4, 5, 6])
        """
        return self(kind="kde", bw_method=bw_method, ind=ind, **kwargs)

    density = kde

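The `ind` parameter documented above accepts either an integer (how many equally spaced evaluation points to generate) or an explicit sequence of points. A simplified sketch of that interpretation, with a hypothetical helper name (the real backend builds the grid with numpy and widens the range by the estimated bandwidth, which is omitted here):

```python
# Hedged sketch of `ind` handling for KDE evaluation points; `resolve_ind`
# is an illustrative name and the range is not bandwidth-extended.
def resolve_ind(sample, ind=None, default_points=1000):
    if ind is None:
        ind = default_points
    if isinstance(ind, int):
        # an int requests that many equally spaced points over the sample range
        lo, hi = min(sample), max(sample)
        step = (hi - lo) / (ind - 1)
        return [lo + i * step for i in range(ind)]
    # an explicit sequence is used as-is
    return list(ind)
```

The KDE is then evaluated at each returned point to produce the plotted curve.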
+ def area(
1527
+ self,
1528
+ x: Hashable | None = None,
1529
+ y: Hashable | None = None,
1530
+ stacked: bool = True,
1531
+ **kwargs,
1532
+ ) -> PlotAccessor:
1533
+ """
1534
+ Draw a stacked area plot.
1535
+
1536
+ An area plot displays quantitative data visually.
1537
+ This function wraps the matplotlib area function.
1538
+
1539
+ Parameters
1540
+ ----------
1541
+ x : label or position, optional
1542
+ Coordinates for the X axis. By default uses the index.
1543
+ y : label or position, optional
1544
+ Column to plot. By default uses all columns.
1545
+ stacked : bool, default True
1546
+ Area plots are stacked by default. Set to False to create a
1547
+ unstacked plot.
1548
+ **kwargs
1549
+ Additional keyword arguments are documented in
1550
+ :meth:`DataFrame.plot`.
1551
+
1552
+ Returns
1553
+ -------
1554
+ matplotlib.axes.Axes or numpy.ndarray
1555
+ Area plot, or array of area plots if subplots is True.
1556
+
1557
+ See Also
1558
+ --------
1559
+ DataFrame.plot : Make plots of DataFrame using matplotlib / pylab.
1560
+
1561
+ Examples
1562
+ --------
1563
+ Draw an area plot based on basic business metrics:
1564
+
1565
+ .. plot::
1566
+ :context: close-figs
1567
+
1568
+ >>> df = pd.DataFrame({
1569
+ ... 'sales': [3, 2, 3, 9, 10, 6],
1570
+ ... 'signups': [5, 5, 6, 12, 14, 13],
1571
+ ... 'visits': [20, 42, 28, 62, 81, 50],
1572
+ ... }, index=pd.date_range(start='2018/01/01', end='2018/07/01',
1573
+ ... freq='ME'))
1574
+ >>> ax = df.plot.area()
1575
+
1576
+ Area plots are stacked by default. To produce an unstacked plot,
1577
+ pass ``stacked=False``:
1578
+
1579
+ .. plot::
1580
+ :context: close-figs
1581
+
1582
+ >>> ax = df.plot.area(stacked=False)
1583
+
1584
+ Draw an area plot for a single column:
1585
+
1586
+ .. plot::
1587
+ :context: close-figs
1588
+
1589
+ >>> ax = df.plot.area(y='sales')
1590
+
1591
+ Draw with a different `x`:
1592
+
1593
+ .. plot::
1594
+ :context: close-figs
1595
+
1596
+ >>> df = pd.DataFrame({
1597
+ ... 'sales': [3, 2, 3],
1598
+ ... 'visits': [20, 42, 28],
1599
+ ... 'day': [1, 2, 3],
1600
+ ... })
1601
+ >>> ax = df.plot.area(x='day')
1602
+ """
1603
+ return self(kind="area", x=x, y=y, stacked=stacked, **kwargs)
+
+    def pie(self, **kwargs) -> PlotAccessor:
+        """
+        Generate a pie plot.
+
+        A pie plot is a proportional representation of the numerical data in a
+        column. This function wraps :meth:`matplotlib.pyplot.pie` for the
+        specified column. If no column reference is passed and
+        ``subplots=True`` a pie plot is drawn for each numerical column
+        independently.
+
+        Parameters
+        ----------
+        y : int or label, optional
+            Label or position of the column to plot.
+            If not provided, ``subplots=True`` argument must be passed.
+        **kwargs
+            Keyword arguments to pass on to :meth:`DataFrame.plot`.
+
+        Returns
+        -------
+        matplotlib.axes.Axes or np.ndarray of them
+            A NumPy array is returned when `subplots` is True.
+
+        See Also
+        --------
+        Series.plot.pie : Generate a pie plot for a Series.
+        DataFrame.plot : Make plots of a DataFrame.
+
+        Examples
+        --------
+        In the example below we have a DataFrame with the information about
+        planet's mass and radius. We pass the 'mass' column to the
+        pie function to get a pie plot.
+
+        .. plot::
+            :context: close-figs
+
+            >>> df = pd.DataFrame({'mass': [0.330, 4.87, 5.97],
+            ...                    'radius': [2439.7, 6051.8, 6378.1]},
+            ...                   index=['Mercury', 'Venus', 'Earth'])
+            >>> plot = df.plot.pie(y='mass', figsize=(5, 5))
+
+        .. plot::
+            :context: close-figs
+
+            >>> plot = df.plot.pie(subplots=True, figsize=(11, 6))
+        """
+        if (
+            isinstance(self._parent, ABCDataFrame)
+            and kwargs.get("y", None) is None
+            and not kwargs.get("subplots", False)
+        ):
+            raise ValueError("pie requires either y column or 'subplots=True'")
+        return self(kind="pie", **kwargs)
1659
+
1660
+ def scatter(
1661
+ self,
1662
+ x: Hashable,
1663
+ y: Hashable,
1664
+ s: Hashable | Sequence[Hashable] | None = None,
1665
+ c: Hashable | Sequence[Hashable] | None = None,
1666
+ **kwargs,
1667
+ ) -> PlotAccessor:
1668
+ """
1669
+ Create a scatter plot with varying marker point size and color.
1670
+
1671
+ The coordinates of each point are defined by two dataframe columns and
1672
+ filled circles are used to represent each point. This kind of plot is
1673
+ useful to see complex correlations between two variables. Points could
1674
+ be for instance natural 2D coordinates like longitude and latitude in
1675
+ a map or, in general, any pair of metrics that can be plotted against
1676
+ each other.
1677
+
1678
+ Parameters
1679
+ ----------
1680
+ x : int or str
1681
+ The column name or column position to be used as horizontal
1682
+ coordinates for each point.
1683
+ y : int or str
1684
+ The column name or column position to be used as vertical
1685
+ coordinates for each point.
1686
+ s : str, scalar or array-like, optional
1687
+ The size of each point. Possible values are:
1688
+
1689
+ - A string with the name of the column to be used for marker's size.
1690
+
1691
+ - A single scalar so all points have the same size.
1692
+
1693
+ - A sequence of scalars, which will be used for each point's size
1694
+ recursively. For instance, when passing [2,14] all points size
1695
+ will be either 2 or 14, alternatively.
1696
+
1697
+ c : str, int or array-like, optional
1698
+ The color of each point. Possible values are:
1699
+
1700
+ - A single color string referred to by name, RGB or RGBA code,
1701
+ for instance 'red' or '#a98d19'.
1702
+
1703
+ - A sequence of color strings referred to by name, RGB or RGBA
1704
+ code, which will be used for each point's color recursively. For
1705
+ instance ['green','yellow'] all points will be filled in green or
1706
+ yellow, alternatively.
1707
+
1708
+ - A column name or position whose values will be used to color the
1709
+ marker points according to a colormap.
1710
+
1711
+ **kwargs
1712
+ Keyword arguments to pass on to :meth:`DataFrame.plot`.
1713
+
1714
+ Returns
1715
+ -------
1716
+ :class:`matplotlib.axes.Axes` or numpy.ndarray of them
1717
+
1718
+ See Also
1719
+ --------
1720
+ matplotlib.pyplot.scatter : Scatter plot using multiple input data
1721
+ formats.
1722
+
1723
+ Examples
1724
+ --------
1725
+ Let's see how to draw a scatter plot using coordinates from the values
1726
+ in a DataFrame's columns.
1727
+
1728
+ .. plot::
1729
+ :context: close-figs
1730
+
1731
+ >>> df = pd.DataFrame([[5.1, 3.5, 0], [4.9, 3.0, 0], [7.0, 3.2, 1],
1732
+ ... [6.4, 3.2, 1], [5.9, 3.0, 2]],
1733
+ ... columns=['length', 'width', 'species'])
1734
+ >>> ax1 = df.plot.scatter(x='length',
1735
+ ... y='width',
1736
+ ... c='DarkBlue')
1737
+
1738
+ And now with the color determined by a column as well.
1739
+
1740
+ .. plot::
1741
+ :context: close-figs
1742
+
1743
+ >>> ax2 = df.plot.scatter(x='length',
1744
+ ... y='width',
1745
+ ... c='species',
1746
+ ... colormap='viridis')
1747
+ """
1748
+ return self(kind="scatter", x=x, y=y, s=s, c=c, **kwargs)
+
+    def hexbin(
+        self,
+        x: Hashable,
+        y: Hashable,
+        C: Hashable | None = None,
+        reduce_C_function: Callable | None = None,
+        gridsize: int | tuple[int, int] | None = None,
+        **kwargs,
+    ) -> PlotAccessor:
+        """
+        Generate a hexagonal binning plot.
+
+        Generate a hexagonal binning plot of `x` versus `y`. If `C` is `None`
+        (the default), this is a histogram of the number of occurrences
+        of the observations at ``(x[i], y[i])``.
+
+        If `C` is specified, specifies values at given coordinates
+        ``(x[i], y[i])``. These values are accumulated for each hexagonal
+        bin and then reduced according to `reduce_C_function`,
+        having as default the NumPy's mean function (:meth:`numpy.mean`).
+        (If `C` is specified, it must also be a 1-D sequence
+        of the same length as `x` and `y`, or a column label.)
+
+        Parameters
+        ----------
+        x : int or str
+            The column label or position for x points.
+        y : int or str
+            The column label or position for y points.
+        C : int or str, optional
+            The column label or position for the value of `(x, y)` point.
+        reduce_C_function : callable, default `np.mean`
+            Function of one argument that reduces all the values in a bin to
+            a single number (e.g. `np.mean`, `np.max`, `np.sum`, `np.std`).
+        gridsize : int or tuple of (int, int), default 100
+            The number of hexagons in the x-direction.
+            The corresponding number of hexagons in the y-direction is
+            chosen in a way that the hexagons are approximately regular.
+            Alternatively, gridsize can be a tuple with two elements
+            specifying the number of hexagons in the x-direction and the
+            y-direction.
+        **kwargs
+            Additional keyword arguments are documented in
+            :meth:`DataFrame.plot`.
+
+        Returns
+        -------
+        matplotlib.AxesSubplot
+            The matplotlib ``Axes`` on which the hexbin is plotted.
+
+        See Also
+        --------
+        DataFrame.plot : Make plots of a DataFrame.
+        matplotlib.pyplot.hexbin : Hexagonal binning plot using matplotlib,
+            the matplotlib function that is used under the hood.
+
+        Examples
+        --------
+        The following examples are generated with random data from
+        a normal distribution.
+
+        .. plot::
+            :context: close-figs
+
+            >>> n = 10000
+            >>> df = pd.DataFrame({'x': np.random.randn(n),
+            ...                    'y': np.random.randn(n)})
+            >>> ax = df.plot.hexbin(x='x', y='y', gridsize=20)
+
+        The next example uses `C` and `np.sum` as `reduce_C_function`.
+        Note that `'observations'` values range from 1 to 5 but the resulting
+        plot shows values up to more than 25. This is because of the
+        `reduce_C_function`.
+
+        .. plot::
+            :context: close-figs
+
+            >>> n = 500
+            >>> df = pd.DataFrame({
+            ...     'coord_x': np.random.uniform(-3, 3, size=n),
+            ...     'coord_y': np.random.uniform(30, 50, size=n),
+            ...     'observations': np.random.randint(1, 5, size=n)
+            ... })
+            >>> ax = df.plot.hexbin(x='coord_x',
+            ...                     y='coord_y',
+            ...                     C='observations',
+            ...                     reduce_C_function=np.sum,
+            ...                     gridsize=10,
+            ...                     cmap="viridis")
+        """
+        if reduce_C_function is not None:
+            kwargs["reduce_C_function"] = reduce_C_function
+        if gridsize is not None:
+            kwargs["gridsize"] = gridsize
+
+        return self(kind="hexbin", x=x, y=y, C=C, **kwargs)
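The per-bin aggregation that `reduce_C_function` performs can be sketched without matplotlib. This is a simplified stand-in using square cells instead of hexagons; `bin_reduce` and `binsize` are illustrative names, not part of pandas or matplotlib:

```python
from collections import defaultdict


def bin_reduce(x, y, C, reduce_fn, binsize=1.0):
    """Group C-values by a coarse (x, y) cell and reduce each group.

    Simplified analogue of hexbin's aggregation: real hexbin uses
    hexagonal cells, this sketch uses square cells of side `binsize`.
    """
    bins = defaultdict(list)
    for xi, yi, ci in zip(x, y, C):
        key = (int(xi // binsize), int(yi // binsize))
        bins[key].append(ci)
    return {key: reduce_fn(vals) for key, vals in bins.items()}


# Two points share the (0, 0) cell, so their C-values are summed.
result = bin_reduce([0.1, 0.4, 1.2], [0.2, 0.3, 1.5], [1, 2, 5], sum)
# result == {(0, 0): 3, (1, 1): 5}
```

Passing `max` or `statistics.mean` as `reduce_fn` mirrors swapping `np.sum` for `np.max` or the default `np.mean` in the docstring above.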
+
+
+_backends: dict[str, types.ModuleType] = {}
+
+
+def _load_backend(backend: str) -> types.ModuleType:
+    """
+    Load a pandas plotting backend.
+
+    Parameters
+    ----------
+    backend : str
+        The identifier for the backend. Either an entrypoint item registered
+        with importlib.metadata, "matplotlib", or a module name.
+
+    Returns
+    -------
+    types.ModuleType
+        The imported backend.
+    """
+    from importlib.metadata import entry_points
+
+    if backend == "matplotlib":
+        # Because matplotlib is an optional dependency and first-party backend,
+        # we need to attempt an import here to raise an ImportError if needed.
+        try:
+            module = importlib.import_module("pandas.plotting._matplotlib")
+        except ImportError:
+            raise ImportError(
+                "matplotlib is required for plotting when the "
+                'default backend "matplotlib" is selected.'
+            ) from None
+        return module
+
+    found_backend = False
+
+    eps = entry_points()
+    key = "pandas_plotting_backends"
+    # entry_points lost dict API ~ PY 3.10
+    # https://github.com/python/importlib_metadata/issues/298
+    if hasattr(eps, "select"):
+        entry = eps.select(group=key)
+    else:
+        # Argument 2 to "get" of "dict" has incompatible type "Tuple[]";
+        # expected "EntryPoints"  [arg-type]
+        entry = eps.get(key, ())  # type: ignore[arg-type]
+    for entry_point in entry:
+        found_backend = entry_point.name == backend
+        if found_backend:
+            module = entry_point.load()
+            break
+
+    if not found_backend:
+        # Fall back to unregistered, module name approach.
+        try:
+            module = importlib.import_module(backend)
+            found_backend = True
+        except ImportError:
+            # We re-raise later on.
+            pass
+
+    if found_backend:
+        if hasattr(module, "plot"):
+            # Validate that the interface is implemented when the option is set,
+            # rather than at plot time.
+            return module
+
+    raise ValueError(
+        f"Could not find plotting backend '{backend}'. Ensure that you've "
+        f"installed the package providing the '{backend}' entrypoint, or that "
+        "the package has a top-level `.plot` method."
+    )
+
+
+def _get_plot_backend(backend: str | None = None):
+    """
+    Return the plotting backend to use (e.g. `pandas.plotting._matplotlib`).
+
+    The plotting system of pandas uses matplotlib by default, but the idea here
+    is that it can also work with other third-party backends. This function
+    returns the module which provides a top-level `.plot` method that will
+    actually do the plotting. The backend is specified from a string, which
+    either comes from the keyword argument `backend`, or, if not specified, from
+    the option `pandas.options.plotting.backend`. All the rest of the code in
+    this file uses the backend specified there for the plotting.
+
+    The backend is imported lazily, as matplotlib is a soft dependency, and
+    pandas can be used without it being installed.
+
+    Notes
+    -----
+    Modifies `_backends` with imported backend as a side effect.
+    """
+    backend_str: str = backend or get_option("plotting.backend")
+
+    if backend_str in _backends:
+        return _backends[backend_str]
+
+    module = _load_backend(backend_str)
+    _backends[backend_str] = module
+    return module
venv/lib/python3.10/site-packages/pandas/plotting/_misc.py ADDED
@@ -0,0 +1,688 @@
+from __future__ import annotations
+
+from contextlib import contextmanager
+from typing import (
+    TYPE_CHECKING,
+    Any,
+)
+
+from pandas.plotting._core import _get_plot_backend
+
+if TYPE_CHECKING:
+    from collections.abc import (
+        Generator,
+        Mapping,
+    )
+
+    from matplotlib.axes import Axes
+    from matplotlib.colors import Colormap
+    from matplotlib.figure import Figure
+    from matplotlib.table import Table
+    import numpy as np
+
+    from pandas import (
+        DataFrame,
+        Series,
+    )
+
+
+def table(ax: Axes, data: DataFrame | Series, **kwargs) -> Table:
+    """
+    Helper function to convert DataFrame and Series to matplotlib.table.
+
+    Parameters
+    ----------
+    ax : Matplotlib axes object
+    data : DataFrame or Series
+        Data for table contents.
+    **kwargs
+        Keyword arguments to be passed to matplotlib.table.table.
+        If `rowLabels` or `colLabels` is not specified, data index or column
+        name will be used.
+
+    Returns
+    -------
+    matplotlib table object
+
+    Examples
+    --------
+
+    .. plot::
+        :context: close-figs
+
+        >>> import matplotlib.pyplot as plt
+        >>> df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})
+        >>> fig, ax = plt.subplots()
+        >>> ax.axis('off')
+        (0.0, 1.0, 0.0, 1.0)
+        >>> table = pd.plotting.table(ax, df, loc='center',
+        ...                           cellLoc='center', colWidths=list([.2, .2]))
+    """
+    plot_backend = _get_plot_backend("matplotlib")
+    return plot_backend.table(
+        ax=ax, data=data, rowLabels=None, colLabels=None, **kwargs
+    )
+
+
+def register() -> None:
+    """
+    Register pandas formatters and converters with matplotlib.
+
+    This function modifies the global ``matplotlib.units.registry``
+    dictionary. pandas adds custom converters for
+
+    * pd.Timestamp
+    * pd.Period
+    * np.datetime64
+    * datetime.datetime
+    * datetime.date
+    * datetime.time
+
+    See Also
+    --------
+    deregister_matplotlib_converters : Remove pandas formatters and converters.
+
+    Examples
+    --------
+    .. plot::
+        :context: close-figs
+
+        The following line is done automatically by pandas so
+        the plot can be rendered:
+
+        >>> pd.plotting.register_matplotlib_converters()
+
+        >>> df = pd.DataFrame({'ts': pd.period_range('2020', periods=2, freq='M'),
+        ...                    'y': [1, 2]
+        ...                    })
+        >>> plot = df.plot.line(x='ts', y='y')
+
+        If the converters are manually unset, an error will be raised:
+
+        >>> pd.set_option("plotting.matplotlib.register_converters",
+        ...               False)  # doctest: +SKIP
+        >>> df.plot.line(x='ts', y='y')  # doctest: +SKIP
+        Traceback (most recent call last):
+        TypeError: float() argument must be a string or a real number, not 'Period'
+    """
+    plot_backend = _get_plot_backend("matplotlib")
+    plot_backend.register()
+
+
+def deregister() -> None:
+    """
+    Remove pandas formatters and converters.
+
+    Removes the custom converters added by :func:`register`. This
+    attempts to set the state of the registry back to the state before
+    pandas registered its own units. Converters for pandas' own types like
+    Timestamp and Period are removed completely. Converters for types
+    pandas overwrites, like ``datetime.datetime``, are restored to their
+    original value.
+
+    See Also
+    --------
+    register_matplotlib_converters : Register pandas formatters and converters
+        with matplotlib.
+
+    Examples
+    --------
+    .. plot::
+        :context: close-figs
+
+        The following line is done automatically by pandas so
+        the plot can be rendered:
+
+        >>> pd.plotting.register_matplotlib_converters()
+
+        >>> df = pd.DataFrame({'ts': pd.period_range('2020', periods=2, freq='M'),
+        ...                    'y': [1, 2]
+        ...                    })
+        >>> plot = df.plot.line(x='ts', y='y')
+
+        If the converters are manually unset, an error will be raised:
+
+        >>> pd.set_option("plotting.matplotlib.register_converters",
+        ...               False)  # doctest: +SKIP
+        >>> df.plot.line(x='ts', y='y')  # doctest: +SKIP
+        Traceback (most recent call last):
+        TypeError: float() argument must be a string or a real number, not 'Period'
+    """
+    plot_backend = _get_plot_backend("matplotlib")
+    plot_backend.deregister()
+
+
+def scatter_matrix(
+    frame: DataFrame,
+    alpha: float = 0.5,
+    figsize: tuple[float, float] | None = None,
+    ax: Axes | None = None,
+    grid: bool = False,
+    diagonal: str = "hist",
+    marker: str = ".",
+    density_kwds: Mapping[str, Any] | None = None,
+    hist_kwds: Mapping[str, Any] | None = None,
+    range_padding: float = 0.05,
+    **kwargs,
+) -> np.ndarray:
+    """
+    Draw a matrix of scatter plots.
+
+    Parameters
+    ----------
+    frame : DataFrame
+    alpha : float, optional
+        Amount of transparency applied.
+    figsize : (float, float), optional
+        A tuple (width, height) in inches.
+    ax : Matplotlib axis object, optional
+    grid : bool, optional
+        Setting this to True will show the grid.
+    diagonal : {'hist', 'kde'}
+        Pick between 'kde' and 'hist' for either Kernel Density Estimation or
+        Histogram plot in the diagonal.
+    marker : str, optional
+        Matplotlib marker type, default '.'.
+    density_kwds : keywords
+        Keyword arguments to be passed to kernel density estimate plot.
+    hist_kwds : keywords
+        Keyword arguments to be passed to hist function.
+    range_padding : float, default 0.05
+        Relative extension of axis range in x and y with respect to
+        (x_max - x_min) or (y_max - y_min).
+    **kwargs
+        Keyword arguments to be passed to scatter function.
+
+    Returns
+    -------
+    numpy.ndarray
+        A matrix of scatter plots.
+
+    Examples
+    --------
+
+    .. plot::
+        :context: close-figs
+
+        >>> df = pd.DataFrame(np.random.randn(1000, 4), columns=['A', 'B', 'C', 'D'])
+        >>> pd.plotting.scatter_matrix(df, alpha=0.2)
+        array([[<Axes: xlabel='A', ylabel='A'>, <Axes: xlabel='B', ylabel='A'>,
+                <Axes: xlabel='C', ylabel='A'>, <Axes: xlabel='D', ylabel='A'>],
+               [<Axes: xlabel='A', ylabel='B'>, <Axes: xlabel='B', ylabel='B'>,
+                <Axes: xlabel='C', ylabel='B'>, <Axes: xlabel='D', ylabel='B'>],
+               [<Axes: xlabel='A', ylabel='C'>, <Axes: xlabel='B', ylabel='C'>,
+                <Axes: xlabel='C', ylabel='C'>, <Axes: xlabel='D', ylabel='C'>],
+               [<Axes: xlabel='A', ylabel='D'>, <Axes: xlabel='B', ylabel='D'>,
+                <Axes: xlabel='C', ylabel='D'>, <Axes: xlabel='D', ylabel='D'>]],
+              dtype=object)
+    """
+    plot_backend = _get_plot_backend("matplotlib")
+    return plot_backend.scatter_matrix(
+        frame=frame,
+        alpha=alpha,
+        figsize=figsize,
+        ax=ax,
+        grid=grid,
+        diagonal=diagonal,
+        marker=marker,
+        density_kwds=density_kwds,
+        hist_kwds=hist_kwds,
+        range_padding=range_padding,
+        **kwargs,
+    )
+
+
+def radviz(
+    frame: DataFrame,
+    class_column: str,
+    ax: Axes | None = None,
+    color: list[str] | tuple[str, ...] | None = None,
+    colormap: Colormap | str | None = None,
+    **kwds,
+) -> Axes:
+    """
+    Plot a multidimensional dataset in 2D.
+
+    Each Series in the DataFrame is represented as an evenly distributed
+    slice on a circle. Each data point is rendered in the circle according to
+    the value on each Series. Highly correlated `Series` in the `DataFrame`
+    are placed closer on the unit circle.
+
+    RadViz allows projecting an N-dimensional data set into a 2D space where the
+    influence of each dimension can be interpreted as a balance between the
+    influence of all dimensions.
+
+    More info available at the `original article
+    <https://doi.org/10.1145/331770.331775>`_
+    describing RadViz.
+
+    Parameters
+    ----------
+    frame : `DataFrame`
+        Object holding the data.
+    class_column : str
+        Column name containing the name of the data point category.
+    ax : :class:`matplotlib.axes.Axes`, optional
+        A plot instance to which to add the information.
+    color : list[str] or tuple[str], optional
+        Assign a color to each category. Example: ['blue', 'green'].
+    colormap : str or :class:`matplotlib.colors.Colormap`, default None
+        Colormap to select colors from. If string, load colormap with that
+        name from matplotlib.
+    **kwds
+        Options to pass to matplotlib scatter plotting method.
+
+    Returns
+    -------
+    :class:`matplotlib.axes.Axes`
+
+    See Also
+    --------
+    pandas.plotting.andrews_curves : Plot clustering visualization.
+
+    Examples
+    --------
+
+    .. plot::
+        :context: close-figs
+
+        >>> df = pd.DataFrame(
+        ...     {
+        ...         'SepalLength': [6.5, 7.7, 5.1, 5.8, 7.6, 5.0, 5.4, 4.6, 6.7, 4.6],
+        ...         'SepalWidth': [3.0, 3.8, 3.8, 2.7, 3.0, 2.3, 3.0, 3.2, 3.3, 3.6],
+        ...         'PetalLength': [5.5, 6.7, 1.9, 5.1, 6.6, 3.3, 4.5, 1.4, 5.7, 1.0],
+        ...         'PetalWidth': [1.8, 2.2, 0.4, 1.9, 2.1, 1.0, 1.5, 0.2, 2.1, 0.2],
+        ...         'Category': [
+        ...             'virginica',
+        ...             'virginica',
+        ...             'setosa',
+        ...             'virginica',
+        ...             'virginica',
+        ...             'versicolor',
+        ...             'versicolor',
+        ...             'setosa',
+        ...             'virginica',
+        ...             'setosa'
+        ...         ]
+        ...     }
+        ... )
+        >>> pd.plotting.radviz(df, 'Category')  # doctest: +SKIP
+    """
+    plot_backend = _get_plot_backend("matplotlib")
+    return plot_backend.radviz(
+        frame=frame,
+        class_column=class_column,
+        ax=ax,
+        color=color,
+        colormap=colormap,
+        **kwds,
+    )
+
+
+def andrews_curves(
+    frame: DataFrame,
+    class_column: str,
+    ax: Axes | None = None,
+    samples: int = 200,
+    color: list[str] | tuple[str, ...] | None = None,
+    colormap: Colormap | str | None = None,
+    **kwargs,
+) -> Axes:
+    """
+    Generate a matplotlib plot for visualizing clusters of multivariate data.
+
+    Andrews curves have the functional form:
+
+    .. math::
+        f(t) = \\frac{x_1}{\\sqrt{2}} + x_2 \\sin(t) + x_3 \\cos(t) +
+        x_4 \\sin(2t) + x_5 \\cos(2t) + \\cdots
+
+    Where :math:`x` coefficients correspond to the values of each dimension
+    and :math:`t` is linearly spaced between :math:`-\\pi` and :math:`+\\pi`.
+    Each row of frame then corresponds to a single curve.
+
+    Parameters
+    ----------
+    frame : DataFrame
+        Data to be plotted, preferably normalized to (0.0, 1.0).
+    class_column : label
+        Name of the column containing class names.
+    ax : axes object, default None
+        Axes to use.
+    samples : int
+        Number of points to plot in each curve.
+    color : str, list[str] or tuple[str], optional
+        Colors to use for the different classes. Colors can be strings
+        or 3-element floating point RGB values.
+    colormap : str or matplotlib colormap object, default None
+        Colormap to select colors from. If a string, load colormap with that
+        name from matplotlib.
+    **kwargs
+        Options to pass to matplotlib plotting method.
+
+    Returns
+    -------
+    :class:`matplotlib.axes.Axes`
+
+    Examples
+    --------
+
+    .. plot::
+        :context: close-figs
+
+        >>> df = pd.read_csv(
+        ...     'https://raw.githubusercontent.com/pandas-dev/'
+        ...     'pandas/main/pandas/tests/io/data/csv/iris.csv'
+        ... )
+        >>> pd.plotting.andrews_curves(df, 'Name')  # doctest: +SKIP
+    """
+    plot_backend = _get_plot_backend("matplotlib")
+    return plot_backend.andrews_curves(
+        frame=frame,
+        class_column=class_column,
+        ax=ax,
+        samples=samples,
+        color=color,
+        colormap=colormap,
+        **kwargs,
+    )
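The Andrews-curve formula in the docstring above is straightforward to evaluate directly. A small pure-Python sketch (`andrews_value` is an illustrative helper, not the pandas implementation, which vectorizes over `samples` points with NumPy):

```python
import math


def andrews_value(coeffs, t):
    """Evaluate f(t) = x1/sqrt(2) + x2*sin(t) + x3*cos(t) + x4*sin(2t) + ...

    Coefficient k >= 1 is paired with sin or cos of harmonic ceil(k/2):
    sin(t), cos(t), sin(2t), cos(2t), ...
    """
    total = coeffs[0] / math.sqrt(2)
    for k, x in enumerate(coeffs[1:], start=1):
        harmonic = (k + 1) // 2  # 1, 1, 2, 2, 3, 3, ...
        fn = math.sin if k % 2 == 1 else math.cos
        total += x * fn(harmonic * t)
    return total


# At t = 0 every sin term vanishes and every cos term contributes fully:
# f(0) = 1/sqrt(2) + x3 + x5 for coefficients (x1, ..., x5).
value = andrews_value((1, 2, 3, 4, 5), 0.0)
# value == 1/sqrt(2) + 3 + 5
```

Sampling `andrews_value` at `samples` evenly spaced `t` values in `[-pi, pi]` per row is what produces one curve per observation.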
389
+
390
+
391
+ def bootstrap_plot(
392
+ series: Series,
393
+ fig: Figure | None = None,
394
+ size: int = 50,
395
+ samples: int = 500,
396
+ **kwds,
397
+ ) -> Figure:
398
+ """
399
+ Bootstrap plot on mean, median and mid-range statistics.
400
+
401
+ The bootstrap plot is used to estimate the uncertainty of a statistic
402
+ by relying on random sampling with replacement [1]_. This function will
403
+ generate bootstrapping plots for mean, median and mid-range statistics
404
+ for the given number of samples of the given size.
405
+
406
+ .. [1] "Bootstrapping (statistics)" in \
407
+ https://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29
408
+
409
+ Parameters
410
+ ----------
411
+ series : pandas.Series
412
+ Series from where to get the samplings for the bootstrapping.
413
+ fig : matplotlib.figure.Figure, default None
414
+ If given, it will use the `fig` reference for plotting instead of
415
+ creating a new one with default parameters.
416
+ size : int, default 50
417
+ Number of data points to consider during each sampling. It must be
418
+ less than or equal to the length of the `series`.
419
+ samples : int, default 500
420
+ Number of times the bootstrap procedure is performed.
421
+ **kwds
422
+ Options to pass to matplotlib plotting method.
423
+
424
+ Returns
425
+ -------
426
+ matplotlib.figure.Figure
427
+ Matplotlib figure.
428
+
429
+ See Also
430
+ --------
431
+ pandas.DataFrame.plot : Basic plotting for DataFrame objects.
432
+ pandas.Series.plot : Basic plotting for Series objects.
433
+
434
+ Examples
435
+ --------
436
+ This example draws a basic bootstrap plot for a Series.
437
+
438
+ .. plot::
439
+ :context: close-figs
440
+
441
+ >>> s = pd.Series(np.random.uniform(size=100))
442
+ >>> pd.plotting.bootstrap_plot(s) # doctest: +SKIP
443
+ <Figure size 640x480 with 6 Axes>
444
+ """
445
+ plot_backend = _get_plot_backend("matplotlib")
446
+ return plot_backend.bootstrap_plot(
447
+ series=series, fig=fig, size=size, samples=samples, **kwds
448
+ )
449
+
450
+
451
+ def parallel_coordinates(
452
+ frame: DataFrame,
453
+ class_column: str,
454
+ cols: list[str] | None = None,
455
+ ax: Axes | None = None,
456
+ color: list[str] | tuple[str, ...] | None = None,
457
+ use_columns: bool = False,
458
+ xticks: list | tuple | None = None,
459
+ colormap: Colormap | str | None = None,
460
+ axvlines: bool = True,
461
+ axvlines_kwds: Mapping[str, Any] | None = None,
462
+ sort_labels: bool = False,
463
+ **kwargs,
464
+ ) -> Axes:
465
+ """
466
+ Parallel coordinates plotting.
467
+
468
+ Parameters
469
+ ----------
470
+ frame : DataFrame
471
+ class_column : str
472
+ Column name containing class names.
473
+ cols : list, optional
474
+ A list of column names to use.
+ ax : matplotlib.axis, optional
+ Matplotlib axis object.
+ color : list or tuple, optional
+ Colors to use for the different classes.
+ use_columns : bool, optional
+ If true, columns will be used as xticks.
+ xticks : list or tuple, optional
+ A list of values to use for xticks.
+ colormap : str or matplotlib colormap, default None
+ Colormap to use for line colors.
+ axvlines : bool, optional
+ If true, vertical lines will be added at each xtick.
+ axvlines_kwds : keywords, optional
+ Options to be passed to axvline method for vertical lines.
+ sort_labels : bool, default False
+ Sort class_column labels, useful when assigning colors.
+ **kwargs
+ Options to pass to matplotlib plotting method.
+
+ Returns
+ -------
+ matplotlib.axes.Axes
+
+ Examples
+ --------
+
+ .. plot::
+ :context: close-figs
+
+ >>> df = pd.read_csv(
+ ... 'https://raw.githubusercontent.com/pandas-dev/'
+ ... 'pandas/main/pandas/tests/io/data/csv/iris.csv'
+ ... )
+ >>> pd.plotting.parallel_coordinates(
+ ... df, 'Name', color=('#556270', '#4ECDC4', '#C7F464')
+ ... ) # doctest: +SKIP
+ """
+ plot_backend = _get_plot_backend("matplotlib")
+ return plot_backend.parallel_coordinates(
+ frame=frame,
+ class_column=class_column,
+ cols=cols,
+ ax=ax,
+ color=color,
+ use_columns=use_columns,
+ xticks=xticks,
+ colormap=colormap,
+ axvlines=axvlines,
+ axvlines_kwds=axvlines_kwds,
+ sort_labels=sort_labels,
+ **kwargs,
+ )
+
+
+ def lag_plot(series: Series, lag: int = 1, ax: Axes | None = None, **kwds) -> Axes:
+ """
+ Lag plot for time series.
+
+ Parameters
+ ----------
+ series : Series
+ The time series to visualize.
+ lag : int, default 1
+ Lag length of the scatter plot.
+ ax : Matplotlib axis object, optional
+ The matplotlib axis object to use.
+ **kwds
+ Matplotlib scatter method keyword arguments.
+
+ Returns
+ -------
+ matplotlib.axes.Axes
+
+ Examples
+ --------
+ Lag plots are most commonly used to look for patterns in time series data.
+
+ Given the following time series
+
+ .. plot::
+ :context: close-figs
+
+ >>> np.random.seed(5)
+ >>> x = np.cumsum(np.random.normal(loc=1, scale=5, size=50))
+ >>> s = pd.Series(x)
+ >>> s.plot() # doctest: +SKIP
+
+ A lag plot with ``lag=1`` returns
+
+ .. plot::
+ :context: close-figs
+
+ >>> pd.plotting.lag_plot(s, lag=1)
+ <Axes: xlabel='y(t)', ylabel='y(t + 1)'>
+ """
+ plot_backend = _get_plot_backend("matplotlib")
+ return plot_backend.lag_plot(series=series, lag=lag, ax=ax, **kwds)
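A lag plot is simply a scatter of the series against a shifted copy of itself: y(t) on the x-axis against y(t + lag) on the y-axis. The pairing the plot draws can be sketched with a stdlib-only toy example (the data and variable names are illustrative, not the backend's internals):

```python
# Toy series; a lag plot with lag=1 scatters y(t) against y(t + 1).
data = [1.0, 2.0, 4.0, 8.0, 16.0]
lag = 1

y_t = data[:-lag]      # y(t): all but the last `lag` values
y_t_lag = data[lag:]   # y(t + lag): the series shifted forward

# These are the (x, y) points the scatter plot would show.
pairs = list(zip(y_t, y_t_lag))
```

For a random series these points scatter without structure; for an autocorrelated series they line up along the diagonal, which is what makes the plot useful for spotting patterns.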
+
+
+ def autocorrelation_plot(series: Series, ax: Axes | None = None, **kwargs) -> Axes:
+ """
+ Autocorrelation plot for time series.
+
+ Parameters
+ ----------
+ series : Series
+ The time series to visualize.
+ ax : Matplotlib axis object, optional
+ The matplotlib axis object to use.
+ **kwargs
+ Options to pass to matplotlib plotting method.
+
+ Returns
+ -------
+ matplotlib.axes.Axes
+
+ Examples
+ --------
+ The horizontal lines in the plot correspond to the 95% and 99% confidence bands.
+
+ The dashed line is the 99% confidence band.
+
+ .. plot::
+ :context: close-figs
+
+ >>> spacing = np.linspace(-9 * np.pi, 9 * np.pi, num=1000)
+ >>> s = pd.Series(0.7 * np.random.rand(1000) + 0.3 * np.sin(spacing))
+ >>> pd.plotting.autocorrelation_plot(s) # doctest: +SKIP
+ """
+ plot_backend = _get_plot_backend("matplotlib")
+ return plot_backend.autocorrelation_plot(series=series, ax=ax, **kwargs)
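The quantity plotted at each lag k is the sample autocorrelation, r_k = Σ_t (y_t − ȳ)(y_{t+k} − ȳ) / Σ_t (y_t − ȳ)². A stdlib-only sketch of that formula for a single lag (toy data, not the backend code):

```python
# Sample autocorrelation at lag k:
#   r_k = sum_t (y_t - mean) * (y_{t+k} - mean) / sum_t (y_t - mean)^2
data = [1.0, 2.0, 3.0, 4.0, 5.0]
k = 1

mean = sum(data) / len(data)
denom = sum((y - mean) ** 2 for y in data)
num = sum((data[t] - mean) * (data[t + k] - mean) for t in range(len(data) - k))
r_k = num / denom
```

The plot repeats this for every lag from 1 to len(series) − 1 and draws the resulting curve against the confidence bands.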
+
+
+ class _Options(dict):
+ """
+ Stores pandas plotting options.
+
+ Allows for parameter aliasing, so you can use parameter names that are
+ the same as the plot function parameters, while values are stored in a
+ canonical format that makes it easy to break them down into groups later.
+
+ Examples
+ --------
+
+ .. plot::
+ :context: close-figs
+
+ >>> np.random.seed(42)
+ >>> df = pd.DataFrame({'A': np.random.randn(10),
+ ... 'B': np.random.randn(10)},
+ ... index=pd.date_range("1/1/2000",
+ ... freq='4MS', periods=10))
+ >>> with pd.plotting.plot_params.use("x_compat", True):
+ ... _ = df["A"].plot(color="r")
+ ... _ = df["B"].plot(color="g")
+ """
+
+ # alias so the names are the same as the plotting method parameter names
+ _ALIASES = {"x_compat": "xaxis.compat"}
+ _DEFAULT_KEYS = ["xaxis.compat"]
+
+ def __init__(self, deprecated: bool = False) -> None:
+ self._deprecated = deprecated
+ super().__setitem__("xaxis.compat", False)
+
+ def __getitem__(self, key):
+ key = self._get_canonical_key(key)
+ if key not in self:
+ raise ValueError(f"{key} is not a valid pandas plotting option")
+ return super().__getitem__(key)
+
+ def __setitem__(self, key, value) -> None:
+ key = self._get_canonical_key(key)
+ super().__setitem__(key, value)
+
+ def __delitem__(self, key) -> None:
+ key = self._get_canonical_key(key)
+ if key in self._DEFAULT_KEYS:
+ raise ValueError(f"Cannot remove default parameter {key}")
+ super().__delitem__(key)
+
+ def __contains__(self, key) -> bool:
+ key = self._get_canonical_key(key)
+ return super().__contains__(key)
+
+ def reset(self) -> None:
+ """
+ Reset the option store to its initial state.
+
+ Returns
+ -------
+ None
+ """
+ # error: Cannot access "__init__" directly
+ self.__init__() # type: ignore[misc]
+
+ def _get_canonical_key(self, key):
+ return self._ALIASES.get(key, key)
+
+ @contextmanager
+ def use(self, key, value) -> Generator[_Options, None, None]:
+ """
+ Temporarily set a parameter value using the with statement.
+ Aliasing allowed.
+ """
+ old_value = self[key]
+ try:
+ self[key] = value
+ yield self
+ finally:
+ self[key] = old_value
+
+
+ plot_params = _Options()
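The canonical-key mechanism above can be illustrated with a minimal standalone sketch: every dict access first translates the key through the alias table, so the user-facing name and the canonical name address the same entry. The class name `AliasedOptions` is made up for illustration; only the aliasing and the `use` context manager are reproduced here.

```python
from contextlib import contextmanager


class AliasedOptions(dict):
    """Minimal sketch of _Options: keys are canonicalized on every access,
    so opts['x_compat'] and opts['xaxis.compat'] hit the same entry."""

    _ALIASES = {"x_compat": "xaxis.compat"}

    def __init__(self):
        super().__init__()
        super().__setitem__("xaxis.compat", False)  # default value

    def _canonical(self, key):
        # Translate a user-facing alias to its canonical dotted name.
        return self._ALIASES.get(key, key)

    def __getitem__(self, key):
        return super().__getitem__(self._canonical(key))

    def __setitem__(self, key, value):
        super().__setitem__(self._canonical(key), value)

    @contextmanager
    def use(self, key, value):
        # Temporarily override a value, restoring it even on error.
        old = self[key]
        try:
            self[key] = value
            yield self
        finally:
            self[key] = old


opts = AliasedOptions()
with opts.use("x_compat", True):
    inside = opts["xaxis.compat"]  # alias and canonical key agree
after = opts["x_compat"]           # restored when the block exits
```

The try/finally in `use` is what makes the override safe: even if plotting raises inside the `with` block, the old value is restored.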
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (190 Bytes)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_array.cpython-310.pyc ADDED
Binary file (4.78 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_astype.cpython-310.pyc ADDED
Binary file (6.76 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_chained_assignment_deprecation.cpython-310.pyc ADDED
Binary file (4.05 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_clip.cpython-310.pyc ADDED
Binary file (3.07 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_constructors.cpython-310.pyc ADDED
Binary file (9.82 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_core_functionalities.cpython-310.pyc ADDED
Binary file (3.13 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_functions.cpython-310.pyc ADDED
Binary file (10.1 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_indexing.cpython-310.pyc ADDED
Binary file (27.1 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_internals.cpython-310.pyc ADDED
Binary file (3.73 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_interp_fillna.cpython-310.pyc ADDED
Binary file (11.8 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_methods.cpython-310.pyc ADDED
Binary file (52 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_replace.cpython-310.pyc ADDED
Binary file (12.6 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_setitem.cpython-310.pyc ADDED
Binary file (3.76 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/test_util.cpython-310.pyc ADDED
Binary file (714 Bytes)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/__pycache__/util.cpython-310.pyc ADDED
Binary file (994 Bytes)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/index/__init__.py ADDED
File without changes
venv/lib/python3.10/site-packages/pandas/tests/copy_view/index/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (196 Bytes)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/index/__pycache__/test_datetimeindex.cpython-310.pyc ADDED
Binary file (2.31 kB)
venv/lib/python3.10/site-packages/pandas/tests/copy_view/index/__pycache__/test_index.cpython-310.pyc ADDED
Binary file (5.9 kB)