diff --git a/ckpts/universal/global_step40/zero/16.attention.query_key_value.weight/exp_avg_sq.pt b/ckpts/universal/global_step40/zero/16.attention.query_key_value.weight/exp_avg_sq.pt
new file mode 100644
index 0000000000000000000000000000000000000000..a3cb3218bf7e6cb0c06fb0b903f929cac1c6f327
--- /dev/null
+++ b/ckpts/universal/global_step40/zero/16.attention.query_key_value.weight/exp_avg_sq.pt
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac468ceabd93f514aa7d003d1dbab05b8e68035900d5d25ef6938c4616ea959c
+size 50332843
diff --git a/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/exp_avg.pt b/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/exp_avg.pt
new file mode 100644
index 0000000000000000000000000000000000000000..ede41a14e094e2561e6c226c56e4680e9741a9a5
--- /dev/null
+++ b/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/exp_avg.pt
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4da93b9372f8c88f63165119a244af2e9becd0ac17f694915d6905196348d603
+size 33555612
diff --git a/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/exp_avg_sq.pt b/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/exp_avg_sq.pt
new file mode 100644
index 0000000000000000000000000000000000000000..052bc5ff8cf20ba715b0bba9adf385cbe7cdd7df
--- /dev/null
+++ b/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/exp_avg_sq.pt
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4313de6920348f98073a99238c9a87ab3cb78baba2bd66d265f5e6b76c80a5b5
+size 33555627
diff --git a/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/fp32.pt b/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/fp32.pt
new file mode 100644
index 0000000000000000000000000000000000000000..231724bd48ac22e649cc0267c95c568b9e30e5af
--- /dev/null
+++ b/ckpts/universal/global_step40/zero/17.mlp.dense_4h_to_h.weight/fp32.pt
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3089e77b53620c3658b64c419e0ef3fc307e45c68fdaff018ade8e4c68bd36d
+size 33555533
diff --git a/ckpts/universal/global_step40/zero/6.post_attention_layernorm.weight/fp32.pt b/ckpts/universal/global_step40/zero/6.post_attention_layernorm.weight/fp32.pt
new file mode 100644
index 0000000000000000000000000000000000000000..43d10ecd4bab9758356bad23834a901e681f3f18
--- /dev/null
+++ b/ckpts/universal/global_step40/zero/6.post_attention_layernorm.weight/fp32.pt
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66912cee30aeef84c558b59578dbd7376a05e35377acd70a85a2d8e02035c4ce
+size 9293
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/__init__.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/__init__.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..0de8b0923545a634b7d02873c6e783cb5e5882a1
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/__init__.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_filters.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_filters.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..1f49441a01a83063587a8d20ddd7718b0e61d112
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_filters.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_fourier.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_fourier.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d4111d8366b3619226312758f9deade6976ae347
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_fourier.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_interpolation.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_interpolation.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..64c428b6295d1056265c795dd5e84f37fe873485
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_interpolation.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_measurements.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_measurements.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..f0bedcd18bd6a04817675237a37a457c95f83d1d
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_measurements.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_morphology.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_morphology.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..5ebfacaa74d04f502fe6863cad4f0da369c31153
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_morphology.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_ni_docstrings.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_ni_docstrings.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..c735435965ddc28ef8e1940d6870527cafa389d0
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_ni_docstrings.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_ni_support.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_ni_support.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..48a88e11339717ec1056e9e2e84f813a9d6f98f8
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/_ni_support.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/filters.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/filters.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..a73f87d331f1e22410f091303f2465cf9bbd25fd
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/filters.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/fourier.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/fourier.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..daf3605dc54c8dd698f9ba67ea11601d02d931e6
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/fourier.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/interpolation.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/interpolation.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d4eff92c1ad888b73ad047ddf792b057066bfa24
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/interpolation.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/measurements.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/measurements.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..b0f7d0a9169d79f6c9a1890984e0526f5b03dabd
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/measurements.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/morphology.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/morphology.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..144f4468967ea8ed80f23ff7f20ab10c549aafc7
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/__pycache__/morphology.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__init__.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..8fc6581276063ca8d0e90362e1d1eee743d4ed18
--- /dev/null
+++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__init__.py
@@ -0,0 +1,13 @@
+from __future__ import annotations
+import numpy
+
+# list of numarray data types
+integer_types: list[type] = [
+    numpy.int8, numpy.uint8, numpy.int16, numpy.uint16,
+    numpy.int32, numpy.uint32, numpy.int64, numpy.uint64]
+
+float_types: list[type] = [numpy.float32, numpy.float64]
+
+complex_types: list[type] = [numpy.complex64, numpy.complex128]
+
+types: list[type] = integer_types + float_types
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/__init__.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/__init__.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..8ab30fb1c01cf5fe7de3813270e6720c30d51445
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/__init__.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_c_api.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_c_api.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..0ad0ff0a3edefe304449d8097af9adfe3eab76ba
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_c_api.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_datatypes.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_datatypes.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..5144714975e1e8b4523c699481e388fd44c505b8
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_datatypes.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_filters.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_filters.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..37401b47da6b0ce0a6ca3ee566f584101c322f21
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_filters.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_fourier.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_fourier.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..c54e735450cbb3d891390e03c98b48c58e7ee0b8
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_fourier.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_interpolation.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_interpolation.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..cf4ec5a809ef4adc4c6a2670ec3ef65681167486
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_interpolation.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_measurements.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_measurements.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..60d3f84347ca37d55849823b80a4a4c15251cb82
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_measurements.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_morphology.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_morphology.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..7e07cb5540885c13ccf9323bd0fc34327d0fd10d
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_morphology.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_ni_support.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_ni_support.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..eb0c066329ce286b49a73a3c337a6ab14ce2051a
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_ni_support.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_splines.cpython-310.pyc b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_splines.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..cddca8de925369e980c1d1c90acf6b82c02af23f
Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/__pycache__/test_splines.cpython-310.pyc differ
diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_inputs.txt b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_inputs.txt
new file mode 100644
index 0000000000000000000000000000000000000000..6c3cff3b12cec4ad050b31cc5d5c327f32784447
--- /dev/null
+++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_inputs.txt
@@ -0,0 +1,21 @@
+1 1 1 1 1 1 1
+1 1 1 1 1 1 1
+1 1 1 1 1 1 1
+1 1 1 1 1 1 1
+1 1 1 1 1 1 1
+1 1 1 1 1 1 1
+1 1 1 1 1 1 1
+1 1 1 0 1 1 1
+1 1 0 0 0 1 1
+1 0 1 0 1 0 1
+0 0 0 1 0 0 0
+1 0 1 0 1 0 1
+1 1 0 0 0 1 1
+1 1 1 0 1 1 1
+1 0 1 1 1 0
1 +0 0 0 1 0 0 0 +1 0 0 1 0 0 1 +1 1 1 1 1 1 1 +1 0 0 1 0 0 1 +0 0 0 1 0 0 0 +1 0 1 1 1 0 1 diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_results.txt b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_results.txt new file mode 100644 index 0000000000000000000000000000000000000000..c239b0369c9df3e06df9a2fbf048faec2f84941f --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_results.txt @@ -0,0 +1,294 @@ +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +2 2 2 2 2 2 2 +3 3 3 3 3 3 3 +4 4 4 4 4 4 4 +5 5 5 5 5 5 5 +6 6 6 6 6 6 6 +7 7 7 7 7 7 7 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 2 3 4 5 6 7 +8 9 10 11 12 13 14 +15 16 17 18 19 20 21 +22 23 24 25 26 27 28 +29 30 31 32 33 34 35 +36 37 38 39 40 41 42 +43 44 45 46 47 48 49 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 2 3 4 5 6 7 +8 1 2 3 4 5 6 +9 8 1 2 3 4 5 +10 9 8 1 2 3 4 +11 10 9 8 1 2 3 +12 11 10 9 8 1 2 +13 12 11 10 9 8 1 +1 2 3 4 5 6 7 +1 2 3 4 5 6 7 +1 2 3 4 5 6 7 +1 2 3 4 5 6 7 +1 2 3 4 5 6 7 +1 2 3 4 5 6 7 +1 2 3 4 5 6 7 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 2 1 2 1 2 1 +2 1 2 1 2 1 2 +1 2 1 2 1 2 1 +2 1 2 1 2 1 2 +1 2 1 2 1 2 1 +2 1 2 1 2 1 2 +1 2 1 2 1 2 1 +1 2 3 4 5 6 7 +2 3 4 5 6 7 8 +3 4 5 6 7 8 9 +4 5 6 7 8 9 10 +5 6 7 8 9 10 11 +6 7 8 9 10 11 12 +7 8 9 10 11 12 13 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 1 1 1 1 +1 1 1 0 2 2 2 +1 1 0 0 0 2 2 +1 0 3 0 2 0 4 +0 0 0 2 0 0 0 +5 0 2 0 6 0 7 +2 2 0 0 0 7 7 +2 2 2 0 7 7 7 +1 1 1 0 2 2 2 +1 1 0 0 0 2 2 +3 0 1 0 4 0 2 +0 0 0 1 0 0 0 +5 0 6 0 1 0 7 +5 5 0 0 0 1 1 +5 5 5 0 1 1 1 +1 1 1 0 2 2 2 +3 3 0 0 0 4 4 +5 0 6 0 7 0 8 +0 0 0 9 0 0 0 +10 0 11 0 12 0 13 +14 14 0 0 0 15 15 +16 16 16 0 17 17 17 +1 1 1 0 2 3 3 +1 1 0 0 0 3 3 +1 0 4 0 3 0 3 +0 0 0 3 0 0 0 +3 0 3 0 5 0 6 +3 3 0 0 0 6 6 +3 3 7 0 6 6 6 +1 2 3 0 4 5 6 +7 8 0 0 0 9 10 +11 0 12 0 13 0 14 +0 0 0 15 0 0 0 +16 0 17 0 18 0 19 +20 21 0 0 0 22 23 +24 25 26 0 27 28 29 +1 1 1 0 2 2 2 +1 1 0 0 0 2 2 +1 0 3 0 2 0 2 +0 0 0 2 0 0 0 +2 0 2 0 4 0 5 +2 2 0 0 0 5 5 +2 2 2 0 5 5 5 +1 1 1 0 2 2 2 +1 1 0 0 0 2 2 +1 0 3 0 4 0 2 +0 0 0 5 0 0 0 +6 0 7 0 8 0 9 +6 6 0 0 0 9 9 +6 6 6 0 9 9 9 +1 2 3 0 4 5 6 +7 1 0 0 0 4 5 +8 0 1 0 9 0 4 +0 0 0 1 0 0 0 +10 0 11 0 1 0 12 +13 10 0 0 0 1 14 +15 13 10 0 16 17 1 +1 2 3 0 4 5 6 +1 2 0 0 0 5 6 +1 0 7 0 8 0 6 +0 0 0 9 0 0 0 +10 0 11 0 12 0 13 +10 14 0 0 0 15 13 +10 14 16 0 17 15 13 +1 1 1 0 1 1 1 +1 1 0 0 0 1 1 +1 0 1 0 1 0 1 +0 0 0 1 0 0 0 +1 0 1 0 1 0 1 +1 1 0 0 0 1 1 +1 1 1 0 1 1 1 +1 1 2 0 3 3 3 +1 1 0 0 0 3 3 +1 0 1 0 4 0 3 +0 0 0 1 0 0 0 +5 0 6 0 1 0 1 +5 5 0 0 0 1 1 +5 5 5 0 7 1 1 +1 2 1 0 1 3 1 +2 1 0 0 0 1 3 +1 0 1 0 1 0 1 +0 0 0 1 0 0 0 +1 0 1 0 1 0 1 +4 1 0 0 0 1 5 +1 4 1 0 1 5 1 +1 2 3 0 4 5 6 +2 3 0 0 0 6 7 +3 0 8 0 6 0 9 +0 0 0 6 0 0 0 +10 0 6 0 11 0 12 +13 6 0 0 0 12 14 +6 15 16 0 12 14 17 +1 1 1 0 2 2 2 +1 1 0 0 0 2 2 +1 0 1 0 3 0 2 +0 0 0 1 0 0 0 +4 0 5 0 1 0 1 +4 4 0 0 0 1 1 +4 4 4 0 1 1 
1 +1 0 2 2 2 0 3 +0 0 0 2 0 0 0 +4 0 0 5 0 0 5 +5 5 5 5 5 5 5 +5 0 0 5 0 0 6 +0 0 0 7 0 0 0 +8 0 7 7 7 0 9 +1 0 2 2 2 0 3 +0 0 0 2 0 0 0 +4 0 0 4 0 0 5 +4 4 4 4 4 4 4 +6 0 0 4 0 0 4 +0 0 0 7 0 0 0 +8 0 7 7 7 0 9 +1 0 2 2 2 0 3 +0 0 0 4 0 0 0 +5 0 0 6 0 0 7 +8 8 8 8 8 8 8 +9 0 0 10 0 0 11 +0 0 0 12 0 0 0 +13 0 14 14 14 0 15 +1 0 2 3 3 0 4 +0 0 0 3 0 0 0 +5 0 0 3 0 0 6 +5 5 3 3 3 6 6 +5 0 0 3 0 0 6 +0 0 0 3 0 0 0 +7 0 3 3 8 0 9 +1 0 2 3 4 0 5 +0 0 0 6 0 0 0 +7 0 0 8 0 0 9 +10 11 12 13 14 15 16 +17 0 0 18 0 0 19 +0 0 0 20 0 0 0 +21 0 22 23 24 0 25 +1 0 2 2 2 0 3 +0 0 0 2 0 0 0 +2 0 0 2 0 0 2 +2 2 2 2 2 2 2 +2 0 0 2 0 0 2 +0 0 0 2 0 0 0 +4 0 2 2 2 0 5 +1 0 2 2 2 0 3 +0 0 0 2 0 0 0 +2 0 0 2 0 0 2 +2 2 2 2 2 2 2 +2 0 0 2 0 0 2 +0 0 0 2 0 0 0 +4 0 2 2 2 0 5 +1 0 2 3 4 0 5 +0 0 0 2 0 0 0 +6 0 0 7 0 0 8 +9 6 10 11 7 12 13 +14 0 0 10 0 0 12 +0 0 0 15 0 0 0 +16 0 17 18 15 0 19 +1 0 2 3 4 0 5 +0 0 0 3 0 0 0 +6 0 0 3 0 0 7 +6 8 9 3 10 11 7 +6 0 0 3 0 0 7 +0 0 0 3 0 0 0 +12 0 13 3 14 0 15 +1 0 2 2 2 0 3 +0 0 0 2 0 0 0 +2 0 0 2 0 0 2 +2 2 2 2 2 2 2 +2 0 0 2 0 0 2 +0 0 0 2 0 0 0 +4 0 2 2 2 0 5 +1 0 2 2 3 0 4 +0 0 0 2 0 0 0 +5 0 0 2 0 0 6 +5 5 2 2 2 6 6 +5 0 0 2 0 0 6 +0 0 0 2 0 0 0 +7 0 8 2 2 0 9 +1 0 2 3 2 0 4 +0 0 0 2 0 0 0 +5 0 0 6 0 0 7 +8 5 6 9 6 7 10 +5 0 0 6 0 0 7 +0 0 0 11 0 0 0 +12 0 11 13 11 0 14 +1 0 2 3 4 0 5 +0 0 0 4 0 0 0 +6 0 0 7 0 0 8 +9 10 7 11 12 8 13 +10 0 0 12 0 0 14 +0 0 0 15 0 0 0 +16 0 15 17 18 0 19 +1 0 2 2 2 0 3 +0 0 0 2 0 0 0 +2 0 0 2 0 0 2 +2 2 2 2 2 2 2 +2 0 0 2 0 0 2 +0 0 0 2 0 0 0 +4 0 2 2 2 0 5 diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_strels.txt b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_strels.txt new file mode 100644 index 0000000000000000000000000000000000000000..35ae8121364d4fb3292c11f2a72333f456fa9c0a --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/data/label_strels.txt @@ -0,0 +1,42 @@ +0 0 1 +1 1 1 +1 0 0 +1 0 0 +1 1 1 +0 0 1 +0 0 0 +1 1 1 +0 0 0 +0 1 1 +0 1 0 +1 1 0 +0 0 0 +0 0 0 +0 0 0 +0 1 1 +1 1 1 +1 1 0 +0 1 0 +1 1 1 +0 1 0 +1 0 0 +0 1 0 +0 0 1 +0 1 0 +0 1 0 +0 1 0 +1 1 1 +1 1 1 +1 1 1 +1 1 0 +0 1 0 +0 1 1 +1 0 1 +0 1 0 +1 0 1 +0 0 1 +0 1 0 +1 0 0 +1 1 0 +1 1 1 +0 1 1 diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_c_api.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_c_api.py new file mode 100644 index 0000000000000000000000000000000000000000..ed52ed8477056176e1f5aacbf681b12b0153fee6 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_c_api.py @@ -0,0 +1,102 @@ +import numpy as np +from numpy.testing import assert_allclose + +from scipy import ndimage +from scipy.ndimage import _ctest +from scipy.ndimage import _cytest +from scipy._lib._ccallback import LowLevelCallable + +FILTER1D_FUNCTIONS = [ + lambda filter_size: _ctest.filter1d(filter_size), + lambda filter_size: _cytest.filter1d(filter_size, with_signature=False), + lambda filter_size: LowLevelCallable( + _cytest.filter1d(filter_size, with_signature=True) + ), + lambda filter_size: LowLevelCallable.from_cython( + _cytest, "_filter1d", + _cytest.filter1d_capsule(filter_size), + ), +] + +FILTER2D_FUNCTIONS = [ + lambda weights: _ctest.filter2d(weights), + lambda weights: _cytest.filter2d(weights, with_signature=False), + lambda weights: LowLevelCallable(_cytest.filter2d(weights, with_signature=True)), + lambda weights: LowLevelCallable.from_cython(_cytest, + "_filter2d", + _cytest.filter2d_capsule(weights),), +] + +TRANSFORM_FUNCTIONS = [ + lambda shift: 
_ctest.transform(shift), + lambda shift: _cytest.transform(shift, with_signature=False), + lambda shift: LowLevelCallable(_cytest.transform(shift, with_signature=True)), + lambda shift: LowLevelCallable.from_cython(_cytest, + "_transform", + _cytest.transform_capsule(shift),), +] + + +def test_generic_filter(): + def filter2d(footprint_elements, weights): + return (weights*footprint_elements).sum() + + def check(j): + func = FILTER2D_FUNCTIONS[j] + + im = np.ones((20, 20)) + im[:10,:10] = 0 + footprint = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]) + footprint_size = np.count_nonzero(footprint) + weights = np.ones(footprint_size)/footprint_size + + res = ndimage.generic_filter(im, func(weights), + footprint=footprint) + std = ndimage.generic_filter(im, filter2d, footprint=footprint, + extra_arguments=(weights,)) + assert_allclose(res, std, err_msg=f"#{j} failed") + + for j, func in enumerate(FILTER2D_FUNCTIONS): + check(j) + + +def test_generic_filter1d(): + def filter1d(input_line, output_line, filter_size): + for i in range(output_line.size): + output_line[i] = 0 + for j in range(filter_size): + output_line[i] += input_line[i+j] + output_line /= filter_size + + def check(j): + func = FILTER1D_FUNCTIONS[j] + + im = np.tile(np.hstack((np.zeros(10), np.ones(10))), (10, 1)) + filter_size = 3 + + res = ndimage.generic_filter1d(im, func(filter_size), + filter_size) + std = ndimage.generic_filter1d(im, filter1d, filter_size, + extra_arguments=(filter_size,)) + assert_allclose(res, std, err_msg=f"#{j} failed") + + for j, func in enumerate(FILTER1D_FUNCTIONS): + check(j) + + +def test_geometric_transform(): + def transform(output_coordinates, shift): + return output_coordinates[0] - shift, output_coordinates[1] - shift + + def check(j): + func = TRANSFORM_FUNCTIONS[j] + + im = np.arange(12).reshape(4, 3).astype(np.float64) + shift = 0.5 + + res = ndimage.geometric_transform(im, func(shift)) + std = ndimage.geometric_transform(im, transform, extra_arguments=(shift,)) + assert_allclose(res, std, err_msg=f"#{j} failed") + + for j, func in enumerate(TRANSFORM_FUNCTIONS): + check(j) diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_datatypes.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_datatypes.py new file mode 100644 index 0000000000000000000000000000000000000000..cd9382a16ada38a6d3059d54ad765c2e0f74b7c1 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_datatypes.py @@ -0,0 +1,66 @@ +""" Testing data types for ndimage calls +""" +import numpy as np +from numpy.testing import assert_array_almost_equal, assert_ +import pytest + +from scipy import ndimage + + +def test_map_coordinates_dts(): + # check that ndimage accepts different data types for interpolation + data = np.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + shifted_data = np.array([[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + idx = np.indices(data.shape) + dts = (np.uint8, np.uint16, np.uint32, np.uint64, + np.int8, np.int16, np.int32, np.int64, + np.intp, np.uintp, np.float32, np.float64) + for order in range(0, 6): + for data_dt in dts: + these_data = data.astype(data_dt) + for coord_dt in dts: + # affine mapping + mat = np.eye(2, dtype=coord_dt) + off = np.zeros((2,), dtype=coord_dt) + out = ndimage.affine_transform(these_data, mat, off) + assert_array_almost_equal(these_data, out) + # map coordinates + coords_m1 = idx.astype(coord_dt) - 1 + coords_p10 = idx.astype(coord_dt) + 10 + out = ndimage.map_coordinates(these_data, coords_m1, order=order) + 
assert_array_almost_equal(out, shifted_data) + # check constant fill works + out = ndimage.map_coordinates(these_data, coords_p10, order=order) + assert_array_almost_equal(out, np.zeros((3,4))) + # check shift and zoom + out = ndimage.shift(these_data, 1) + assert_array_almost_equal(out, shifted_data) + out = ndimage.zoom(these_data, 1) + assert_array_almost_equal(these_data, out) + + +@pytest.mark.xfail(True, reason="Broken on many platforms") +def test_uint64_max(): + # Test interpolation respects uint64 max. Reported to fail at least on + # win32 (due to the 32 bit visual C compiler using signed int64 when + # converting between uint64 to double) and Debian on s390x. + # Interpolation is always done in double precision floating point, so + # we use the largest uint64 value for which int(float(big)) still fits + # in a uint64. + # This test was last enabled on macOS only, and there it started failing + # on arm64 as well (see gh-19117). + big = 2**64 - 1025 + arr = np.array([big, big, big], dtype=np.uint64) + # Tests geometric transform (map_coordinates, affine_transform) + inds = np.indices(arr.shape) - 0.1 + x = ndimage.map_coordinates(arr, inds) + assert_(x[1] == int(float(big))) + assert_(x[2] == int(float(big))) + # Tests zoom / shift + x = ndimage.shift(arr, 0.1) + assert_(x[1] == int(float(big))) + assert_(x[2] == int(float(big))) diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_filters.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_filters.py new file mode 100644 index 0000000000000000000000000000000000000000..6401a69f8627fa6b95c71f3711dfd064e74264f2 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_filters.py @@ -0,0 +1,2189 @@ +''' Some tests for filters ''' +import functools +import itertools +import math +import numpy + +from numpy.testing import (assert_equal, assert_allclose, + assert_array_almost_equal, + assert_array_equal, assert_almost_equal, + suppress_warnings, assert_) +import pytest +from pytest import raises as assert_raises + +from scipy import ndimage +from scipy.ndimage._filters import _gaussian_kernel1d + +from . import types, float_types, complex_types + + +def sumsq(a, b): + return math.sqrt(((a - b)**2).sum()) + + +def _complex_correlate(array, kernel, real_dtype, convolve=False, + mode="reflect", cval=0, ): + """Utility to perform a reference complex-valued convolutions. 
+ + When convolve==False, correlation is performed instead + """ + array = numpy.asarray(array) + kernel = numpy.asarray(kernel) + complex_array = array.dtype.kind == 'c' + complex_kernel = kernel.dtype.kind == 'c' + if array.ndim == 1: + func = ndimage.convolve1d if convolve else ndimage.correlate1d + else: + func = ndimage.convolve if convolve else ndimage.correlate + if not convolve: + kernel = kernel.conj() + if complex_array and complex_kernel: + # use: real(cval) for array.real component + # imag(cval) for array.imag component + output = ( + func(array.real, kernel.real, output=real_dtype, + mode=mode, cval=numpy.real(cval)) - + func(array.imag, kernel.imag, output=real_dtype, + mode=mode, cval=numpy.imag(cval)) + + 1j * func(array.imag, kernel.real, output=real_dtype, + mode=mode, cval=numpy.imag(cval)) + + 1j * func(array.real, kernel.imag, output=real_dtype, + mode=mode, cval=numpy.real(cval)) + ) + elif complex_array: + output = ( + func(array.real, kernel, output=real_dtype, mode=mode, + cval=numpy.real(cval)) + + 1j * func(array.imag, kernel, output=real_dtype, mode=mode, + cval=numpy.imag(cval)) + ) + elif complex_kernel: + # real array so cval is real too + output = ( + func(array, kernel.real, output=real_dtype, mode=mode, cval=cval) + + 1j * func(array, kernel.imag, output=real_dtype, mode=mode, + cval=cval) + ) + return output + + +def _cases_axes_tuple_length_mismatch(): + # Generate combinations of filter function, valid kwargs, and + # keyword-value pairs for which the value will become with mismatched + # (invalid) size + filter_func = ndimage.gaussian_filter + kwargs = dict(radius=3, mode='constant', sigma=1.0, order=0) + for key, val in kwargs.items(): + yield filter_func, kwargs, key, val + + filter_funcs = [ndimage.uniform_filter, ndimage.minimum_filter, + ndimage.maximum_filter] + kwargs = dict(size=3, mode='constant', origin=0) + for filter_func in filter_funcs: + for key, val in kwargs.items(): + yield filter_func, kwargs, key, val + + +class TestNdimageFilters: + + def _validate_complex(self, array, kernel, type2, mode='reflect', cval=0): + # utility for validating complex-valued correlations + real_dtype = numpy.asarray([], dtype=type2).real.dtype + expected = _complex_correlate( + array, kernel, real_dtype, convolve=False, mode=mode, cval=cval + ) + + if array.ndim == 1: + correlate = functools.partial(ndimage.correlate1d, axis=-1, + mode=mode, cval=cval) + convolve = functools.partial(ndimage.convolve1d, axis=-1, + mode=mode, cval=cval) + else: + correlate = functools.partial(ndimage.correlate, mode=mode, + cval=cval) + convolve = functools.partial(ndimage.convolve, mode=mode, + cval=cval) + + # test correlate output dtype + output = correlate(array, kernel, output=type2) + assert_array_almost_equal(expected, output) + assert_equal(output.dtype.type, type2) + + # test correlate with pre-allocated output + output = numpy.zeros_like(array, dtype=type2) + correlate(array, kernel, output=output) + assert_array_almost_equal(expected, output) + + # test convolve output dtype + output = convolve(array, kernel, output=type2) + expected = _complex_correlate( + array, kernel, real_dtype, convolve=True, mode=mode, cval=cval, + ) + assert_array_almost_equal(expected, output) + assert_equal(output.dtype.type, type2) + + # convolve with pre-allocated output + convolve(array, kernel, output=output) + assert_array_almost_equal(expected, output) + assert_equal(output.dtype.type, type2) + + # warns if the output is not a complex dtype + with pytest.warns(UserWarning, + 
match="promoting specified output dtype to complex"): + correlate(array, kernel, output=real_dtype) + + with pytest.warns(UserWarning, + match="promoting specified output dtype to complex"): + convolve(array, kernel, output=real_dtype) + + # raises if output array is provided, but is not complex-valued + output_real = numpy.zeros_like(array, dtype=real_dtype) + with assert_raises(RuntimeError): + correlate(array, kernel, output=output_real) + + with assert_raises(RuntimeError): + convolve(array, kernel, output=output_real) + + def test_correlate01(self): + array = numpy.array([1, 2]) + weights = numpy.array([2]) + expected = [2, 4] + + output = ndimage.correlate(array, weights) + assert_array_almost_equal(output, expected) + + output = ndimage.convolve(array, weights) + assert_array_almost_equal(output, expected) + + output = ndimage.correlate1d(array, weights) + assert_array_almost_equal(output, expected) + + output = ndimage.convolve1d(array, weights) + assert_array_almost_equal(output, expected) + + def test_correlate01_overlap(self): + array = numpy.arange(256).reshape(16, 16) + weights = numpy.array([2]) + expected = 2 * array + + ndimage.correlate1d(array, weights, output=array) + assert_array_almost_equal(array, expected) + + def test_correlate02(self): + array = numpy.array([1, 2, 3]) + kernel = numpy.array([1]) + + output = ndimage.correlate(array, kernel) + assert_array_almost_equal(array, output) + + output = ndimage.convolve(array, kernel) + assert_array_almost_equal(array, output) + + output = ndimage.correlate1d(array, kernel) + assert_array_almost_equal(array, output) + + output = ndimage.convolve1d(array, kernel) + assert_array_almost_equal(array, output) + + def test_correlate03(self): + array = numpy.array([1]) + weights = numpy.array([1, 1]) + expected = [2] + + output = ndimage.correlate(array, weights) + assert_array_almost_equal(output, expected) + + output = ndimage.convolve(array, weights) + assert_array_almost_equal(output, expected) + + output = ndimage.correlate1d(array, weights) + assert_array_almost_equal(output, expected) + + output = ndimage.convolve1d(array, weights) + assert_array_almost_equal(output, expected) + + def test_correlate04(self): + array = numpy.array([1, 2]) + tcor = [2, 3] + tcov = [3, 4] + weights = numpy.array([1, 1]) + output = ndimage.correlate(array, weights) + assert_array_almost_equal(output, tcor) + output = ndimage.convolve(array, weights) + assert_array_almost_equal(output, tcov) + output = ndimage.correlate1d(array, weights) + assert_array_almost_equal(output, tcor) + output = ndimage.convolve1d(array, weights) + assert_array_almost_equal(output, tcov) + + def test_correlate05(self): + array = numpy.array([1, 2, 3]) + tcor = [2, 3, 5] + tcov = [3, 5, 6] + kernel = numpy.array([1, 1]) + output = ndimage.correlate(array, kernel) + assert_array_almost_equal(tcor, output) + output = ndimage.convolve(array, kernel) + assert_array_almost_equal(tcov, output) + output = ndimage.correlate1d(array, kernel) + assert_array_almost_equal(tcor, output) + output = ndimage.convolve1d(array, kernel) + assert_array_almost_equal(tcov, output) + + def test_correlate06(self): + array = numpy.array([1, 2, 3]) + tcor = [9, 14, 17] + tcov = [7, 10, 15] + weights = numpy.array([1, 2, 3]) + output = ndimage.correlate(array, weights) + assert_array_almost_equal(output, tcor) + output = ndimage.convolve(array, weights) + assert_array_almost_equal(output, tcov) + output = ndimage.correlate1d(array, weights) + assert_array_almost_equal(output, tcor) + output = 
ndimage.convolve1d(array, weights) + assert_array_almost_equal(output, tcov) + + def test_correlate07(self): + array = numpy.array([1, 2, 3]) + expected = [5, 8, 11] + weights = numpy.array([1, 2, 1]) + output = ndimage.correlate(array, weights) + assert_array_almost_equal(output, expected) + output = ndimage.convolve(array, weights) + assert_array_almost_equal(output, expected) + output = ndimage.correlate1d(array, weights) + assert_array_almost_equal(output, expected) + output = ndimage.convolve1d(array, weights) + assert_array_almost_equal(output, expected) + + def test_correlate08(self): + array = numpy.array([1, 2, 3]) + tcor = [1, 2, 5] + tcov = [3, 6, 7] + weights = numpy.array([1, 2, -1]) + output = ndimage.correlate(array, weights) + assert_array_almost_equal(output, tcor) + output = ndimage.convolve(array, weights) + assert_array_almost_equal(output, tcov) + output = ndimage.correlate1d(array, weights) + assert_array_almost_equal(output, tcor) + output = ndimage.convolve1d(array, weights) + assert_array_almost_equal(output, tcov) + + def test_correlate09(self): + array = [] + kernel = numpy.array([1, 1]) + output = ndimage.correlate(array, kernel) + assert_array_almost_equal(array, output) + output = ndimage.convolve(array, kernel) + assert_array_almost_equal(array, output) + output = ndimage.correlate1d(array, kernel) + assert_array_almost_equal(array, output) + output = ndimage.convolve1d(array, kernel) + assert_array_almost_equal(array, output) + + def test_correlate10(self): + array = [[]] + kernel = numpy.array([[1, 1]]) + output = ndimage.correlate(array, kernel) + assert_array_almost_equal(array, output) + output = ndimage.convolve(array, kernel) + assert_array_almost_equal(array, output) + + def test_correlate11(self): + array = numpy.array([[1, 2, 3], + [4, 5, 6]]) + kernel = numpy.array([[1, 1], + [1, 1]]) + output = ndimage.correlate(array, kernel) + assert_array_almost_equal([[4, 6, 10], [10, 12, 16]], output) + output = ndimage.convolve(array, kernel) + assert_array_almost_equal([[12, 16, 18], [18, 22, 24]], output) + + def test_correlate12(self): + array = numpy.array([[1, 2, 3], + [4, 5, 6]]) + kernel = numpy.array([[1, 0], + [0, 1]]) + output = ndimage.correlate(array, kernel) + assert_array_almost_equal([[2, 3, 5], [5, 6, 8]], output) + output = ndimage.convolve(array, kernel) + assert_array_almost_equal([[6, 8, 9], [9, 11, 12]], output) + + @pytest.mark.parametrize('dtype_array', types) + @pytest.mark.parametrize('dtype_kernel', types) + def test_correlate13(self, dtype_array, dtype_kernel): + kernel = numpy.array([[1, 0], + [0, 1]]) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype_array) + output = ndimage.correlate(array, kernel, output=dtype_kernel) + assert_array_almost_equal([[2, 3, 5], [5, 6, 8]], output) + assert_equal(output.dtype.type, dtype_kernel) + + output = ndimage.convolve(array, kernel, + output=dtype_kernel) + assert_array_almost_equal([[6, 8, 9], [9, 11, 12]], output) + assert_equal(output.dtype.type, dtype_kernel) + + @pytest.mark.parametrize('dtype_array', types) + @pytest.mark.parametrize('dtype_output', types) + def test_correlate14(self, dtype_array, dtype_output): + kernel = numpy.array([[1, 0], + [0, 1]]) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype_array) + output = numpy.zeros(array.shape, dtype_output) + ndimage.correlate(array, kernel, output=output) + assert_array_almost_equal([[2, 3, 5], [5, 6, 8]], output) + assert_equal(output.dtype.type, dtype_output) + + ndimage.convolve(array, kernel, output=output) + 
assert_array_almost_equal([[6, 8, 9], [9, 11, 12]], output) + assert_equal(output.dtype.type, dtype_output) + + @pytest.mark.parametrize('dtype_array', types) + def test_correlate15(self, dtype_array): + kernel = numpy.array([[1, 0], + [0, 1]]) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype_array) + output = ndimage.correlate(array, kernel, output=numpy.float32) + assert_array_almost_equal([[2, 3, 5], [5, 6, 8]], output) + assert_equal(output.dtype.type, numpy.float32) + + output = ndimage.convolve(array, kernel, output=numpy.float32) + assert_array_almost_equal([[6, 8, 9], [9, 11, 12]], output) + assert_equal(output.dtype.type, numpy.float32) + + @pytest.mark.parametrize('dtype_array', types) + def test_correlate16(self, dtype_array): + kernel = numpy.array([[0.5, 0], + [0, 0.5]]) + array = numpy.array([[1, 2, 3], [4, 5, 6]], dtype_array) + output = ndimage.correlate(array, kernel, output=numpy.float32) + assert_array_almost_equal([[1, 1.5, 2.5], [2.5, 3, 4]], output) + assert_equal(output.dtype.type, numpy.float32) + + output = ndimage.convolve(array, kernel, output=numpy.float32) + assert_array_almost_equal([[3, 4, 4.5], [4.5, 5.5, 6]], output) + assert_equal(output.dtype.type, numpy.float32) + + def test_correlate17(self): + array = numpy.array([1, 2, 3]) + tcor = [3, 5, 6] + tcov = [2, 3, 5] + kernel = numpy.array([1, 1]) + output = ndimage.correlate(array, kernel, origin=-1) + assert_array_almost_equal(tcor, output) + output = ndimage.convolve(array, kernel, origin=-1) + assert_array_almost_equal(tcov, output) + output = ndimage.correlate1d(array, kernel, origin=-1) + assert_array_almost_equal(tcor, output) + output = ndimage.convolve1d(array, kernel, origin=-1) + assert_array_almost_equal(tcov, output) + + @pytest.mark.parametrize('dtype_array', types) + def test_correlate18(self, dtype_array): + kernel = numpy.array([[1, 0], + [0, 1]]) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype_array) + output = ndimage.correlate(array, kernel, + output=numpy.float32, + mode='nearest', origin=-1) + assert_array_almost_equal([[6, 8, 9], [9, 11, 12]], output) + assert_equal(output.dtype.type, numpy.float32) + + output = ndimage.convolve(array, kernel, + output=numpy.float32, + mode='nearest', origin=-1) + assert_array_almost_equal([[2, 3, 5], [5, 6, 8]], output) + assert_equal(output.dtype.type, numpy.float32) + + def test_correlate_mode_sequence(self): + kernel = numpy.ones((2, 2)) + array = numpy.ones((3, 3), float) + with assert_raises(RuntimeError): + ndimage.correlate(array, kernel, mode=['nearest', 'reflect']) + with assert_raises(RuntimeError): + ndimage.convolve(array, kernel, mode=['nearest', 'reflect']) + + @pytest.mark.parametrize('dtype_array', types) + def test_correlate19(self, dtype_array): + kernel = numpy.array([[1, 0], + [0, 1]]) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype_array) + output = ndimage.correlate(array, kernel, + output=numpy.float32, + mode='nearest', origin=[-1, 0]) + assert_array_almost_equal([[5, 6, 8], [8, 9, 11]], output) + assert_equal(output.dtype.type, numpy.float32) + + output = ndimage.convolve(array, kernel, + output=numpy.float32, + mode='nearest', origin=[-1, 0]) + assert_array_almost_equal([[3, 5, 6], [6, 8, 9]], output) + assert_equal(output.dtype.type, numpy.float32) + + @pytest.mark.parametrize('dtype_array', types) + @pytest.mark.parametrize('dtype_output', types) + def test_correlate20(self, dtype_array, dtype_output): + weights = numpy.array([1, 2, 1]) + expected = [[5, 10, 15], [7, 14, 21]] + array = numpy.array([[1, 2, 
3], + [2, 4, 6]], dtype_array) + output = numpy.zeros((2, 3), dtype_output) + ndimage.correlate1d(array, weights, axis=0, output=output) + assert_array_almost_equal(output, expected) + ndimage.convolve1d(array, weights, axis=0, output=output) + assert_array_almost_equal(output, expected) + + def test_correlate21(self): + array = numpy.array([[1, 2, 3], + [2, 4, 6]]) + expected = [[5, 10, 15], [7, 14, 21]] + weights = numpy.array([1, 2, 1]) + output = ndimage.correlate1d(array, weights, axis=0) + assert_array_almost_equal(output, expected) + output = ndimage.convolve1d(array, weights, axis=0) + assert_array_almost_equal(output, expected) + + @pytest.mark.parametrize('dtype_array', types) + @pytest.mark.parametrize('dtype_output', types) + def test_correlate22(self, dtype_array, dtype_output): + weights = numpy.array([1, 2, 1]) + expected = [[6, 12, 18], [6, 12, 18]] + array = numpy.array([[1, 2, 3], + [2, 4, 6]], dtype_array) + output = numpy.zeros((2, 3), dtype_output) + ndimage.correlate1d(array, weights, axis=0, + mode='wrap', output=output) + assert_array_almost_equal(output, expected) + ndimage.convolve1d(array, weights, axis=0, + mode='wrap', output=output) + assert_array_almost_equal(output, expected) + + @pytest.mark.parametrize('dtype_array', types) + @pytest.mark.parametrize('dtype_output', types) + def test_correlate23(self, dtype_array, dtype_output): + weights = numpy.array([1, 2, 1]) + expected = [[5, 10, 15], [7, 14, 21]] + array = numpy.array([[1, 2, 3], + [2, 4, 6]], dtype_array) + output = numpy.zeros((2, 3), dtype_output) + ndimage.correlate1d(array, weights, axis=0, + mode='nearest', output=output) + assert_array_almost_equal(output, expected) + ndimage.convolve1d(array, weights, axis=0, + mode='nearest', output=output) + assert_array_almost_equal(output, expected) + + @pytest.mark.parametrize('dtype_array', types) + @pytest.mark.parametrize('dtype_output', types) + def test_correlate24(self, dtype_array, dtype_output): + weights = numpy.array([1, 2, 1]) + tcor = [[7, 14, 21], [8, 16, 24]] + tcov = [[4, 8, 12], [5, 10, 15]] + array = numpy.array([[1, 2, 3], + [2, 4, 6]], dtype_array) + output = numpy.zeros((2, 3), dtype_output) + ndimage.correlate1d(array, weights, axis=0, + mode='nearest', output=output, origin=-1) + assert_array_almost_equal(output, tcor) + ndimage.convolve1d(array, weights, axis=0, + mode='nearest', output=output, origin=-1) + assert_array_almost_equal(output, tcov) + + @pytest.mark.parametrize('dtype_array', types) + @pytest.mark.parametrize('dtype_output', types) + def test_correlate25(self, dtype_array, dtype_output): + weights = numpy.array([1, 2, 1]) + tcor = [[4, 8, 12], [5, 10, 15]] + tcov = [[7, 14, 21], [8, 16, 24]] + array = numpy.array([[1, 2, 3], + [2, 4, 6]], dtype_array) + output = numpy.zeros((2, 3), dtype_output) + ndimage.correlate1d(array, weights, axis=0, + mode='nearest', output=output, origin=1) + assert_array_almost_equal(output, tcor) + ndimage.convolve1d(array, weights, axis=0, + mode='nearest', output=output, origin=1) + assert_array_almost_equal(output, tcov) + + def test_correlate26(self): + # test fix for gh-11661 (mirror extension of a length 1 signal) + y = ndimage.convolve1d(numpy.ones(1), numpy.ones(5), mode='mirror') + assert_array_equal(y, numpy.array(5.)) + + y = ndimage.correlate1d(numpy.ones(1), numpy.ones(5), mode='mirror') + assert_array_equal(y, numpy.array(5.)) + + @pytest.mark.parametrize('dtype_kernel', complex_types) + @pytest.mark.parametrize('dtype_input', types) + @pytest.mark.parametrize('dtype_output', 
complex_types) + def test_correlate_complex_kernel(self, dtype_input, dtype_kernel, + dtype_output): + kernel = numpy.array([[1, 0], + [0, 1 + 1j]], dtype_kernel) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype_input) + self._validate_complex(array, kernel, dtype_output) + + @pytest.mark.parametrize('dtype_kernel', complex_types) + @pytest.mark.parametrize('dtype_input', types) + @pytest.mark.parametrize('dtype_output', complex_types) + @pytest.mark.parametrize('mode', ['grid-constant', 'constant']) + def test_correlate_complex_kernel_cval(self, dtype_input, dtype_kernel, + dtype_output, mode): + # test use of non-zero cval with complex inputs + # also verifies that mode 'grid-constant' does not segfault + kernel = numpy.array([[1, 0], + [0, 1 + 1j]], dtype_kernel) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype_input) + self._validate_complex(array, kernel, dtype_output, mode=mode, + cval=5.0) + + @pytest.mark.parametrize('dtype_kernel', complex_types) + @pytest.mark.parametrize('dtype_input', types) + def test_correlate_complex_kernel_invalid_cval(self, dtype_input, + dtype_kernel): + # cannot give complex cval with a real image + kernel = numpy.array([[1, 0], + [0, 1 + 1j]], dtype_kernel) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype_input) + for func in [ndimage.convolve, ndimage.correlate, ndimage.convolve1d, + ndimage.correlate1d]: + with pytest.raises(ValueError): + func(array, kernel, mode='constant', cval=5.0 + 1.0j, + output=numpy.complex64) + + @pytest.mark.parametrize('dtype_kernel', complex_types) + @pytest.mark.parametrize('dtype_input', types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate1d_complex_kernel(self, dtype_input, dtype_kernel, + dtype_output): + kernel = numpy.array([1, 1 + 1j], dtype_kernel) + array = numpy.array([1, 2, 3, 4, 5, 6], dtype_input) + self._validate_complex(array, kernel, dtype_output) + + @pytest.mark.parametrize('dtype_kernel', complex_types) + @pytest.mark.parametrize('dtype_input', types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate1d_complex_kernel_cval(self, dtype_input, dtype_kernel, + dtype_output): + kernel = numpy.array([1, 1 + 1j], dtype_kernel) + array = numpy.array([1, 2, 3, 4, 5, 6], dtype_input) + self._validate_complex(array, kernel, dtype_output, mode='constant', + cval=5.0) + + @pytest.mark.parametrize('dtype_kernel', types) + @pytest.mark.parametrize('dtype_input', complex_types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate_complex_input(self, dtype_input, dtype_kernel, + dtype_output): + kernel = numpy.array([[1, 0], + [0, 1]], dtype_kernel) + array = numpy.array([[1, 2j, 3], + [1 + 4j, 5, 6j]], dtype_input) + self._validate_complex(array, kernel, dtype_output) + + @pytest.mark.parametrize('dtype_kernel', types) + @pytest.mark.parametrize('dtype_input', complex_types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate1d_complex_input(self, dtype_input, dtype_kernel, + dtype_output): + kernel = numpy.array([1, 0, 1], dtype_kernel) + array = numpy.array([1, 2j, 3, 1 + 4j, 5, 6j], dtype_input) + self._validate_complex(array, kernel, dtype_output) + + @pytest.mark.parametrize('dtype_kernel', types) + @pytest.mark.parametrize('dtype_input', complex_types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate1d_complex_input_cval(self, dtype_input, dtype_kernel, + dtype_output): + kernel = numpy.array([1, 0, 1], dtype_kernel) + array = numpy.array([1, 2j, 3, 1 + 4j, 
5, 6j], dtype_input) + self._validate_complex(array, kernel, dtype_output, mode='constant', + cval=5 - 3j) + + @pytest.mark.parametrize('dtype', complex_types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate_complex_input_and_kernel(self, dtype, dtype_output): + kernel = numpy.array([[1, 0], + [0, 1 + 1j]], dtype) + array = numpy.array([[1, 2j, 3], + [1 + 4j, 5, 6j]], dtype) + self._validate_complex(array, kernel, dtype_output) + + @pytest.mark.parametrize('dtype', complex_types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate_complex_input_and_kernel_cval(self, dtype, + dtype_output): + kernel = numpy.array([[1, 0], + [0, 1 + 1j]], dtype) + array = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype) + self._validate_complex(array, kernel, dtype_output, mode='constant', + cval=5.0 + 2.0j) + + @pytest.mark.parametrize('dtype', complex_types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate1d_complex_input_and_kernel(self, dtype, dtype_output): + kernel = numpy.array([1, 1 + 1j], dtype) + array = numpy.array([1, 2j, 3, 1 + 4j, 5, 6j], dtype) + self._validate_complex(array, kernel, dtype_output) + + @pytest.mark.parametrize('dtype', complex_types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_correlate1d_complex_input_and_kernel_cval(self, dtype, + dtype_output): + kernel = numpy.array([1, 1 + 1j], dtype) + array = numpy.array([1, 2j, 3, 1 + 4j, 5, 6j], dtype) + self._validate_complex(array, kernel, dtype_output, mode='constant', + cval=5.0 + 2.0j) + + def test_gauss01(self): + input = numpy.array([[1, 2, 3], + [2, 4, 6]], numpy.float32) + output = ndimage.gaussian_filter(input, 0) + assert_array_almost_equal(output, input) + + def test_gauss02(self): + input = numpy.array([[1, 2, 3], + [2, 4, 6]], numpy.float32) + output = ndimage.gaussian_filter(input, 1.0) + assert_equal(input.dtype, output.dtype) + assert_equal(input.shape, output.shape) + + def test_gauss03(self): + # single precision data + input = numpy.arange(100 * 100).astype(numpy.float32) + input.shape = (100, 100) + output = ndimage.gaussian_filter(input, [1.0, 1.0]) + + assert_equal(input.dtype, output.dtype) + assert_equal(input.shape, output.shape) + + # input.sum() is 49995000.0. With single precision floats, we can't + # expect more than 8 digits of accuracy, so use decimal=0 in this test. 
+ assert_almost_equal(output.sum(dtype='d'), input.sum(dtype='d'), + decimal=0) + assert_(sumsq(input, output) > 1.0) + + def test_gauss04(self): + input = numpy.arange(100 * 100).astype(numpy.float32) + input.shape = (100, 100) + otype = numpy.float64 + output = ndimage.gaussian_filter(input, [1.0, 1.0], output=otype) + assert_equal(output.dtype.type, numpy.float64) + assert_equal(input.shape, output.shape) + assert_(sumsq(input, output) > 1.0) + + def test_gauss05(self): + input = numpy.arange(100 * 100).astype(numpy.float32) + input.shape = (100, 100) + otype = numpy.float64 + output = ndimage.gaussian_filter(input, [1.0, 1.0], + order=1, output=otype) + assert_equal(output.dtype.type, numpy.float64) + assert_equal(input.shape, output.shape) + assert_(sumsq(input, output) > 1.0) + + def test_gauss06(self): + input = numpy.arange(100 * 100).astype(numpy.float32) + input.shape = (100, 100) + otype = numpy.float64 + output1 = ndimage.gaussian_filter(input, [1.0, 1.0], output=otype) + output2 = ndimage.gaussian_filter(input, 1.0, output=otype) + assert_array_almost_equal(output1, output2) + + def test_gauss_memory_overlap(self): + input = numpy.arange(100 * 100).astype(numpy.float32) + input.shape = (100, 100) + output1 = ndimage.gaussian_filter(input, 1.0) + ndimage.gaussian_filter(input, 1.0, output=input) + assert_array_almost_equal(output1, input) + + @pytest.mark.parametrize(('filter_func', 'extra_args', 'size0', 'size'), + [(ndimage.gaussian_filter, (), 0, 1.0), + (ndimage.uniform_filter, (), 1, 3), + (ndimage.minimum_filter, (), 1, 3), + (ndimage.maximum_filter, (), 1, 3), + (ndimage.median_filter, (), 1, 3), + (ndimage.rank_filter, (1,), 1, 3), + (ndimage.percentile_filter, (40,), 1, 3)]) + @pytest.mark.parametrize( + 'axes', + tuple(itertools.combinations(range(-3, 3), 1)) + + tuple(itertools.combinations(range(-3, 3), 2)) + + ((0, 1, 2),)) + def test_filter_axes(self, filter_func, extra_args, size0, size, axes): + # Note: `size` is called `sigma` in `gaussian_filter` + array = numpy.arange(6 * 8 * 12, dtype=numpy.float64).reshape(6, 8, 12) + axes = numpy.array(axes) + + if len(set(axes % array.ndim)) != len(axes): + # parametrized cases with duplicate axes raise an error + with pytest.raises(ValueError, match="axes must be unique"): + filter_func(array, *extra_args, size, axes=axes) + return + output = filter_func(array, *extra_args, size, axes=axes) + + # result should be equivalent to sigma=0.0/size=1 on unfiltered axes + all_sizes = (size if ax in (axes % array.ndim) else size0 + for ax in range(array.ndim)) + expected = filter_func(array, *extra_args, all_sizes) + assert_allclose(output, expected) + + kwargs_gauss = dict(radius=[4, 2, 3], order=[0, 1, 2], + mode=['reflect', 'nearest', 'constant']) + kwargs_other = dict(origin=(-1, 0, 1), + mode=['reflect', 'nearest', 'constant']) + kwargs_rank = dict(origin=(-1, 0, 1)) + + @pytest.mark.parametrize("filter_func, size0, size, kwargs", + [(ndimage.gaussian_filter, 0, 1.0, kwargs_gauss), + (ndimage.uniform_filter, 1, 3, kwargs_other), + (ndimage.maximum_filter, 1, 3, kwargs_other), + (ndimage.minimum_filter, 1, 3, kwargs_other), + (ndimage.median_filter, 1, 3, kwargs_rank), + (ndimage.rank_filter, 1, 3, kwargs_rank), + (ndimage.percentile_filter, 1, 3, kwargs_rank)]) + @pytest.mark.parametrize('axes', itertools.combinations(range(-3, 3), 2)) + def test_filter_axes_kwargs(self, filter_func, size0, size, kwargs, axes): + array = numpy.arange(6 * 8 * 12, dtype=numpy.float64).reshape(6, 8, 12) + + kwargs = {key: numpy.array(val) for 
key, val in kwargs.items()} + axes = numpy.array(axes) + n_axes = axes.size + + if filter_func == ndimage.rank_filter: + args = (2,) # (rank,) + elif filter_func == ndimage.percentile_filter: + args = (30,) # (percentile,) + else: + args = () + + # form kwargs that specify only the axes in `axes` + reduced_kwargs = {key: val[axes] for key, val in kwargs.items()} + if len(set(axes % array.ndim)) != len(axes): + # parametrized cases with duplicate axes raise an error + with pytest.raises(ValueError, match="axes must be unique"): + filter_func(array, *args, [size]*n_axes, axes=axes, + **reduced_kwargs) + return + + output = filter_func(array, *args, [size]*n_axes, axes=axes, + **reduced_kwargs) + + # result should be equivalent to sigma=0.0/size=1 on unfiltered axes + size_3d = numpy.full(array.ndim, fill_value=size0) + size_3d[axes] = size + if 'origin' in kwargs: + # origin should be zero on the axis that has size 0 + origin = numpy.array([0, 0, 0]) + origin[axes] = reduced_kwargs['origin'] + kwargs['origin'] = origin + expected = filter_func(array, *args, size_3d, **kwargs) + assert_allclose(output, expected) + + @pytest.mark.parametrize( + 'filter_func, args', + [(ndimage.gaussian_filter, (1.0,)), # args = (sigma,) + (ndimage.uniform_filter, (3,)), # args = (size,) + (ndimage.minimum_filter, (3,)), # args = (size,) + (ndimage.maximum_filter, (3,)), # args = (size,) + (ndimage.median_filter, (3,)), # args = (size,) + (ndimage.rank_filter, (2, 3)), # args = (rank, size) + (ndimage.percentile_filter, (30, 3))]) # args = (percentile, size) + @pytest.mark.parametrize( + 'axes', [(1.5,), (0, 1, 2, 3), (3,), (-4,)] + ) + def test_filter_invalid_axes(self, filter_func, args, axes): + array = numpy.arange(6 * 8 * 12, dtype=numpy.float64).reshape(6, 8, 12) + if any(isinstance(ax, float) for ax in axes): + error_class = TypeError + match = "cannot be interpreted as an integer" + else: + error_class = ValueError + match = "out of range" + with pytest.raises(error_class, match=match): + filter_func(array, *args, axes=axes) + + @pytest.mark.parametrize( + 'filter_func, kwargs', + [(ndimage.minimum_filter, {}), + (ndimage.maximum_filter, {}), + (ndimage.median_filter, {}), + (ndimage.rank_filter, dict(rank=3)), + (ndimage.percentile_filter, dict(percentile=30))]) + @pytest.mark.parametrize( + 'axes', [(0, ), (1, 2), (0, 1, 2)] + ) + @pytest.mark.parametrize('separable_footprint', [False, True]) + def test_filter_invalid_footprint_ndim(self, filter_func, kwargs, axes, + separable_footprint): + array = numpy.arange(6 * 8 * 12, dtype=numpy.float64).reshape(6, 8, 12) + # create a footprint with one too many dimensions + footprint = numpy.ones((3,) * (len(axes) + 1)) + if not separable_footprint: + footprint[(0,) * footprint.ndim] = 0 + if (filter_func in [ndimage.minimum_filter, ndimage.maximum_filter] + and separable_footprint): + match = "sequence argument must have length equal to input rank" + else: + match = "footprint array has incorrect shape" + with pytest.raises(RuntimeError, match=match): + filter_func(array, **kwargs, footprint=footprint, axes=axes) + + @pytest.mark.parametrize('n_mismatch', [1, 3]) + @pytest.mark.parametrize('filter_func, kwargs, key, val', + _cases_axes_tuple_length_mismatch()) + def test_filter_tuple_length_mismatch(self, n_mismatch, filter_func, + kwargs, key, val): + # Test for the intended RuntimeError when a kwargs has an invalid size + array = numpy.arange(6 * 8 * 12, dtype=numpy.float64).reshape(6, 8, 12) + kwargs = dict(**kwargs, axes=(0, 1)) + kwargs[key] = (val,) * 
n_mismatch + err_msg = "sequence argument must have length equal to input rank" + with pytest.raises(RuntimeError, match=err_msg): + filter_func(array, **kwargs) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_prewitt01(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + t = ndimage.correlate1d(array, [-1.0, 0.0, 1.0], 0) + t = ndimage.correlate1d(t, [1.0, 1.0, 1.0], 1) + output = ndimage.prewitt(array, 0) + assert_array_almost_equal(t, output) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_prewitt02(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + t = ndimage.correlate1d(array, [-1.0, 0.0, 1.0], 0) + t = ndimage.correlate1d(t, [1.0, 1.0, 1.0], 1) + output = numpy.zeros(array.shape, dtype) + ndimage.prewitt(array, 0, output) + assert_array_almost_equal(t, output) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_prewitt03(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + t = ndimage.correlate1d(array, [-1.0, 0.0, 1.0], 1) + t = ndimage.correlate1d(t, [1.0, 1.0, 1.0], 0) + output = ndimage.prewitt(array, 1) + assert_array_almost_equal(t, output) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_prewitt04(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + t = ndimage.prewitt(array, -1) + output = ndimage.prewitt(array, 1) + assert_array_almost_equal(t, output) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_sobel01(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + t = ndimage.correlate1d(array, [-1.0, 0.0, 1.0], 0) + t = ndimage.correlate1d(t, [1.0, 2.0, 1.0], 1) + output = ndimage.sobel(array, 0) + assert_array_almost_equal(t, output) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_sobel02(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + t = ndimage.correlate1d(array, [-1.0, 0.0, 1.0], 0) + t = ndimage.correlate1d(t, [1.0, 2.0, 1.0], 1) + output = numpy.zeros(array.shape, dtype) + ndimage.sobel(array, 0, output) + assert_array_almost_equal(t, output) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_sobel03(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + t = ndimage.correlate1d(array, [-1.0, 0.0, 1.0], 1) + t = ndimage.correlate1d(t, [1.0, 2.0, 1.0], 0) + output = numpy.zeros(array.shape, dtype) + output = ndimage.sobel(array, 1) + assert_array_almost_equal(t, output) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_sobel04(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + t = ndimage.sobel(array, -1) + output = ndimage.sobel(array, 1) + assert_array_almost_equal(t, output) + + @pytest.mark.parametrize('dtype', + [numpy.int32, numpy.float32, numpy.float64, + numpy.complex64, numpy.complex128]) + def test_laplace01(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) * 100 + tmp1 = ndimage.correlate1d(array, [1, -2, 1], 0) + tmp2 = ndimage.correlate1d(array, [1, -2, 1], 1) + output = ndimage.laplace(array) + assert_array_almost_equal(tmp1 + tmp2, output) + + @pytest.mark.parametrize('dtype', + [numpy.int32, numpy.float32, numpy.float64, + 
numpy.complex64, numpy.complex128]) + def test_laplace02(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) * 100 + tmp1 = ndimage.correlate1d(array, [1, -2, 1], 0) + tmp2 = ndimage.correlate1d(array, [1, -2, 1], 1) + output = numpy.zeros(array.shape, dtype) + ndimage.laplace(array, output=output) + assert_array_almost_equal(tmp1 + tmp2, output) + + @pytest.mark.parametrize('dtype', + [numpy.int32, numpy.float32, numpy.float64, + numpy.complex64, numpy.complex128]) + def test_gaussian_laplace01(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) * 100 + tmp1 = ndimage.gaussian_filter(array, 1.0, [2, 0]) + tmp2 = ndimage.gaussian_filter(array, 1.0, [0, 2]) + output = ndimage.gaussian_laplace(array, 1.0) + assert_array_almost_equal(tmp1 + tmp2, output) + + @pytest.mark.parametrize('dtype', + [numpy.int32, numpy.float32, numpy.float64, + numpy.complex64, numpy.complex128]) + def test_gaussian_laplace02(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) * 100 + tmp1 = ndimage.gaussian_filter(array, 1.0, [2, 0]) + tmp2 = ndimage.gaussian_filter(array, 1.0, [0, 2]) + output = numpy.zeros(array.shape, dtype) + ndimage.gaussian_laplace(array, 1.0, output) + assert_array_almost_equal(tmp1 + tmp2, output) + + @pytest.mark.parametrize('dtype', types + complex_types) + def test_generic_laplace01(self, dtype): + def derivative2(input, axis, output, mode, cval, a, b): + sigma = [a, b / 2.0] + input = numpy.asarray(input) + order = [0] * input.ndim + order[axis] = 2 + return ndimage.gaussian_filter(input, sigma, order, + output, mode, cval) + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + output = numpy.zeros(array.shape, dtype) + tmp = ndimage.generic_laplace(array, derivative2, + extra_arguments=(1.0,), + extra_keywords={'b': 2.0}) + ndimage.gaussian_laplace(array, 1.0, output) + assert_array_almost_equal(tmp, output) + + @pytest.mark.parametrize('dtype', + [numpy.int32, numpy.float32, numpy.float64, + numpy.complex64, numpy.complex128]) + def test_gaussian_gradient_magnitude01(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) * 100 + tmp1 = ndimage.gaussian_filter(array, 1.0, [1, 0]) + tmp2 = ndimage.gaussian_filter(array, 1.0, [0, 1]) + output = ndimage.gaussian_gradient_magnitude(array, 1.0) + expected = tmp1 * tmp1 + tmp2 * tmp2 + expected = numpy.sqrt(expected).astype(dtype) + assert_array_almost_equal(expected, output) + + @pytest.mark.parametrize('dtype', + [numpy.int32, numpy.float32, numpy.float64, + numpy.complex64, numpy.complex128]) + def test_gaussian_gradient_magnitude02(self, dtype): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) * 100 + tmp1 = ndimage.gaussian_filter(array, 1.0, [1, 0]) + tmp2 = ndimage.gaussian_filter(array, 1.0, [0, 1]) + output = numpy.zeros(array.shape, dtype) + ndimage.gaussian_gradient_magnitude(array, 1.0, output) + expected = tmp1 * tmp1 + tmp2 * tmp2 + expected = numpy.sqrt(expected).astype(dtype) + assert_array_almost_equal(expected, output) + + def test_generic_gradient_magnitude01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], numpy.float64) + + def derivative(input, axis, output, mode, cval, a, b): + sigma = [a, b / 2.0] + input = numpy.asarray(input) + order = [0] * input.ndim + order[axis] = 1 + return ndimage.gaussian_filter(input, sigma, order, + 
output, mode, cval) + tmp1 = ndimage.gaussian_gradient_magnitude(array, 1.0) + tmp2 = ndimage.generic_gradient_magnitude( + array, derivative, extra_arguments=(1.0,), + extra_keywords={'b': 2.0}) + assert_array_almost_equal(tmp1, tmp2) + + def test_uniform01(self): + array = numpy.array([2, 4, 6]) + size = 2 + output = ndimage.uniform_filter1d(array, size, origin=-1) + assert_array_almost_equal([3, 5, 6], output) + + def test_uniform01_complex(self): + array = numpy.array([2 + 1j, 4 + 2j, 6 + 3j], dtype=numpy.complex128) + size = 2 + output = ndimage.uniform_filter1d(array, size, origin=-1) + assert_array_almost_equal([3, 5, 6], output.real) + assert_array_almost_equal([1.5, 2.5, 3], output.imag) + + def test_uniform02(self): + array = numpy.array([1, 2, 3]) + filter_shape = [0] + output = ndimage.uniform_filter(array, filter_shape) + assert_array_almost_equal(array, output) + + def test_uniform03(self): + array = numpy.array([1, 2, 3]) + filter_shape = [1] + output = ndimage.uniform_filter(array, filter_shape) + assert_array_almost_equal(array, output) + + def test_uniform04(self): + array = numpy.array([2, 4, 6]) + filter_shape = [2] + output = ndimage.uniform_filter(array, filter_shape) + assert_array_almost_equal([2, 3, 5], output) + + def test_uniform05(self): + array = [] + filter_shape = [1] + output = ndimage.uniform_filter(array, filter_shape) + assert_array_almost_equal([], output) + + @pytest.mark.parametrize('dtype_array', types) + @pytest.mark.parametrize('dtype_output', types) + def test_uniform06(self, dtype_array, dtype_output): + filter_shape = [2, 2] + array = numpy.array([[4, 8, 12], + [16, 20, 24]], dtype_array) + output = ndimage.uniform_filter( + array, filter_shape, output=dtype_output) + assert_array_almost_equal([[4, 6, 10], [10, 12, 16]], output) + assert_equal(output.dtype.type, dtype_output) + + @pytest.mark.parametrize('dtype_array', complex_types) + @pytest.mark.parametrize('dtype_output', complex_types) + def test_uniform06_complex(self, dtype_array, dtype_output): + filter_shape = [2, 2] + array = numpy.array([[4, 8 + 5j, 12], + [16, 20, 24]], dtype_array) + output = ndimage.uniform_filter( + array, filter_shape, output=dtype_output) + assert_array_almost_equal([[4, 6, 10], [10, 12, 16]], output.real) + assert_equal(output.dtype.type, dtype_output) + + def test_minimum_filter01(self): + array = numpy.array([1, 2, 3, 4, 5]) + filter_shape = numpy.array([2]) + output = ndimage.minimum_filter(array, filter_shape) + assert_array_almost_equal([1, 1, 2, 3, 4], output) + + def test_minimum_filter02(self): + array = numpy.array([1, 2, 3, 4, 5]) + filter_shape = numpy.array([3]) + output = ndimage.minimum_filter(array, filter_shape) + assert_array_almost_equal([1, 1, 2, 3, 4], output) + + def test_minimum_filter03(self): + array = numpy.array([3, 2, 5, 1, 4]) + filter_shape = numpy.array([2]) + output = ndimage.minimum_filter(array, filter_shape) + assert_array_almost_equal([3, 2, 2, 1, 1], output) + + def test_minimum_filter04(self): + array = numpy.array([3, 2, 5, 1, 4]) + filter_shape = numpy.array([3]) + output = ndimage.minimum_filter(array, filter_shape) + assert_array_almost_equal([2, 2, 1, 1, 1], output) + + def test_minimum_filter05(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + filter_shape = numpy.array([2, 3]) + output = ndimage.minimum_filter(array, filter_shape) + assert_array_almost_equal([[2, 2, 1, 1, 1], + [2, 2, 1, 1, 1], + [5, 3, 3, 1, 1]], output) + + def test_minimum_filter05_overlap(self): + array = 
numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + filter_shape = numpy.array([2, 3]) + ndimage.minimum_filter(array, filter_shape, output=array) + assert_array_almost_equal([[2, 2, 1, 1, 1], + [2, 2, 1, 1, 1], + [5, 3, 3, 1, 1]], array) + + def test_minimum_filter06(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 1, 1], [1, 1, 1]] + output = ndimage.minimum_filter(array, footprint=footprint) + assert_array_almost_equal([[2, 2, 1, 1, 1], + [2, 2, 1, 1, 1], + [5, 3, 3, 1, 1]], output) + # separable footprint should allow mode sequence + output2 = ndimage.minimum_filter(array, footprint=footprint, + mode=['reflect', 'reflect']) + assert_array_almost_equal(output2, output) + + def test_minimum_filter07(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.minimum_filter(array, footprint=footprint) + assert_array_almost_equal([[2, 2, 1, 1, 1], + [2, 3, 1, 3, 1], + [5, 5, 3, 3, 1]], output) + with assert_raises(RuntimeError): + ndimage.minimum_filter(array, footprint=footprint, + mode=['reflect', 'constant']) + + def test_minimum_filter08(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.minimum_filter(array, footprint=footprint, origin=-1) + assert_array_almost_equal([[3, 1, 3, 1, 1], + [5, 3, 3, 1, 1], + [3, 3, 1, 1, 1]], output) + + def test_minimum_filter09(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.minimum_filter(array, footprint=footprint, + origin=[-1, 0]) + assert_array_almost_equal([[2, 3, 1, 3, 1], + [5, 5, 3, 3, 1], + [5, 3, 3, 1, 1]], output) + + def test_maximum_filter01(self): + array = numpy.array([1, 2, 3, 4, 5]) + filter_shape = numpy.array([2]) + output = ndimage.maximum_filter(array, filter_shape) + assert_array_almost_equal([1, 2, 3, 4, 5], output) + + def test_maximum_filter02(self): + array = numpy.array([1, 2, 3, 4, 5]) + filter_shape = numpy.array([3]) + output = ndimage.maximum_filter(array, filter_shape) + assert_array_almost_equal([2, 3, 4, 5, 5], output) + + def test_maximum_filter03(self): + array = numpy.array([3, 2, 5, 1, 4]) + filter_shape = numpy.array([2]) + output = ndimage.maximum_filter(array, filter_shape) + assert_array_almost_equal([3, 3, 5, 5, 4], output) + + def test_maximum_filter04(self): + array = numpy.array([3, 2, 5, 1, 4]) + filter_shape = numpy.array([3]) + output = ndimage.maximum_filter(array, filter_shape) + assert_array_almost_equal([3, 5, 5, 5, 4], output) + + def test_maximum_filter05(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + filter_shape = numpy.array([2, 3]) + output = ndimage.maximum_filter(array, filter_shape) + assert_array_almost_equal([[3, 5, 5, 5, 4], + [7, 9, 9, 9, 5], + [8, 9, 9, 9, 7]], output) + + def test_maximum_filter06(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 1, 1], [1, 1, 1]] + output = ndimage.maximum_filter(array, footprint=footprint) + assert_array_almost_equal([[3, 5, 5, 5, 4], + [7, 9, 9, 9, 5], + [8, 9, 9, 9, 7]], output) + # separable footprint should allow mode sequence + output2 = ndimage.maximum_filter(array, footprint=footprint, + mode=['reflect', 'reflect']) + assert_array_almost_equal(output2, output) + + def test_maximum_filter07(self): + array = 
numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.maximum_filter(array, footprint=footprint) + assert_array_almost_equal([[3, 5, 5, 5, 4], + [7, 7, 9, 9, 5], + [7, 9, 8, 9, 7]], output) + # non-separable footprint should not allow mode sequence + with assert_raises(RuntimeError): + ndimage.maximum_filter(array, footprint=footprint, + mode=['reflect', 'reflect']) + + def test_maximum_filter08(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.maximum_filter(array, footprint=footprint, origin=-1) + assert_array_almost_equal([[7, 9, 9, 5, 5], + [9, 8, 9, 7, 5], + [8, 8, 7, 7, 7]], output) + + def test_maximum_filter09(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.maximum_filter(array, footprint=footprint, + origin=[-1, 0]) + assert_array_almost_equal([[7, 7, 9, 9, 5], + [7, 9, 8, 9, 7], + [8, 8, 8, 7, 7]], output) + + @pytest.mark.parametrize( + 'axes', tuple(itertools.combinations(range(-3, 3), 2)) + ) + @pytest.mark.parametrize( + 'filter_func, kwargs', + [(ndimage.minimum_filter, {}), + (ndimage.maximum_filter, {}), + (ndimage.median_filter, {}), + (ndimage.rank_filter, dict(rank=3)), + (ndimage.percentile_filter, dict(percentile=60))] + ) + def test_minmax_nonseparable_axes(self, filter_func, axes, kwargs): + array = numpy.arange(6 * 8 * 12, dtype=numpy.float32).reshape(6, 8, 12) + # use 2D triangular footprint because it is non-separable + footprint = numpy.tri(5) + axes = numpy.array(axes) + + if len(set(axes % array.ndim)) != len(axes): + # parametrized cases with duplicate axes raise an error + with pytest.raises(ValueError): + filter_func(array, footprint=footprint, axes=axes, **kwargs) + return + output = filter_func(array, footprint=footprint, axes=axes, **kwargs) + + missing_axis = tuple(set(range(3)) - set(axes % array.ndim))[0] + footprint_3d = numpy.expand_dims(footprint, missing_axis) + expected = filter_func(array, footprint=footprint_3d, **kwargs) + assert_allclose(output, expected) + + def test_rank01(self): + array = numpy.array([1, 2, 3, 4, 5]) + output = ndimage.rank_filter(array, 1, size=2) + assert_array_almost_equal(array, output) + output = ndimage.percentile_filter(array, 100, size=2) + assert_array_almost_equal(array, output) + output = ndimage.median_filter(array, 2) + assert_array_almost_equal(array, output) + + def test_rank02(self): + array = numpy.array([1, 2, 3, 4, 5]) + output = ndimage.rank_filter(array, 1, size=[3]) + assert_array_almost_equal(array, output) + output = ndimage.percentile_filter(array, 50, size=3) + assert_array_almost_equal(array, output) + output = ndimage.median_filter(array, (3,)) + assert_array_almost_equal(array, output) + + def test_rank03(self): + array = numpy.array([3, 2, 5, 1, 4]) + output = ndimage.rank_filter(array, 1, size=[2]) + assert_array_almost_equal([3, 3, 5, 5, 4], output) + output = ndimage.percentile_filter(array, 100, size=2) + assert_array_almost_equal([3, 3, 5, 5, 4], output) + + def test_rank04(self): + array = numpy.array([3, 2, 5, 1, 4]) + expected = [3, 3, 2, 4, 4] + output = ndimage.rank_filter(array, 1, size=3) + assert_array_almost_equal(expected, output) + output = ndimage.percentile_filter(array, 50, size=3) + assert_array_almost_equal(expected, output) + output = ndimage.median_filter(array, size=3) + assert_array_almost_equal(expected, output) + + 
def test_rank05(self): + array = numpy.array([3, 2, 5, 1, 4]) + expected = [3, 3, 2, 4, 4] + output = ndimage.rank_filter(array, -2, size=3) + assert_array_almost_equal(expected, output) + + def test_rank06(self): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]]) + expected = [[2, 2, 1, 1, 1], + [3, 3, 2, 1, 1], + [5, 5, 3, 3, 1]] + output = ndimage.rank_filter(array, 1, size=[2, 3]) + assert_array_almost_equal(expected, output) + output = ndimage.percentile_filter(array, 17, size=(2, 3)) + assert_array_almost_equal(expected, output) + + def test_rank06_overlap(self): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]]) + array_copy = array.copy() + expected = [[2, 2, 1, 1, 1], + [3, 3, 2, 1, 1], + [5, 5, 3, 3, 1]] + ndimage.rank_filter(array, 1, size=[2, 3], output=array) + assert_array_almost_equal(expected, array) + + ndimage.percentile_filter(array_copy, 17, size=(2, 3), + output=array_copy) + assert_array_almost_equal(expected, array_copy) + + def test_rank07(self): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]]) + expected = [[3, 5, 5, 5, 4], + [5, 5, 7, 5, 4], + [6, 8, 8, 7, 5]] + output = ndimage.rank_filter(array, -2, size=[2, 3]) + assert_array_almost_equal(expected, output) + + def test_rank08(self): + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]]) + expected = [[3, 3, 2, 4, 4], + [5, 5, 5, 4, 4], + [5, 6, 7, 5, 5]] + output = ndimage.percentile_filter(array, 50.0, size=(2, 3)) + assert_array_almost_equal(expected, output) + output = ndimage.rank_filter(array, 3, size=(2, 3)) + assert_array_almost_equal(expected, output) + output = ndimage.median_filter(array, size=(2, 3)) + assert_array_almost_equal(expected, output) + + # non-separable: does not allow mode sequence + with assert_raises(RuntimeError): + ndimage.percentile_filter(array, 50.0, size=(2, 3), + mode=['reflect', 'constant']) + with assert_raises(RuntimeError): + ndimage.rank_filter(array, 3, size=(2, 3), mode=['reflect']*2) + with assert_raises(RuntimeError): + ndimage.median_filter(array, size=(2, 3), mode=['reflect']*2) + + @pytest.mark.parametrize('dtype', types) + def test_rank09(self, dtype): + expected = [[3, 3, 2, 4, 4], + [3, 5, 2, 5, 1], + [5, 5, 8, 3, 5]] + footprint = [[1, 0, 1], [0, 1, 0]] + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + output = ndimage.rank_filter(array, 1, footprint=footprint) + assert_array_almost_equal(expected, output) + output = ndimage.percentile_filter(array, 35, footprint=footprint) + assert_array_almost_equal(expected, output) + + def test_rank10(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + expected = [[2, 2, 1, 1, 1], + [2, 3, 1, 3, 1], + [5, 5, 3, 3, 1]] + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.rank_filter(array, 0, footprint=footprint) + assert_array_almost_equal(expected, output) + output = ndimage.percentile_filter(array, 0.0, footprint=footprint) + assert_array_almost_equal(expected, output) + + def test_rank11(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + expected = [[3, 5, 5, 5, 4], + [7, 7, 9, 9, 5], + [7, 9, 8, 9, 7]] + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.rank_filter(array, -1, footprint=footprint) + assert_array_almost_equal(expected, output) + output = ndimage.percentile_filter(array, 100.0, footprint=footprint) + assert_array_almost_equal(expected, output) + + @pytest.mark.parametrize('dtype', 
types) + def test_rank12(self, dtype): + expected = [[3, 3, 2, 4, 4], + [3, 5, 2, 5, 1], + [5, 5, 8, 3, 5]] + footprint = [[1, 0, 1], [0, 1, 0]] + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + output = ndimage.rank_filter(array, 1, footprint=footprint) + assert_array_almost_equal(expected, output) + output = ndimage.percentile_filter(array, 50.0, + footprint=footprint) + assert_array_almost_equal(expected, output) + output = ndimage.median_filter(array, footprint=footprint) + assert_array_almost_equal(expected, output) + + @pytest.mark.parametrize('dtype', types) + def test_rank13(self, dtype): + expected = [[5, 2, 5, 1, 1], + [5, 8, 3, 5, 5], + [6, 6, 5, 5, 5]] + footprint = [[1, 0, 1], [0, 1, 0]] + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + output = ndimage.rank_filter(array, 1, footprint=footprint, + origin=-1) + assert_array_almost_equal(expected, output) + + @pytest.mark.parametrize('dtype', types) + def test_rank14(self, dtype): + expected = [[3, 5, 2, 5, 1], + [5, 5, 8, 3, 5], + [5, 6, 6, 5, 5]] + footprint = [[1, 0, 1], [0, 1, 0]] + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + output = ndimage.rank_filter(array, 1, footprint=footprint, + origin=[-1, 0]) + assert_array_almost_equal(expected, output) + + @pytest.mark.parametrize('dtype', types) + def test_rank15(self, dtype): + expected = [[2, 3, 1, 4, 1], + [5, 3, 7, 1, 1], + [5, 5, 3, 3, 3]] + footprint = [[1, 0, 1], [0, 1, 0]] + array = numpy.array([[3, 2, 5, 1, 4], + [5, 8, 3, 7, 1], + [5, 6, 9, 3, 5]], dtype) + output = ndimage.rank_filter(array, 0, footprint=footprint, + origin=[-1, 0]) + assert_array_almost_equal(expected, output) + + @pytest.mark.parametrize('dtype', types) + def test_generic_filter1d01(self, dtype): + weights = numpy.array([1.1, 2.2, 3.3]) + + def _filter_func(input, output, fltr, total): + fltr = fltr / total + for ii in range(input.shape[0] - 2): + output[ii] = input[ii] * fltr[0] + output[ii] += input[ii + 1] * fltr[1] + output[ii] += input[ii + 2] * fltr[2] + a = numpy.arange(12, dtype=dtype) + a.shape = (3, 4) + r1 = ndimage.correlate1d(a, weights / weights.sum(), 0, origin=-1) + r2 = ndimage.generic_filter1d( + a, _filter_func, 3, axis=0, origin=-1, + extra_arguments=(weights,), + extra_keywords={'total': weights.sum()}) + assert_array_almost_equal(r1, r2) + + @pytest.mark.parametrize('dtype', types) + def test_generic_filter01(self, dtype): + filter_ = numpy.array([[1.0, 2.0], [3.0, 4.0]]) + footprint = numpy.array([[1, 0], [0, 1]]) + cf = numpy.array([1., 4.]) + + def _filter_func(buffer, weights, total=1.0): + weights = cf / total + return (buffer * weights).sum() + + a = numpy.arange(12, dtype=dtype) + a.shape = (3, 4) + r1 = ndimage.correlate(a, filter_ * footprint) + if dtype in float_types: + r1 /= 5 + else: + r1 //= 5 + r2 = ndimage.generic_filter( + a, _filter_func, footprint=footprint, extra_arguments=(cf,), + extra_keywords={'total': cf.sum()}) + assert_array_almost_equal(r1, r2) + + # generic_filter doesn't allow mode sequence + with assert_raises(RuntimeError): + r2 = ndimage.generic_filter( + a, _filter_func, mode=['reflect', 'reflect'], + footprint=footprint, extra_arguments=(cf,), + extra_keywords={'total': cf.sum()}) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [1, 1, 2]), + ('wrap', [3, 1, 2]), + ('reflect', [1, 1, 2]), + ('mirror', [2, 1, 2]), + ('constant', [0, 1, 2])] + ) + def test_extend01(self, mode, expected_value): + array = 
numpy.array([1, 2, 3]) + weights = numpy.array([1, 0]) + output = ndimage.correlate1d(array, weights, 0, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [1, 1, 1]), + ('wrap', [3, 1, 2]), + ('reflect', [3, 3, 2]), + ('mirror', [1, 2, 3]), + ('constant', [0, 0, 0])] + ) + def test_extend02(self, mode, expected_value): + array = numpy.array([1, 2, 3]) + weights = numpy.array([1, 0, 0, 0, 0, 0, 0, 0]) + output = ndimage.correlate1d(array, weights, 0, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [2, 3, 3]), + ('wrap', [2, 3, 1]), + ('reflect', [2, 3, 3]), + ('mirror', [2, 3, 2]), + ('constant', [2, 3, 0])] + ) + def test_extend03(self, mode, expected_value): + array = numpy.array([1, 2, 3]) + weights = numpy.array([0, 0, 1]) + output = ndimage.correlate1d(array, weights, 0, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [3, 3, 3]), + ('wrap', [2, 3, 1]), + ('reflect', [2, 1, 1]), + ('mirror', [1, 2, 3]), + ('constant', [0, 0, 0])] + ) + def test_extend04(self, mode, expected_value): + array = numpy.array([1, 2, 3]) + weights = numpy.array([0, 0, 0, 0, 0, 0, 0, 0, 1]) + output = ndimage.correlate1d(array, weights, 0, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [[1, 1, 2], [1, 1, 2], [4, 4, 5]]), + ('wrap', [[9, 7, 8], [3, 1, 2], [6, 4, 5]]), + ('reflect', [[1, 1, 2], [1, 1, 2], [4, 4, 5]]), + ('mirror', [[5, 4, 5], [2, 1, 2], [5, 4, 5]]), + ('constant', [[0, 0, 0], [0, 1, 2], [0, 4, 5]])] + ) + def test_extend05(self, mode, expected_value): + array = numpy.array([[1, 2, 3], + [4, 5, 6], + [7, 8, 9]]) + weights = numpy.array([[1, 0], [0, 0]]) + output = ndimage.correlate(array, weights, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [[5, 6, 6], [8, 9, 9], [8, 9, 9]]), + ('wrap', [[5, 6, 4], [8, 9, 7], [2, 3, 1]]), + ('reflect', [[5, 6, 6], [8, 9, 9], [8, 9, 9]]), + ('mirror', [[5, 6, 5], [8, 9, 8], [5, 6, 5]]), + ('constant', [[5, 6, 0], [8, 9, 0], [0, 0, 0]])] + ) + def test_extend06(self, mode, expected_value): + array = numpy.array([[1, 2, 3], + [4, 5, 6], + [7, 8, 9]]) + weights = numpy.array([[0, 0, 0], [0, 0, 0], [0, 0, 1]]) + output = ndimage.correlate(array, weights, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [3, 3, 3]), + ('wrap', [2, 3, 1]), + ('reflect', [2, 1, 1]), + ('mirror', [1, 2, 3]), + ('constant', [0, 0, 0])] + ) + def test_extend07(self, mode, expected_value): + array = numpy.array([1, 2, 3]) + weights = numpy.array([0, 0, 0, 0, 0, 0, 0, 0, 1]) + output = ndimage.correlate(array, weights, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [[3], [3], [3]]), + ('wrap', [[2], [3], [1]]), + ('reflect', [[2], [1], [1]]), + ('mirror', [[1], [2], [3]]), + ('constant', [[0], [0], [0]])] + ) + def test_extend08(self, mode, expected_value): + array = numpy.array([[1], [2], [3]]) + weights = numpy.array([[0], [0], [0], [0], [0], [0], [0], [0], [1]]) + output = ndimage.correlate(array, weights, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + 
@pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [3, 3, 3]), + ('wrap', [2, 3, 1]), + ('reflect', [2, 1, 1]), + ('mirror', [1, 2, 3]), + ('constant', [0, 0, 0])] + ) + def test_extend09(self, mode, expected_value): + array = numpy.array([1, 2, 3]) + weights = numpy.array([0, 0, 0, 0, 0, 0, 0, 0, 1]) + output = ndimage.correlate(array, weights, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [[3], [3], [3]]), + ('wrap', [[2], [3], [1]]), + ('reflect', [[2], [1], [1]]), + ('mirror', [[1], [2], [3]]), + ('constant', [[0], [0], [0]])] + ) + def test_extend10(self, mode, expected_value): + array = numpy.array([[1], [2], [3]]) + weights = numpy.array([[0], [0], [0], [0], [0], [0], [0], [0], [1]]) + output = ndimage.correlate(array, weights, mode=mode, cval=0) + assert_array_equal(output, expected_value) + + +def test_ticket_701(): + # Test generic filter sizes + arr = numpy.arange(4).reshape((2, 2)) + def func(x): + return numpy.min(x) + res = ndimage.generic_filter(arr, func, size=(1, 1)) + # The following raises an error unless ticket 701 is fixed + res2 = ndimage.generic_filter(arr, func, size=1) + assert_equal(res, res2) + + +def test_gh_5430(): + # At least one of these raises an error unless gh-5430 is + # fixed. In py2k an int is implemented using a C long, so + # which one fails depends on your system. In py3k there is only + # one arbitrary precision integer type, so both should fail. + sigma = numpy.int32(1) + out = ndimage._ni_support._normalize_sequence(sigma, 1) + assert_equal(out, [sigma]) + sigma = numpy.int64(1) + out = ndimage._ni_support._normalize_sequence(sigma, 1) + assert_equal(out, [sigma]) + # This worked before; make sure it still works + sigma = 1 + out = ndimage._ni_support._normalize_sequence(sigma, 1) + assert_equal(out, [sigma]) + # This worked before; make sure it still works + sigma = [1, 1] + out = ndimage._ni_support._normalize_sequence(sigma, 2) + assert_equal(out, sigma) + # Also include the OPs original example to make sure we fixed the issue + x = numpy.random.normal(size=(256, 256)) + perlin = numpy.zeros_like(x) + for i in 2**numpy.arange(6): + perlin += ndimage.gaussian_filter(x, i, mode="wrap") * i**2 + # This also fixes gh-4106, show that the OPs example now runs. 
+ x = numpy.int64(21) + ndimage._ni_support._normalize_sequence(x, 0) + + +def test_gaussian_kernel1d(): + radius = 10 + sigma = 2 + sigma2 = sigma * sigma + x = numpy.arange(-radius, radius + 1, dtype=numpy.double) + phi_x = numpy.exp(-0.5 * x * x / sigma2) + phi_x /= phi_x.sum() + assert_allclose(phi_x, _gaussian_kernel1d(sigma, 0, radius)) + assert_allclose(-phi_x * x / sigma2, _gaussian_kernel1d(sigma, 1, radius)) + assert_allclose(phi_x * (x * x / sigma2 - 1) / sigma2, + _gaussian_kernel1d(sigma, 2, radius)) + assert_allclose(phi_x * (3 - x * x / sigma2) * x / (sigma2 * sigma2), + _gaussian_kernel1d(sigma, 3, radius)) + + +def test_orders_gauss(): + # Check order inputs to Gaussians + arr = numpy.zeros((1,)) + assert_equal(0, ndimage.gaussian_filter(arr, 1, order=0)) + assert_equal(0, ndimage.gaussian_filter(arr, 1, order=3)) + assert_raises(ValueError, ndimage.gaussian_filter, arr, 1, -1) + assert_equal(0, ndimage.gaussian_filter1d(arr, 1, axis=-1, order=0)) + assert_equal(0, ndimage.gaussian_filter1d(arr, 1, axis=-1, order=3)) + assert_raises(ValueError, ndimage.gaussian_filter1d, arr, 1, -1, -1) + + +def test_valid_origins(): + """Regression test for #1311.""" + def func(x): + return numpy.mean(x) + data = numpy.array([1, 2, 3, 4, 5], dtype=numpy.float64) + assert_raises(ValueError, ndimage.generic_filter, data, func, size=3, + origin=2) + assert_raises(ValueError, ndimage.generic_filter1d, data, func, + filter_size=3, origin=2) + assert_raises(ValueError, ndimage.percentile_filter, data, 0.2, size=3, + origin=2) + + for filter in [ndimage.uniform_filter, ndimage.minimum_filter, + ndimage.maximum_filter, ndimage.maximum_filter1d, + ndimage.median_filter, ndimage.minimum_filter1d]: + # This should work, since for size == 3, the valid range for origin is + # -1 to 1. + list(filter(data, 3, origin=-1)) + list(filter(data, 3, origin=1)) + # Just check this raises an error instead of silently accepting or + # segfaulting. + assert_raises(ValueError, filter, data, 3, origin=2) + + +def test_bad_convolve_and_correlate_origins(): + """Regression test for gh-822.""" + # Before gh-822 was fixed, these would generate seg. faults or + # other crashes on many system. + assert_raises(ValueError, ndimage.correlate1d, + [0, 1, 2, 3, 4, 5], [1, 1, 2, 0], origin=2) + assert_raises(ValueError, ndimage.correlate, + [0, 1, 2, 3, 4, 5], [0, 1, 2], origin=[2]) + assert_raises(ValueError, ndimage.correlate, + numpy.ones((3, 5)), numpy.ones((2, 2)), origin=[0, 1]) + + assert_raises(ValueError, ndimage.convolve1d, + numpy.arange(10), numpy.ones(3), origin=-2) + assert_raises(ValueError, ndimage.convolve, + numpy.arange(10), numpy.ones(3), origin=[-2]) + assert_raises(ValueError, ndimage.convolve, + numpy.ones((3, 5)), numpy.ones((2, 2)), origin=[0, -2]) + + +def test_multiple_modes(): + # Test that the filters with multiple mode cababilities for different + # dimensions give the same result as applying a single mode. 
+ arr = numpy.array([[1., 0., 0.], + [1., 1., 0.], + [0., 0., 0.]]) + + mode1 = 'reflect' + mode2 = ['reflect', 'reflect'] + + assert_equal(ndimage.gaussian_filter(arr, 1, mode=mode1), + ndimage.gaussian_filter(arr, 1, mode=mode2)) + assert_equal(ndimage.prewitt(arr, mode=mode1), + ndimage.prewitt(arr, mode=mode2)) + assert_equal(ndimage.sobel(arr, mode=mode1), + ndimage.sobel(arr, mode=mode2)) + assert_equal(ndimage.laplace(arr, mode=mode1), + ndimage.laplace(arr, mode=mode2)) + assert_equal(ndimage.gaussian_laplace(arr, 1, mode=mode1), + ndimage.gaussian_laplace(arr, 1, mode=mode2)) + assert_equal(ndimage.maximum_filter(arr, size=5, mode=mode1), + ndimage.maximum_filter(arr, size=5, mode=mode2)) + assert_equal(ndimage.minimum_filter(arr, size=5, mode=mode1), + ndimage.minimum_filter(arr, size=5, mode=mode2)) + assert_equal(ndimage.gaussian_gradient_magnitude(arr, 1, mode=mode1), + ndimage.gaussian_gradient_magnitude(arr, 1, mode=mode2)) + assert_equal(ndimage.uniform_filter(arr, 5, mode=mode1), + ndimage.uniform_filter(arr, 5, mode=mode2)) + + +def test_multiple_modes_sequentially(): + # Test that the filters with multiple mode cababilities for different + # dimensions give the same result as applying the filters with + # different modes sequentially + arr = numpy.array([[1., 0., 0.], + [1., 1., 0.], + [0., 0., 0.]]) + + modes = ['reflect', 'wrap'] + + expected = ndimage.gaussian_filter1d(arr, 1, axis=0, mode=modes[0]) + expected = ndimage.gaussian_filter1d(expected, 1, axis=1, mode=modes[1]) + assert_equal(expected, + ndimage.gaussian_filter(arr, 1, mode=modes)) + + expected = ndimage.uniform_filter1d(arr, 5, axis=0, mode=modes[0]) + expected = ndimage.uniform_filter1d(expected, 5, axis=1, mode=modes[1]) + assert_equal(expected, + ndimage.uniform_filter(arr, 5, mode=modes)) + + expected = ndimage.maximum_filter1d(arr, size=5, axis=0, mode=modes[0]) + expected = ndimage.maximum_filter1d(expected, size=5, axis=1, + mode=modes[1]) + assert_equal(expected, + ndimage.maximum_filter(arr, size=5, mode=modes)) + + expected = ndimage.minimum_filter1d(arr, size=5, axis=0, mode=modes[0]) + expected = ndimage.minimum_filter1d(expected, size=5, axis=1, + mode=modes[1]) + assert_equal(expected, + ndimage.minimum_filter(arr, size=5, mode=modes)) + + +def test_multiple_modes_prewitt(): + # Test prewitt filter for multiple extrapolation modes + arr = numpy.array([[1., 0., 0.], + [1., 1., 0.], + [0., 0., 0.]]) + + expected = numpy.array([[1., -3., 2.], + [1., -2., 1.], + [1., -1., 0.]]) + + modes = ['reflect', 'wrap'] + + assert_equal(expected, + ndimage.prewitt(arr, mode=modes)) + + +def test_multiple_modes_sobel(): + # Test sobel filter for multiple extrapolation modes + arr = numpy.array([[1., 0., 0.], + [1., 1., 0.], + [0., 0., 0.]]) + + expected = numpy.array([[1., -4., 3.], + [2., -3., 1.], + [1., -1., 0.]]) + + modes = ['reflect', 'wrap'] + + assert_equal(expected, + ndimage.sobel(arr, mode=modes)) + + +def test_multiple_modes_laplace(): + # Test laplace filter for multiple extrapolation modes + arr = numpy.array([[1., 0., 0.], + [1., 1., 0.], + [0., 0., 0.]]) + + expected = numpy.array([[-2., 2., 1.], + [-2., -3., 2.], + [1., 1., 0.]]) + + modes = ['reflect', 'wrap'] + + assert_equal(expected, + ndimage.laplace(arr, mode=modes)) + + +def test_multiple_modes_gaussian_laplace(): + # Test gaussian_laplace filter for multiple extrapolation modes + arr = numpy.array([[1., 0., 0.], + [1., 1., 0.], + [0., 0., 0.]]) + + expected = numpy.array([[-0.28438687, 0.01559809, 0.19773499], + [-0.36630503, 
-0.20069774, 0.07483620], + [0.15849176, 0.18495566, 0.21934094]]) + + modes = ['reflect', 'wrap'] + + assert_almost_equal(expected, + ndimage.gaussian_laplace(arr, 1, mode=modes)) + + +def test_multiple_modes_gaussian_gradient_magnitude(): + # Test gaussian_gradient_magnitude filter for multiple + # extrapolation modes + arr = numpy.array([[1., 0., 0.], + [1., 1., 0.], + [0., 0., 0.]]) + + expected = numpy.array([[0.04928965, 0.09745625, 0.06405368], + [0.23056905, 0.14025305, 0.04550846], + [0.19894369, 0.14950060, 0.06796850]]) + + modes = ['reflect', 'wrap'] + + calculated = ndimage.gaussian_gradient_magnitude(arr, 1, mode=modes) + + assert_almost_equal(expected, calculated) + + +def test_multiple_modes_uniform(): + # Test uniform filter for multiple extrapolation modes + arr = numpy.array([[1., 0., 0.], + [1., 1., 0.], + [0., 0., 0.]]) + + expected = numpy.array([[0.32, 0.40, 0.48], + [0.20, 0.28, 0.32], + [0.28, 0.32, 0.40]]) + + modes = ['reflect', 'wrap'] + + assert_almost_equal(expected, + ndimage.uniform_filter(arr, 5, mode=modes)) + + +def test_gaussian_truncate(): + # Test that Gaussian filters can be truncated at different widths. + # These tests only check that the result has the expected number + # of nonzero elements. + arr = numpy.zeros((100, 100), float) + arr[50, 50] = 1 + num_nonzeros_2 = (ndimage.gaussian_filter(arr, 5, truncate=2) > 0).sum() + assert_equal(num_nonzeros_2, 21**2) + num_nonzeros_5 = (ndimage.gaussian_filter(arr, 5, truncate=5) > 0).sum() + assert_equal(num_nonzeros_5, 51**2) + + # Test truncate when sigma is a sequence. + f = ndimage.gaussian_filter(arr, [0.5, 2.5], truncate=3.5) + fpos = f > 0 + n0 = fpos.any(axis=0).sum() + # n0 should be 2*int(2.5*3.5 + 0.5) + 1 + assert_equal(n0, 19) + n1 = fpos.any(axis=1).sum() + # n1 should be 2*int(0.5*3.5 + 0.5) + 1 + assert_equal(n1, 5) + + # Test gaussian_filter1d. + x = numpy.zeros(51) + x[25] = 1 + f = ndimage.gaussian_filter1d(x, sigma=2, truncate=3.5) + n = (f > 0).sum() + assert_equal(n, 15) + + # Test gaussian_laplace + y = ndimage.gaussian_laplace(x, sigma=2, truncate=3.5) + nonzero_indices = numpy.nonzero(y != 0)[0] + n = numpy.ptp(nonzero_indices) + 1 + assert_equal(n, 15) + + # Test gaussian_gradient_magnitude + y = ndimage.gaussian_gradient_magnitude(x, sigma=2, truncate=3.5) + nonzero_indices = numpy.nonzero(y != 0)[0] + n = numpy.ptp(nonzero_indices) + 1 + assert_equal(n, 15) + + +def test_gaussian_radius(): + # Test that Gaussian filters with radius argument produce the same + # results as the filters with corresponding truncate argument. + # radius = int(truncate * sigma + 0.5) + # Test gaussian_filter1d + x = numpy.zeros(7) + x[3] = 1 + f1 = ndimage.gaussian_filter1d(x, sigma=2, truncate=1.5) + f2 = ndimage.gaussian_filter1d(x, sigma=2, radius=3) + assert_equal(f1, f2) + + # Test gaussian_filter when sigma is a number. + a = numpy.zeros((9, 9)) + a[4, 4] = 1 + f1 = ndimage.gaussian_filter(a, sigma=0.5, truncate=3.5) + f2 = ndimage.gaussian_filter(a, sigma=0.5, radius=2) + assert_equal(f1, f2) + + # Test gaussian_filter when sigma is a sequence. 
+ a = numpy.zeros((50, 50)) + a[25, 25] = 1 + f1 = ndimage.gaussian_filter(a, sigma=[0.5, 2.5], truncate=3.5) + f2 = ndimage.gaussian_filter(a, sigma=[0.5, 2.5], radius=[2, 9]) + assert_equal(f1, f2) + + +def test_gaussian_radius_invalid(): + # radius must be a nonnegative integer + with assert_raises(ValueError): + ndimage.gaussian_filter1d(numpy.zeros(8), sigma=1, radius=-1) + with assert_raises(ValueError): + ndimage.gaussian_filter1d(numpy.zeros(8), sigma=1, radius=1.1) + + +class TestThreading: + def check_func_thread(self, n, fun, args, out): + from threading import Thread + thrds = [Thread(target=fun, args=args, kwargs={'output': out[x]}) + for x in range(n)] + [t.start() for t in thrds] + [t.join() for t in thrds] + + def check_func_serial(self, n, fun, args, out): + for i in range(n): + fun(*args, output=out[i]) + + def test_correlate1d(self): + d = numpy.random.randn(5000) + os = numpy.empty((4, d.size)) + ot = numpy.empty_like(os) + k = numpy.arange(5) + self.check_func_serial(4, ndimage.correlate1d, (d, k), os) + self.check_func_thread(4, ndimage.correlate1d, (d, k), ot) + assert_array_equal(os, ot) + + def test_correlate(self): + d = numpy.random.randn(500, 500) + k = numpy.random.randn(10, 10) + os = numpy.empty([4] + list(d.shape)) + ot = numpy.empty_like(os) + self.check_func_serial(4, ndimage.correlate, (d, k), os) + self.check_func_thread(4, ndimage.correlate, (d, k), ot) + assert_array_equal(os, ot) + + def test_median_filter(self): + d = numpy.random.randn(500, 500) + os = numpy.empty([4] + list(d.shape)) + ot = numpy.empty_like(os) + self.check_func_serial(4, ndimage.median_filter, (d, 3), os) + self.check_func_thread(4, ndimage.median_filter, (d, 3), ot) + assert_array_equal(os, ot) + + def test_uniform_filter1d(self): + d = numpy.random.randn(5000) + os = numpy.empty((4, d.size)) + ot = numpy.empty_like(os) + self.check_func_serial(4, ndimage.uniform_filter1d, (d, 5), os) + self.check_func_thread(4, ndimage.uniform_filter1d, (d, 5), ot) + assert_array_equal(os, ot) + + def test_minmax_filter(self): + d = numpy.random.randn(500, 500) + os = numpy.empty([4] + list(d.shape)) + ot = numpy.empty_like(os) + self.check_func_serial(4, ndimage.maximum_filter, (d, 3), os) + self.check_func_thread(4, ndimage.maximum_filter, (d, 3), ot) + assert_array_equal(os, ot) + self.check_func_serial(4, ndimage.minimum_filter, (d, 3), os) + self.check_func_thread(4, ndimage.minimum_filter, (d, 3), ot) + assert_array_equal(os, ot) + + +def test_minmaximum_filter1d(): + # Regression gh-3898 + in_ = numpy.arange(10) + out = ndimage.minimum_filter1d(in_, 1) + assert_equal(in_, out) + out = ndimage.maximum_filter1d(in_, 1) + assert_equal(in_, out) + # Test reflect + out = ndimage.minimum_filter1d(in_, 5, mode='reflect') + assert_equal([0, 0, 0, 1, 2, 3, 4, 5, 6, 7], out) + out = ndimage.maximum_filter1d(in_, 5, mode='reflect') + assert_equal([2, 3, 4, 5, 6, 7, 8, 9, 9, 9], out) + # Test constant + out = ndimage.minimum_filter1d(in_, 5, mode='constant', cval=-1) + assert_equal([-1, -1, 0, 1, 2, 3, 4, 5, -1, -1], out) + out = ndimage.maximum_filter1d(in_, 5, mode='constant', cval=10) + assert_equal([10, 10, 4, 5, 6, 7, 8, 9, 10, 10], out) + # Test nearest + out = ndimage.minimum_filter1d(in_, 5, mode='nearest') + assert_equal([0, 0, 0, 1, 2, 3, 4, 5, 6, 7], out) + out = ndimage.maximum_filter1d(in_, 5, mode='nearest') + assert_equal([2, 3, 4, 5, 6, 7, 8, 9, 9, 9], out) + # Test wrap + out = ndimage.minimum_filter1d(in_, 5, mode='wrap') + assert_equal([0, 0, 0, 1, 2, 3, 4, 5, 0, 0], out) + out = 
ndimage.maximum_filter1d(in_, 5, mode='wrap') + assert_equal([9, 9, 4, 5, 6, 7, 8, 9, 9, 9], out) + + +def test_uniform_filter1d_roundoff_errors(): + # gh-6930 + in_ = numpy.repeat([0, 1, 0], [9, 9, 9]) + for filter_size in range(3, 10): + out = ndimage.uniform_filter1d(in_, filter_size) + assert_equal(out.sum(), 10 - filter_size) + + +def test_footprint_all_zeros(): + # regression test for gh-6876: footprint of all zeros segfaults + arr = numpy.random.randint(0, 100, (100, 100)) + kernel = numpy.zeros((3, 3), bool) + with assert_raises(ValueError): + ndimage.maximum_filter(arr, footprint=kernel) + + +def test_gaussian_filter(): + # Test gaussian filter with numpy.float16 + # gh-8207 + data = numpy.array([1], dtype=numpy.float16) + sigma = 1.0 + with assert_raises(RuntimeError): + ndimage.gaussian_filter(data, sigma) + + +def test_rank_filter_noninteger_rank(): + # regression test for issue 9388: ValueError for + # non integer rank when performing rank_filter + arr = numpy.random.random((10, 20, 30)) + assert_raises(TypeError, ndimage.rank_filter, arr, 0.5, + footprint=numpy.ones((1, 1, 10), dtype=bool)) + + +def test_size_footprint_both_set(): + # test for input validation, expect user warning when + # size and footprint is set + with suppress_warnings() as sup: + sup.filter(UserWarning, + "ignoring size because footprint is set") + arr = numpy.random.random((10, 20, 30)) + ndimage.rank_filter(arr, 5, size=2, footprint=numpy.ones((1, 1, 10), + dtype=bool)) + + +def test_byte_order_median(): + """Regression test for #413: median_filter does not handle bytes orders.""" + a = numpy.arange(9, dtype='<f4').reshape(3, 3) + ref = ndimage.median_filter(a, (3, 3)) + b = numpy.arange(9, dtype='>f4').reshape(3, 3) + t = ndimage.median_filter(b, (3, 3)) + assert_array_almost_equal(ref, t) + + # arrays with ndim > 3 raise NotImplementedError + x = numpy.ones((4, 6, 8, 10), dtype=numpy.complex128) + with pytest.raises(NotImplementedError): + ndimage.fourier_ellipsoid(x, 3) + + def test_fourier_ellipsoid_1d_complex(self): + # expected result of 1d ellipsoid is the same as for fourier_uniform + for shape in [(32, ), (31, )]: + for type_, dec in zip([numpy.complex64, numpy.complex128], + [5, 14]): + x = numpy.ones(shape, dtype=type_) + a = ndimage.fourier_ellipsoid(x, 5, -1, 0) + b = ndimage.fourier_uniform(x, 5, -1, 0) + assert_array_almost_equal(a, b, decimal=dec) + + @pytest.mark.parametrize('shape', [(0, ), (0, 10), (10, 0)]) + @pytest.mark.parametrize('dtype', + [numpy.float32, numpy.float64, + numpy.complex64, numpy.complex128]) + @pytest.mark.parametrize('test_func', + [ndimage.fourier_ellipsoid, + ndimage.fourier_gaussian, + ndimage.fourier_uniform]) + def test_fourier_zero_length_dims(self, shape, dtype, test_func): + a = numpy.ones(shape, dtype) + b = test_func(a, 3) + assert_equal(a, b) diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_interpolation.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_interpolation.py new file mode 100644 index 0000000000000000000000000000000000000000..beb8681e850bd682b789e97f901048d718627dd0 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_interpolation.py @@ -0,0 +1,1327 @@ +import sys + +import numpy +from numpy.testing import (assert_, assert_equal, assert_array_equal, + assert_array_almost_equal, assert_allclose, + suppress_warnings) +import pytest +from pytest import raises as assert_raises +import scipy.ndimage as ndimage + +from .
import types + +eps = 1e-12 + +ndimage_to_numpy_mode = { + 'mirror': 'reflect', + 'reflect': 'symmetric', + 'grid-mirror': 'symmetric', + 'grid-wrap': 'wrap', + 'nearest': 'edge', + 'grid-constant': 'constant', +} + + +class TestNdimageInterpolation: + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [1.5, 2.5, 3.5, 4, 4, 4, 4]), + ('wrap', [1.5, 2.5, 3.5, 1.5, 2.5, 3.5, 1.5]), + ('grid-wrap', [1.5, 2.5, 3.5, 2.5, 1.5, 2.5, 3.5]), + ('mirror', [1.5, 2.5, 3.5, 3.5, 2.5, 1.5, 1.5]), + ('reflect', [1.5, 2.5, 3.5, 4, 3.5, 2.5, 1.5]), + ('constant', [1.5, 2.5, 3.5, -1, -1, -1, -1]), + ('grid-constant', [1.5, 2.5, 3.5, 1.5, -1, -1, -1])] + ) + def test_boundaries(self, mode, expected_value): + def shift(x): + return (x[0] + 0.5,) + + data = numpy.array([1, 2, 3, 4.]) + assert_array_equal( + expected_value, + ndimage.geometric_transform(data, shift, cval=-1, mode=mode, + output_shape=(7,), order=1)) + + @pytest.mark.parametrize( + 'mode, expected_value', + [('nearest', [1, 1, 2, 3]), + ('wrap', [3, 1, 2, 3]), + ('grid-wrap', [4, 1, 2, 3]), + ('mirror', [2, 1, 2, 3]), + ('reflect', [1, 1, 2, 3]), + ('constant', [-1, 1, 2, 3]), + ('grid-constant', [-1, 1, 2, 3])] + ) + def test_boundaries2(self, mode, expected_value): + def shift(x): + return (x[0] - 0.9,) + + data = numpy.array([1, 2, 3, 4]) + assert_array_equal( + expected_value, + ndimage.geometric_transform(data, shift, cval=-1, mode=mode, + output_shape=(4,))) + + @pytest.mark.parametrize('mode', ['mirror', 'reflect', 'grid-mirror', + 'grid-wrap', 'grid-constant', + 'nearest']) + @pytest.mark.parametrize('order', range(6)) + def test_boundary_spline_accuracy(self, mode, order): + """Tests based on examples from gh-2640""" + data = numpy.arange(-6, 7, dtype=float) + x = numpy.linspace(-8, 15, num=1000) + y = ndimage.map_coordinates(data, [x], order=order, mode=mode) + + # compute expected value using explicit padding via numpy.pad + npad = 32 + pad_mode = ndimage_to_numpy_mode.get(mode) + padded = numpy.pad(data, npad, mode=pad_mode) + expected = ndimage.map_coordinates(padded, [npad + x], order=order, + mode=mode) + + atol = 1e-5 if mode == 'grid-constant' else 1e-12 + assert_allclose(y, expected, rtol=1e-7, atol=atol) + + @pytest.mark.parametrize('order', range(2, 6)) + @pytest.mark.parametrize('dtype', types) + def test_spline01(self, dtype, order): + data = numpy.ones([], dtype) + out = ndimage.spline_filter(data, order=order) + assert_array_almost_equal(out, 1) + + @pytest.mark.parametrize('order', range(2, 6)) + @pytest.mark.parametrize('dtype', types) + def test_spline02(self, dtype, order): + data = numpy.array([1], dtype) + out = ndimage.spline_filter(data, order=order) + assert_array_almost_equal(out, [1]) + + @pytest.mark.parametrize('order', range(2, 6)) + @pytest.mark.parametrize('dtype', types) + def test_spline03(self, dtype, order): + data = numpy.ones([], dtype) + out = ndimage.spline_filter(data, order, output=dtype) + assert_array_almost_equal(out, 1) + + @pytest.mark.parametrize('order', range(2, 6)) + @pytest.mark.parametrize('dtype', types) + def test_spline04(self, dtype, order): + data = numpy.ones([4], dtype) + out = ndimage.spline_filter(data, order) + assert_array_almost_equal(out, [1, 1, 1, 1]) + + @pytest.mark.parametrize('order', range(2, 6)) + @pytest.mark.parametrize('dtype', types) + def test_spline05(self, dtype, order): + data = numpy.ones([4, 4], dtype) + out = ndimage.spline_filter(data, order=order) + assert_array_almost_equal(out, [[1, 1, 1, 1], + [1, 1, 1, 1], + [1, 1, 1, 1], + [1, 1, 
1, 1]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform01(self, order): + data = numpy.array([1]) + + def mapping(x): + return x + + out = ndimage.geometric_transform(data, mapping, data.shape, + order=order) + assert_array_almost_equal(out, [1]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform02(self, order): + data = numpy.ones([4]) + + def mapping(x): + return x + + out = ndimage.geometric_transform(data, mapping, data.shape, + order=order) + assert_array_almost_equal(out, [1, 1, 1, 1]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform03(self, order): + data = numpy.ones([4]) + + def mapping(x): + return (x[0] - 1,) + + out = ndimage.geometric_transform(data, mapping, data.shape, + order=order) + assert_array_almost_equal(out, [0, 1, 1, 1]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform04(self, order): + data = numpy.array([4, 1, 3, 2]) + + def mapping(x): + return (x[0] - 1,) + + out = ndimage.geometric_transform(data, mapping, data.shape, + order=order) + assert_array_almost_equal(out, [0, 4, 1, 3]) + + @pytest.mark.parametrize('order', range(0, 6)) + @pytest.mark.parametrize('dtype', [numpy.float64, numpy.complex128]) + def test_geometric_transform05(self, order, dtype): + data = numpy.array([[1, 1, 1, 1], + [1, 1, 1, 1], + [1, 1, 1, 1]], dtype=dtype) + expected = numpy.array([[0, 1, 1, 1], + [0, 1, 1, 1], + [0, 1, 1, 1]], dtype=dtype) + if data.dtype.kind == 'c': + data -= 1j * data + expected -= 1j * expected + + def mapping(x): + return (x[0], x[1] - 1) + + out = ndimage.geometric_transform(data, mapping, data.shape, + order=order) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform06(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + + def mapping(x): + return (x[0], x[1] - 1) + + out = ndimage.geometric_transform(data, mapping, data.shape, + order=order) + assert_array_almost_equal(out, [[0, 4, 1, 3], + [0, 7, 6, 8], + [0, 3, 5, 3]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform07(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + + def mapping(x): + return (x[0] - 1, x[1]) + + out = ndimage.geometric_transform(data, mapping, data.shape, + order=order) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [4, 1, 3, 2], + [7, 6, 8, 5]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform08(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + + def mapping(x): + return (x[0] - 1, x[1] - 1) + + out = ndimage.geometric_transform(data, mapping, data.shape, + order=order) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform10(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + + def mapping(x): + return (x[0] - 1, x[1] - 1) + + if (order > 1): + filtered = ndimage.spline_filter(data, order=order) + else: + filtered = data + out = ndimage.geometric_transform(filtered, mapping, data.shape, + order=order, prefilter=False) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform13(self, order): + data = numpy.ones([2], numpy.float64) + + def mapping(x): + return (x[0] // 2,) + + out 
= ndimage.geometric_transform(data, mapping, [4], order=order) + assert_array_almost_equal(out, [1, 1, 1, 1]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform14(self, order): + data = [1, 5, 2, 6, 3, 7, 4, 4] + + def mapping(x): + return (2 * x[0],) + + out = ndimage.geometric_transform(data, mapping, [4], order=order) + assert_array_almost_equal(out, [1, 2, 3, 4]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform15(self, order): + data = [1, 2, 3, 4] + + def mapping(x): + return (x[0] / 2,) + + out = ndimage.geometric_transform(data, mapping, [8], order=order) + assert_array_almost_equal(out[::2], [1, 2, 3, 4]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform16(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9.0, 10, 11, 12]] + + def mapping(x): + return (x[0], x[1] * 2) + + out = ndimage.geometric_transform(data, mapping, (3, 2), + order=order) + assert_array_almost_equal(out, [[1, 3], [5, 7], [9, 11]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform17(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + + def mapping(x): + return (x[0] * 2, x[1]) + + out = ndimage.geometric_transform(data, mapping, (1, 4), + order=order) + assert_array_almost_equal(out, [[1, 2, 3, 4]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform18(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + + def mapping(x): + return (x[0] * 2, x[1] * 2) + + out = ndimage.geometric_transform(data, mapping, (1, 2), + order=order) + assert_array_almost_equal(out, [[1, 3]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform19(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + + def mapping(x): + return (x[0], x[1] / 2) + + out = ndimage.geometric_transform(data, mapping, (3, 8), + order=order) + assert_array_almost_equal(out[..., ::2], data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform20(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + + def mapping(x): + return (x[0] / 2, x[1]) + + out = ndimage.geometric_transform(data, mapping, (6, 4), + order=order) + assert_array_almost_equal(out[::2, ...], data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform21(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + + def mapping(x): + return (x[0] / 2, x[1] / 2) + + out = ndimage.geometric_transform(data, mapping, (6, 8), + order=order) + assert_array_almost_equal(out[::2, ::2], data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform22(self, order): + data = numpy.array([[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]], numpy.float64) + + def mapping1(x): + return (x[0] / 2, x[1] / 2) + + def mapping2(x): + return (x[0] * 2, x[1] * 2) + + out = ndimage.geometric_transform(data, mapping1, + (6, 8), order=order) + out = ndimage.geometric_transform(out, mapping2, + (3, 4), order=order) + assert_array_almost_equal(out, data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_geometric_transform23(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + + def mapping(x): + return (1, x[0] * 2) + + out = ndimage.geometric_transform(data, mapping, (2,), order=order) + out = out.astype(numpy.int32) + assert_array_almost_equal(out, [5, 7]) + + @pytest.mark.parametrize('order', range(0, 6)) + def 
test_geometric_transform24(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + + def mapping(x, a, b): + return (a, x[0] * b) + + out = ndimage.geometric_transform( + data, mapping, (2,), order=order, extra_arguments=(1,), + extra_keywords={'b': 2}) + assert_array_almost_equal(out, [5, 7]) + + def test_geometric_transform_grid_constant_order1(self): + # verify interpolation outside the original bounds + x = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype=float) + + def mapping(x): + return (x[0] - 0.5), (x[1] - 0.5) + + expected_result = numpy.array([[0.25, 0.75, 1.25], + [1.25, 3.00, 4.00]]) + assert_array_almost_equal( + ndimage.geometric_transform(x, mapping, mode='grid-constant', + order=1), + expected_result, + ) + + @pytest.mark.parametrize('mode', ['grid-constant', 'grid-wrap', 'nearest', + 'mirror', 'reflect']) + @pytest.mark.parametrize('order', range(6)) + def test_geometric_transform_vs_padded(self, order, mode): + x = numpy.arange(144, dtype=float).reshape(12, 12) + + def mapping(x): + return (x[0] - 0.4), (x[1] + 2.3) + + # Manually pad and then extract center after the transform to get the + # expected result. + npad = 24 + pad_mode = ndimage_to_numpy_mode.get(mode) + xp = numpy.pad(x, npad, mode=pad_mode) + center_slice = tuple([slice(npad, -npad)] * x.ndim) + expected_result = ndimage.geometric_transform( + xp, mapping, mode=mode, order=order)[center_slice] + + assert_allclose( + ndimage.geometric_transform(x, mapping, mode=mode, + order=order), + expected_result, + rtol=1e-7, + ) + + def test_geometric_transform_endianness_with_output_parameter(self): + # geometric transform given output ndarray or dtype with + # non-native endianness. see issue #4127 + data = numpy.array([1]) + + def mapping(x): + return x + + for out in [data.dtype, data.dtype.newbyteorder(), + numpy.empty_like(data), + numpy.empty_like(data).astype(data.dtype.newbyteorder())]: + returned = ndimage.geometric_transform(data, mapping, data.shape, + output=out) + result = out if returned is None else returned + assert_array_almost_equal(result, [1]) + + def test_geometric_transform_with_string_output(self): + data = numpy.array([1]) + + def mapping(x): + return x + + out = ndimage.geometric_transform(data, mapping, output='f') + assert_(out.dtype is numpy.dtype('f')) + assert_array_almost_equal(out, [1]) + + @pytest.mark.parametrize('order', range(0, 6)) + @pytest.mark.parametrize('dtype', [numpy.float64, numpy.complex128]) + def test_map_coordinates01(self, order, dtype): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + expected = numpy.array([[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + if data.dtype.kind == 'c': + data = data - 1j * data + expected = expected - 1j * expected + + idx = numpy.indices(data.shape) + idx -= 1 + + out = ndimage.map_coordinates(data, idx, order=order) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_map_coordinates02(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + idx = numpy.indices(data.shape, numpy.float64) + idx -= 0.5 + + out1 = ndimage.shift(data, 0.5, order=order) + out2 = ndimage.map_coordinates(data, idx, order=order) + assert_array_almost_equal(out1, out2) + + def test_map_coordinates03(self): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]], order='F') + idx = numpy.indices(data.shape) - 1 + out = ndimage.map_coordinates(data, idx) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + 
assert_array_almost_equal(out, ndimage.shift(data, (1, 1))) + idx = numpy.indices(data[::2].shape) - 1 + out = ndimage.map_coordinates(data[::2], idx) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3]]) + assert_array_almost_equal(out, ndimage.shift(data[::2], (1, 1))) + idx = numpy.indices(data[:, ::2].shape) - 1 + out = ndimage.map_coordinates(data[:, ::2], idx) + assert_array_almost_equal(out, [[0, 0], [0, 4], [0, 7]]) + assert_array_almost_equal(out, ndimage.shift(data[:, ::2], (1, 1))) + + def test_map_coordinates_endianness_with_output_parameter(self): + # output parameter given as array or dtype with either endianness + # see issue #4127 + data = numpy.array([[1, 2], [7, 6]]) + expected = numpy.array([[0, 0], [0, 1]]) + idx = numpy.indices(data.shape) + idx -= 1 + for out in [ + data.dtype, + data.dtype.newbyteorder(), + numpy.empty_like(expected), + numpy.empty_like(expected).astype(expected.dtype.newbyteorder()) + ]: + returned = ndimage.map_coordinates(data, idx, output=out) + result = out if returned is None else returned + assert_array_almost_equal(result, expected) + + def test_map_coordinates_with_string_output(self): + data = numpy.array([[1]]) + idx = numpy.indices(data.shape) + out = ndimage.map_coordinates(data, idx, output='f') + assert_(out.dtype is numpy.dtype('f')) + assert_array_almost_equal(out, [[1]]) + + @pytest.mark.skipif('win32' in sys.platform or numpy.intp(0).itemsize < 8, + reason='do not run on 32 bit or windows ' + '(no sparse memory)') + def test_map_coordinates_large_data(self): + # check crash on large data + try: + n = 30000 + a = numpy.empty(n**2, dtype=numpy.float32).reshape(n, n) + # fill the part we might read + a[n - 3:, n - 3:] = 0 + ndimage.map_coordinates(a, [[n - 1.5], [n - 1.5]], order=1) + except MemoryError as e: + raise pytest.skip('Not enough memory available') from e + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform01(self, order): + data = numpy.array([1]) + out = ndimage.affine_transform(data, [[1]], order=order) + assert_array_almost_equal(out, [1]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform02(self, order): + data = numpy.ones([4]) + out = ndimage.affine_transform(data, [[1]], order=order) + assert_array_almost_equal(out, [1, 1, 1, 1]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform03(self, order): + data = numpy.ones([4]) + out = ndimage.affine_transform(data, [[1]], -1, order=order) + assert_array_almost_equal(out, [0, 1, 1, 1]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform04(self, order): + data = numpy.array([4, 1, 3, 2]) + out = ndimage.affine_transform(data, [[1]], -1, order=order) + assert_array_almost_equal(out, [0, 4, 1, 3]) + + @pytest.mark.parametrize('order', range(0, 6)) + @pytest.mark.parametrize('dtype', [numpy.float64, numpy.complex128]) + def test_affine_transform05(self, order, dtype): + data = numpy.array([[1, 1, 1, 1], + [1, 1, 1, 1], + [1, 1, 1, 1]], dtype=dtype) + expected = numpy.array([[0, 1, 1, 1], + [0, 1, 1, 1], + [0, 1, 1, 1]], dtype=dtype) + if data.dtype.kind == 'c': + data -= 1j * data + expected -= 1j * expected + out = ndimage.affine_transform(data, [[1, 0], [0, 1]], + [0, -1], order=order) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform06(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + out = ndimage.affine_transform(data, [[1, 0], [0, 1]], + [0, -1], 
order=order) + assert_array_almost_equal(out, [[0, 4, 1, 3], + [0, 7, 6, 8], + [0, 3, 5, 3]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform07(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + out = ndimage.affine_transform(data, [[1, 0], [0, 1]], + [-1, 0], order=order) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [4, 1, 3, 2], + [7, 6, 8, 5]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform08(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + out = ndimage.affine_transform(data, [[1, 0], [0, 1]], + [-1, -1], order=order) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform09(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + if (order > 1): + filtered = ndimage.spline_filter(data, order=order) + else: + filtered = data + out = ndimage.affine_transform(filtered, [[1, 0], [0, 1]], + [-1, -1], order=order, + prefilter=False) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform10(self, order): + data = numpy.ones([2], numpy.float64) + out = ndimage.affine_transform(data, [[0.5]], output_shape=(4,), + order=order) + assert_array_almost_equal(out, [1, 1, 1, 0]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform11(self, order): + data = [1, 5, 2, 6, 3, 7, 4, 4] + out = ndimage.affine_transform(data, [[2]], 0, (4,), order=order) + assert_array_almost_equal(out, [1, 2, 3, 4]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform12(self, order): + data = [1, 2, 3, 4] + out = ndimage.affine_transform(data, [[0.5]], 0, (8,), order=order) + assert_array_almost_equal(out[::2], [1, 2, 3, 4]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform13(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9.0, 10, 11, 12]] + out = ndimage.affine_transform(data, [[1, 0], [0, 2]], 0, (3, 2), + order=order) + assert_array_almost_equal(out, [[1, 3], [5, 7], [9, 11]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform14(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + out = ndimage.affine_transform(data, [[2, 0], [0, 1]], 0, (1, 4), + order=order) + assert_array_almost_equal(out, [[1, 2, 3, 4]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform15(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + out = ndimage.affine_transform(data, [[2, 0], [0, 2]], 0, (1, 2), + order=order) + assert_array_almost_equal(out, [[1, 3]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform16(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + out = ndimage.affine_transform(data, [[1, 0.0], [0, 0.5]], 0, + (3, 8), order=order) + assert_array_almost_equal(out[..., ::2], data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform17(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + out = ndimage.affine_transform(data, [[0.5, 0], [0, 1]], 0, + (6, 4), order=order) + assert_array_almost_equal(out[::2, ...], data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform18(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + out = 
ndimage.affine_transform(data, [[0.5, 0], [0, 0.5]], 0, + (6, 8), order=order) + assert_array_almost_equal(out[::2, ::2], data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform19(self, order): + data = numpy.array([[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]], numpy.float64) + out = ndimage.affine_transform(data, [[0.5, 0], [0, 0.5]], 0, + (6, 8), order=order) + out = ndimage.affine_transform(out, [[2.0, 0], [0, 2.0]], 0, + (3, 4), order=order) + assert_array_almost_equal(out, data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform20(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + out = ndimage.affine_transform(data, [[0], [2]], 0, (2,), + order=order) + assert_array_almost_equal(out, [1, 3]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform21(self, order): + data = [[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]] + out = ndimage.affine_transform(data, [[2], [0]], 0, (2,), + order=order) + assert_array_almost_equal(out, [1, 9]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform22(self, order): + # shift and offset interaction; see issue #1547 + data = numpy.array([4, 1, 3, 2]) + out = ndimage.affine_transform(data, [[2]], [-1], (3,), + order=order) + assert_array_almost_equal(out, [0, 1, 2]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform23(self, order): + # shift and offset interaction; see issue #1547 + data = numpy.array([4, 1, 3, 2]) + out = ndimage.affine_transform(data, [[0.5]], [-1], (8,), + order=order) + assert_array_almost_equal(out[::2], [0, 4, 1, 3]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform24(self, order): + # consistency between diagonal and non-diagonal case; see issue #1547 + data = numpy.array([4, 1, 3, 2]) + with suppress_warnings() as sup: + sup.filter(UserWarning, + 'The behavior of affine_transform with a 1-D array .* ' + 'has changed') + out1 = ndimage.affine_transform(data, [2], -1, order=order) + out2 = ndimage.affine_transform(data, [[2]], -1, order=order) + assert_array_almost_equal(out1, out2) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform25(self, order): + # consistency between diagonal and non-diagonal case; see issue #1547 + data = numpy.array([4, 1, 3, 2]) + with suppress_warnings() as sup: + sup.filter(UserWarning, + 'The behavior of affine_transform with a 1-D array .* ' + 'has changed') + out1 = ndimage.affine_transform(data, [0.5], -1, order=order) + out2 = ndimage.affine_transform(data, [[0.5]], -1, order=order) + assert_array_almost_equal(out1, out2) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform26(self, order): + # test homogeneous coordinates + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + if (order > 1): + filtered = ndimage.spline_filter(data, order=order) + else: + filtered = data + tform_original = numpy.eye(2) + offset_original = -numpy.ones((2, 1)) + tform_h1 = numpy.hstack((tform_original, offset_original)) + tform_h2 = numpy.vstack((tform_h1, [[0, 0, 1]])) + out1 = ndimage.affine_transform(filtered, tform_original, + offset_original.ravel(), + order=order, prefilter=False) + out2 = ndimage.affine_transform(filtered, tform_h1, order=order, + prefilter=False) + out3 = ndimage.affine_transform(filtered, tform_h2, order=order, + prefilter=False) + for out in [out1, out2, out3]: + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 
6, 8]]) + + def test_affine_transform27(self): + # test valid homogeneous transformation matrix + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + tform_h1 = numpy.hstack((numpy.eye(2), -numpy.ones((2, 1)))) + tform_h2 = numpy.vstack((tform_h1, [[5, 2, 1]])) + assert_raises(ValueError, ndimage.affine_transform, data, tform_h2) + + def test_affine_transform_1d_endianness_with_output_parameter(self): + # 1d affine transform given output ndarray or dtype with + # either endianness. see issue #7388 + data = numpy.ones((2, 2)) + for out in [numpy.empty_like(data), + numpy.empty_like(data).astype(data.dtype.newbyteorder()), + data.dtype, data.dtype.newbyteorder()]: + with suppress_warnings() as sup: + sup.filter(UserWarning, + 'The behavior of affine_transform with a 1-D array ' + '.* has changed') + returned = ndimage.affine_transform(data, [1, 1], output=out) + result = out if returned is None else returned + assert_array_almost_equal(result, [[1, 1], [1, 1]]) + + def test_affine_transform_multi_d_endianness_with_output_parameter(self): + # affine transform given output ndarray or dtype with either endianness + # see issue #4127 + data = numpy.array([1]) + for out in [data.dtype, data.dtype.newbyteorder(), + numpy.empty_like(data), + numpy.empty_like(data).astype(data.dtype.newbyteorder())]: + returned = ndimage.affine_transform(data, [[1]], output=out) + result = out if returned is None else returned + assert_array_almost_equal(result, [1]) + + def test_affine_transform_output_shape(self): + # don't require output_shape when out of a different size is given + data = numpy.arange(8, dtype=numpy.float64) + out = numpy.ones((16,)) + + ndimage.affine_transform(data, [[1]], output=out) + assert_array_almost_equal(out[:8], data) + + # mismatched output shape raises an error + with pytest.raises(RuntimeError): + ndimage.affine_transform( + data, [[1]], output=out, output_shape=(12,)) + + def test_affine_transform_with_string_output(self): + data = numpy.array([1]) + out = ndimage.affine_transform(data, [[1]], output='f') + assert_(out.dtype is numpy.dtype('f')) + assert_array_almost_equal(out, [1]) + + @pytest.mark.parametrize('shift', + [(1, 0), (0, 1), (-1, 1), (3, -5), (2, 7)]) + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform_shift_via_grid_wrap(self, shift, order): + # For mode 'grid-wrap', integer shifts should match numpy.roll + x = numpy.array([[0, 1], + [2, 3]]) + affine = numpy.zeros((2, 3)) + affine[:2, :2] = numpy.eye(2) + affine[:, 2] = shift + assert_array_almost_equal( + ndimage.affine_transform(x, affine, mode='grid-wrap', order=order), + numpy.roll(x, shift, axis=(0, 1)), + ) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_affine_transform_shift_reflect(self, order): + # shift by x.shape results in reflection + x = numpy.array([[0, 1, 2], + [3, 4, 5]]) + affine = numpy.zeros((2, 3)) + affine[:2, :2] = numpy.eye(2) + affine[:, 2] = x.shape + assert_array_almost_equal( + ndimage.affine_transform(x, affine, mode='reflect', order=order), + x[::-1, ::-1], + ) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift01(self, order): + data = numpy.array([1]) + out = ndimage.shift(data, [1], order=order) + assert_array_almost_equal(out, [0]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift02(self, order): + data = numpy.ones([4]) + out = ndimage.shift(data, [1], order=order) + assert_array_almost_equal(out, [0, 1, 1, 1]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift03(self, 
order): + data = numpy.ones([4]) + out = ndimage.shift(data, -1, order=order) + assert_array_almost_equal(out, [1, 1, 1, 0]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift04(self, order): + data = numpy.array([4, 1, 3, 2]) + out = ndimage.shift(data, 1, order=order) + assert_array_almost_equal(out, [0, 4, 1, 3]) + + @pytest.mark.parametrize('order', range(0, 6)) + @pytest.mark.parametrize('dtype', [numpy.float64, numpy.complex128]) + def test_shift05(self, order, dtype): + data = numpy.array([[1, 1, 1, 1], + [1, 1, 1, 1], + [1, 1, 1, 1]], dtype=dtype) + expected = numpy.array([[0, 1, 1, 1], + [0, 1, 1, 1], + [0, 1, 1, 1]], dtype=dtype) + if data.dtype.kind == 'c': + data -= 1j * data + expected -= 1j * expected + out = ndimage.shift(data, [0, 1], order=order) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + @pytest.mark.parametrize('mode', ['constant', 'grid-constant']) + @pytest.mark.parametrize('dtype', [numpy.float64, numpy.complex128]) + def test_shift_with_nonzero_cval(self, order, mode, dtype): + data = numpy.array([[1, 1, 1, 1], + [1, 1, 1, 1], + [1, 1, 1, 1]], dtype=dtype) + + expected = numpy.array([[0, 1, 1, 1], + [0, 1, 1, 1], + [0, 1, 1, 1]], dtype=dtype) + + if data.dtype.kind == 'c': + data -= 1j * data + expected -= 1j * expected + cval = 5.0 + expected[:, 0] = cval # specific to shift of [0, 1] used below + out = ndimage.shift(data, [0, 1], order=order, mode=mode, cval=cval) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift06(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + out = ndimage.shift(data, [0, 1], order=order) + assert_array_almost_equal(out, [[0, 4, 1, 3], + [0, 7, 6, 8], + [0, 3, 5, 3]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift07(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + out = ndimage.shift(data, [1, 0], order=order) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [4, 1, 3, 2], + [7, 6, 8, 5]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift08(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + out = ndimage.shift(data, [1, 1], order=order) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift09(self, order): + data = numpy.array([[4, 1, 3, 2], + [7, 6, 8, 5], + [3, 5, 3, 6]]) + if (order > 1): + filtered = ndimage.spline_filter(data, order=order) + else: + filtered = data + out = ndimage.shift(filtered, [1, 1], order=order, prefilter=False) + assert_array_almost_equal(out, [[0, 0, 0, 0], + [0, 4, 1, 3], + [0, 7, 6, 8]]) + + @pytest.mark.parametrize('shift', + [(1, 0), (0, 1), (-1, 1), (3, -5), (2, 7)]) + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift_grid_wrap(self, shift, order): + # For mode 'grid-wrap', integer shifts should match numpy.roll + x = numpy.array([[0, 1], + [2, 3]]) + assert_array_almost_equal( + ndimage.shift(x, shift, mode='grid-wrap', order=order), + numpy.roll(x, shift, axis=(0, 1)), + ) + + @pytest.mark.parametrize('shift', + [(1, 0), (0, 1), (-1, 1), (3, -5), (2, 7)]) + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift_grid_constant1(self, shift, order): + # For integer shifts, 'constant' and 'grid-constant' should be equal + x = numpy.arange(20).reshape((5, 4)) + assert_array_almost_equal( + ndimage.shift(x, shift, 
mode='grid-constant', order=order), + ndimage.shift(x, shift, mode='constant', order=order), + ) + + def test_shift_grid_constant_order1(self): + x = numpy.array([[1, 2, 3], + [4, 5, 6]], dtype=float) + expected_result = numpy.array([[0.25, 0.75, 1.25], + [1.25, 3.00, 4.00]]) + assert_array_almost_equal( + ndimage.shift(x, (0.5, 0.5), mode='grid-constant', order=1), + expected_result, + ) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_shift_reflect(self, order): + # shift by x.shape results in reflection + x = numpy.array([[0, 1, 2], + [3, 4, 5]]) + assert_array_almost_equal( + ndimage.shift(x, x.shape, mode='reflect', order=order), + x[::-1, ::-1], + ) + + @pytest.mark.parametrize('order', range(0, 6)) + @pytest.mark.parametrize('prefilter', [False, True]) + def test_shift_nearest_boundary(self, order, prefilter): + # verify that shifting at least order // 2 beyond the end of the array + # gives a value equal to the edge value. + x = numpy.arange(16) + kwargs = dict(mode='nearest', order=order, prefilter=prefilter) + assert_array_almost_equal( + ndimage.shift(x, order // 2 + 1, **kwargs)[0], x[0], + ) + assert_array_almost_equal( + ndimage.shift(x, -order // 2 - 1, **kwargs)[-1], x[-1], + ) + + @pytest.mark.parametrize('mode', ['grid-constant', 'grid-wrap', 'nearest', + 'mirror', 'reflect']) + @pytest.mark.parametrize('order', range(6)) + def test_shift_vs_padded(self, order, mode): + x = numpy.arange(144, dtype=float).reshape(12, 12) + shift = (0.4, -2.3) + + # manually pad and then extract center to get expected result + npad = 32 + pad_mode = ndimage_to_numpy_mode.get(mode) + xp = numpy.pad(x, npad, mode=pad_mode) + center_slice = tuple([slice(npad, -npad)] * x.ndim) + expected_result = ndimage.shift( + xp, shift, mode=mode, order=order)[center_slice] + + assert_allclose( + ndimage.shift(x, shift, mode=mode, order=order), + expected_result, + rtol=1e-7, + ) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_zoom1(self, order): + for z in [2, [2, 2]]: + arr = numpy.array(list(range(25))).reshape((5, 5)).astype(float) + arr = ndimage.zoom(arr, z, order=order) + assert_equal(arr.shape, (10, 10)) + assert_(numpy.all(arr[-1, :] != 0)) + assert_(numpy.all(arr[-1, :] >= (20 - eps))) + assert_(numpy.all(arr[0, :] <= (5 + eps))) + assert_(numpy.all(arr >= (0 - eps))) + assert_(numpy.all(arr <= (24 + eps))) + + def test_zoom2(self): + arr = numpy.arange(12).reshape((3, 4)) + out = ndimage.zoom(ndimage.zoom(arr, 2), 0.5) + assert_array_equal(out, arr) + + def test_zoom3(self): + arr = numpy.array([[1, 2]]) + out1 = ndimage.zoom(arr, (2, 1)) + out2 = ndimage.zoom(arr, (1, 2)) + + assert_array_almost_equal(out1, numpy.array([[1, 2], [1, 2]])) + assert_array_almost_equal(out2, numpy.array([[1, 1, 2, 2]])) + + @pytest.mark.parametrize('order', range(0, 6)) + @pytest.mark.parametrize('dtype', [numpy.float64, numpy.complex128]) + def test_zoom_affine01(self, order, dtype): + data = numpy.asarray([[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]], dtype=dtype) + if data.dtype.kind == 'c': + data -= 1j * data + with suppress_warnings() as sup: + sup.filter(UserWarning, + 'The behavior of affine_transform with a 1-D array .* ' + 'has changed') + out = ndimage.affine_transform(data, [0.5, 0.5], 0, + (6, 8), order=order) + assert_array_almost_equal(out[::2, ::2], data) + + def test_zoom_infinity(self): + # Ticket #1419 regression test + dim = 8 + ndimage.zoom(numpy.zeros((dim, dim)), 1. 
/ dim, mode='nearest') + + def test_zoom_zoomfactor_one(self): + # Ticket #1122 regression test + arr = numpy.zeros((1, 5, 5)) + zoom = (1.0, 2.0, 2.0) + + out = ndimage.zoom(arr, zoom, cval=7) + ref = numpy.zeros((1, 10, 10)) + assert_array_almost_equal(out, ref) + + def test_zoom_output_shape_roundoff(self): + arr = numpy.zeros((3, 11, 25)) + zoom = (4.0 / 3, 15.0 / 11, 29.0 / 25) + out = ndimage.zoom(arr, zoom) + assert_array_equal(out.shape, (4, 15, 29)) + + @pytest.mark.parametrize('zoom', [(1, 1), (3, 5), (8, 2), (8, 8)]) + @pytest.mark.parametrize('mode', ['nearest', 'constant', 'wrap', 'reflect', + 'mirror', 'grid-wrap', 'grid-mirror', + 'grid-constant']) + def test_zoom_by_int_order0(self, zoom, mode): + # order 0 zoom should be the same as replication via numpy.kron + # Note: This is not True for general x shapes when grid_mode is False, + # but works here for all modes because the size ratio happens to + # always be an integer when x.shape = (2, 2). + x = numpy.array([[0, 1], + [2, 3]], dtype=float) + # x = numpy.arange(16, dtype=float).reshape(4, 4) + assert_array_almost_equal( + ndimage.zoom(x, zoom, order=0, mode=mode), + numpy.kron(x, numpy.ones(zoom)) + ) + + @pytest.mark.parametrize('shape', [(2, 3), (4, 4)]) + @pytest.mark.parametrize('zoom', [(1, 1), (3, 5), (8, 2), (8, 8)]) + @pytest.mark.parametrize('mode', ['nearest', 'reflect', 'mirror', + 'grid-wrap', 'grid-constant']) + def test_zoom_grid_by_int_order0(self, shape, zoom, mode): + # When grid_mode is True, order 0 zoom should be the same as + # replication via numpy.kron. The only exceptions to this are the + # non-grid modes 'constant' and 'wrap'. + x = numpy.arange(numpy.prod(shape), dtype=float).reshape(shape) + assert_array_almost_equal( + ndimage.zoom(x, zoom, order=0, mode=mode, grid_mode=True), + numpy.kron(x, numpy.ones(zoom)) + ) + + @pytest.mark.parametrize('mode', ['constant', 'wrap']) + def test_zoom_grid_mode_warnings(self, mode): + # Warn on use of non-grid modes when grid_mode is True + x = numpy.arange(9, dtype=float).reshape((3, 3)) + with pytest.warns(UserWarning, + match="It is recommended to use mode"): + ndimage.zoom(x, 2, mode=mode, grid_mode=True), + + @pytest.mark.parametrize('order', range(0, 6)) + def test_rotate01(self, order): + data = numpy.array([[0, 0, 0, 0], + [0, 1, 1, 0], + [0, 0, 0, 0]], dtype=numpy.float64) + out = ndimage.rotate(data, 0, order=order) + assert_array_almost_equal(out, data) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_rotate02(self, order): + data = numpy.array([[0, 0, 0, 0], + [0, 1, 0, 0], + [0, 0, 0, 0]], dtype=numpy.float64) + expected = numpy.array([[0, 0, 0], + [0, 0, 0], + [0, 1, 0], + [0, 0, 0]], dtype=numpy.float64) + out = ndimage.rotate(data, 90, order=order) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + @pytest.mark.parametrize('dtype', [numpy.float64, numpy.complex128]) + def test_rotate03(self, order, dtype): + data = numpy.array([[0, 0, 0, 0, 0], + [0, 1, 1, 0, 0], + [0, 0, 0, 0, 0]], dtype=dtype) + expected = numpy.array([[0, 0, 0], + [0, 0, 0], + [0, 1, 0], + [0, 1, 0], + [0, 0, 0]], dtype=dtype) + if data.dtype.kind == 'c': + data -= 1j * data + expected -= 1j * expected + out = ndimage.rotate(data, 90, order=order) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_rotate04(self, order): + data = numpy.array([[0, 0, 0, 0, 0], + [0, 1, 1, 0, 0], + [0, 0, 0, 0, 0]], dtype=numpy.float64) + expected = numpy.array([[0, 0, 0, 0, 0], 
+ [0, 0, 1, 0, 0], + [0, 0, 1, 0, 0]], dtype=numpy.float64) + out = ndimage.rotate(data, 90, reshape=False, order=order) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_rotate05(self, order): + data = numpy.empty((4, 3, 3)) + for i in range(3): + data[:, :, i] = numpy.array([[0, 0, 0], + [0, 1, 0], + [0, 1, 0], + [0, 0, 0]], dtype=numpy.float64) + expected = numpy.array([[0, 0, 0, 0], + [0, 1, 1, 0], + [0, 0, 0, 0]], dtype=numpy.float64) + out = ndimage.rotate(data, 90, order=order) + for i in range(3): + assert_array_almost_equal(out[:, :, i], expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_rotate06(self, order): + data = numpy.empty((3, 4, 3)) + for i in range(3): + data[:, :, i] = numpy.array([[0, 0, 0, 0], + [0, 1, 1, 0], + [0, 0, 0, 0]], dtype=numpy.float64) + expected = numpy.array([[0, 0, 0], + [0, 1, 0], + [0, 1, 0], + [0, 0, 0]], dtype=numpy.float64) + out = ndimage.rotate(data, 90, order=order) + for i in range(3): + assert_array_almost_equal(out[:, :, i], expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_rotate07(self, order): + data = numpy.array([[[0, 0, 0, 0, 0], + [0, 1, 1, 0, 0], + [0, 0, 0, 0, 0]]] * 2, dtype=numpy.float64) + data = data.transpose() + expected = numpy.array([[[0, 0, 0], + [0, 1, 0], + [0, 1, 0], + [0, 0, 0], + [0, 0, 0]]] * 2, dtype=numpy.float64) + expected = expected.transpose([2, 1, 0]) + out = ndimage.rotate(data, 90, axes=(0, 1), order=order) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('order', range(0, 6)) + def test_rotate08(self, order): + data = numpy.array([[[0, 0, 0, 0, 0], + [0, 1, 1, 0, 0], + [0, 0, 0, 0, 0]]] * 2, dtype=numpy.float64) + data = data.transpose() + expected = numpy.array([[[0, 0, 1, 0, 0], + [0, 0, 1, 0, 0], + [0, 0, 0, 0, 0]]] * 2, dtype=numpy.float64) + expected = expected.transpose() + out = ndimage.rotate(data, 90, axes=(0, 1), reshape=False, order=order) + assert_array_almost_equal(out, expected) + + def test_rotate09(self): + data = numpy.array([[0, 0, 0, 0, 0], + [0, 1, 1, 0, 0], + [0, 0, 0, 0, 0]] * 2, dtype=numpy.float64) + with assert_raises(ValueError): + ndimage.rotate(data, 90, axes=(0, data.ndim)) + + def test_rotate10(self): + data = numpy.arange(45, dtype=numpy.float64).reshape((3, 5, 3)) + + # The output of ndimage.rotate before refactoring + expected = numpy.array([[[0.0, 0.0, 0.0], + [0.0, 0.0, 0.0], + [6.54914793, 7.54914793, 8.54914793], + [10.84520162, 11.84520162, 12.84520162], + [0.0, 0.0, 0.0]], + [[6.19286575, 7.19286575, 8.19286575], + [13.4730712, 14.4730712, 15.4730712], + [21.0, 22.0, 23.0], + [28.5269288, 29.5269288, 30.5269288], + [35.80713425, 36.80713425, 37.80713425]], + [[0.0, 0.0, 0.0], + [31.15479838, 32.15479838, 33.15479838], + [35.45085207, 36.45085207, 37.45085207], + [0.0, 0.0, 0.0], + [0.0, 0.0, 0.0]]]) + + out = ndimage.rotate(data, angle=12, reshape=False) + assert_array_almost_equal(out, expected) + + def test_rotate_exact_180(self): + a = numpy.tile(numpy.arange(5), (5, 1)) + b = ndimage.rotate(ndimage.rotate(a, 180), -180) + assert_equal(a, b) + + +def test_zoom_output_shape(): + """Ticket #643""" + x = numpy.arange(12).reshape((3, 4)) + ndimage.zoom(x, 2, output=numpy.zeros((6, 8))) diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_measurements.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_measurements.py new file mode 100644 index 
0000000000000000000000000000000000000000..135e9a72c94103cc378d87ac9a78e44342bfb55b --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_measurements.py @@ -0,0 +1,1409 @@ +import os.path + +import numpy as np +from numpy.testing import ( + assert_, + assert_allclose, + assert_almost_equal, + assert_array_almost_equal, + assert_array_equal, + assert_equal, + suppress_warnings, +) +from pytest import raises as assert_raises + +import scipy.ndimage as ndimage + + +from . import types + + +class Test_measurements_stats: + """ndimage._measurements._stats() is a utility used by other functions.""" + + def test_a(self): + x = [0, 1, 2, 6] + labels = [0, 0, 1, 1] + index = [0, 1] + for shp in [(4,), (2, 2)]: + x = np.array(x).reshape(shp) + labels = np.array(labels).reshape(shp) + counts, sums = ndimage._measurements._stats( + x, labels=labels, index=index) + assert_array_equal(counts, [2, 2]) + assert_array_equal(sums, [1.0, 8.0]) + + def test_b(self): + # Same data as test_a, but different labels. The label 9 exceeds the + # length of 'labels', so this test will follow a different code path. + x = [0, 1, 2, 6] + labels = [0, 0, 9, 9] + index = [0, 9] + for shp in [(4,), (2, 2)]: + x = np.array(x).reshape(shp) + labels = np.array(labels).reshape(shp) + counts, sums = ndimage._measurements._stats( + x, labels=labels, index=index) + assert_array_equal(counts, [2, 2]) + assert_array_equal(sums, [1.0, 8.0]) + + def test_a_centered(self): + x = [0, 1, 2, 6] + labels = [0, 0, 1, 1] + index = [0, 1] + for shp in [(4,), (2, 2)]: + x = np.array(x).reshape(shp) + labels = np.array(labels).reshape(shp) + counts, sums, centers = ndimage._measurements._stats( + x, labels=labels, index=index, centered=True) + assert_array_equal(counts, [2, 2]) + assert_array_equal(sums, [1.0, 8.0]) + assert_array_equal(centers, [0.5, 8.0]) + + def test_b_centered(self): + x = [0, 1, 2, 6] + labels = [0, 0, 9, 9] + index = [0, 9] + for shp in [(4,), (2, 2)]: + x = np.array(x).reshape(shp) + labels = np.array(labels).reshape(shp) + counts, sums, centers = ndimage._measurements._stats( + x, labels=labels, index=index, centered=True) + assert_array_equal(counts, [2, 2]) + assert_array_equal(sums, [1.0, 8.0]) + assert_array_equal(centers, [0.5, 8.0]) + + def test_nonint_labels(self): + x = [0, 1, 2, 6] + labels = [0.0, 0.0, 9.0, 9.0] + index = [0.0, 9.0] + for shp in [(4,), (2, 2)]: + x = np.array(x).reshape(shp) + labels = np.array(labels).reshape(shp) + counts, sums, centers = ndimage._measurements._stats( + x, labels=labels, index=index, centered=True) + assert_array_equal(counts, [2, 2]) + assert_array_equal(sums, [1.0, 8.0]) + assert_array_equal(centers, [0.5, 8.0]) + + +class Test_measurements_select: + """ndimage._measurements._select() is a utility used by other functions.""" + + def test_basic(self): + x = [0, 1, 6, 2] + cases = [ + ([0, 0, 1, 1], [0, 1]), # "Small" integer labels + ([0, 0, 9, 9], [0, 9]), # A label larger than len(labels) + ([0.0, 0.0, 7.0, 7.0], [0.0, 7.0]), # Non-integer labels + ] + for labels, index in cases: + result = ndimage._measurements._select( + x, labels=labels, index=index) + assert_(len(result) == 0) + result = ndimage._measurements._select( + x, labels=labels, index=index, find_max=True) + assert_(len(result) == 1) + assert_array_equal(result[0], [1, 6]) + result = ndimage._measurements._select( + x, labels=labels, index=index, find_min=True) + assert_(len(result) == 1) + assert_array_equal(result[0], [0, 2]) + result = ndimage._measurements._select( + x, 
labels=labels, index=index, find_min=True, + find_min_positions=True) + assert_(len(result) == 2) + assert_array_equal(result[0], [0, 2]) + assert_array_equal(result[1], [0, 3]) + assert_equal(result[1].dtype.kind, 'i') + result = ndimage._measurements._select( + x, labels=labels, index=index, find_max=True, + find_max_positions=True) + assert_(len(result) == 2) + assert_array_equal(result[0], [1, 6]) + assert_array_equal(result[1], [1, 2]) + assert_equal(result[1].dtype.kind, 'i') + + +def test_label01(): + data = np.ones([]) + out, n = ndimage.label(data) + assert_array_almost_equal(out, 1) + assert_equal(n, 1) + + +def test_label02(): + data = np.zeros([]) + out, n = ndimage.label(data) + assert_array_almost_equal(out, 0) + assert_equal(n, 0) + + +def test_label03(): + data = np.ones([1]) + out, n = ndimage.label(data) + assert_array_almost_equal(out, [1]) + assert_equal(n, 1) + + +def test_label04(): + data = np.zeros([1]) + out, n = ndimage.label(data) + assert_array_almost_equal(out, [0]) + assert_equal(n, 0) + + +def test_label05(): + data = np.ones([5]) + out, n = ndimage.label(data) + assert_array_almost_equal(out, [1, 1, 1, 1, 1]) + assert_equal(n, 1) + + +def test_label06(): + data = np.array([1, 0, 1, 1, 0, 1]) + out, n = ndimage.label(data) + assert_array_almost_equal(out, [1, 0, 2, 2, 0, 3]) + assert_equal(n, 3) + + +def test_label07(): + data = np.array([[0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0]]) + out, n = ndimage.label(data) + assert_array_almost_equal(out, [[0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0]]) + assert_equal(n, 0) + + +def test_label08(): + data = np.array([[1, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0], + [1, 1, 0, 0, 0, 0], + [1, 1, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 0]]) + out, n = ndimage.label(data) + assert_array_almost_equal(out, [[1, 0, 0, 0, 0, 0], + [0, 0, 2, 2, 0, 0], + [0, 0, 2, 2, 2, 0], + [3, 3, 0, 0, 0, 0], + [3, 3, 0, 0, 0, 0], + [0, 0, 0, 4, 4, 0]]) + assert_equal(n, 4) + + +def test_label09(): + data = np.array([[1, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0], + [1, 1, 0, 0, 0, 0], + [1, 1, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 0]]) + struct = ndimage.generate_binary_structure(2, 2) + out, n = ndimage.label(data, struct) + assert_array_almost_equal(out, [[1, 0, 0, 0, 0, 0], + [0, 0, 2, 2, 0, 0], + [0, 0, 2, 2, 2, 0], + [2, 2, 0, 0, 0, 0], + [2, 2, 0, 0, 0, 0], + [0, 0, 0, 3, 3, 0]]) + assert_equal(n, 3) + + +def test_label10(): + data = np.array([[0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 1, 0], + [0, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0]]) + struct = ndimage.generate_binary_structure(2, 2) + out, n = ndimage.label(data, struct) + assert_array_almost_equal(out, [[0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 1, 0], + [0, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0]]) + assert_equal(n, 1) + + +def test_label11(): + for type in types: + data = np.array([[1, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0], + [1, 1, 0, 0, 0, 0], + [1, 1, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 0]], type) + out, n = ndimage.label(data) + expected = [[1, 0, 0, 0, 0, 0], + [0, 0, 2, 2, 0, 0], + [0, 0, 2, 2, 2, 0], + [3, 3, 0, 0, 0, 0], + [3, 3, 0, 0, 0, 0], + [0, 0, 0, 4, 4, 0]] + assert_array_almost_equal(out, expected) + assert_equal(n, 4) + + +def test_label11_inplace(): + for type in types: + data = np.array([[1, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0], + [1, 1, 0, 0, 0, 0], + [1, 1, 
0, 0, 0, 0], + [0, 0, 0, 1, 1, 0]], type) + n = ndimage.label(data, output=data) + expected = [[1, 0, 0, 0, 0, 0], + [0, 0, 2, 2, 0, 0], + [0, 0, 2, 2, 2, 0], + [3, 3, 0, 0, 0, 0], + [3, 3, 0, 0, 0, 0], + [0, 0, 0, 4, 4, 0]] + assert_array_almost_equal(data, expected) + assert_equal(n, 4) + + +def test_label12(): + for type in types: + data = np.array([[0, 0, 0, 0, 1, 1], + [0, 0, 0, 0, 0, 1], + [0, 0, 1, 0, 1, 1], + [0, 0, 1, 1, 1, 1], + [0, 0, 0, 1, 1, 0]], type) + out, n = ndimage.label(data) + expected = [[0, 0, 0, 0, 1, 1], + [0, 0, 0, 0, 0, 1], + [0, 0, 1, 0, 1, 1], + [0, 0, 1, 1, 1, 1], + [0, 0, 0, 1, 1, 0]] + assert_array_almost_equal(out, expected) + assert_equal(n, 1) + + +def test_label13(): + for type in types: + data = np.array([[1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1], + [1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1], + [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], + [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], + type) + out, n = ndimage.label(data) + expected = [[1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1], + [1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1], + [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1], + [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]] + assert_array_almost_equal(out, expected) + assert_equal(n, 1) + + +def test_label_output_typed(): + data = np.ones([5]) + for t in types: + output = np.zeros([5], dtype=t) + n = ndimage.label(data, output=output) + assert_array_almost_equal(output, 1) + assert_equal(n, 1) + + +def test_label_output_dtype(): + data = np.ones([5]) + for t in types: + output, n = ndimage.label(data, output=t) + assert_array_almost_equal(output, 1) + assert output.dtype == t + + +def test_label_output_wrong_size(): + data = np.ones([5]) + for t in types: + output = np.zeros([10], t) + assert_raises((RuntimeError, ValueError), + ndimage.label, data, output=output) + + +def test_label_structuring_elements(): + data = np.loadtxt(os.path.join(os.path.dirname( + __file__), "data", "label_inputs.txt")) + strels = np.loadtxt(os.path.join( + os.path.dirname(__file__), "data", "label_strels.txt")) + results = np.loadtxt(os.path.join( + os.path.dirname(__file__), "data", "label_results.txt")) + data = data.reshape((-1, 7, 7)) + strels = strels.reshape((-1, 3, 3)) + results = results.reshape((-1, 7, 7)) + r = 0 + for i in range(data.shape[0]): + d = data[i, :, :] + for j in range(strels.shape[0]): + s = strels[j, :, :] + assert_equal(ndimage.label(d, s)[0], results[r, :, :]) + r += 1 + + +def test_ticket_742(): + def SE(img, thresh=.7, size=4): + mask = img > thresh + rank = len(mask.shape) + la, co = ndimage.label(mask, + ndimage.generate_binary_structure(rank, rank)) + _ = ndimage.find_objects(la) + + if np.dtype(np.intp) != np.dtype('i'): + shape = (3, 1240, 1240) + a = np.random.rand(np.prod(shape)).reshape(shape) + # shouldn't crash + SE(a) + + +def test_gh_issue_3025(): + """Github issue #3025 - improper merging of labels""" + d = np.zeros((60, 320)) + d[:, :257] = 1 + d[:, 260:] = 1 + d[36, 257] = 1 + d[35, 258] = 1 + d[35, 259] = 1 + assert ndimage.label(d, np.ones((3, 3)))[1] == 1 + + +def test_label_default_dtype(): + test_array = np.random.rand(10, 10) + label, no_features = ndimage.label(test_array > 0.5) + assert_(label.dtype in (np.int32, np.int64)) + # Shouldn't raise an exception + ndimage.find_objects(label) + + +def test_find_objects01(): + data = np.ones([], dtype=int) + out = ndimage.find_objects(data) + assert_(out == [()]) + + +def test_find_objects02(): + data = np.zeros([], dtype=int) + out = ndimage.find_objects(data) + assert_(out == []) + + +def test_find_objects03(): + data = np.ones([1], dtype=int) + out = 
ndimage.find_objects(data) + assert_equal(out, [(slice(0, 1, None),)]) + + +def test_find_objects04(): + data = np.zeros([1], dtype=int) + out = ndimage.find_objects(data) + assert_equal(out, []) + + +def test_find_objects05(): + data = np.ones([5], dtype=int) + out = ndimage.find_objects(data) + assert_equal(out, [(slice(0, 5, None),)]) + + +def test_find_objects06(): + data = np.array([1, 0, 2, 2, 0, 3]) + out = ndimage.find_objects(data) + assert_equal(out, [(slice(0, 1, None),), + (slice(2, 4, None),), + (slice(5, 6, None),)]) + + +def test_find_objects07(): + data = np.array([[0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0]]) + out = ndimage.find_objects(data) + assert_equal(out, []) + + +def test_find_objects08(): + data = np.array([[1, 0, 0, 0, 0, 0], + [0, 0, 2, 2, 0, 0], + [0, 0, 2, 2, 2, 0], + [3, 3, 0, 0, 0, 0], + [3, 3, 0, 0, 0, 0], + [0, 0, 0, 4, 4, 0]]) + out = ndimage.find_objects(data) + assert_equal(out, [(slice(0, 1, None), slice(0, 1, None)), + (slice(1, 3, None), slice(2, 5, None)), + (slice(3, 5, None), slice(0, 2, None)), + (slice(5, 6, None), slice(3, 5, None))]) + + +def test_find_objects09(): + data = np.array([[1, 0, 0, 0, 0, 0], + [0, 0, 2, 2, 0, 0], + [0, 0, 2, 2, 2, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 4, 4, 0]]) + out = ndimage.find_objects(data) + assert_equal(out, [(slice(0, 1, None), slice(0, 1, None)), + (slice(1, 3, None), slice(2, 5, None)), + None, + (slice(5, 6, None), slice(3, 5, None))]) + + +def test_value_indices01(): + "Test dictionary keys and entries" + data = np.array([[1, 0, 0, 0, 0, 0], + [0, 0, 2, 2, 0, 0], + [0, 0, 2, 2, 2, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0], + [0, 0, 0, 4, 4, 0]]) + vi = ndimage.value_indices(data, ignore_value=0) + true_keys = [1, 2, 4] + assert_equal(list(vi.keys()), true_keys) + + truevi = {} + for k in true_keys: + truevi[k] = np.where(data == k) + + vi = ndimage.value_indices(data, ignore_value=0) + assert_equal(vi, truevi) + + +def test_value_indices02(): + "Test input checking" + data = np.zeros((5, 4), dtype=np.float32) + msg = "Parameter 'arr' must be an integer array" + with assert_raises(ValueError, match=msg): + ndimage.value_indices(data) + + +def test_value_indices03(): + "Test different input array shapes, from 1-D to 4-D" + for shape in [(36,), (18, 2), (3, 3, 4), (3, 3, 2, 2)]: + a = np.array((12*[1]+12*[2]+12*[3]), dtype=np.int32).reshape(shape) + trueKeys = np.unique(a) + vi = ndimage.value_indices(a) + assert_equal(list(vi.keys()), list(trueKeys)) + for k in trueKeys: + trueNdx = np.where(a == k) + assert_equal(vi[k], trueNdx) + + +def test_sum01(): + for type in types: + input = np.array([], type) + output = ndimage.sum(input) + assert_equal(output, 0.0) + + +def test_sum02(): + for type in types: + input = np.zeros([0, 4], type) + output = ndimage.sum(input) + assert_equal(output, 0.0) + + +def test_sum03(): + for type in types: + input = np.ones([], type) + output = ndimage.sum(input) + assert_almost_equal(output, 1.0) + + +def test_sum04(): + for type in types: + input = np.array([1, 2], type) + output = ndimage.sum(input) + assert_almost_equal(output, 3.0) + + +def test_sum05(): + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.sum(input) + assert_almost_equal(output, 10.0) + + +def test_sum06(): + labels = np.array([], bool) + for type in types: + input = np.array([], type) + output = ndimage.sum(input, labels=labels) + assert_equal(output, 0.0) + + 
+def test_sum07(): + labels = np.ones([0, 4], bool) + for type in types: + input = np.zeros([0, 4], type) + output = ndimage.sum(input, labels=labels) + assert_equal(output, 0.0) + + +def test_sum08(): + labels = np.array([1, 0], bool) + for type in types: + input = np.array([1, 2], type) + output = ndimage.sum(input, labels=labels) + assert_equal(output, 1.0) + + +def test_sum09(): + labels = np.array([1, 0], bool) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.sum(input, labels=labels) + assert_almost_equal(output, 4.0) + + +def test_sum10(): + labels = np.array([1, 0], bool) + input = np.array([[1, 2], [3, 4]], bool) + output = ndimage.sum(input, labels=labels) + assert_almost_equal(output, 2.0) + + +def test_sum11(): + labels = np.array([1, 2], np.int8) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.sum(input, labels=labels, + index=2) + assert_almost_equal(output, 6.0) + + +def test_sum12(): + labels = np.array([[1, 2], [2, 4]], np.int8) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.sum(input, labels=labels, index=[4, 8, 2]) + assert_array_almost_equal(output, [4.0, 0.0, 5.0]) + + +def test_sum_labels(): + labels = np.array([[1, 2], [2, 4]], np.int8) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output_sum = ndimage.sum(input, labels=labels, index=[4, 8, 2]) + output_labels = ndimage.sum_labels( + input, labels=labels, index=[4, 8, 2]) + + assert (output_sum == output_labels).all() + assert_array_almost_equal(output_labels, [4.0, 0.0, 5.0]) + + +def test_mean01(): + labels = np.array([1, 0], bool) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.mean(input, labels=labels) + assert_almost_equal(output, 2.0) + + +def test_mean02(): + labels = np.array([1, 0], bool) + input = np.array([[1, 2], [3, 4]], bool) + output = ndimage.mean(input, labels=labels) + assert_almost_equal(output, 1.0) + + +def test_mean03(): + labels = np.array([1, 2]) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.mean(input, labels=labels, + index=2) + assert_almost_equal(output, 3.0) + + +def test_mean04(): + labels = np.array([[1, 2], [2, 4]], np.int8) + with np.errstate(all='ignore'): + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.mean(input, labels=labels, + index=[4, 8, 2]) + assert_array_almost_equal(output[[0, 2]], [4.0, 2.5]) + assert_(np.isnan(output[1])) + + +def test_minimum01(): + labels = np.array([1, 0], bool) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.minimum(input, labels=labels) + assert_almost_equal(output, 1.0) + + +def test_minimum02(): + labels = np.array([1, 0], bool) + input = np.array([[2, 2], [2, 4]], bool) + output = ndimage.minimum(input, labels=labels) + assert_almost_equal(output, 1.0) + + +def test_minimum03(): + labels = np.array([1, 2]) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.minimum(input, labels=labels, + index=2) + assert_almost_equal(output, 2.0) + + +def test_minimum04(): + labels = np.array([[1, 2], [2, 3]]) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.minimum(input, labels=labels, + index=[2, 3, 8]) + assert_array_almost_equal(output, [2.0, 4.0, 0.0]) + + +def test_maximum01(): + labels = np.array([1, 0], bool) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.maximum(input, labels=labels) + 
assert_almost_equal(output, 3.0) + + +def test_maximum02(): + labels = np.array([1, 0], bool) + input = np.array([[2, 2], [2, 4]], bool) + output = ndimage.maximum(input, labels=labels) + assert_almost_equal(output, 1.0) + + +def test_maximum03(): + labels = np.array([1, 2]) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.maximum(input, labels=labels, + index=2) + assert_almost_equal(output, 4.0) + + +def test_maximum04(): + labels = np.array([[1, 2], [2, 3]]) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.maximum(input, labels=labels, + index=[2, 3, 8]) + assert_array_almost_equal(output, [3.0, 4.0, 0.0]) + + +def test_maximum05(): + # Regression test for ticket #501 (Trac) + x = np.array([-3, -2, -1]) + assert_equal(ndimage.maximum(x), -1) + + +def test_median01(): + a = np.array([[1, 2, 0, 1], + [5, 3, 0, 4], + [0, 0, 0, 7], + [9, 3, 0, 0]]) + labels = np.array([[1, 1, 0, 2], + [1, 1, 0, 2], + [0, 0, 0, 2], + [3, 3, 0, 0]]) + output = ndimage.median(a, labels=labels, index=[1, 2, 3]) + assert_array_almost_equal(output, [2.5, 4.0, 6.0]) + + +def test_median02(): + a = np.array([[1, 2, 0, 1], + [5, 3, 0, 4], + [0, 0, 0, 7], + [9, 3, 0, 0]]) + output = ndimage.median(a) + assert_almost_equal(output, 1.0) + + +def test_median03(): + a = np.array([[1, 2, 0, 1], + [5, 3, 0, 4], + [0, 0, 0, 7], + [9, 3, 0, 0]]) + labels = np.array([[1, 1, 0, 2], + [1, 1, 0, 2], + [0, 0, 0, 2], + [3, 3, 0, 0]]) + output = ndimage.median(a, labels=labels) + assert_almost_equal(output, 3.0) + + +def test_median_gh12836_bool(): + # test boolean addition fix on example from gh-12836 + a = np.asarray([1, 1], dtype=bool) + output = ndimage.median(a, labels=np.ones((2,)), index=[1]) + assert_array_almost_equal(output, [1.0]) + + +def test_median_no_int_overflow(): + # test integer overflow fix on example from gh-12836 + a = np.asarray([65, 70], dtype=np.int8) + output = ndimage.median(a, labels=np.ones((2,)), index=[1]) + assert_array_almost_equal(output, [67.5]) + + +def test_variance01(): + with np.errstate(all='ignore'): + for type in types: + input = np.array([], type) + with suppress_warnings() as sup: + sup.filter(RuntimeWarning, "Mean of empty slice") + output = ndimage.variance(input) + assert_(np.isnan(output)) + + +def test_variance02(): + for type in types: + input = np.array([1], type) + output = ndimage.variance(input) + assert_almost_equal(output, 0.0) + + +def test_variance03(): + for type in types: + input = np.array([1, 3], type) + output = ndimage.variance(input) + assert_almost_equal(output, 1.0) + + +def test_variance04(): + input = np.array([1, 0], bool) + output = ndimage.variance(input) + assert_almost_equal(output, 0.25) + + +def test_variance05(): + labels = [2, 2, 3] + for type in types: + input = np.array([1, 3, 8], type) + output = ndimage.variance(input, labels, 2) + assert_almost_equal(output, 1.0) + + +def test_variance06(): + labels = [2, 2, 3, 3, 4] + with np.errstate(all='ignore'): + for type in types: + input = np.array([1, 3, 8, 10, 8], type) + output = ndimage.variance(input, labels, [2, 3, 4]) + assert_array_almost_equal(output, [1.0, 1.0, 0.0]) + + +def test_standard_deviation01(): + with np.errstate(all='ignore'): + for type in types: + input = np.array([], type) + with suppress_warnings() as sup: + sup.filter(RuntimeWarning, "Mean of empty slice") + output = ndimage.standard_deviation(input) + assert_(np.isnan(output)) + + +def test_standard_deviation02(): + for type in types: + input = np.array([1], type) + 
output = ndimage.standard_deviation(input) + assert_almost_equal(output, 0.0) + + +def test_standard_deviation03(): + for type in types: + input = np.array([1, 3], type) + output = ndimage.standard_deviation(input) + assert_almost_equal(output, np.sqrt(1.0)) + + +def test_standard_deviation04(): + input = np.array([1, 0], bool) + output = ndimage.standard_deviation(input) + assert_almost_equal(output, 0.5) + + +def test_standard_deviation05(): + labels = [2, 2, 3] + for type in types: + input = np.array([1, 3, 8], type) + output = ndimage.standard_deviation(input, labels, 2) + assert_almost_equal(output, 1.0) + + +def test_standard_deviation06(): + labels = [2, 2, 3, 3, 4] + with np.errstate(all='ignore'): + for type in types: + input = np.array([1, 3, 8, 10, 8], type) + output = ndimage.standard_deviation(input, labels, [2, 3, 4]) + assert_array_almost_equal(output, [1.0, 1.0, 0.0]) + + +def test_standard_deviation07(): + labels = [1] + with np.errstate(all='ignore'): + for type in types: + input = np.array([-0.00619519], type) + output = ndimage.standard_deviation(input, labels, [1]) + assert_array_almost_equal(output, [0]) + + +def test_minimum_position01(): + labels = np.array([1, 0], bool) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.minimum_position(input, labels=labels) + assert_equal(output, (0, 0)) + + +def test_minimum_position02(): + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 0, 2], + [1, 5, 1, 1]], type) + output = ndimage.minimum_position(input) + assert_equal(output, (1, 2)) + + +def test_minimum_position03(): + input = np.array([[5, 4, 2, 5], + [3, 7, 0, 2], + [1, 5, 1, 1]], bool) + output = ndimage.minimum_position(input) + assert_equal(output, (1, 2)) + + +def test_minimum_position04(): + input = np.array([[5, 4, 2, 5], + [3, 7, 1, 2], + [1, 5, 1, 1]], bool) + output = ndimage.minimum_position(input) + assert_equal(output, (0, 0)) + + +def test_minimum_position05(): + labels = [1, 2, 0, 4] + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 0, 2], + [1, 5, 2, 3]], type) + output = ndimage.minimum_position(input, labels) + assert_equal(output, (2, 0)) + + +def test_minimum_position06(): + labels = [1, 2, 3, 4] + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 0, 2], + [1, 5, 1, 1]], type) + output = ndimage.minimum_position(input, labels, 2) + assert_equal(output, (0, 1)) + + +def test_minimum_position07(): + labels = [1, 2, 3, 4] + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 0, 2], + [1, 5, 1, 1]], type) + output = ndimage.minimum_position(input, labels, + [2, 3]) + assert_equal(output[0], (0, 1)) + assert_equal(output[1], (1, 2)) + + +def test_maximum_position01(): + labels = np.array([1, 0], bool) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output = ndimage.maximum_position(input, + labels=labels) + assert_equal(output, (1, 0)) + + +def test_maximum_position02(): + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 8, 2], + [1, 5, 1, 1]], type) + output = ndimage.maximum_position(input) + assert_equal(output, (1, 2)) + + +def test_maximum_position03(): + input = np.array([[5, 4, 2, 5], + [3, 7, 8, 2], + [1, 5, 1, 1]], bool) + output = ndimage.maximum_position(input) + assert_equal(output, (0, 0)) + + +def test_maximum_position04(): + labels = [1, 2, 0, 4] + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 8, 2], + [1, 5, 1, 1]], type) + output = ndimage.maximum_position(input, labels) + assert_equal(output, (1, 1)) + + +def 
test_maximum_position05(): + labels = [1, 2, 0, 4] + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 8, 2], + [1, 5, 1, 1]], type) + output = ndimage.maximum_position(input, labels, 1) + assert_equal(output, (0, 0)) + + +def test_maximum_position06(): + labels = [1, 2, 0, 4] + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 8, 2], + [1, 5, 1, 1]], type) + output = ndimage.maximum_position(input, labels, + [1, 2]) + assert_equal(output[0], (0, 0)) + assert_equal(output[1], (1, 1)) + + +def test_maximum_position07(): + # Test float labels + labels = np.array([1.0, 2.5, 0.0, 4.5]) + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 8, 2], + [1, 5, 1, 1]], type) + output = ndimage.maximum_position(input, labels, + [1.0, 4.5]) + assert_equal(output[0], (0, 0)) + assert_equal(output[1], (0, 3)) + + +def test_extrema01(): + labels = np.array([1, 0], bool) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output1 = ndimage.extrema(input, labels=labels) + output2 = ndimage.minimum(input, labels=labels) + output3 = ndimage.maximum(input, labels=labels) + output4 = ndimage.minimum_position(input, + labels=labels) + output5 = ndimage.maximum_position(input, + labels=labels) + assert_equal(output1, (output2, output3, output4, output5)) + + +def test_extrema02(): + labels = np.array([1, 2]) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output1 = ndimage.extrema(input, labels=labels, + index=2) + output2 = ndimage.minimum(input, labels=labels, + index=2) + output3 = ndimage.maximum(input, labels=labels, + index=2) + output4 = ndimage.minimum_position(input, + labels=labels, index=2) + output5 = ndimage.maximum_position(input, + labels=labels, index=2) + assert_equal(output1, (output2, output3, output4, output5)) + + +def test_extrema03(): + labels = np.array([[1, 2], [2, 3]]) + for type in types: + input = np.array([[1, 2], [3, 4]], type) + output1 = ndimage.extrema(input, labels=labels, + index=[2, 3, 8]) + output2 = ndimage.minimum(input, labels=labels, + index=[2, 3, 8]) + output3 = ndimage.maximum(input, labels=labels, + index=[2, 3, 8]) + output4 = ndimage.minimum_position(input, + labels=labels, index=[2, 3, 8]) + output5 = ndimage.maximum_position(input, + labels=labels, index=[2, 3, 8]) + assert_array_almost_equal(output1[0], output2) + assert_array_almost_equal(output1[1], output3) + assert_array_almost_equal(output1[2], output4) + assert_array_almost_equal(output1[3], output5) + + +def test_extrema04(): + labels = [1, 2, 0, 4] + for type in types: + input = np.array([[5, 4, 2, 5], + [3, 7, 8, 2], + [1, 5, 1, 1]], type) + output1 = ndimage.extrema(input, labels, [1, 2]) + output2 = ndimage.minimum(input, labels, [1, 2]) + output3 = ndimage.maximum(input, labels, [1, 2]) + output4 = ndimage.minimum_position(input, labels, + [1, 2]) + output5 = ndimage.maximum_position(input, labels, + [1, 2]) + assert_array_almost_equal(output1[0], output2) + assert_array_almost_equal(output1[1], output3) + assert_array_almost_equal(output1[2], output4) + assert_array_almost_equal(output1[3], output5) + + +def test_center_of_mass01(): + expected = [0.0, 0.0] + for type in types: + input = np.array([[1, 0], [0, 0]], type) + output = ndimage.center_of_mass(input) + assert_array_almost_equal(output, expected) + + +def test_center_of_mass02(): + expected = [1, 0] + for type in types: + input = np.array([[0, 0], [1, 0]], type) + output = ndimage.center_of_mass(input) + assert_array_almost_equal(output, expected) + + +def 
test_center_of_mass03(): + expected = [0, 1] + for type in types: + input = np.array([[0, 1], [0, 0]], type) + output = ndimage.center_of_mass(input) + assert_array_almost_equal(output, expected) + + +def test_center_of_mass04(): + expected = [1, 1] + for type in types: + input = np.array([[0, 0], [0, 1]], type) + output = ndimage.center_of_mass(input) + assert_array_almost_equal(output, expected) + + +def test_center_of_mass05(): + expected = [0.5, 0.5] + for type in types: + input = np.array([[1, 1], [1, 1]], type) + output = ndimage.center_of_mass(input) + assert_array_almost_equal(output, expected) + + +def test_center_of_mass06(): + expected = [0.5, 0.5] + input = np.array([[1, 2], [3, 1]], bool) + output = ndimage.center_of_mass(input) + assert_array_almost_equal(output, expected) + + +def test_center_of_mass07(): + labels = [1, 0] + expected = [0.5, 0.0] + input = np.array([[1, 2], [3, 1]], bool) + output = ndimage.center_of_mass(input, labels) + assert_array_almost_equal(output, expected) + + +def test_center_of_mass08(): + labels = [1, 2] + expected = [0.5, 1.0] + input = np.array([[5, 2], [3, 1]], bool) + output = ndimage.center_of_mass(input, labels, 2) + assert_array_almost_equal(output, expected) + + +def test_center_of_mass09(): + labels = [1, 2] + expected = [(0.5, 0.0), (0.5, 1.0)] + input = np.array([[1, 2], [1, 1]], bool) + output = ndimage.center_of_mass(input, labels, [1, 2]) + assert_array_almost_equal(output, expected) + + +def test_histogram01(): + expected = np.ones(10) + input = np.arange(10) + output = ndimage.histogram(input, 0, 10, 10) + assert_array_almost_equal(output, expected) + + +def test_histogram02(): + labels = [1, 1, 1, 1, 2, 2, 2, 2] + expected = [0, 2, 0, 1, 1] + input = np.array([1, 1, 3, 4, 3, 3, 3, 3]) + output = ndimage.histogram(input, 0, 4, 5, labels, 1) + assert_array_almost_equal(output, expected) + + +def test_histogram03(): + labels = [1, 0, 1, 1, 2, 2, 2, 2] + expected1 = [0, 1, 0, 1, 1] + expected2 = [0, 0, 0, 3, 0] + input = np.array([1, 1, 3, 4, 3, 5, 3, 3]) + output = ndimage.histogram(input, 0, 4, 5, labels, (1, 2)) + + assert_array_almost_equal(output[0], expected1) + assert_array_almost_equal(output[1], expected2) + + +def test_stat_funcs_2d(): + a = np.array([[5, 6, 0, 0, 0], [8, 9, 0, 0, 0], [0, 0, 0, 3, 5]]) + lbl = np.array([[1, 1, 0, 0, 0], [1, 1, 0, 0, 0], [0, 0, 0, 2, 2]]) + + mean = ndimage.mean(a, labels=lbl, index=[1, 2]) + assert_array_equal(mean, [7.0, 4.0]) + + var = ndimage.variance(a, labels=lbl, index=[1, 2]) + assert_array_equal(var, [2.5, 1.0]) + + std = ndimage.standard_deviation(a, labels=lbl, index=[1, 2]) + assert_array_almost_equal(std, np.sqrt([2.5, 1.0])) + + med = ndimage.median(a, labels=lbl, index=[1, 2]) + assert_array_equal(med, [7.0, 4.0]) + + min = ndimage.minimum(a, labels=lbl, index=[1, 2]) + assert_array_equal(min, [5, 3]) + + max = ndimage.maximum(a, labels=lbl, index=[1, 2]) + assert_array_equal(max, [9, 5]) + + +class TestWatershedIft: + + def test_watershed_ift01(self): + data = np.array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], np.uint8) + markers = np.array([[-1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], np.int8) + out = ndimage.watershed_ift(data, markers, structure=[[1, 1, 1], + [1, 1, 
1], + [1, 1, 1]]) + expected = [[-1, -1, -1, -1, -1, -1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, -1, -1, -1, -1, -1, -1], + [-1, -1, -1, -1, -1, -1, -1]] + assert_array_almost_equal(out, expected) + + def test_watershed_ift02(self): + data = np.array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], np.uint8) + markers = np.array([[-1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], np.int8) + out = ndimage.watershed_ift(data, markers) + expected = [[-1, -1, -1, -1, -1, -1, -1], + [-1, -1, 1, 1, 1, -1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, -1, 1, 1, 1, -1, -1], + [-1, -1, -1, -1, -1, -1, -1], + [-1, -1, -1, -1, -1, -1, -1]] + assert_array_almost_equal(out, expected) + + def test_watershed_ift03(self): + data = np.array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0]], np.uint8) + markers = np.array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 2, 0, 3, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, -1]], np.int8) + out = ndimage.watershed_ift(data, markers) + expected = [[-1, -1, -1, -1, -1, -1, -1], + [-1, -1, 2, -1, 3, -1, -1], + [-1, 2, 2, 3, 3, 3, -1], + [-1, 2, 2, 3, 3, 3, -1], + [-1, 2, 2, 3, 3, 3, -1], + [-1, -1, 2, -1, 3, -1, -1], + [-1, -1, -1, -1, -1, -1, -1]] + assert_array_almost_equal(out, expected) + + def test_watershed_ift04(self): + data = np.array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0]], np.uint8) + markers = np.array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 2, 0, 3, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, -1]], + np.int8) + out = ndimage.watershed_ift(data, markers, + structure=[[1, 1, 1], + [1, 1, 1], + [1, 1, 1]]) + expected = [[-1, -1, -1, -1, -1, -1, -1], + [-1, 2, 2, 3, 3, 3, -1], + [-1, 2, 2, 3, 3, 3, -1], + [-1, 2, 2, 3, 3, 3, -1], + [-1, 2, 2, 3, 3, 3, -1], + [-1, 2, 2, 3, 3, 3, -1], + [-1, -1, -1, -1, -1, -1, -1]] + assert_array_almost_equal(out, expected) + + def test_watershed_ift05(self): + data = np.array([[0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 0, 1, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0]], np.uint8) + markers = np.array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 3, 0, 2, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, -1]], + np.int8) + out = ndimage.watershed_ift(data, markers, + structure=[[1, 1, 1], + [1, 1, 1], + [1, 1, 1]]) + expected = [[-1, -1, -1, -1, -1, -1, -1], + [-1, 3, 3, 2, 2, 2, -1], + [-1, 3, 3, 2, 2, 2, -1], + [-1, 3, 3, 2, 2, 2, -1], + [-1, 3, 3, 2, 2, 2, -1], + [-1, 3, 3, 2, 2, 2, -1], + [-1, -1, -1, -1, -1, -1, -1]] + assert_array_almost_equal(out, expected) + + def test_watershed_ift06(self): + data = np.array([[0, 1, 0, 0, 0, 1, 
0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], np.uint8) + markers = np.array([[-1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], np.int8) + out = ndimage.watershed_ift(data, markers, + structure=[[1, 1, 1], + [1, 1, 1], + [1, 1, 1]]) + expected = [[-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, -1, -1, -1, -1, -1, -1], + [-1, -1, -1, -1, -1, -1, -1]] + assert_array_almost_equal(out, expected) + + def test_watershed_ift07(self): + shape = (7, 6) + data = np.zeros(shape, dtype=np.uint8) + data = data.transpose() + data[...] = np.array([[0, 1, 0, 0, 0, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 0, 0, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], np.uint8) + markers = np.array([[-1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], np.int8) + out = np.zeros(shape, dtype=np.int16) + out = out.transpose() + ndimage.watershed_ift(data, markers, + structure=[[1, 1, 1], + [1, 1, 1], + [1, 1, 1]], + output=out) + expected = [[-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, 1, 1, 1, 1, 1, -1], + [-1, -1, -1, -1, -1, -1, -1], + [-1, -1, -1, -1, -1, -1, -1]] + assert_array_almost_equal(out, expected) + + def test_watershed_ift08(self): + # Test cost larger than uint8. See gh-10069. + data = np.array([[256, 0], + [0, 0]], np.uint16) + markers = np.array([[1, 0], + [0, 0]], np.int8) + out = ndimage.watershed_ift(data, markers) + expected = [[1, 1], + [1, 1]] + assert_array_almost_equal(out, expected) + + def test_watershed_ift09(self): + # Test large cost. See gh-19575 + data = np.array([[np.iinfo(np.uint16).max, 0], + [0, 0]], np.uint16) + markers = np.array([[1, 0], + [0, 0]], np.int8) + out = ndimage.watershed_ift(data, markers) + expected = [[1, 1], + [1, 1]] + assert_allclose(out, expected) diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_morphology.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_morphology.py new file mode 100644 index 0000000000000000000000000000000000000000..d0f47d651f32143c1594b1fe833e51f0ec4f5fb7 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_morphology.py @@ -0,0 +1,2395 @@ +import numpy +import numpy as np +from numpy.testing import (assert_, assert_equal, assert_array_equal, + assert_array_almost_equal) +import pytest +from pytest import raises as assert_raises + +from scipy import ndimage + +from . 
import types + + +class TestNdimageMorphology: + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_bf01(self, dtype): + # brute force (bf) distance transform + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out, ft = ndimage.distance_transform_bf(data, 'euclidean', + return_indices=True) + expected = [[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 2, 4, 2, 1, 0, 0], + [0, 0, 1, 4, 8, 4, 1, 0, 0], + [0, 0, 1, 2, 4, 2, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]] + assert_array_almost_equal(out * out, expected) + + expected = [[[0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 1, 1], + [2, 2, 2, 2, 1, 2, 2, 2, 2], + [3, 3, 3, 2, 1, 2, 3, 3, 3], + [4, 4, 4, 4, 6, 4, 4, 4, 4], + [5, 5, 6, 6, 7, 6, 6, 5, 5], + [6, 6, 6, 7, 7, 7, 6, 6, 6], + [7, 7, 7, 7, 7, 7, 7, 7, 7], + [8, 8, 8, 8, 8, 8, 8, 8, 8]], + [[0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 2, 4, 6, 6, 7, 8], + [0, 1, 1, 2, 4, 6, 7, 7, 8], + [0, 1, 1, 1, 6, 7, 7, 7, 8], + [0, 1, 2, 2, 4, 6, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8]]] + assert_array_almost_equal(ft, expected) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_bf02(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out, ft = ndimage.distance_transform_bf(data, 'cityblock', + return_indices=True) + + expected = [[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 2, 2, 2, 1, 0, 0], + [0, 0, 1, 2, 3, 2, 1, 0, 0], + [0, 0, 1, 2, 2, 2, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]] + assert_array_almost_equal(out, expected) + + expected = [[[0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 1, 1], + [2, 2, 2, 2, 1, 2, 2, 2, 2], + [3, 3, 3, 3, 1, 3, 3, 3, 3], + [4, 4, 4, 4, 7, 4, 4, 4, 4], + [5, 5, 6, 7, 7, 7, 6, 5, 5], + [6, 6, 6, 7, 7, 7, 6, 6, 6], + [7, 7, 7, 7, 7, 7, 7, 7, 7], + [8, 8, 8, 8, 8, 8, 8, 8, 8]], + [[0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 2, 4, 6, 6, 7, 8], + [0, 1, 1, 1, 4, 7, 7, 7, 8], + [0, 1, 1, 1, 4, 7, 7, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8]]] + assert_array_almost_equal(expected, ft) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_bf03(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out, ft = ndimage.distance_transform_bf(data, 'chessboard', + return_indices=True) + + expected = [[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 
0, 0], + [0, 0, 1, 1, 2, 1, 1, 0, 0], + [0, 0, 1, 2, 2, 2, 1, 0, 0], + [0, 0, 1, 1, 2, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]] + assert_array_almost_equal(out, expected) + + expected = [[[0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 1, 1], + [2, 2, 2, 2, 1, 2, 2, 2, 2], + [3, 3, 4, 2, 2, 2, 4, 3, 3], + [4, 4, 5, 6, 6, 6, 5, 4, 4], + [5, 5, 6, 6, 7, 6, 6, 5, 5], + [6, 6, 6, 7, 7, 7, 6, 6, 6], + [7, 7, 7, 7, 7, 7, 7, 7, 7], + [8, 8, 8, 8, 8, 8, 8, 8, 8]], + [[0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 2, 5, 6, 6, 7, 8], + [0, 1, 1, 2, 6, 6, 7, 7, 8], + [0, 1, 1, 2, 6, 7, 7, 7, 8], + [0, 1, 2, 2, 6, 6, 7, 7, 8], + [0, 1, 2, 4, 5, 6, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8]]] + assert_array_almost_equal(ft, expected) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_bf04(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + tdt, tft = ndimage.distance_transform_bf(data, return_indices=1) + dts = [] + fts = [] + dt = numpy.zeros(data.shape, dtype=numpy.float64) + ndimage.distance_transform_bf(data, distances=dt) + dts.append(dt) + ft = ndimage.distance_transform_bf( + data, return_distances=False, return_indices=1) + fts.append(ft) + ft = numpy.indices(data.shape, dtype=numpy.int32) + ndimage.distance_transform_bf( + data, return_distances=False, return_indices=True, indices=ft) + fts.append(ft) + dt, ft = ndimage.distance_transform_bf( + data, return_indices=1) + dts.append(dt) + fts.append(ft) + dt = numpy.zeros(data.shape, dtype=numpy.float64) + ft = ndimage.distance_transform_bf( + data, distances=dt, return_indices=True) + dts.append(dt) + fts.append(ft) + ft = numpy.indices(data.shape, dtype=numpy.int32) + dt = ndimage.distance_transform_bf( + data, return_indices=True, indices=ft) + dts.append(dt) + fts.append(ft) + dt = numpy.zeros(data.shape, dtype=numpy.float64) + ft = numpy.indices(data.shape, dtype=numpy.int32) + ndimage.distance_transform_bf( + data, distances=dt, return_indices=True, indices=ft) + dts.append(dt) + fts.append(ft) + for dt in dts: + assert_array_almost_equal(tdt, dt) + for ft in fts: + assert_array_almost_equal(tft, ft) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_bf05(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out, ft = ndimage.distance_transform_bf( + data, 'euclidean', return_indices=True, sampling=[2, 2]) + expected = [[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 4, 4, 4, 0, 0, 0], + [0, 0, 4, 8, 16, 8, 4, 0, 0], + [0, 0, 4, 16, 32, 16, 4, 0, 0], + [0, 0, 4, 8, 16, 8, 4, 0, 0], + [0, 0, 0, 4, 4, 4, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]] + assert_array_almost_equal(out * out, expected) + + expected = [[[0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 1, 1], + [2, 2, 2, 2, 1, 2, 2, 2, 2], + [3, 3, 3, 2, 1, 2, 3, 3, 3], + [4, 4, 4, 4, 6, 4, 4, 4, 4], + [5, 5, 6, 6, 7, 6, 6, 5, 5], 
+ [6, 6, 6, 7, 7, 7, 6, 6, 6], + [7, 7, 7, 7, 7, 7, 7, 7, 7], + [8, 8, 8, 8, 8, 8, 8, 8, 8]], + [[0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 2, 4, 6, 6, 7, 8], + [0, 1, 1, 2, 4, 6, 7, 7, 8], + [0, 1, 1, 1, 6, 7, 7, 7, 8], + [0, 1, 2, 2, 4, 6, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8]]] + assert_array_almost_equal(ft, expected) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_bf06(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out, ft = ndimage.distance_transform_bf( + data, 'euclidean', return_indices=True, sampling=[2, 1]) + expected = [[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 4, 1, 0, 0, 0], + [0, 0, 1, 4, 8, 4, 1, 0, 0], + [0, 0, 1, 4, 9, 4, 1, 0, 0], + [0, 0, 1, 4, 8, 4, 1, 0, 0], + [0, 0, 0, 1, 4, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]] + assert_array_almost_equal(out * out, expected) + + expected = [[[0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 1, 1], + [2, 2, 2, 2, 2, 2, 2, 2, 2], + [3, 3, 3, 3, 2, 3, 3, 3, 3], + [4, 4, 4, 4, 4, 4, 4, 4, 4], + [5, 5, 5, 5, 6, 5, 5, 5, 5], + [6, 6, 6, 6, 7, 6, 6, 6, 6], + [7, 7, 7, 7, 7, 7, 7, 7, 7], + [8, 8, 8, 8, 8, 8, 8, 8, 8]], + [[0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 2, 6, 6, 6, 7, 8], + [0, 1, 1, 1, 6, 7, 7, 7, 8], + [0, 1, 1, 1, 7, 7, 7, 7, 8], + [0, 1, 1, 1, 6, 7, 7, 7, 8], + [0, 1, 2, 2, 4, 6, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8]]] + assert_array_almost_equal(ft, expected) + + def test_distance_transform_bf07(self): + # test input validation per discussion on PR #13302 + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]]) + with assert_raises(RuntimeError): + ndimage.distance_transform_bf( + data, return_distances=False, return_indices=False + ) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_cdt01(self, dtype): + # chamfer type distance (cdt) transform + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out, ft = ndimage.distance_transform_cdt( + data, 'cityblock', return_indices=True) + bf = ndimage.distance_transform_bf(data, 'cityblock') + assert_array_almost_equal(bf, out) + + expected = [[[0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 1, 1], + [2, 2, 2, 1, 1, 1, 2, 2, 2], + [3, 3, 2, 1, 1, 1, 2, 3, 3], + [4, 4, 4, 4, 1, 4, 4, 4, 4], + [5, 5, 5, 5, 7, 7, 6, 5, 5], + [6, 6, 6, 6, 7, 7, 6, 6, 6], + [7, 7, 7, 7, 7, 7, 7, 7, 7], + [8, 8, 8, 8, 8, 8, 8, 8, 8]], + [[0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 1, 1, 4, 7, 7, 7, 8], + [0, 1, 1, 1, 4, 5, 6, 7, 8], + [0, 1, 2, 2, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 
4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8]]] + assert_array_almost_equal(ft, expected) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_cdt02(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out, ft = ndimage.distance_transform_cdt(data, 'chessboard', + return_indices=True) + bf = ndimage.distance_transform_bf(data, 'chessboard') + assert_array_almost_equal(bf, out) + + expected = [[[0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 1, 1], + [2, 2, 2, 1, 1, 1, 2, 2, 2], + [3, 3, 2, 2, 1, 2, 2, 3, 3], + [4, 4, 3, 2, 2, 2, 3, 4, 4], + [5, 5, 4, 6, 7, 6, 4, 5, 5], + [6, 6, 6, 6, 7, 7, 6, 6, 6], + [7, 7, 7, 7, 7, 7, 7, 7, 7], + [8, 8, 8, 8, 8, 8, 8, 8, 8]], + [[0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 2, 3, 4, 6, 7, 8], + [0, 1, 1, 2, 2, 6, 6, 7, 8], + [0, 1, 1, 1, 2, 6, 7, 7, 8], + [0, 1, 1, 2, 6, 6, 7, 7, 8], + [0, 1, 2, 2, 5, 6, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8], + [0, 1, 2, 3, 4, 5, 6, 7, 8]]] + assert_array_almost_equal(ft, expected) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_cdt03(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + tdt, tft = ndimage.distance_transform_cdt(data, return_indices=True) + dts = [] + fts = [] + dt = numpy.zeros(data.shape, dtype=numpy.int32) + ndimage.distance_transform_cdt(data, distances=dt) + dts.append(dt) + ft = ndimage.distance_transform_cdt( + data, return_distances=False, return_indices=True) + fts.append(ft) + ft = numpy.indices(data.shape, dtype=numpy.int32) + ndimage.distance_transform_cdt( + data, return_distances=False, return_indices=True, indices=ft) + fts.append(ft) + dt, ft = ndimage.distance_transform_cdt( + data, return_indices=True) + dts.append(dt) + fts.append(ft) + dt = numpy.zeros(data.shape, dtype=numpy.int32) + ft = ndimage.distance_transform_cdt( + data, distances=dt, return_indices=True) + dts.append(dt) + fts.append(ft) + ft = numpy.indices(data.shape, dtype=numpy.int32) + dt = ndimage.distance_transform_cdt( + data, return_indices=True, indices=ft) + dts.append(dt) + fts.append(ft) + dt = numpy.zeros(data.shape, dtype=numpy.int32) + ft = numpy.indices(data.shape, dtype=numpy.int32) + ndimage.distance_transform_cdt(data, distances=dt, + return_indices=True, indices=ft) + dts.append(dt) + fts.append(ft) + for dt in dts: + assert_array_almost_equal(tdt, dt) + for ft in fts: + assert_array_almost_equal(tft, ft) + + def test_distance_transform_cdt04(self): + # test input validation per discussion on PR #13302 + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]]) + indices_out = numpy.zeros((data.ndim,) + data.shape, dtype=numpy.int32) + with assert_raises(RuntimeError): + ndimage.distance_transform_bf( + data, + return_distances=True, + return_indices=False, + 
indices=indices_out + ) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_cdt05(self, dtype): + # test custom metric type per discussion on issue #17381 + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + metric_arg = np.ones((3, 3)) + actual = ndimage.distance_transform_cdt(data, metric=metric_arg) + assert actual.sum() == -21 + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_edt01(self, dtype): + # euclidean distance transform (edt) + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out, ft = ndimage.distance_transform_edt(data, return_indices=True) + bf = ndimage.distance_transform_bf(data, 'euclidean') + assert_array_almost_equal(bf, out) + + dt = ft - numpy.indices(ft.shape[1:], dtype=ft.dtype) + dt = dt.astype(numpy.float64) + numpy.multiply(dt, dt, dt) + dt = numpy.add.reduce(dt, axis=0) + numpy.sqrt(dt, dt) + + assert_array_almost_equal(bf, dt) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_edt02(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + tdt, tft = ndimage.distance_transform_edt(data, return_indices=True) + dts = [] + fts = [] + dt = numpy.zeros(data.shape, dtype=numpy.float64) + ndimage.distance_transform_edt(data, distances=dt) + dts.append(dt) + ft = ndimage.distance_transform_edt( + data, return_distances=0, return_indices=True) + fts.append(ft) + ft = numpy.indices(data.shape, dtype=numpy.int32) + ndimage.distance_transform_edt( + data, return_distances=False, return_indices=True, indices=ft) + fts.append(ft) + dt, ft = ndimage.distance_transform_edt( + data, return_indices=True) + dts.append(dt) + fts.append(ft) + dt = numpy.zeros(data.shape, dtype=numpy.float64) + ft = ndimage.distance_transform_edt( + data, distances=dt, return_indices=True) + dts.append(dt) + fts.append(ft) + ft = numpy.indices(data.shape, dtype=numpy.int32) + dt = ndimage.distance_transform_edt( + data, return_indices=True, indices=ft) + dts.append(dt) + fts.append(ft) + dt = numpy.zeros(data.shape, dtype=numpy.float64) + ft = numpy.indices(data.shape, dtype=numpy.int32) + ndimage.distance_transform_edt( + data, distances=dt, return_indices=True, indices=ft) + dts.append(dt) + fts.append(ft) + for dt in dts: + assert_array_almost_equal(tdt, dt) + for ft in fts: + assert_array_almost_equal(tft, ft) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_edt03(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + ref = ndimage.distance_transform_bf(data, 'euclidean', 
sampling=[2, 2]) + out = ndimage.distance_transform_edt(data, sampling=[2, 2]) + assert_array_almost_equal(ref, out) + + @pytest.mark.parametrize('dtype', types) + def test_distance_transform_edt4(self, dtype): + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype) + ref = ndimage.distance_transform_bf(data, 'euclidean', sampling=[2, 1]) + out = ndimage.distance_transform_edt(data, sampling=[2, 1]) + assert_array_almost_equal(ref, out) + + def test_distance_transform_edt5(self): + # Ticket #954 regression test + out = ndimage.distance_transform_edt(False) + assert_array_almost_equal(out, [0.]) + + def test_distance_transform_edt6(self): + # test input validation per discussion on PR #13302 + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0, 0]]) + distances_out = numpy.zeros(data.shape, dtype=numpy.float64) + with assert_raises(RuntimeError): + ndimage.distance_transform_bf( + data, + return_indices=True, + return_distances=False, + distances=distances_out + ) + + def test_generate_structure01(self): + struct = ndimage.generate_binary_structure(0, 1) + assert_array_almost_equal(struct, 1) + + def test_generate_structure02(self): + struct = ndimage.generate_binary_structure(1, 1) + assert_array_almost_equal(struct, [1, 1, 1]) + + def test_generate_structure03(self): + struct = ndimage.generate_binary_structure(2, 1) + assert_array_almost_equal(struct, [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]]) + + def test_generate_structure04(self): + struct = ndimage.generate_binary_structure(2, 2) + assert_array_almost_equal(struct, [[1, 1, 1], + [1, 1, 1], + [1, 1, 1]]) + + def test_iterate_structure01(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + out = ndimage.iterate_structure(struct, 2) + assert_array_almost_equal(out, [[0, 0, 1, 0, 0], + [0, 1, 1, 1, 0], + [1, 1, 1, 1, 1], + [0, 1, 1, 1, 0], + [0, 0, 1, 0, 0]]) + + def test_iterate_structure02(self): + struct = [[0, 1], + [1, 1], + [0, 1]] + out = ndimage.iterate_structure(struct, 2) + assert_array_almost_equal(out, [[0, 0, 1], + [0, 1, 1], + [1, 1, 1], + [0, 1, 1], + [0, 0, 1]]) + + def test_iterate_structure03(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + out = ndimage.iterate_structure(struct, 2, 1) + expected = [[0, 0, 1, 0, 0], + [0, 1, 1, 1, 0], + [1, 1, 1, 1, 1], + [0, 1, 1, 1, 0], + [0, 0, 1, 0, 0]] + assert_array_almost_equal(out[0], expected) + assert_equal(out[1], [2, 2]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion01(self, dtype): + data = numpy.ones([], dtype) + out = ndimage.binary_erosion(data) + assert_array_almost_equal(out, 1) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion02(self, dtype): + data = numpy.ones([], dtype) + out = ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, 1) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion03(self, dtype): + data = numpy.ones([1], dtype) + out = ndimage.binary_erosion(data) + assert_array_almost_equal(out, [0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion04(self, 
dtype): + data = numpy.ones([1], dtype) + out = ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, [1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion05(self, dtype): + data = numpy.ones([3], dtype) + out = ndimage.binary_erosion(data) + assert_array_almost_equal(out, [0, 1, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion06(self, dtype): + data = numpy.ones([3], dtype) + out = ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, [1, 1, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion07(self, dtype): + data = numpy.ones([5], dtype) + out = ndimage.binary_erosion(data) + assert_array_almost_equal(out, [0, 1, 1, 1, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion08(self, dtype): + data = numpy.ones([5], dtype) + out = ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, [1, 1, 1, 1, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion09(self, dtype): + data = numpy.ones([5], dtype) + data[2] = 0 + out = ndimage.binary_erosion(data) + assert_array_almost_equal(out, [0, 0, 0, 0, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion10(self, dtype): + data = numpy.ones([5], dtype) + data[2] = 0 + out = ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, [1, 0, 0, 0, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion11(self, dtype): + data = numpy.ones([5], dtype) + data[2] = 0 + struct = [1, 0, 1] + out = ndimage.binary_erosion(data, struct, border_value=1) + assert_array_almost_equal(out, [1, 0, 1, 0, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion12(self, dtype): + data = numpy.ones([5], dtype) + data[2] = 0 + struct = [1, 0, 1] + out = ndimage.binary_erosion(data, struct, border_value=1, origin=-1) + assert_array_almost_equal(out, [0, 1, 0, 1, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion13(self, dtype): + data = numpy.ones([5], dtype) + data[2] = 0 + struct = [1, 0, 1] + out = ndimage.binary_erosion(data, struct, border_value=1, origin=1) + assert_array_almost_equal(out, [1, 1, 0, 1, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion14(self, dtype): + data = numpy.ones([5], dtype) + data[2] = 0 + struct = [1, 1] + out = ndimage.binary_erosion(data, struct, border_value=1) + assert_array_almost_equal(out, [1, 1, 0, 0, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion15(self, dtype): + data = numpy.ones([5], dtype) + data[2] = 0 + struct = [1, 1] + out = ndimage.binary_erosion(data, struct, border_value=1, origin=-1) + assert_array_almost_equal(out, [1, 0, 0, 1, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion16(self, dtype): + data = numpy.ones([1, 1], dtype) + out = ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, [[1]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion17(self, dtype): + data = numpy.ones([1, 1], dtype) + out = ndimage.binary_erosion(data) + assert_array_almost_equal(out, [[0]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion18(self, dtype): + data = numpy.ones([1, 3], dtype) + out = ndimage.binary_erosion(data) + assert_array_almost_equal(out, [[0, 0, 0]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion19(self, dtype): + data = numpy.ones([1, 3], dtype) + out = 
ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, [[1, 1, 1]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion20(self, dtype): + data = numpy.ones([3, 3], dtype) + out = ndimage.binary_erosion(data) + assert_array_almost_equal(out, [[0, 0, 0], + [0, 1, 0], + [0, 0, 0]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion21(self, dtype): + data = numpy.ones([3, 3], dtype) + out = ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, [[1, 1, 1], + [1, 1, 1], + [1, 1, 1]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion22(self, dtype): + expected = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 1, 1], + [0, 0, 1, 1, 1, 1, 1, 1], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_erosion(data, border_value=1) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion23(self, dtype): + struct = ndimage.generate_binary_structure(2, 2) + expected = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 1, 1], + [0, 0, 1, 1, 1, 1, 1, 1], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_erosion(data, struct, border_value=1) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion24(self, dtype): + struct = [[0, 1], + [1, 1]] + expected = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 1, 1], + [0, 0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 0, 0, 0, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 1, 1], + [0, 0, 1, 1, 1, 1, 1, 1], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_erosion(data, struct, border_value=1) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion25(self, dtype): + struct = [[0, 1, 0], + [1, 0, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 1, 1], + [0, 0, 1, 1, 1, 0, 1, 1], + [0, 0, 1, 0, 1, 1, 0, 0], + [0, 1, 0, 1, 1, 1, 1, 0], + [0, 1, 1, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_erosion(data, struct, border_value=1) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_erosion26(self, dtype): + struct = [[0, 1, 0], + [1, 0, 1], + [0, 1, 0]] + expected 
= [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 1], + [0, 0, 0, 0, 1, 0, 0, 1], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 1]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 1, 1], + [0, 0, 1, 1, 1, 0, 1, 1], + [0, 0, 1, 0, 1, 1, 0, 0], + [0, 1, 0, 1, 1, 1, 1, 0], + [0, 1, 1, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_erosion(data, struct, border_value=1, + origin=(-1, -1)) + assert_array_almost_equal(out, expected) + + def test_binary_erosion27(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], bool) + out = ndimage.binary_erosion(data, struct, border_value=1, + iterations=2) + assert_array_almost_equal(out, expected) + + def test_binary_erosion28(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], bool) + out = numpy.zeros(data.shape, bool) + ndimage.binary_erosion(data, struct, border_value=1, + iterations=2, output=out) + assert_array_almost_equal(out, expected) + + def test_binary_erosion29(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [1, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0]], bool) + out = ndimage.binary_erosion(data, struct, + border_value=1, iterations=3) + assert_array_almost_equal(out, expected) + + def test_binary_erosion30(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [1, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0]], bool) + out = numpy.zeros(data.shape, bool) + ndimage.binary_erosion(data, struct, border_value=1, + iterations=3, output=out) + assert_array_almost_equal(out, expected) + + # test with output memory overlap + ndimage.binary_erosion(data, struct, border_value=1, + iterations=3, output=data) + assert_array_almost_equal(data, expected) + + def test_binary_erosion31(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0], + [1, 1, 1, 1, 1, 0, 1], + [0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 1]] + data = numpy.array([[0, 0, 0, 1, 0, 0, 0], + 
[0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [1, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0]], bool) + out = numpy.zeros(data.shape, bool) + ndimage.binary_erosion(data, struct, border_value=1, + iterations=1, output=out, origin=(-1, -1)) + assert_array_almost_equal(out, expected) + + def test_binary_erosion32(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], bool) + out = ndimage.binary_erosion(data, struct, + border_value=1, iterations=2) + assert_array_almost_equal(out, expected) + + def test_binary_erosion33(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 1, 1], + [0, 0, 0, 0, 0, 0, 1], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + mask = [[1, 1, 1, 1, 1, 0, 0], + [1, 1, 1, 1, 1, 1, 0], + [1, 1, 1, 1, 1, 1, 1], + [1, 1, 1, 1, 1, 1, 1], + [1, 1, 1, 1, 1, 1, 1], + [1, 1, 1, 1, 1, 1, 1], + [1, 1, 1, 1, 1, 1, 1]] + data = numpy.array([[0, 0, 0, 0, 0, 1, 1], + [0, 0, 0, 1, 0, 0, 1], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], bool) + out = ndimage.binary_erosion(data, struct, + border_value=1, mask=mask, iterations=-1) + assert_array_almost_equal(out, expected) + + def test_binary_erosion34(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + mask = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 0, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]], bool) + out = ndimage.binary_erosion(data, struct, + border_value=1, mask=mask) + assert_array_almost_equal(out, expected) + + def test_binary_erosion35(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + mask = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 0, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [1, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0]], bool) + tmp = [[0, 0, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0], + [1, 1, 1, 1, 1, 0, 1], + [0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 1]] + expected = numpy.logical_and(tmp, mask) + tmp = numpy.logical_and(data, numpy.logical_not(mask)) + expected = numpy.logical_or(expected, tmp) + out = numpy.zeros(data.shape, bool) + ndimage.binary_erosion(data, struct, border_value=1, + iterations=1, output=out, + origin=(-1, -1), mask=mask) + assert_array_almost_equal(out, expected) + + def test_binary_erosion36(self): + struct = [[0, 1, 0], + [1, 
0, 1], + [0, 1, 0]] + mask = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + tmp = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 1], + [0, 0, 0, 0, 1, 0, 0, 1], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 1]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 1, 1], + [0, 0, 1, 1, 1, 0, 1, 1], + [0, 0, 1, 0, 1, 1, 0, 0], + [0, 1, 0, 1, 1, 1, 1, 0], + [0, 1, 1, 0, 0, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]]) + expected = numpy.logical_and(tmp, mask) + tmp = numpy.logical_and(data, numpy.logical_not(mask)) + expected = numpy.logical_or(expected, tmp) + out = ndimage.binary_erosion(data, struct, mask=mask, + border_value=1, origin=(-1, -1)) + assert_array_almost_equal(out, expected) + + def test_binary_erosion37(self): + a = numpy.array([[1, 0, 1], + [0, 1, 0], + [1, 0, 1]], dtype=bool) + b = numpy.zeros_like(a) + out = ndimage.binary_erosion(a, structure=a, output=b, iterations=0, + border_value=True, brute_force=True) + assert_(out is b) + assert_array_equal( + ndimage.binary_erosion(a, structure=a, iterations=0, + border_value=True), + b) + + def test_binary_erosion38(self): + data = numpy.array([[1, 0, 1], + [0, 1, 0], + [1, 0, 1]], dtype=bool) + iterations = 2.0 + with assert_raises(TypeError): + _ = ndimage.binary_erosion(data, iterations=iterations) + + def test_binary_erosion39(self): + iterations = numpy.int32(3) + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [1, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0]], bool) + out = numpy.zeros(data.shape, bool) + ndimage.binary_erosion(data, struct, border_value=1, + iterations=iterations, output=out) + assert_array_almost_equal(out, expected) + + def test_binary_erosion40(self): + iterations = numpy.int64(3) + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [1, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 0, 0, 0]], bool) + out = numpy.zeros(data.shape, bool) + ndimage.binary_erosion(data, struct, border_value=1, + iterations=iterations, output=out) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation01(self, dtype): + data = numpy.ones([], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, 1) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation02(self, dtype): + data = numpy.zeros([], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, 0) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation03(self, dtype): + data = numpy.ones([1], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [1]) + + 
@pytest.mark.parametrize('dtype', types) + def test_binary_dilation04(self, dtype): + data = numpy.zeros([1], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation05(self, dtype): + data = numpy.ones([3], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [1, 1, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation06(self, dtype): + data = numpy.zeros([3], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [0, 0, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation07(self, dtype): + data = numpy.zeros([3], dtype) + data[1] = 1 + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [1, 1, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation08(self, dtype): + data = numpy.zeros([5], dtype) + data[1] = 1 + data[3] = 1 + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [1, 1, 1, 1, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation09(self, dtype): + data = numpy.zeros([5], dtype) + data[1] = 1 + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [1, 1, 1, 0, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation10(self, dtype): + data = numpy.zeros([5], dtype) + data[1] = 1 + out = ndimage.binary_dilation(data, origin=-1) + assert_array_almost_equal(out, [0, 1, 1, 1, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation11(self, dtype): + data = numpy.zeros([5], dtype) + data[1] = 1 + out = ndimage.binary_dilation(data, origin=1) + assert_array_almost_equal(out, [1, 1, 0, 0, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation12(self, dtype): + data = numpy.zeros([5], dtype) + data[1] = 1 + struct = [1, 0, 1] + out = ndimage.binary_dilation(data, struct) + assert_array_almost_equal(out, [1, 0, 1, 0, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation13(self, dtype): + data = numpy.zeros([5], dtype) + data[1] = 1 + struct = [1, 0, 1] + out = ndimage.binary_dilation(data, struct, border_value=1) + assert_array_almost_equal(out, [1, 0, 1, 0, 1]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation14(self, dtype): + data = numpy.zeros([5], dtype) + data[1] = 1 + struct = [1, 0, 1] + out = ndimage.binary_dilation(data, struct, origin=-1) + assert_array_almost_equal(out, [0, 1, 0, 1, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation15(self, dtype): + data = numpy.zeros([5], dtype) + data[1] = 1 + struct = [1, 0, 1] + out = ndimage.binary_dilation(data, struct, + origin=-1, border_value=1) + assert_array_almost_equal(out, [1, 1, 0, 1, 0]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation16(self, dtype): + data = numpy.ones([1, 1], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [[1]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation17(self, dtype): + data = numpy.zeros([1, 1], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [[0]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation18(self, dtype): + data = numpy.ones([1, 3], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [[1, 1, 1]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation19(self, dtype): + data = numpy.ones([3, 3], dtype) + out = 
ndimage.binary_dilation(data) + assert_array_almost_equal(out, [[1, 1, 1], + [1, 1, 1], + [1, 1, 1]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation20(self, dtype): + data = numpy.zeros([3, 3], dtype) + data[1, 1] = 1 + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation21(self, dtype): + struct = ndimage.generate_binary_structure(2, 2) + data = numpy.zeros([3, 3], dtype) + data[1, 1] = 1 + out = ndimage.binary_dilation(data, struct) + assert_array_almost_equal(out, [[1, 1, 1], + [1, 1, 1], + [1, 1, 1]]) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation22(self, dtype): + expected = [[0, 1, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_dilation(data) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation23(self, dtype): + expected = [[1, 1, 1, 1, 1, 1, 1, 1], + [1, 1, 1, 0, 0, 0, 0, 1], + [1, 1, 0, 0, 0, 1, 0, 1], + [1, 0, 0, 1, 1, 1, 1, 1], + [1, 0, 1, 1, 1, 1, 0, 1], + [1, 1, 1, 1, 1, 1, 1, 1], + [1, 0, 1, 0, 0, 1, 0, 1], + [1, 1, 1, 1, 1, 1, 1, 1]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_dilation(data, border_value=1) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation24(self, dtype): + expected = [[1, 1, 0, 0, 0, 0, 0, 0], + [1, 0, 0, 0, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 0, 0], + [0, 1, 0, 0, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_dilation(data, origin=(1, 1)) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation25(self, dtype): + expected = [[1, 1, 0, 0, 0, 0, 1, 1], + [1, 0, 0, 0, 1, 0, 1, 1], + [0, 0, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 0, 1, 1], + [1, 1, 1, 1, 1, 1, 1, 1], + [0, 1, 0, 0, 1, 0, 1, 1], + [1, 1, 1, 1, 1, 1, 1, 1], + [1, 1, 1, 1, 1, 1, 1, 1]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_dilation(data, origin=(1, 1), border_value=1) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation26(self, dtype): + struct = ndimage.generate_binary_structure(2, 2) + expected = [[1, 1, 1, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 
0, 0], + [1, 1, 1, 0, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_dilation(data, struct) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation27(self, dtype): + struct = [[0, 1], + [1, 1]] + expected = [[0, 1, 0, 0, 0, 0, 0, 0], + [1, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 1, 1, 0, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_dilation(data, struct) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation28(self, dtype): + expected = [[1, 1, 1, 1], + [1, 0, 0, 1], + [1, 0, 0, 1], + [1, 1, 1, 1]] + data = numpy.array([[0, 0, 0, 0], + [0, 0, 0, 0], + [0, 0, 0, 0], + [0, 0, 0, 0]], dtype) + out = ndimage.binary_dilation(data, border_value=1) + assert_array_almost_equal(out, expected) + + def test_binary_dilation29(self): + struct = [[0, 1], + [1, 1]] + expected = [[0, 0, 0, 0, 0], + [0, 0, 0, 1, 0], + [0, 0, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]] + + data = numpy.array([[0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 1, 0], + [0, 0, 0, 0, 0]], bool) + out = ndimage.binary_dilation(data, struct, iterations=2) + assert_array_almost_equal(out, expected) + + def test_binary_dilation30(self): + struct = [[0, 1], + [1, 1]] + expected = [[0, 0, 0, 0, 0], + [0, 0, 0, 1, 0], + [0, 0, 1, 1, 0], + [0, 1, 1, 1, 0], + [0, 0, 0, 0, 0]] + + data = numpy.array([[0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 1, 0], + [0, 0, 0, 0, 0]], bool) + out = numpy.zeros(data.shape, bool) + ndimage.binary_dilation(data, struct, iterations=2, output=out) + assert_array_almost_equal(out, expected) + + def test_binary_dilation31(self): + struct = [[0, 1], + [1, 1]] + expected = [[0, 0, 0, 1, 0], + [0, 0, 1, 1, 0], + [0, 1, 1, 1, 0], + [1, 1, 1, 1, 0], + [0, 0, 0, 0, 0]] + + data = numpy.array([[0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 1, 0], + [0, 0, 0, 0, 0]], bool) + out = ndimage.binary_dilation(data, struct, iterations=3) + assert_array_almost_equal(out, expected) + + def test_binary_dilation32(self): + struct = [[0, 1], + [1, 1]] + expected = [[0, 0, 0, 1, 0], + [0, 0, 1, 1, 0], + [0, 1, 1, 1, 0], + [1, 1, 1, 1, 0], + [0, 0, 0, 0, 0]] + + data = numpy.array([[0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 1, 0], + [0, 0, 0, 0, 0]], bool) + out = numpy.zeros(data.shape, bool) + ndimage.binary_dilation(data, struct, iterations=3, output=out) + assert_array_almost_equal(out, expected) + + def test_binary_dilation33(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0, 0], + [0, 1, 1, 0, 1, 1, 0, 0], + [0, 0, 0, 0, 
0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + mask = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 1, 0], + [0, 0, 0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0, 0], + [0, 1, 1, 0, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + data = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + + out = ndimage.binary_dilation(data, struct, iterations=-1, + mask=mask, border_value=0) + assert_array_almost_equal(out, expected) + + def test_binary_dilation34(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 1, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + mask = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + data = numpy.zeros(mask.shape, bool) + out = ndimage.binary_dilation(data, struct, iterations=-1, + mask=mask, border_value=1) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_dilation35(self, dtype): + tmp = [[1, 1, 0, 0, 0, 0, 1, 1], + [1, 0, 0, 0, 1, 0, 1, 1], + [0, 0, 1, 1, 1, 1, 1, 1], + [0, 1, 1, 1, 1, 0, 1, 1], + [1, 1, 1, 1, 1, 1, 1, 1], + [0, 1, 0, 0, 1, 0, 1, 1], + [1, 1, 1, 1, 1, 1, 1, 1], + [1, 1, 1, 1, 1, 1, 1, 1]] + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]]) + mask = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + expected = numpy.logical_and(tmp, mask) + tmp = numpy.logical_and(data, numpy.logical_not(mask)) + expected = numpy.logical_or(expected, tmp) + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_dilation(data, mask=mask, + origin=(1, 1), border_value=1) + assert_array_almost_equal(out, expected) + + def test_binary_propagation01(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0, 0], + [0, 1, 1, 0, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + mask = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 1, 0], + [0, 0, 0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 0, 0, 0], + [0, 1, 1, 0, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + data = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 
0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + + out = ndimage.binary_propagation(data, struct, + mask=mask, border_value=0) + assert_array_almost_equal(out, expected) + + def test_binary_propagation02(self): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 1, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + mask = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + data = numpy.zeros(mask.shape, bool) + out = ndimage.binary_propagation(data, struct, + mask=mask, border_value=1) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_opening01(self, dtype): + expected = [[0, 1, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 1, 1, 1, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 0, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_opening(data) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_opening02(self, dtype): + struct = ndimage.generate_binary_structure(2, 2) + expected = [[1, 1, 1, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[1, 1, 1, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 0, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_opening(data, struct) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_closing01(self, dtype): + expected = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 1, 0, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 0, 1, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_closing(data) + assert_array_almost_equal(out, expected) + + @pytest.mark.parametrize('dtype', types) + def test_binary_closing02(self, dtype): + struct = ndimage.generate_binary_structure(2, 2) + expected = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[1, 1, 1, 0, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [1, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 0, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + 
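        # Editorial note (hedged, not part of the upstream test): with a given
        # structuring element, binary closing is dilation followed by erosion,
        # so the call below should agree with this two-step sketch:
        #     closed = ndimage.binary_erosion(
        #         ndimage.binary_dilation(data, struct), struct)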
out = ndimage.binary_closing(data, struct) + assert_array_almost_equal(out, expected) + + def test_binary_fill_holes01(self): + expected = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + out = ndimage.binary_fill_holes(data) + assert_array_almost_equal(out, expected) + + def test_binary_fill_holes02(self): + expected = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 1, 1, 1, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 1, 0, 0, 1, 0, 0], + [0, 0, 0, 1, 1, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + out = ndimage.binary_fill_holes(data) + assert_array_almost_equal(out, expected) + + def test_binary_fill_holes03(self): + expected = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 0, 1, 1, 1], + [0, 1, 1, 1, 0, 1, 1, 1], + [0, 1, 1, 1, 0, 1, 1, 1], + [0, 0, 1, 0, 0, 1, 1, 1], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + data = numpy.array([[0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 1, 0, 1, 0, 1, 1, 1], + [0, 1, 0, 1, 0, 1, 0, 1], + [0, 1, 0, 1, 0, 1, 0, 1], + [0, 0, 1, 0, 0, 1, 1, 1], + [0, 0, 0, 0, 0, 0, 0, 0]], bool) + out = ndimage.binary_fill_holes(data) + assert_array_almost_equal(out, expected) + + def test_grey_erosion01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + output = ndimage.grey_erosion(array, footprint=footprint) + assert_array_almost_equal([[2, 2, 1, 1, 1], + [2, 3, 1, 3, 1], + [5, 5, 3, 3, 1]], output) + + def test_grey_erosion01_overlap(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + ndimage.grey_erosion(array, footprint=footprint, output=array) + assert_array_almost_equal([[2, 2, 1, 1, 1], + [2, 3, 1, 3, 1], + [5, 5, 3, 3, 1]], array) + + def test_grey_erosion02(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + output = ndimage.grey_erosion(array, footprint=footprint, + structure=structure) + assert_array_almost_equal([[2, 2, 1, 1, 1], + [2, 3, 1, 3, 1], + [5, 5, 3, 3, 1]], output) + + def test_grey_erosion03(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[1, 1, 1], [1, 1, 1]] + output = ndimage.grey_erosion(array, footprint=footprint, + structure=structure) + assert_array_almost_equal([[1, 1, 0, 0, 0], + [1, 2, 0, 2, 0], + [4, 4, 2, 2, 0]], output) + + def test_grey_dilation01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[0, 1, 1], [1, 0, 1]] + output = ndimage.grey_dilation(array, footprint=footprint) + assert_array_almost_equal([[7, 7, 9, 9, 5], + [7, 9, 8, 9, 7], + [8, 8, 8, 7, 7]], output) + + def test_grey_dilation02(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], 
+ [5, 8, 3, 7, 1]]) + footprint = [[0, 1, 1], [1, 0, 1]] + structure = [[0, 0, 0], [0, 0, 0]] + output = ndimage.grey_dilation(array, footprint=footprint, + structure=structure) + assert_array_almost_equal([[7, 7, 9, 9, 5], + [7, 9, 8, 9, 7], + [8, 8, 8, 7, 7]], output) + + def test_grey_dilation03(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[0, 1, 1], [1, 0, 1]] + structure = [[1, 1, 1], [1, 1, 1]] + output = ndimage.grey_dilation(array, footprint=footprint, + structure=structure) + assert_array_almost_equal([[8, 8, 10, 10, 6], + [8, 10, 9, 10, 8], + [9, 9, 9, 8, 8]], output) + + def test_grey_opening01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + tmp = ndimage.grey_erosion(array, footprint=footprint) + expected = ndimage.grey_dilation(tmp, footprint=footprint) + output = ndimage.grey_opening(array, footprint=footprint) + assert_array_almost_equal(expected, output) + + def test_grey_opening02(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp = ndimage.grey_erosion(array, footprint=footprint, + structure=structure) + expected = ndimage.grey_dilation(tmp, footprint=footprint, + structure=structure) + output = ndimage.grey_opening(array, footprint=footprint, + structure=structure) + assert_array_almost_equal(expected, output) + + def test_grey_closing01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + tmp = ndimage.grey_dilation(array, footprint=footprint) + expected = ndimage.grey_erosion(tmp, footprint=footprint) + output = ndimage.grey_closing(array, footprint=footprint) + assert_array_almost_equal(expected, output) + + def test_grey_closing02(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp = ndimage.grey_dilation(array, footprint=footprint, + structure=structure) + expected = ndimage.grey_erosion(tmp, footprint=footprint, + structure=structure) + output = ndimage.grey_closing(array, footprint=footprint, + structure=structure) + assert_array_almost_equal(expected, output) + + def test_morphological_gradient01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp1 = ndimage.grey_dilation(array, footprint=footprint, + structure=structure) + tmp2 = ndimage.grey_erosion(array, footprint=footprint, + structure=structure) + expected = tmp1 - tmp2 + output = numpy.zeros(array.shape, array.dtype) + ndimage.morphological_gradient(array, footprint=footprint, + structure=structure, output=output) + assert_array_almost_equal(expected, output) + + def test_morphological_gradient02(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp1 = ndimage.grey_dilation(array, footprint=footprint, + structure=structure) + tmp2 = ndimage.grey_erosion(array, footprint=footprint, + structure=structure) + expected = tmp1 - tmp2 + output = ndimage.morphological_gradient(array, footprint=footprint, + structure=structure) + assert_array_almost_equal(expected, output) + + def test_morphological_laplace01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 
3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp1 = ndimage.grey_dilation(array, footprint=footprint, + structure=structure) + tmp2 = ndimage.grey_erosion(array, footprint=footprint, + structure=structure) + expected = tmp1 + tmp2 - 2 * array + output = numpy.zeros(array.shape, array.dtype) + ndimage.morphological_laplace(array, footprint=footprint, + structure=structure, output=output) + assert_array_almost_equal(expected, output) + + def test_morphological_laplace02(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp1 = ndimage.grey_dilation(array, footprint=footprint, + structure=structure) + tmp2 = ndimage.grey_erosion(array, footprint=footprint, + structure=structure) + expected = tmp1 + tmp2 - 2 * array + output = ndimage.morphological_laplace(array, footprint=footprint, + structure=structure) + assert_array_almost_equal(expected, output) + + def test_white_tophat01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp = ndimage.grey_opening(array, footprint=footprint, + structure=structure) + expected = array - tmp + output = numpy.zeros(array.shape, array.dtype) + ndimage.white_tophat(array, footprint=footprint, + structure=structure, output=output) + assert_array_almost_equal(expected, output) + + def test_white_tophat02(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp = ndimage.grey_opening(array, footprint=footprint, + structure=structure) + expected = array - tmp + output = ndimage.white_tophat(array, footprint=footprint, + structure=structure) + assert_array_almost_equal(expected, output) + + def test_white_tophat03(self): + array = numpy.array([[1, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 1]], dtype=numpy.bool_) + structure = numpy.ones((3, 3), dtype=numpy.bool_) + expected = numpy.array([[0, 1, 1, 0, 0, 0, 0], + [1, 0, 0, 1, 1, 1, 0], + [1, 0, 0, 1, 1, 1, 0], + [0, 1, 1, 0, 0, 0, 1], + [0, 1, 1, 0, 1, 0, 1], + [0, 1, 1, 0, 0, 0, 1], + [0, 0, 0, 1, 1, 1, 1]], dtype=numpy.bool_) + + output = ndimage.white_tophat(array, structure=structure) + assert_array_equal(expected, output) + + def test_white_tophat04(self): + array = numpy.eye(5, dtype=numpy.bool_) + structure = numpy.ones((3, 3), dtype=numpy.bool_) + + # Check that type mismatch is properly handled + output = numpy.empty_like(array, dtype=numpy.float64) + ndimage.white_tophat(array, structure=structure, output=output) + + def test_black_tophat01(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp = ndimage.grey_closing(array, footprint=footprint, + structure=structure) + expected = tmp - array + output = numpy.zeros(array.shape, array.dtype) + ndimage.black_tophat(array, footprint=footprint, + structure=structure, output=output) + assert_array_almost_equal(expected, output) + + def test_black_tophat02(self): + array = numpy.array([[3, 2, 5, 1, 4], + [7, 6, 9, 3, 5], + [5, 8, 3, 7, 1]]) + footprint = [[1, 0, 1], [1, 1, 0]] + structure = [[0, 0, 0], [0, 0, 0]] + tmp = ndimage.grey_closing(array, 
footprint=footprint, + structure=structure) + expected = tmp - array + output = ndimage.black_tophat(array, footprint=footprint, + structure=structure) + assert_array_almost_equal(expected, output) + + def test_black_tophat03(self): + array = numpy.array([[1, 0, 0, 0, 0, 0, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 0, 1, 0], + [0, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 1]], dtype=numpy.bool_) + structure = numpy.ones((3, 3), dtype=numpy.bool_) + expected = numpy.array([[0, 1, 1, 1, 1, 1, 1], + [1, 0, 0, 0, 0, 0, 1], + [1, 0, 0, 0, 0, 0, 1], + [1, 0, 0, 0, 0, 0, 1], + [1, 0, 0, 0, 1, 0, 1], + [1, 0, 0, 0, 0, 0, 1], + [1, 1, 1, 1, 1, 1, 0]], dtype=numpy.bool_) + + output = ndimage.black_tophat(array, structure=structure) + assert_array_equal(expected, output) + + def test_black_tophat04(self): + array = numpy.eye(5, dtype=numpy.bool_) + structure = numpy.ones((3, 3), dtype=numpy.bool_) + + # Check that type mismatch is properly handled + output = numpy.empty_like(array, dtype=numpy.float64) + ndimage.black_tophat(array, structure=structure, output=output) + + @pytest.mark.parametrize('dtype', types) + def test_hit_or_miss01(self, dtype): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0], + [0, 1, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0]] + data = numpy.array([[0, 1, 0, 0, 0], + [1, 1, 1, 0, 0], + [0, 1, 0, 1, 1], + [0, 0, 1, 1, 1], + [0, 1, 1, 1, 0], + [0, 1, 1, 1, 1], + [0, 1, 1, 1, 1], + [0, 0, 0, 0, 0]], dtype) + out = numpy.zeros(data.shape, bool) + ndimage.binary_hit_or_miss(data, struct, output=out) + assert_array_almost_equal(expected, out) + + @pytest.mark.parametrize('dtype', types) + def test_hit_or_miss02(self, dtype): + struct = [[0, 1, 0], + [1, 1, 1], + [0, 1, 0]] + expected = [[0, 0, 0, 0, 0, 0, 0, 0], + [0, 1, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 1, 0, 0, 1, 1, 1, 0], + [1, 1, 1, 0, 0, 1, 0, 0], + [0, 1, 0, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_hit_or_miss(data, struct) + assert_array_almost_equal(expected, out) + + @pytest.mark.parametrize('dtype', types) + def test_hit_or_miss03(self, dtype): + struct1 = [[0, 0, 0], + [1, 1, 1], + [0, 0, 0]] + struct2 = [[1, 1, 1], + [0, 0, 0], + [1, 1, 1]] + expected = [[0, 0, 0, 0, 0, 1, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0], + [0, 0, 1, 0, 0, 0, 0, 0], + [0, 0, 0, 0, 0, 0, 0, 0]] + data = numpy.array([[0, 1, 0, 0, 1, 1, 1, 0], + [1, 1, 1, 0, 0, 0, 0, 0], + [0, 1, 0, 1, 1, 1, 1, 0], + [0, 0, 1, 1, 1, 1, 1, 0], + [0, 1, 1, 1, 0, 1, 1, 0], + [0, 0, 0, 0, 1, 1, 1, 0], + [0, 1, 1, 1, 1, 1, 1, 0], + [0, 0, 0, 0, 0, 0, 0, 0]], dtype) + out = ndimage.binary_hit_or_miss(data, struct1, struct2) + assert_array_almost_equal(expected, out) + + +class TestDilateFix: + + def setup_method(self): + # dilation related setup + self.array = numpy.array([[0, 0, 0, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 1, 0], + [0, 0, 1, 1, 0], + [0, 0, 0, 0, 0]], dtype=numpy.uint8) + + self.sq3x3 = numpy.ones((3, 3)) + dilated3x3 = ndimage.binary_dilation(self.array, structure=self.sq3x3) + self.dilated3x3 = dilated3x3.view(numpy.uint8) + + def test_dilation_square_structure(self): + result = ndimage.grey_dilation(self.array, structure=self.sq3x3) + # +1 accounts for difference between grey and binary 
dilation + assert_array_almost_equal(result, self.dilated3x3 + 1) + + def test_dilation_scalar_size(self): + result = ndimage.grey_dilation(self.array, size=3) + assert_array_almost_equal(result, self.dilated3x3) + + +class TestBinaryOpeningClosing: + + def setup_method(self): + a = numpy.zeros((5, 5), dtype=bool) + a[1:4, 1:4] = True + a[4, 4] = True + self.array = a + self.sq3x3 = numpy.ones((3, 3)) + self.opened_old = ndimage.binary_opening(self.array, self.sq3x3, + 1, None, 0) + self.closed_old = ndimage.binary_closing(self.array, self.sq3x3, + 1, None, 0) + + def test_opening_new_arguments(self): + opened_new = ndimage.binary_opening(self.array, self.sq3x3, 1, None, + 0, None, 0, False) + assert_array_equal(opened_new, self.opened_old) + + def test_closing_new_arguments(self): + closed_new = ndimage.binary_closing(self.array, self.sq3x3, 1, None, + 0, None, 0, False) + assert_array_equal(closed_new, self.closed_old) + + +def test_binary_erosion_noninteger_iterations(): + # regression test for gh-9905, gh-9909: ValueError for + # non integer iterations + data = numpy.ones([1]) + assert_raises(TypeError, ndimage.binary_erosion, data, iterations=0.5) + assert_raises(TypeError, ndimage.binary_erosion, data, iterations=1.5) + + +def test_binary_dilation_noninteger_iterations(): + # regression test for gh-9905, gh-9909: ValueError for + # non integer iterations + data = numpy.ones([1]) + assert_raises(TypeError, ndimage.binary_dilation, data, iterations=0.5) + assert_raises(TypeError, ndimage.binary_dilation, data, iterations=1.5) + + +def test_binary_opening_noninteger_iterations(): + # regression test for gh-9905, gh-9909: ValueError for + # non integer iterations + data = numpy.ones([1]) + assert_raises(TypeError, ndimage.binary_opening, data, iterations=0.5) + assert_raises(TypeError, ndimage.binary_opening, data, iterations=1.5) + + +def test_binary_closing_noninteger_iterations(): + # regression test for gh-9905, gh-9909: ValueError for + # non integer iterations + data = numpy.ones([1]) + assert_raises(TypeError, ndimage.binary_closing, data, iterations=0.5) + assert_raises(TypeError, ndimage.binary_closing, data, iterations=1.5) + + +def test_binary_closing_noninteger_brute_force_passes_when_true(): + # regression test for gh-9905, gh-9909: ValueError for + # non integer iterations + data = numpy.ones([1]) + + assert ndimage.binary_erosion( + data, iterations=2, brute_force=1.5 + ) == ndimage.binary_erosion(data, iterations=2, brute_force=bool(1.5)) + assert ndimage.binary_erosion( + data, iterations=2, brute_force=0.0 + ) == ndimage.binary_erosion(data, iterations=2, brute_force=bool(0.0)) + + +@pytest.mark.parametrize( + 'function', + ['binary_erosion', 'binary_dilation', 'binary_opening', 'binary_closing'], +) +@pytest.mark.parametrize('iterations', [1, 5]) +@pytest.mark.parametrize('brute_force', [False, True]) +def test_binary_input_as_output(function, iterations, brute_force): + rstate = numpy.random.RandomState(123) + data = rstate.randint(low=0, high=2, size=100).astype(bool) + ndi_func = getattr(ndimage, function) + + # input data is not modified + data_orig = data.copy() + expected = ndi_func(data, brute_force=brute_force, iterations=iterations) + assert_array_equal(data, data_orig) + + # data should now contain the expected result + ndi_func(data, brute_force=brute_force, iterations=iterations, output=data) + assert_array_equal(expected, data) + + +def test_binary_hit_or_miss_input_as_output(): + rstate = numpy.random.RandomState(123) + data = rstate.randint(low=0, 
high=2, size=100).astype(bool) + + # input data is not modified + data_orig = data.copy() + expected = ndimage.binary_hit_or_miss(data) + assert_array_equal(data, data_orig) + + # data should now contain the expected result + ndimage.binary_hit_or_miss(data, output=data) + assert_array_equal(expected, data) + + +def test_distance_transform_cdt_invalid_metric(): + msg = 'invalid metric provided' + with pytest.raises(ValueError, match=msg): + ndimage.distance_transform_cdt(np.ones((5, 5)), + metric="garbage") diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_ni_support.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_ni_support.py new file mode 100644 index 0000000000000000000000000000000000000000..a25429eebc8b3739e00465b43fd28ba24b320b45 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_ni_support.py @@ -0,0 +1,77 @@ +import pytest + +import numpy as np +from .._ni_support import _get_output + + +@pytest.mark.parametrize( + 'dtype', + [ + # String specifiers + 'f4', 'float32', 'complex64', 'complex128', + # Type and dtype specifiers + np.float32, float, np.dtype('f4'), + # Derive from input + None, + ], +) +def test_get_output_basic(dtype): + shape = (2, 3) + + input_ = np.zeros(shape, 'float32') + + # For None, derive dtype from input + expected_dtype = 'float32' if dtype is None else dtype + + # Output is dtype-specifier, retrieve shape from input + result = _get_output(dtype, input_) + assert result.shape == shape + assert result.dtype == np.dtype(expected_dtype) + + # Output is dtype specifier, with explicit shape, overriding input + result = _get_output(dtype, input_, shape=(3, 2)) + assert result.shape == (3, 2) + assert result.dtype == np.dtype(expected_dtype) + + # Output is pre-allocated array, return directly + output = np.zeros(shape, dtype) + result = _get_output(output, input_) + assert result is output + + +def test_get_output_complex(): + shape = (2, 3) + + input_ = np.zeros(shape) + + # None, promote input type to complex + result = _get_output(None, input_, complex_output=True) + assert result.shape == shape + assert result.dtype == np.dtype('complex128') + + # Explicit type, promote type to complex + with pytest.warns(UserWarning, match='promoting specified output dtype to complex'): + result = _get_output(float, input_, complex_output=True) + assert result.shape == shape + assert result.dtype == np.dtype('complex128') + + # String specifier, simply verify complex output + result = _get_output('complex64', input_, complex_output=True) + assert result.shape == shape + assert result.dtype == np.dtype('complex64') + + +def test_get_output_error_cases(): + input_ = np.zeros((2, 3), 'float32') + + # Two separate paths can raise the same error + with pytest.raises(RuntimeError, match='output must have complex dtype'): + _get_output('float32', input_, complex_output=True) + with pytest.raises(RuntimeError, match='output must have complex dtype'): + _get_output(np.zeros((2, 3)), input_, complex_output=True) + + with pytest.raises(RuntimeError, match='output must have numeric dtype'): + _get_output('void', input_) + + with pytest.raises(RuntimeError, match='shape not correct'): + _get_output(np.zeros((3, 2)), input_) diff --git a/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_splines.py b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_splines.py new file mode 100644 index 0000000000000000000000000000000000000000..a74e55111f8fac906f58a947db4a214da82a3cae --- /dev/null +++ 
b/venv/lib/python3.10/site-packages/scipy/ndimage/tests/test_splines.py @@ -0,0 +1,65 @@ +"""Tests for spline filtering.""" +import numpy as np +import pytest + +from numpy.testing import assert_almost_equal + +from scipy import ndimage + + +def get_spline_knot_values(order): + """Knot values to the right of a B-spline's center.""" + knot_values = {0: [1], + 1: [1], + 2: [6, 1], + 3: [4, 1], + 4: [230, 76, 1], + 5: [66, 26, 1]} + + return knot_values[order] + + +def make_spline_knot_matrix(n, order, mode='mirror'): + """Matrix to invert to find the spline coefficients.""" + knot_values = get_spline_knot_values(order) + + matrix = np.zeros((n, n)) + for diag, knot_value in enumerate(knot_values): + indices = np.arange(diag, n) + if diag == 0: + matrix[indices, indices] = knot_value + else: + matrix[indices, indices - diag] = knot_value + matrix[indices - diag, indices] = knot_value + + knot_values_sum = knot_values[0] + 2 * sum(knot_values[1:]) + + if mode == 'mirror': + start, step = 1, 1 + elif mode == 'reflect': + start, step = 0, 1 + elif mode == 'grid-wrap': + start, step = -1, -1 + else: + raise ValueError(f'unsupported mode {mode}') + + for row in range(len(knot_values) - 1): + for idx, knot_value in enumerate(knot_values[row + 1:]): + matrix[row, start + step*idx] += knot_value + matrix[-row - 1, -start - 1 - step*idx] += knot_value + + return matrix / knot_values_sum + + +@pytest.mark.parametrize('order', [0, 1, 2, 3, 4, 5]) +@pytest.mark.parametrize('mode', ['mirror', 'grid-wrap', 'reflect']) +def test_spline_filter_vs_matrix_solution(order, mode): + n = 100 + eye = np.eye(n, dtype=float) + spline_filter_axis_0 = ndimage.spline_filter1d(eye, axis=0, order=order, + mode=mode) + spline_filter_axis_1 = ndimage.spline_filter1d(eye, axis=1, order=order, + mode=mode) + matrix = make_spline_knot_matrix(n, order, mode=mode) + assert_almost_equal(eye, np.dot(spline_filter_axis_0, matrix)) + assert_almost_equal(eye, np.dot(spline_filter_axis_1, matrix.T)) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/__init__.py b/venv/lib/python3.10/site-packages/scipy/stats/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..8a8bdcd73a2fd80b87d9c570b40bddbe318a8d60 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/__init__.py @@ -0,0 +1,643 @@ +""" +.. _statsrefmanual: + +========================================== +Statistical functions (:mod:`scipy.stats`) +========================================== + +.. currentmodule:: scipy.stats + +This module contains a large number of probability distributions, +summary and frequency statistics, correlation functions and statistical +tests, masked statistics, kernel density estimation, quasi-Monte Carlo +functionality, and more. + +Statistics is a very large area, and there are topics that are out of scope +for SciPy and are covered by other packages. Some of the most important ones +are: + +- `statsmodels `__: + regression, linear models, time series analysis, extensions to topics + also covered by ``scipy.stats``. +- `Pandas `__: tabular data, time series + functionality, interfaces to other statistical languages. +- `PyMC `__: Bayesian statistical + modeling, probabilistic machine learning. +- `scikit-learn `__: classification, regression, + model selection. +- `Seaborn `__: statistical data visualization. +- `rpy2 `__: Python to R bridge. 
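As an editorial aside ahead of the distribution listings that follow (not part of the scipy docstring itself), a minimal, hedged sketch of the shared distribution API, using only `norm` from the continuous listing below, looks like this::

    import numpy as np
    from scipy import stats

    # Freeze location and scale once, then reuse the distribution object.
    dist = stats.norm(loc=2.0, scale=3.0)
    x = np.linspace(-5.0, 10.0, 7)
    print(dist.pdf(x))        # probability density
    print(dist.cdf(0.0))      # cumulative probability
    print(dist.ppf(0.975))    # quantile (inverse CDF)

    # Recover the two free parameters from a random sample via `fit`.
    sample = dist.rvs(size=500, random_state=np.random.default_rng(0))
    loc_hat, scale_hat = stats.norm.fit(sample)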
+ + +Probability distributions +========================= + +Each univariate distribution is an instance of a subclass of `rv_continuous` +(`rv_discrete` for discrete distributions): + +.. autosummary:: + :toctree: generated/ + + rv_continuous + rv_discrete + rv_histogram + +Continuous distributions +------------------------ + +.. autosummary:: + :toctree: generated/ + + alpha -- Alpha + anglit -- Anglit + arcsine -- Arcsine + argus -- Argus + beta -- Beta + betaprime -- Beta Prime + bradford -- Bradford + burr -- Burr (Type III) + burr12 -- Burr (Type XII) + cauchy -- Cauchy + chi -- Chi + chi2 -- Chi-squared + cosine -- Cosine + crystalball -- Crystalball + dgamma -- Double Gamma + dweibull -- Double Weibull + erlang -- Erlang + expon -- Exponential + exponnorm -- Exponentially Modified Normal + exponweib -- Exponentiated Weibull + exponpow -- Exponential Power + f -- F (Snecdor F) + fatiguelife -- Fatigue Life (Birnbaum-Saunders) + fisk -- Fisk + foldcauchy -- Folded Cauchy + foldnorm -- Folded Normal + genlogistic -- Generalized Logistic + gennorm -- Generalized normal + genpareto -- Generalized Pareto + genexpon -- Generalized Exponential + genextreme -- Generalized Extreme Value + gausshyper -- Gauss Hypergeometric + gamma -- Gamma + gengamma -- Generalized gamma + genhalflogistic -- Generalized Half Logistic + genhyperbolic -- Generalized Hyperbolic + geninvgauss -- Generalized Inverse Gaussian + gibrat -- Gibrat + gompertz -- Gompertz (Truncated Gumbel) + gumbel_r -- Right Sided Gumbel, Log-Weibull, Fisher-Tippett, Extreme Value Type I + gumbel_l -- Left Sided Gumbel, etc. + halfcauchy -- Half Cauchy + halflogistic -- Half Logistic + halfnorm -- Half Normal + halfgennorm -- Generalized Half Normal + hypsecant -- Hyperbolic Secant + invgamma -- Inverse Gamma + invgauss -- Inverse Gaussian + invweibull -- Inverse Weibull + jf_skew_t -- Jones and Faddy Skew-T + johnsonsb -- Johnson SB + johnsonsu -- Johnson SU + kappa4 -- Kappa 4 parameter + kappa3 -- Kappa 3 parameter + ksone -- Distribution of Kolmogorov-Smirnov one-sided test statistic + kstwo -- Distribution of Kolmogorov-Smirnov two-sided test statistic + kstwobign -- Limiting Distribution of scaled Kolmogorov-Smirnov two-sided test statistic. 
+ laplace -- Laplace + laplace_asymmetric -- Asymmetric Laplace + levy -- Levy + levy_l + levy_stable + logistic -- Logistic + loggamma -- Log-Gamma + loglaplace -- Log-Laplace (Log Double Exponential) + lognorm -- Log-Normal + loguniform -- Log-Uniform + lomax -- Lomax (Pareto of the second kind) + maxwell -- Maxwell + mielke -- Mielke's Beta-Kappa + moyal -- Moyal + nakagami -- Nakagami + ncx2 -- Non-central chi-squared + ncf -- Non-central F + nct -- Non-central Student's T + norm -- Normal (Gaussian) + norminvgauss -- Normal Inverse Gaussian + pareto -- Pareto + pearson3 -- Pearson type III + powerlaw -- Power-function + powerlognorm -- Power log normal + powernorm -- Power normal + rdist -- R-distribution + rayleigh -- Rayleigh + rel_breitwigner -- Relativistic Breit-Wigner + rice -- Rice + recipinvgauss -- Reciprocal Inverse Gaussian + semicircular -- Semicircular + skewcauchy -- Skew Cauchy + skewnorm -- Skew normal + studentized_range -- Studentized Range + t -- Student's T + trapezoid -- Trapezoidal + triang -- Triangular + truncexpon -- Truncated Exponential + truncnorm -- Truncated Normal + truncpareto -- Truncated Pareto + truncweibull_min -- Truncated minimum Weibull distribution + tukeylambda -- Tukey-Lambda + uniform -- Uniform + vonmises -- Von-Mises (Circular) + vonmises_line -- Von-Mises (Line) + wald -- Wald + weibull_min -- Minimum Weibull (see Frechet) + weibull_max -- Maximum Weibull (see Frechet) + wrapcauchy -- Wrapped Cauchy + +The ``fit`` method of the univariate continuous distributions uses +maximum likelihood estimation to fit the distribution to a data set. +The ``fit`` method can accept regular data or *censored data*. +Censored data is represented with instances of the `CensoredData` +class. + +.. autosummary:: + :toctree: generated/ + + CensoredData + + +Multivariate distributions +-------------------------- + +.. autosummary:: + :toctree: generated/ + + multivariate_normal -- Multivariate normal distribution + matrix_normal -- Matrix normal distribution + dirichlet -- Dirichlet + dirichlet_multinomial -- Dirichlet multinomial distribution + wishart -- Wishart + invwishart -- Inverse Wishart + multinomial -- Multinomial distribution + special_ortho_group -- SO(N) group + ortho_group -- O(N) group + unitary_group -- U(N) group + random_correlation -- random correlation matrices + multivariate_t -- Multivariate t-distribution + multivariate_hypergeom -- Multivariate hypergeometric distribution + random_table -- Distribution of random tables with given marginals + uniform_direction -- Uniform distribution on S(N-1) + vonmises_fisher -- Von Mises-Fisher distribution + +`scipy.stats.multivariate_normal` methods accept instances +of the following class to represent the covariance. + +.. autosummary:: + :toctree: generated/ + + Covariance -- Representation of a covariance matrix + + +Discrete distributions +---------------------- + +.. 
autosummary:: + :toctree: generated/ + + bernoulli -- Bernoulli + betabinom -- Beta-Binomial + betanbinom -- Beta-Negative Binomial + binom -- Binomial + boltzmann -- Boltzmann (Truncated Discrete Exponential) + dlaplace -- Discrete Laplacian + geom -- Geometric + hypergeom -- Hypergeometric + logser -- Logarithmic (Log-Series, Series) + nbinom -- Negative Binomial + nchypergeom_fisher -- Fisher's Noncentral Hypergeometric + nchypergeom_wallenius -- Wallenius's Noncentral Hypergeometric + nhypergeom -- Negative Hypergeometric + planck -- Planck (Discrete Exponential) + poisson -- Poisson + randint -- Discrete Uniform + skellam -- Skellam + yulesimon -- Yule-Simon + zipf -- Zipf (Zeta) + zipfian -- Zipfian + + +An overview of statistical functions is given below. Many of these functions +have a similar version in `scipy.stats.mstats` which work for masked arrays. + +Summary statistics +================== + +.. autosummary:: + :toctree: generated/ + + describe -- Descriptive statistics + gmean -- Geometric mean + hmean -- Harmonic mean + pmean -- Power mean + kurtosis -- Fisher or Pearson kurtosis + mode -- Modal value + moment -- Central moment + expectile -- Expectile + skew -- Skewness + kstat -- + kstatvar -- + tmean -- Truncated arithmetic mean + tvar -- Truncated variance + tmin -- + tmax -- + tstd -- + tsem -- + variation -- Coefficient of variation + find_repeats + rankdata + tiecorrect + trim_mean + gstd -- Geometric Standard Deviation + iqr + sem + bayes_mvs + mvsdist + entropy + differential_entropy + median_abs_deviation + +Frequency statistics +==================== + +.. autosummary:: + :toctree: generated/ + + cumfreq + percentileofscore + scoreatpercentile + relfreq + +.. autosummary:: + :toctree: generated/ + + binned_statistic -- Compute a binned statistic for a set of data. + binned_statistic_2d -- Compute a 2-D binned statistic for a set of data. + binned_statistic_dd -- Compute a d-D binned statistic for a set of data. + +Hypothesis Tests and related functions +====================================== +SciPy has many functions for performing hypothesis tests that return a +test statistic and a p-value, and several of them return confidence intervals +and/or other related information. + +The headings below are based on common uses of the functions within, but due to +the wide variety of statistical procedures, any attempt at coarse-grained +categorization will be imperfect. Also, note that tests within the same heading +are not interchangeable in general (e.g. many have different distributional +assumptions). + +One Sample Tests / Paired Sample Tests +-------------------------------------- +One sample tests are typically used to assess whether a single sample was +drawn from a specified distribution or a distribution with specified properties +(e.g. zero mean). + +.. autosummary:: + :toctree: generated/ + + ttest_1samp + binomtest + quantile_test + skewtest + kurtosistest + normaltest + jarque_bera + shapiro + anderson + cramervonmises + ks_1samp + goodness_of_fit + chisquare + power_divergence + +Paired sample tests are often used to assess whether two samples were drawn +from the same distribution; they differ from the independent sample tests below +in that each observation in one sample is treated as paired with a +closely-related observation in the other sample (e.g. when environmental +factors are controlled between observations within a pair but not among pairs). +They can also be interpreted or used as one-sample tests (e.g. 
tests on the +mean or median of *differences* between paired observations). + +.. autosummary:: + :toctree: generated/ + + ttest_rel + wilcoxon + +Association/Correlation Tests +----------------------------- + +These tests are often used to assess whether there is a relationship (e.g. +linear) between paired observations in multiple samples or among the +coordinates of multivariate observations. + +.. autosummary:: + :toctree: generated/ + + linregress + pearsonr + spearmanr + pointbiserialr + kendalltau + weightedtau + somersd + siegelslopes + theilslopes + page_trend_test + multiscale_graphcorr + +These association tests and are to work with samples in the form of contingency +tables. Supporting functions are available in `scipy.stats.contingency`. + +.. autosummary:: + :toctree: generated/ + + chi2_contingency + fisher_exact + barnard_exact + boschloo_exact + +Independent Sample Tests +------------------------ +Independent sample tests are typically used to assess whether multiple samples +were independently drawn from the same distribution or different distributions +with a shared property (e.g. equal means). + +Some tests are specifically for comparing two samples. + +.. autosummary:: + :toctree: generated/ + + ttest_ind_from_stats + poisson_means_test + ttest_ind + mannwhitneyu + bws_test + ranksums + brunnermunzel + mood + ansari + cramervonmises_2samp + epps_singleton_2samp + ks_2samp + kstest + +Others are generalized to multiple samples. + +.. autosummary:: + :toctree: generated/ + + f_oneway + tukey_hsd + dunnett + kruskal + alexandergovern + fligner + levene + bartlett + median_test + friedmanchisquare + anderson_ksamp + +Resampling and Monte Carlo Methods +---------------------------------- +The following functions can reproduce the p-value and confidence interval +results of most of the functions above, and often produce accurate results in a +wider variety of conditions. They can also be used to perform hypothesis tests +and generate confidence intervals for custom statistics. This flexibility comes +at the cost of greater computational requirements and stochastic results. + +.. autosummary:: + :toctree: generated/ + + monte_carlo_test + permutation_test + bootstrap + +Instances of the following object can be passed into some hypothesis test +functions to perform a resampling or Monte Carlo version of the hypothesis +test. + +.. autosummary:: + :toctree: generated/ + + MonteCarloMethod + PermutationMethod + BootstrapMethod + +Multiple Hypothesis Testing and Meta-Analysis +--------------------------------------------- +These functions are for assessing the results of individual tests as a whole. +Functions for performing specific multiple hypothesis tests (e.g. post hoc +tests) are listed above. + +.. autosummary:: + :toctree: generated/ + + combine_pvalues + false_discovery_control + + +The following functions are related to the tests above but do not belong in the +above categories. + +Quasi-Monte Carlo +================= + +.. toctree:: + :maxdepth: 4 + + stats.qmc + +Contingency Tables +================== + +.. toctree:: + :maxdepth: 4 + + stats.contingency + +Masked statistics functions +=========================== + +.. toctree:: + + stats.mstats + + +Other statistical functionality +=============================== + +Transformations +--------------- + +.. 
autosummary:: + :toctree: generated/ + + boxcox + boxcox_normmax + boxcox_llf + yeojohnson + yeojohnson_normmax + yeojohnson_llf + obrientransform + sigmaclip + trimboth + trim1 + zmap + zscore + gzscore + +Statistical distances +--------------------- + +.. autosummary:: + :toctree: generated/ + + wasserstein_distance + wasserstein_distance_nd + energy_distance + +Sampling +-------- + +.. toctree:: + :maxdepth: 4 + + stats.sampling + +Random variate generation / CDF Inversion +----------------------------------------- + +.. autosummary:: + :toctree: generated/ + + rvs_ratio_uniforms + +Fitting / Survival Analysis +--------------------------- + +.. autosummary:: + :toctree: generated/ + + fit + ecdf + logrank + +Directional statistical functions +--------------------------------- + +.. autosummary:: + :toctree: generated/ + + directional_stats + circmean + circvar + circstd + +Sensitivity Analysis +-------------------- + +.. autosummary:: + :toctree: generated/ + + sobol_indices + +Plot-tests +---------- + +.. autosummary:: + :toctree: generated/ + + ppcc_max + ppcc_plot + probplot + boxcox_normplot + yeojohnson_normplot + +Univariate and multivariate kernel density estimation +----------------------------------------------------- + +.. autosummary:: + :toctree: generated/ + + gaussian_kde + +Warnings / Errors used in :mod:`scipy.stats` +-------------------------------------------- + +.. autosummary:: + :toctree: generated/ + + DegenerateDataWarning + ConstantInputWarning + NearConstantInputWarning + FitError + +Result classes used in :mod:`scipy.stats` +----------------------------------------- + +.. warning:: + + These classes are private, but they are included here because instances + of them are returned by other statistical functions. User import and + instantiation is not supported. + +.. toctree:: + :maxdepth: 2 + + stats._result_classes + +""" # noqa: E501 + +from ._warnings_errors import (ConstantInputWarning, NearConstantInputWarning, + DegenerateDataWarning, FitError) +from ._stats_py import * +from ._variation import variation +from .distributions import * +from ._morestats import * +from ._multicomp import * +from ._binomtest import binomtest +from ._binned_statistic import * +from ._kde import gaussian_kde +from . import mstats +from . import qmc +from ._multivariate import * +from . import contingency +from .contingency import chi2_contingency +from ._censored_data import CensoredData +from ._resampling import (bootstrap, monte_carlo_test, permutation_test, + MonteCarloMethod, PermutationMethod, BootstrapMethod) +from ._entropy import * +from ._hypotests import * +from ._rvs_sampling import rvs_ratio_uniforms +from ._page_trend_test import page_trend_test +from ._mannwhitneyu import mannwhitneyu +from ._bws_test import bws_test +from ._fit import fit, goodness_of_fit +from ._covariance import Covariance +from ._sensitivity_analysis import * +from ._survival import * + +# Deprecated namespaces, to be removed in v2.0.0 +from . import ( + biasedurn, kde, morestats, mstats_basic, mstats_extras, mvn, stats +) + + +__all__ = [s for s in dir() if not s.startswith("_")] # Remove dunders. 
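As an editorial aside (a hedged usage sketch, not part of the scipy module added above), the main entry points documented in the docstring can be exercised like this; the numbers are illustrative only::

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12345)
    sample = stats.norm(loc=0.5, scale=2.0).rvs(size=200, random_state=rng)

    # Summary statistics and a one-sample test (see "Summary statistics" and
    # "One Sample Tests / Paired Sample Tests" above).
    print(stats.describe(sample))
    res = stats.ttest_1samp(sample, popmean=0.0)
    print(res.statistic, res.pvalue)

    # Resampling-based confidence interval for the mean (see "Resampling and
    # Monte Carlo Methods" above).
    ci = stats.bootstrap((sample,), np.mean,
                         confidence_level=0.95, random_state=rng)
    print(ci.confidence_interval.low, ci.confidence_interval.high)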
+ +from scipy._lib._testutils import PytestTester +test = PytestTester(__name__) +del PytestTester diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_ansari_swilk_statistics.cpython-310-x86_64-linux-gnu.so b/venv/lib/python3.10/site-packages/scipy/stats/_ansari_swilk_statistics.cpython-310-x86_64-linux-gnu.so new file mode 100644 index 0000000000000000000000000000000000000000..6fdec6e54d13b20068131ae5ded931faeef44f70 Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/stats/_ansari_swilk_statistics.cpython-310-x86_64-linux-gnu.so differ diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_axis_nan_policy.py b/venv/lib/python3.10/site-packages/scipy/stats/_axis_nan_policy.py new file mode 100644 index 0000000000000000000000000000000000000000..b83274df7ec044b51fb5b378459e0c7d7e063af4 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_axis_nan_policy.py @@ -0,0 +1,642 @@ +# Many scipy.stats functions support `axis` and `nan_policy` parameters. +# When the two are combined, it can be tricky to get all the behavior just +# right. This file contains utility functions useful for scipy.stats functions +# that support `axis` and `nan_policy`, including a decorator that +# automatically adds `axis` and `nan_policy` arguments to a function. + +import numpy as np +from functools import wraps +from scipy._lib._docscrape import FunctionDoc, Parameter +from scipy._lib._util import _contains_nan, AxisError, _get_nan +import inspect + + +def _broadcast_arrays(arrays, axis=None): + """ + Broadcast shapes of arrays, ignoring incompatibility of specified axes + """ + new_shapes = _broadcast_array_shapes(arrays, axis=axis) + if axis is None: + new_shapes = [new_shapes]*len(arrays) + return [np.broadcast_to(array, new_shape) + for array, new_shape in zip(arrays, new_shapes)] + + +def _broadcast_array_shapes(arrays, axis=None): + """ + Broadcast shapes of arrays, ignoring incompatibility of specified axes + """ + shapes = [np.asarray(arr).shape for arr in arrays] + return _broadcast_shapes(shapes, axis) + + +def _broadcast_shapes(shapes, axis=None): + """ + Broadcast shapes, ignoring incompatibility of specified axes + """ + if not shapes: + return shapes + + # input validation + if axis is not None: + axis = np.atleast_1d(axis) + axis_int = axis.astype(int) + if not np.array_equal(axis_int, axis): + raise AxisError('`axis` must be an integer, a ' + 'tuple of integers, or `None`.') + axis = axis_int + + # First, ensure all shapes have same number of dimensions by prepending 1s. + n_dims = max([len(shape) for shape in shapes]) + new_shapes = np.ones((len(shapes), n_dims), dtype=int) + for row, shape in zip(new_shapes, shapes): + row[len(row)-len(shape):] = shape # can't use negative indices (-0:) + + # Remove the shape elements of the axes to be ignored, but remember them. + if axis is not None: + axis[axis < 0] = n_dims + axis[axis < 0] + axis = np.sort(axis) + if axis[-1] >= n_dims or axis[0] < 0: + message = (f"`axis` is out of bounds " + f"for array of dimension {n_dims}") + raise AxisError(message) + + if len(np.unique(axis)) != len(axis): + raise AxisError("`axis` must contain only distinct elements") + + removed_shapes = new_shapes[:, axis] + new_shapes = np.delete(new_shapes, axis, axis=1) + + # If arrays are broadcastable, shape elements that are 1 may be replaced + # with a corresponding non-1 shape element. 
Assuming arrays are + # broadcastable, that final shape element can be found with: + new_shape = np.max(new_shapes, axis=0) + # except in case of an empty array: + new_shape *= new_shapes.all(axis=0) + + # Among all arrays, there can only be one unique non-1 shape element. + # Therefore, if any non-1 shape element does not match what we found + # above, the arrays must not be broadcastable after all. + if np.any(~((new_shapes == 1) | (new_shapes == new_shape))): + raise ValueError("Array shapes are incompatible for broadcasting.") + + if axis is not None: + # Add back the shape elements that were ignored + new_axis = axis - np.arange(len(axis)) + new_shapes = [tuple(np.insert(new_shape, new_axis, removed_shape)) + for removed_shape in removed_shapes] + return new_shapes + else: + return tuple(new_shape) + + +def _broadcast_array_shapes_remove_axis(arrays, axis=None): + """ + Broadcast shapes of arrays, dropping specified axes + + Given a sequence of arrays `arrays` and an integer or tuple `axis`, find + the shape of the broadcast result after consuming/dropping `axis`. + In other words, return output shape of a typical hypothesis test on + `arrays` vectorized along `axis`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats._axis_nan_policy import _broadcast_array_shapes_remove_axis + >>> a = np.zeros((5, 2, 1)) + >>> b = np.zeros((9, 3)) + >>> _broadcast_array_shapes_remove_axis((a, b), 1) + (5, 3) + """ + # Note that here, `axis=None` means do not consume/drop any axes - _not_ + # ravel arrays before broadcasting. + shapes = [arr.shape for arr in arrays] + return _broadcast_shapes_remove_axis(shapes, axis) + + +def _broadcast_shapes_remove_axis(shapes, axis=None): + """ + Broadcast shapes, dropping specified axes + + Same as _broadcast_array_shapes_remove_axis, but given a sequence + of array shapes `shapes` instead of the arrays themselves.
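    As an editorial illustration (hedged, not part of the upstream docstring),
    the shape-only variant accepts the shapes directly and gives the same
    result as the example above:

    >>> from scipy.stats._axis_nan_policy import _broadcast_shapes_remove_axis
    >>> _broadcast_shapes_remove_axis([(5, 2, 1), (9, 3)], axis=1)
    (5, 3)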
+ """ + shapes = _broadcast_shapes(shapes, axis) + shape = shapes[0] + if axis is not None: + shape = np.delete(shape, axis) + return tuple(shape) + + +def _broadcast_concatenate(arrays, axis, paired=False): + """Concatenate arrays along an axis with broadcasting.""" + arrays = _broadcast_arrays(arrays, axis if not paired else None) + res = np.concatenate(arrays, axis=axis) + return res + + +# TODO: add support for `axis` tuples +def _remove_nans(samples, paired): + "Remove nans from paired or unpaired 1D samples" + # potential optimization: don't copy arrays that don't contain nans + if not paired: + return [sample[~np.isnan(sample)] for sample in samples] + + # for paired samples, we need to remove the whole pair when any part + # has a nan + nans = np.isnan(samples[0]) + for sample in samples[1:]: + nans = nans | np.isnan(sample) + not_nans = ~nans + return [sample[not_nans] for sample in samples] + + +def _remove_sentinel(samples, paired, sentinel): + "Remove sentinel values from paired or unpaired 1D samples" + # could consolidate with `_remove_nans`, but it's not quite as simple as + # passing `sentinel=np.nan` because `(np.nan == np.nan) is False` + + # potential optimization: don't copy arrays that don't contain sentinel + if not paired: + return [sample[sample != sentinel] for sample in samples] + + # for paired samples, we need to remove the whole pair when any part + # has a nan + sentinels = (samples[0] == sentinel) + for sample in samples[1:]: + sentinels = sentinels | (sample == sentinel) + not_sentinels = ~sentinels + return [sample[not_sentinels] for sample in samples] + + +def _masked_arrays_2_sentinel_arrays(samples): + # masked arrays in `samples` are converted to regular arrays, and values + # corresponding with masked elements are replaced with a sentinel value + + # return without modifying arrays if none have a mask + has_mask = False + for sample in samples: + mask = getattr(sample, 'mask', False) + has_mask = has_mask or np.any(mask) + if not has_mask: + return samples, None # None means there is no sentinel value + + # Choose a sentinel value. We can't use `np.nan`, because sentinel (masked) + # values are always omitted, but there are different nan policies. + dtype = np.result_type(*samples) + dtype = dtype if np.issubdtype(dtype, np.number) else np.float64 + for i in range(len(samples)): + # Things get more complicated if the arrays are of different types. + # We could have different sentinel values for each array, but + # the purpose of this code is convenience, not efficiency. + samples[i] = samples[i].astype(dtype, copy=False) + + inexact = np.issubdtype(dtype, np.inexact) + info = np.finfo if inexact else np.iinfo + max_possible, min_possible = info(dtype).max, info(dtype).min + nextafter = np.nextafter if inexact else (lambda x, _: x - 1) + + sentinel = max_possible + # For simplicity, min_possible/np.infs are not candidate sentinel values + while sentinel > min_possible: + for sample in samples: + if np.any(sample == sentinel): # choose a new sentinel value + sentinel = nextafter(sentinel, -np.inf) + break + else: # when sentinel value is OK, break the while loop + break + else: + message = ("This function replaces masked elements with sentinel " + "values, but the data contains all distinct values of this " + "data type. 
Consider promoting the dtype to `np.float64`.") + raise ValueError(message) + + # replace masked elements with sentinel value + out_samples = [] + for sample in samples: + mask = getattr(sample, 'mask', None) + if mask is not None: # turn all masked arrays into sentinel arrays + mask = np.broadcast_to(mask, sample.shape) + sample = sample.data.copy() if np.any(mask) else sample.data + sample = np.asarray(sample) # `sample.data` could be a memoryview? + sample[mask] = sentinel + out_samples.append(sample) + + return out_samples, sentinel + + +def _check_empty_inputs(samples, axis): + """ + Check for empty sample; return appropriate output for a vectorized hypotest + """ + # if none of the samples are empty, we need to perform the test + if not any(sample.size == 0 for sample in samples): + return None + # otherwise, the statistic and p-value will be either empty arrays or + # arrays with NaNs. Produce the appropriate array and return it. + output_shape = _broadcast_array_shapes_remove_axis(samples, axis) + output = np.ones(output_shape) * _get_nan(*samples) + return output + + +def _add_reduced_axes(res, reduced_axes, keepdims): + """ + Add reduced axes back to all the arrays in the result object + if keepdims = True. + """ + return ([np.expand_dims(output, reduced_axes) for output in res] + if keepdims else res) + + +# Standard docstring / signature entries for `axis`, `nan_policy`, `keepdims` +_name = 'axis' +_desc = ( + """If an int, the axis of the input along which to compute the statistic. +The statistic of each axis-slice (e.g. row) of the input will appear in a +corresponding element of the output. +If ``None``, the input will be raveled before computing the statistic.""" + .split('\n')) + + +def _get_axis_params(default_axis=0, _name=_name, _desc=_desc): # bind NOW + _type = f"int or None, default: {default_axis}" + _axis_parameter_doc = Parameter(_name, _type, _desc) + _axis_parameter = inspect.Parameter(_name, + inspect.Parameter.KEYWORD_ONLY, + default=default_axis) + return _axis_parameter_doc, _axis_parameter + + +_name = 'nan_policy' +_type = "{'propagate', 'omit', 'raise'}" +_desc = ( + """Defines how to handle input NaNs. + +- ``propagate``: if a NaN is present in the axis slice (e.g. row) along + which the statistic is computed, the corresponding entry of the output + will be NaN. +- ``omit``: NaNs will be omitted when performing the calculation. + If insufficient data remains in the axis slice along which the + statistic is computed, the corresponding entry of the output will be + NaN. +- ``raise``: if a NaN is present, a ``ValueError`` will be raised.""" + .split('\n')) +_nan_policy_parameter_doc = Parameter(_name, _type, _desc) +_nan_policy_parameter = inspect.Parameter(_name, + inspect.Parameter.KEYWORD_ONLY, + default='propagate') + +_name = 'keepdims' +_type = "bool, default: False" +_desc = ( + """If this is set to True, the axes which are reduced are left +in the result as dimensions with size one. With this option, +the result will broadcast correctly against the input array.""" + .split('\n')) +_keepdims_parameter_doc = Parameter(_name, _type, _desc) +_keepdims_parameter = inspect.Parameter(_name, + inspect.Parameter.KEYWORD_ONLY, + default=False) + +_standard_note_addition = ( + """\nBeginning in SciPy 1.9, ``np.matrix`` inputs (not recommended for new +code) are converted to ``np.ndarray`` before the calculation is performed. In +this case, the output will be a scalar or ``np.ndarray`` of appropriate shape +rather than a 2D ``np.matrix``. 
Similarly, while masked elements of masked +arrays are ignored, the output will be a scalar or ``np.ndarray`` rather than a +masked array with ``mask=False``.""").split('\n') + + +def _axis_nan_policy_factory(tuple_to_result, default_axis=0, + n_samples=1, paired=False, + result_to_tuple=None, too_small=0, + n_outputs=2, kwd_samples=[], override=None): + """Factory for a wrapper that adds axis/nan_policy params to a function. + + Parameters + ---------- + tuple_to_result : callable + Callable that returns an object of the type returned by the function + being wrapped (e.g. the namedtuple or dataclass returned by a + statistical test) provided the separate components (e.g. statistic, + pvalue). + default_axis : int, default: 0 + The default value of the axis argument. Standard is 0 except when + backwards compatibility demands otherwise (e.g. `None`). + n_samples : int or callable, default: 1 + The number of data samples accepted by the function + (e.g. `mannwhitneyu`), a callable that accepts a dictionary of + parameters passed into the function and returns the number of data + samples (e.g. `wilcoxon`), or `None` to indicate an arbitrary number + of samples (e.g. `kruskal`). + paired : {False, True} + Whether the function being wrapped treats the samples as paired (i.e. + corresponding elements of each sample should be considered as different + components of the same sample.) + result_to_tuple : callable, optional + Function that unpacks the results of the function being wrapped into + a tuple. This is essentially the inverse of `tuple_to_result`. Default + is `None`, which is appropriate for statistical tests that return a + statistic, pvalue tuple (rather than, e.g., a non-iterable dataclass). + too_small : int or callable, default: 0 + The largest unacceptably small sample for the function being wrapped. + For example, some functions require samples of size two or more or they + raise an error. This argument prevents the error from being raised when + input is not 1D and instead places a NaN in the corresponding element + of the result. If callable, it must accept a list of samples, axis, + and a dictionary of keyword arguments passed to the wrapper function as + arguments and return a bool indicating whether the samples passed are + too small. + n_outputs : int or callable, default: 2 + The number of outputs produced by the function given 1d sample(s). For + example, hypothesis tests that return a namedtuple or result object + with attributes ``statistic`` and ``pvalue`` use the default + ``n_outputs=2``; summary statistics with scalar output use + ``n_outputs=1``. Alternatively, may be a callable that accepts a + dictionary of arguments passed into the wrapped function and returns + the number of outputs corresponding with those arguments. + kwd_samples : sequence, default: [] + The names of keyword parameters that should be treated as samples. For + example, `gmean` accepts as its first argument a sample `a` but + also `weights` as a fourth, optional keyword argument. In this case, we + use `n_samples=1` and kwd_samples=['weights']. + override : dict, default: {'vectorization': False, 'nan_propagation': True} + Pass a dictionary with ``'vectorization': True`` to ensure that the + decorator overrides the function's behavior for multidimensional input. + Use ``'nan_propagation': False`` to ensure that the decorator does not + override the function's behavior for ``nan_policy='propagate'``. + (See `scipy.stats.mode`, for example.)
+ """ + # Specify which existing behaviors the decorator must override + temp = override or {} + override = {'vectorization': False, + 'nan_propagation': True} + override.update(temp) + + if result_to_tuple is None: + def result_to_tuple(res): + return res + + if not callable(too_small): + def is_too_small(samples, *ts_args, axis=-1, **ts_kwargs): + for sample in samples: + if sample.shape[axis] <= too_small: + return True + return False + else: + is_too_small = too_small + + def axis_nan_policy_decorator(hypotest_fun_in): + @wraps(hypotest_fun_in) + def axis_nan_policy_wrapper(*args, _no_deco=False, **kwds): + + if _no_deco: # for testing, decorator does nothing + return hypotest_fun_in(*args, **kwds) + + # We need to be flexible about whether position or keyword + # arguments are used, but we need to make sure users don't pass + # both for the same parameter. To complicate matters, some + # functions accept samples with *args, and some functions already + # accept `axis` and `nan_policy` as positional arguments. + # The strategy is to make sure that there is no duplication + # between `args` and `kwds`, combine the two into `kwds`, then + # the samples, `nan_policy`, and `axis` from `kwds`, as they are + # dealt with separately. + + # Check for intersection between positional and keyword args + params = list(inspect.signature(hypotest_fun_in).parameters) + if n_samples is None: + # Give unique names to each positional sample argument + # Note that *args can't be provided as a keyword argument + params = [f"arg{i}" for i in range(len(args))] + params[1:] + + # raise if there are too many positional args + maxarg = (np.inf if inspect.getfullargspec(hypotest_fun_in).varargs + else len(inspect.getfullargspec(hypotest_fun_in).args)) + if len(args) > maxarg: # let the function raise the right error + hypotest_fun_in(*args, **kwds) + + # raise if multiple values passed for same parameter + d_args = dict(zip(params, args)) + intersection = set(d_args) & set(kwds) + if intersection: # let the function raise the right error + hypotest_fun_in(*args, **kwds) + + # Consolidate other positional and keyword args into `kwds` + kwds.update(d_args) + + # rename avoids UnboundLocalError + if callable(n_samples): + # Future refactoring idea: no need for callable n_samples. + # Just replace `n_samples` and `kwd_samples` with a single + # list of the names of all samples, and treat all of them + # as `kwd_samples` are treated below. 
+ n_samp = n_samples(kwds) + else: + n_samp = n_samples or len(args) + + # get the number of outputs + n_out = n_outputs # rename to avoid UnboundLocalError + if callable(n_out): + n_out = n_out(kwds) + + # If necessary, rearrange function signature: accept other samples + # as positional args right after the first n_samp args + kwd_samp = [name for name in kwd_samples + if kwds.get(name, None) is not None] + n_kwd_samp = len(kwd_samp) + if not kwd_samp: + hypotest_fun_out = hypotest_fun_in + else: + def hypotest_fun_out(*samples, **kwds): + new_kwds = dict(zip(kwd_samp, samples[n_samp:])) + kwds.update(new_kwds) + return hypotest_fun_in(*samples[:n_samp], **kwds) + + # Extract the things we need here + try: # if something is missing + samples = [np.atleast_1d(kwds.pop(param)) + for param in (params[:n_samp] + kwd_samp)] + except KeyError: # let the function raise the right error + # might need to revisit this if required arg is not a "sample" + hypotest_fun_in(*args, **kwds) + vectorized = True if 'axis' in params else False + vectorized = vectorized and not override['vectorization'] + axis = kwds.pop('axis', default_axis) + nan_policy = kwds.pop('nan_policy', 'propagate') + keepdims = kwds.pop("keepdims", False) + del args # avoid the possibility of passing both `args` and `kwds` + + # convert masked arrays to regular arrays with sentinel values + samples, sentinel = _masked_arrays_2_sentinel_arrays(samples) + + # standardize to always work along last axis + reduced_axes = axis + if axis is None: + if samples: + # when axis=None, take the maximum of all dimensions since + # all the dimensions are reduced. + n_dims = np.max([sample.ndim for sample in samples]) + reduced_axes = tuple(range(n_dims)) + samples = [np.asarray(sample.ravel()) for sample in samples] + else: + samples = _broadcast_arrays(samples, axis=axis) + axis = np.atleast_1d(axis) + n_axes = len(axis) + # move all axes in `axis` to the end to be raveled + samples = [np.moveaxis(sample, axis, range(-len(axis), 0)) + for sample in samples] + shapes = [sample.shape for sample in samples] + # New shape is unchanged for all axes _not_ in `axis` + # At the end, we append the product of the shapes of the axes + # in `axis`. Appending -1 doesn't work for zero-size arrays! 
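                # For example, a sample of shape (2, 3, 4, 5) with axis=(1, 3)
                # has been moved to shape (2, 4, 3, 5) above; the reshape below
                # then yields (2, 4, 15), so the statistic is computed over a
                # single trailing axis of length 3*5.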
+ new_shapes = [shape[:-n_axes] + (np.prod(shape[-n_axes:]),) + for shape in shapes] + samples = [sample.reshape(new_shape) + for sample, new_shape in zip(samples, new_shapes)] + axis = -1 # work over the last axis + NaN = _get_nan(*samples) + + # if axis is not needed, just handle nan_policy and return + ndims = np.array([sample.ndim for sample in samples]) + if np.all(ndims <= 1): + # Addresses nan_policy == "raise" + if nan_policy != 'propagate' or override['nan_propagation']: + contains_nan = [_contains_nan(sample, nan_policy)[0] + for sample in samples] + else: + # Behave as though there are no NaNs (even if there are) + contains_nan = [False]*len(samples) + + # Addresses nan_policy == "propagate" + if any(contains_nan) and (nan_policy == 'propagate' + and override['nan_propagation']): + res = np.full(n_out, NaN) + res = _add_reduced_axes(res, reduced_axes, keepdims) + return tuple_to_result(*res) + + # Addresses nan_policy == "omit" + if any(contains_nan) and nan_policy == 'omit': + # consider passing in contains_nan + samples = _remove_nans(samples, paired) + + # ideally, this is what the behavior would be: + # if is_too_small(samples): + # return tuple_to_result(NaN, NaN) + # but some existing functions raise exceptions, and changing + # behavior of those would break backward compatibility. + + if sentinel: + samples = _remove_sentinel(samples, paired, sentinel) + res = hypotest_fun_out(*samples, **kwds) + res = result_to_tuple(res) + res = _add_reduced_axes(res, reduced_axes, keepdims) + return tuple_to_result(*res) + + # check for empty input + # ideally, move this to the top, but some existing functions raise + # exceptions for empty input, so overriding it would break + # backward compatibility. + empty_output = _check_empty_inputs(samples, axis) + # only return empty output if zero sized input is too small. 
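            # For example, a single sample of shape (3, 0) reduced along the
            # last axis gives empty_output of shape (3,) filled with NaN, which
            # is returned here when zero observations per slice is "too small"
            # for the wrapped function; a sample of shape (0, 5) gives
            # empty_output of shape (0,), which is returned regardless since
            # there is nothing to compute.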
+ if ( + empty_output is not None + and (is_too_small(samples, kwds) or empty_output.size == 0) + ): + res = [empty_output.copy() for i in range(n_out)] + res = _add_reduced_axes(res, reduced_axes, keepdims) + return tuple_to_result(*res) + + # otherwise, concatenate all samples along axis, remembering where + # each separate sample begins + lengths = np.array([sample.shape[axis] for sample in samples]) + split_indices = np.cumsum(lengths) + x = _broadcast_concatenate(samples, axis) + + # Addresses nan_policy == "raise" + if nan_policy != 'propagate' or override['nan_propagation']: + contains_nan, _ = _contains_nan(x, nan_policy) + else: + contains_nan = False # behave like there are no NaNs + + if vectorized and not contains_nan and not sentinel: + res = hypotest_fun_out(*samples, axis=axis, **kwds) + res = result_to_tuple(res) + res = _add_reduced_axes(res, reduced_axes, keepdims) + return tuple_to_result(*res) + + # Addresses nan_policy == "omit" + if contains_nan and nan_policy == 'omit': + def hypotest_fun(x): + samples = np.split(x, split_indices)[:n_samp+n_kwd_samp] + samples = _remove_nans(samples, paired) + if sentinel: + samples = _remove_sentinel(samples, paired, sentinel) + if is_too_small(samples, kwds): + return np.full(n_out, NaN) + return result_to_tuple(hypotest_fun_out(*samples, **kwds)) + + # Addresses nan_policy == "propagate" + elif (contains_nan and nan_policy == 'propagate' + and override['nan_propagation']): + def hypotest_fun(x): + if np.isnan(x).any(): + return np.full(n_out, NaN) + + samples = np.split(x, split_indices)[:n_samp+n_kwd_samp] + if sentinel: + samples = _remove_sentinel(samples, paired, sentinel) + if is_too_small(samples, kwds): + return np.full(n_out, NaN) + return result_to_tuple(hypotest_fun_out(*samples, **kwds)) + + else: + def hypotest_fun(x): + samples = np.split(x, split_indices)[:n_samp+n_kwd_samp] + if sentinel: + samples = _remove_sentinel(samples, paired, sentinel) + if is_too_small(samples, kwds): + return np.full(n_out, NaN) + return result_to_tuple(hypotest_fun_out(*samples, **kwds)) + + x = np.moveaxis(x, axis, 0) + res = np.apply_along_axis(hypotest_fun, axis=0, arr=x) + res = _add_reduced_axes(res, reduced_axes, keepdims) + return tuple_to_result(*res) + + _axis_parameter_doc, _axis_parameter = _get_axis_params(default_axis) + doc = FunctionDoc(axis_nan_policy_wrapper) + parameter_names = [param.name for param in doc['Parameters']] + if 'axis' in parameter_names: + doc['Parameters'][parameter_names.index('axis')] = ( + _axis_parameter_doc) + else: + doc['Parameters'].append(_axis_parameter_doc) + if 'nan_policy' in parameter_names: + doc['Parameters'][parameter_names.index('nan_policy')] = ( + _nan_policy_parameter_doc) + else: + doc['Parameters'].append(_nan_policy_parameter_doc) + if 'keepdims' in parameter_names: + doc['Parameters'][parameter_names.index('keepdims')] = ( + _keepdims_parameter_doc) + else: + doc['Parameters'].append(_keepdims_parameter_doc) + doc['Notes'] += _standard_note_addition + doc = str(doc).split("\n", 1)[1] # remove signature + axis_nan_policy_wrapper.__doc__ = str(doc) + + sig = inspect.signature(axis_nan_policy_wrapper) + parameters = sig.parameters + parameter_list = list(parameters.values()) + if 'axis' not in parameters: + parameter_list.append(_axis_parameter) + if 'nan_policy' not in parameters: + parameter_list.append(_nan_policy_parameter) + if 'keepdims' not in parameters: + parameter_list.append(_keepdims_parameter) + sig = sig.replace(parameters=parameter_list) + 
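        # Attaching the rebuilt signature ensures that introspection tools such
        # as help() and inspect.signature() report the keyword-only `axis`,
        # `nan_policy` and `keepdims` parameters handled by the wrapper.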
axis_nan_policy_wrapper.__signature__ = sig + + return axis_nan_policy_wrapper + return axis_nan_policy_decorator diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_biasedurn.cpython-310-x86_64-linux-gnu.so b/venv/lib/python3.10/site-packages/scipy/stats/_biasedurn.cpython-310-x86_64-linux-gnu.so new file mode 100644 index 0000000000000000000000000000000000000000..beb4bcc89a74e31bd7b49b3f5720c516c059a97c Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/stats/_biasedurn.cpython-310-x86_64-linux-gnu.so differ diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_bws_test.py b/venv/lib/python3.10/site-packages/scipy/stats/_bws_test.py new file mode 100644 index 0000000000000000000000000000000000000000..6496ecfba798dc7ad719f784a57896e296590675 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_bws_test.py @@ -0,0 +1,177 @@ +import numpy as np +from functools import partial +from scipy import stats + + +def _bws_input_validation(x, y, alternative, method): + ''' Input validation and standardization for bws test''' + x, y = np.atleast_1d(x, y) + if x.ndim > 1 or y.ndim > 1: + raise ValueError('`x` and `y` must be exactly one-dimensional.') + if np.isnan(x).any() or np.isnan(y).any(): + raise ValueError('`x` and `y` must not contain NaNs.') + if np.size(x) == 0 or np.size(y) == 0: + raise ValueError('`x` and `y` must be of nonzero size.') + + z = stats.rankdata(np.concatenate((x, y))) + x, y = z[:len(x)], z[len(x):] + + alternatives = {'two-sided', 'less', 'greater'} + alternative = alternative.lower() + if alternative not in alternatives: + raise ValueError(f'`alternative` must be one of {alternatives}.') + + method = stats.PermutationMethod() if method is None else method + if not isinstance(method, stats.PermutationMethod): + raise ValueError('`method` must be an instance of ' + '`scipy.stats.PermutationMethod`') + + return x, y, alternative, method + + +def _bws_statistic(x, y, alternative, axis): + '''Compute the BWS test statistic for two independent samples''' + # Public function currently does not accept `axis`, but `permutation_test` + # uses `axis` to make vectorized call. + + Ri, Hj = np.sort(x, axis=axis), np.sort(y, axis=axis) + n, m = Ri.shape[axis], Hj.shape[axis] + i, j = np.arange(1, n+1), np.arange(1, m+1) + + Bx_num = Ri - (m + n)/n * i + By_num = Hj - (m + n)/m * j + + if alternative == 'two-sided': + Bx_num *= Bx_num + By_num *= By_num + else: + Bx_num *= np.abs(Bx_num) + By_num *= np.abs(By_num) + + Bx_den = i/(n+1) * (1 - i/(n+1)) * m*(m+n)/n + By_den = j/(m+1) * (1 - j/(m+1)) * n*(m+n)/m + + Bx = 1/n * np.sum(Bx_num/Bx_den, axis=axis) + By = 1/m * np.sum(By_num/By_den, axis=axis) + + B = (Bx + By) / 2 if alternative == 'two-sided' else (Bx - By) / 2 + + return B + + +def bws_test(x, y, *, alternative="two-sided", method=None): + r'''Perform the Baumgartner-Weiss-Schindler test on two independent samples. + + The Baumgartner-Weiss-Schindler (BWS) test is a nonparametric test of + the null hypothesis that the distribution underlying sample `x` + is the same as the distribution underlying sample `y`. Unlike + the Kolmogorov-Smirnov, Wilcoxon, and Cramer-Von Mises tests, + the BWS test weights the integral by the variance of the difference + in cumulative distribution functions (CDFs), emphasizing the tails of the + distributions, which increases the power of the test in many applications. + + Parameters + ---------- + x, y : array-like + 1-d arrays of samples. 
+ alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + Let *F(u)* and *G(u)* be the cumulative distribution functions of the + distributions underlying `x` and `y`, respectively. Then the following + alternative hypotheses are available: + + * 'two-sided': the distributions are not equal, i.e. *F(u) ≠ G(u)* for + at least one *u*. + * 'less': the distribution underlying `x` is stochastically less than + the distribution underlying `y`, i.e. *F(u) >= G(u)* for all *u*. + * 'greater': the distribution underlying `x` is stochastically greater + than the distribution underlying `y`, i.e. *F(u) <= G(u)* for all + *u*. + + Under a more restrictive set of assumptions, the alternative hypotheses + can be expressed in terms of the locations of the distributions; + see [2] section 5.1. + method : PermutationMethod, optional + Configures the method used to compute the p-value. The default is + the default `PermutationMethod` object. + + Returns + ------- + res : PermutationTestResult + An object with attributes: + + statistic : float + The observed test statistic of the data. + pvalue : float + The p-value for the given alternative. + null_distribution : ndarray + The values of the test statistic generated under the null hypothesis. + + See also + -------- + scipy.stats.wilcoxon, scipy.stats.mannwhitneyu, scipy.stats.ttest_ind + + Notes + ----- + When ``alternative=='two-sided'``, the statistic is defined by the + equations given in [1]_ Section 2. This statistic is not appropriate for + one-sided alternatives; in that case, the statistic is the *negative* of + that given by the equations in [1]_ Section 2. Consequently, when the + distribution of the first sample is stochastically greater than that of the + second sample, the statistic will tend to be positive. + + References + ---------- + .. [1] Neuhäuser, M. (2005). Exact Tests Based on the + Baumgartner-Weiss-Schindler Statistic: A Survey. Statistical Papers, + 46(1), 1-29. + .. [2] Fay, M. P., & Proschan, M. A. (2010). Wilcoxon-Mann-Whitney or t-test? + On assumptions for hypothesis tests and multiple interpretations of + decision rules. Statistics surveys, 4, 1. + + Examples + -------- + We follow the example of table 3 in [1]_: Fourteen children were divided + randomly into two groups. Their ranks at performing a specific tests are + as follows. + + >>> import numpy as np + >>> x = [1, 2, 3, 4, 6, 7, 8] + >>> y = [5, 9, 10, 11, 12, 13, 14] + + We use the BWS test to assess whether there is a statistically significant + difference between the two groups. + The null hypothesis is that there is no difference in the distributions of + performance between the two groups. We decide that a significance level of + 1% is required to reject the null hypothesis in favor of the alternative + that the distributions are different. + Since the number of samples is very small, we can compare the observed test + statistic against the *exact* distribution of the test statistic under the + null hypothesis. + + >>> from scipy.stats import bws_test + >>> res = bws_test(x, y) + >>> print(res.statistic) + 5.132167152575315 + + This agrees with :math:`B = 5.132` reported in [1]_. The *p*-value produced + by `bws_test` also agrees with :math:`p = 0.0029` reported in [1]_. 
+ + >>> print(res.pvalue) + 0.002913752913752914 + + Because the p-value is below our threshold of 1%, we take this as evidence + against the null hypothesis in favor of the alternative that there is a + difference in performance between the two groups. + ''' + + x, y, alternative, method = _bws_input_validation(x, y, alternative, + method) + bws_statistic = partial(_bws_statistic, alternative=alternative) + + permutation_alternative = 'less' if alternative == 'less' else 'greater' + res = stats.permutation_test((x, y), bws_statistic, + alternative=permutation_alternative, + **method._asdict()) + + return res diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_censored_data.py b/venv/lib/python3.10/site-packages/scipy/stats/_censored_data.py new file mode 100644 index 0000000000000000000000000000000000000000..f6fee500f1d97db0bae9ebff26824d4d894c7f39 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_censored_data.py @@ -0,0 +1,459 @@ +import numpy as np + + +def _validate_1d(a, name, allow_inf=False): + if np.ndim(a) != 1: + raise ValueError(f'`{name}` must be a one-dimensional sequence.') + if np.isnan(a).any(): + raise ValueError(f'`{name}` must not contain nan.') + if not allow_inf and np.isinf(a).any(): + raise ValueError(f'`{name}` must contain only finite values.') + + +def _validate_interval(interval): + interval = np.asarray(interval) + if interval.shape == (0,): + # The input was a sequence with length 0. + interval = interval.reshape((0, 2)) + if interval.ndim != 2 or interval.shape[-1] != 2: + raise ValueError('`interval` must be a two-dimensional array with ' + 'shape (m, 2), where m is the number of ' + 'interval-censored values, but got shape ' + f'{interval.shape}') + + if np.isnan(interval).any(): + raise ValueError('`interval` must not contain nan.') + if np.isinf(interval).all(axis=1).any(): + raise ValueError('In each row in `interval`, both values must not' + ' be infinite.') + if (interval[:, 0] > interval[:, 1]).any(): + raise ValueError('In each row of `interval`, the left value must not' + ' exceed the right value.') + + uncensored_mask = interval[:, 0] == interval[:, 1] + left_mask = np.isinf(interval[:, 0]) + right_mask = np.isinf(interval[:, 1]) + interval_mask = np.isfinite(interval).all(axis=1) & ~uncensored_mask + + uncensored2 = interval[uncensored_mask, 0] + left2 = interval[left_mask, 1] + right2 = interval[right_mask, 0] + interval2 = interval[interval_mask] + + return uncensored2, left2, right2, interval2 + + +def _validate_x_censored(x, censored): + x = np.asarray(x) + if x.ndim != 1: + raise ValueError('`x` must be one-dimensional.') + censored = np.asarray(censored) + if censored.ndim != 1: + raise ValueError('`censored` must be one-dimensional.') + if (~np.isfinite(x)).any(): + raise ValueError('`x` must not contain nan or inf.') + if censored.size != x.size: + raise ValueError('`x` and `censored` must have the same length.') + return x, censored.astype(bool) + + +class CensoredData: + """ + Instances of this class represent censored data. + + Instances may be passed to the ``fit`` method of continuous + univariate SciPy distributions for maximum likelihood estimation. + The *only* method of the univariate continuous distributions that + understands `CensoredData` is the ``fit`` method. An instance of + `CensoredData` can not be passed to methods such as ``pdf`` and + ``cdf``. + + An observation is said to be *censored* when the precise value is unknown, + but it has a known upper and/or lower bound. 
The conventional terminology + is: + + * left-censored: an observation is below a certain value but it is + unknown by how much. + * right-censored: an observation is above a certain value but it is + unknown by how much. + * interval-censored: an observation lies somewhere on an interval between + two values. + + Left-, right-, and interval-censored data can be represented by + `CensoredData`. + + For convenience, the class methods ``left_censored`` and + ``right_censored`` are provided to create a `CensoredData` + instance from a single one-dimensional array of measurements + and a corresponding boolean array to indicate which measurements + are censored. The class method ``interval_censored`` accepts two + one-dimensional arrays that hold the lower and upper bounds of the + intervals. + + Parameters + ---------- + uncensored : array_like, 1D + Uncensored observations. + left : array_like, 1D + Left-censored observations. + right : array_like, 1D + Right-censored observations. + interval : array_like, 2D, with shape (m, 2) + Interval-censored observations. Each row ``interval[k, :]`` + represents the interval for the kth interval-censored observation. + + Notes + ----- + In the input array `interval`, the lower bound of the interval may + be ``-inf``, and the upper bound may be ``inf``, but at least one must be + finite. When the lower bound is ``-inf``, the row represents a left- + censored observation, and when the upper bound is ``inf``, the row + represents a right-censored observation. If the length of an interval + is 0 (i.e. ``interval[k, 0] == interval[k, 1]``, the observation is + treated as uncensored. So one can represent all the types of censored + and uncensored data in ``interval``, but it is generally more convenient + to use `uncensored`, `left` and `right` for uncensored, left-censored and + right-censored observations, respectively. + + Examples + -------- + In the most general case, a censored data set may contain values that + are left-censored, right-censored, interval-censored, and uncensored. + For example, here we create a data set with five observations. Two + are uncensored (values 1 and 1.5), one is a left-censored observation + of 0, one is a right-censored observation of 10 and one is + interval-censored in the interval [2, 3]. + + >>> import numpy as np + >>> from scipy.stats import CensoredData + >>> data = CensoredData(uncensored=[1, 1.5], left=[0], right=[10], + ... interval=[[2, 3]]) + >>> print(data) + CensoredData(5 values: 2 not censored, 1 left-censored, + 1 right-censored, 1 interval-censored) + + Equivalently, + + >>> data = CensoredData(interval=[[1, 1], + ... [1.5, 1.5], + ... [-np.inf, 0], + ... [10, np.inf], + ... [2, 3]]) + >>> print(data) + CensoredData(5 values: 2 not censored, 1 left-censored, + 1 right-censored, 1 interval-censored) + + A common case is to have a mix of uncensored observations and censored + observations that are all right-censored (or all left-censored). For + example, consider an experiment in which six devices are started at + various times and left running until they fail. Assume that time is + measured in hours, and the experiment is stopped after 30 hours, even + if all the devices have not failed by that time. 
We might end up with + data such as this:: + + Device Start-time Fail-time Time-to-failure + 1 0 13 13 + 2 2 24 22 + 3 5 22 17 + 4 8 23 15 + 5 10 *** >20 + 6 12 *** >18 + + Two of the devices had not failed when the experiment was stopped; + the observations of the time-to-failure for these two devices are + right-censored. We can represent this data with + + >>> data = CensoredData(uncensored=[13, 22, 17, 15], right=[20, 18]) + >>> print(data) + CensoredData(6 values: 4 not censored, 2 right-censored) + + Alternatively, we can use the method `CensoredData.right_censored` to + create a representation of this data. The time-to-failure observations + are put the list ``ttf``. The ``censored`` list indicates which values + in ``ttf`` are censored. + + >>> ttf = [13, 22, 17, 15, 20, 18] + >>> censored = [False, False, False, False, True, True] + + Pass these lists to `CensoredData.right_censored` to create an + instance of `CensoredData`. + + >>> data = CensoredData.right_censored(ttf, censored) + >>> print(data) + CensoredData(6 values: 4 not censored, 2 right-censored) + + If the input data is interval censored and already stored in two + arrays, one holding the low end of the intervals and another + holding the high ends, the class method ``interval_censored`` can + be used to create the `CensoredData` instance. + + This example creates an instance with four interval-censored values. + The intervals are [10, 11], [0.5, 1], [2, 3], and [12.5, 13.5]. + + >>> a = [10, 0.5, 2, 12.5] # Low ends of the intervals + >>> b = [11, 1.0, 3, 13.5] # High ends of the intervals + >>> data = CensoredData.interval_censored(low=a, high=b) + >>> print(data) + CensoredData(4 values: 0 not censored, 4 interval-censored) + + Finally, we create and censor some data from the `weibull_min` + distribution, and then fit `weibull_min` to that data. We'll assume + that the location parameter is known to be 0. + + >>> from scipy.stats import weibull_min + >>> rng = np.random.default_rng() + + Create the random data set. + + >>> x = weibull_min.rvs(2.5, loc=0, scale=30, size=250, random_state=rng) + >>> x[x > 40] = 40 # Right-censor values greater or equal to 40. + + Create the `CensoredData` instance with the `right_censored` method. + The censored values are those where the value is 40. + + >>> data = CensoredData.right_censored(x, x == 40) + >>> print(data) + CensoredData(250 values: 215 not censored, 35 right-censored) + + 35 values have been right-censored. + + Fit `weibull_min` to the censored data. We expect to shape and scale + to be approximately 2.5 and 30, respectively. + + >>> weibull_min.fit(data, floc=0) + (2.3575922823897315, 0, 30.40650074451254) + + """ + + def __init__(self, uncensored=None, *, left=None, right=None, + interval=None): + if uncensored is None: + uncensored = [] + if left is None: + left = [] + if right is None: + right = [] + if interval is None: + interval = np.empty((0, 2)) + + _validate_1d(uncensored, 'uncensored') + _validate_1d(left, 'left') + _validate_1d(right, 'right') + uncensored2, left2, right2, interval2 = _validate_interval(interval) + + self._uncensored = np.concatenate((uncensored, uncensored2)) + self._left = np.concatenate((left, left2)) + self._right = np.concatenate((right, right2)) + # Note that by construction, the private attribute _interval + # will be a 2D array that contains only finite values representing + # intervals with nonzero but finite length. 
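        # (Rows of the `interval` argument of the form [-inf, b], [a, inf] or
        # [a, a] have already been redirected by _validate_interval to the
        # left-, right- and uncensored groups above, so only rows describing
        # genuine nonzero, finite-length intervals are stored here.)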
+ self._interval = interval2 + + def __repr__(self): + uncensored_str = " ".join(np.array_repr(self._uncensored).split()) + left_str = " ".join(np.array_repr(self._left).split()) + right_str = " ".join(np.array_repr(self._right).split()) + interval_str = " ".join(np.array_repr(self._interval).split()) + return (f"CensoredData(uncensored={uncensored_str}, left={left_str}, " + f"right={right_str}, interval={interval_str})") + + def __str__(self): + num_nc = len(self._uncensored) + num_lc = len(self._left) + num_rc = len(self._right) + num_ic = len(self._interval) + n = num_nc + num_lc + num_rc + num_ic + parts = [f'{num_nc} not censored'] + if num_lc > 0: + parts.append(f'{num_lc} left-censored') + if num_rc > 0: + parts.append(f'{num_rc} right-censored') + if num_ic > 0: + parts.append(f'{num_ic} interval-censored') + return f'CensoredData({n} values: ' + ', '.join(parts) + ')' + + # This is not a complete implementation of the arithmetic operators. + # All we need is subtracting a scalar and dividing by a scalar. + + def __sub__(self, other): + return CensoredData(uncensored=self._uncensored - other, + left=self._left - other, + right=self._right - other, + interval=self._interval - other) + + def __truediv__(self, other): + return CensoredData(uncensored=self._uncensored / other, + left=self._left / other, + right=self._right / other, + interval=self._interval / other) + + def __len__(self): + """ + The number of values (censored and not censored). + """ + return (len(self._uncensored) + len(self._left) + len(self._right) + + len(self._interval)) + + def num_censored(self): + """ + Number of censored values. + """ + return len(self._left) + len(self._right) + len(self._interval) + + @classmethod + def right_censored(cls, x, censored): + """ + Create a `CensoredData` instance of right-censored data. + + Parameters + ---------- + x : array_like + `x` is the array of observed data or measurements. + `x` must be a one-dimensional sequence of finite numbers. + censored : array_like of bool + `censored` must be a one-dimensional sequence of boolean + values. If ``censored[k]`` is True, the corresponding value + in `x` is right-censored. That is, the value ``x[k]`` + is the lower bound of the true (but unknown) value. + + Returns + ------- + data : `CensoredData` + An instance of `CensoredData` that represents the + collection of uncensored and right-censored values. + + Examples + -------- + >>> from scipy.stats import CensoredData + + Two uncensored values (4 and 10) and two right-censored values + (24 and 25). + + >>> data = CensoredData.right_censored([4, 10, 24, 25], + ... [False, False, True, True]) + >>> data + CensoredData(uncensored=array([ 4., 10.]), + left=array([], dtype=float64), right=array([24., 25.]), + interval=array([], shape=(0, 2), dtype=float64)) + >>> print(data) + CensoredData(4 values: 2 not censored, 2 right-censored) + """ + x, censored = _validate_x_censored(x, censored) + return cls(uncensored=x[~censored], right=x[censored]) + + @classmethod + def left_censored(cls, x, censored): + """ + Create a `CensoredData` instance of left-censored data. + + Parameters + ---------- + x : array_like + `x` is the array of observed data or measurements. + `x` must be a one-dimensional sequence of finite numbers. + censored : array_like of bool + `censored` must be a one-dimensional sequence of boolean + values. If ``censored[k]`` is True, the corresponding value + in `x` is left-censored. That is, the value ``x[k]`` + is the upper bound of the true (but unknown) value. 
+ + Returns + ------- + data : `CensoredData` + An instance of `CensoredData` that represents the + collection of uncensored and left-censored values. + + Examples + -------- + >>> from scipy.stats import CensoredData + + Two uncensored values (0.12 and 0.033) and two left-censored values + (both 1e-3). + + >>> data = CensoredData.left_censored([0.12, 0.033, 1e-3, 1e-3], + ... [False, False, True, True]) + >>> data + CensoredData(uncensored=array([0.12 , 0.033]), + left=array([0.001, 0.001]), right=array([], dtype=float64), + interval=array([], shape=(0, 2), dtype=float64)) + >>> print(data) + CensoredData(4 values: 2 not censored, 2 left-censored) + """ + x, censored = _validate_x_censored(x, censored) + return cls(uncensored=x[~censored], left=x[censored]) + + @classmethod + def interval_censored(cls, low, high): + """ + Create a `CensoredData` instance of interval-censored data. + + This method is useful when all the data is interval-censored, and + the low and high ends of the intervals are already stored in + separate one-dimensional arrays. + + Parameters + ---------- + low : array_like + The one-dimensional array containing the low ends of the + intervals. + high : array_like + The one-dimensional array containing the high ends of the + intervals. + + Returns + ------- + data : `CensoredData` + An instance of `CensoredData` that represents the + collection of censored values. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import CensoredData + + ``a`` and ``b`` are the low and high ends of a collection of + interval-censored values. + + >>> a = [0.5, 2.0, 3.0, 5.5] + >>> b = [1.0, 2.5, 3.5, 7.0] + >>> data = CensoredData.interval_censored(low=a, high=b) + >>> print(data) + CensoredData(4 values: 0 not censored, 4 interval-censored) + """ + _validate_1d(low, 'low', allow_inf=True) + _validate_1d(high, 'high', allow_inf=True) + if len(low) != len(high): + raise ValueError('`low` and `high` must have the same length.') + interval = np.column_stack((low, high)) + uncensored, left, right, interval = _validate_interval(interval) + return cls(uncensored=uncensored, left=left, right=right, + interval=interval) + + def _uncensor(self): + """ + This function is used when a non-censored version of the data + is needed to create a rough estimate of the parameters of a + distribution via the method of moments or some similar method. + The data is "uncensored" by taking the given endpoints as the + data for the left- or right-censored data, and the mean for the + interval-censored data. + """ + data = np.concatenate((self._uncensored, self._left, self._right, + self._interval.mean(axis=1))) + return data + + def _supported(self, a, b): + """ + Return a subset of self containing the values that are in + (or overlap with) the interval (a, b). 
+ """ + uncensored = self._uncensored + uncensored = uncensored[(a < uncensored) & (uncensored < b)] + left = self._left + left = left[a < left] + right = self._right + right = right[right < b] + interval = self._interval + interval = interval[(a < interval[:, 1]) & (interval[:, 0] < b)] + return CensoredData(uncensored, left=left, right=right, + interval=interval) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_common.py b/venv/lib/python3.10/site-packages/scipy/stats/_common.py new file mode 100644 index 0000000000000000000000000000000000000000..4011d425cc4afea3c7ee8937526b13f1f92b0850 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_common.py @@ -0,0 +1,5 @@ +from collections import namedtuple + + +ConfidenceInterval = namedtuple("ConfidenceInterval", ["low", "high"]) +ConfidenceInterval. __doc__ = "Class for confidence intervals." diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_constants.py b/venv/lib/python3.10/site-packages/scipy/stats/_constants.py new file mode 100644 index 0000000000000000000000000000000000000000..374fadda992e135c025a658e9f9dcec9263c0eb9 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_constants.py @@ -0,0 +1,39 @@ +""" +Statistics-related constants. + +""" +import numpy as np + + +# The smallest representable positive number such that 1.0 + _EPS != 1.0. +_EPS = np.finfo(float).eps + +# The largest [in magnitude] usable floating value. +_XMAX = np.finfo(float).max + +# The log of the largest usable floating value; useful for knowing +# when exp(something) will overflow +_LOGXMAX = np.log(_XMAX) + +# The smallest [in magnitude] usable (i.e. not subnormal) double precision +# floating value. +_XMIN = np.finfo(float).tiny + +# The log of the smallest [in magnitude] usable (i.e not subnormal) +# double precision floating value. +_LOGXMIN = np.log(_XMIN) + +# -special.psi(1) +_EULER = 0.577215664901532860606512090082402431042 + +# special.zeta(3, 1) Apery's constant +_ZETA3 = 1.202056903159594285399738161511449990765 + +# sqrt(pi) +_SQRT_PI = 1.772453850905516027298167483341145182798 + +# sqrt(2/pi) +_SQRT_2_OVER_PI = 0.7978845608028654 + +# log(sqrt(2/pi)) +_LOG_SQRT_2_OVER_PI = -0.22579135264472744 diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_continuous_distns.py b/venv/lib/python3.10/site-packages/scipy/stats/_continuous_distns.py new file mode 100644 index 0000000000000000000000000000000000000000..13af050097f4008b154041e8885435341eb2bbac --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_continuous_distns.py @@ -0,0 +1,11933 @@ +# +# Author: Travis Oliphant 2002-2011 with contributions from +# SciPy Developers 2004-2011 +# +import warnings +from collections.abc import Iterable +from functools import wraps, cached_property +import ctypes + +import numpy as np +from numpy.polynomial import Polynomial +from scipy._lib.doccer import (extend_notes_in_docstring, + replace_notes_in_docstring, + inherit_docstring_from) +from scipy._lib._ccallback import LowLevelCallable +from scipy import optimize +from scipy import integrate +import scipy.special as sc + +import scipy.special._ufuncs as scu +from scipy._lib._util import _lazyselect, _lazywhere + +from . 
import _stats +from ._tukeylambda_stats import (tukeylambda_variance as _tlvar, + tukeylambda_kurtosis as _tlkurt) +from ._distn_infrastructure import ( + get_distribution_names, _kurtosis, + rv_continuous, _skew, _get_fixed_fit_value, _check_shape, _ShapeInfo) +from ._ksstats import kolmogn, kolmognp, kolmogni +from ._constants import (_XMIN, _LOGXMIN, _EULER, _ZETA3, _SQRT_PI, + _SQRT_2_OVER_PI, _LOG_SQRT_2_OVER_PI) +from ._censored_data import CensoredData +import scipy.stats._boost as _boost +from scipy.optimize import root_scalar +from scipy.stats._warnings_errors import FitError +import scipy.stats as stats + + +def _remove_optimizer_parameters(kwds): + """ + Remove the optimizer-related keyword arguments 'loc', 'scale' and + 'optimizer' from `kwds`. Then check that `kwds` is empty, and + raise `TypeError("Unknown arguments: %s." % kwds)` if it is not. + + This function is used in the fit method of distributions that override + the default method and do not use the default optimization code. + + `kwds` is modified in-place. + """ + kwds.pop('loc', None) + kwds.pop('scale', None) + kwds.pop('optimizer', None) + kwds.pop('method', None) + if kwds: + raise TypeError("Unknown arguments: %s." % kwds) + + +def _call_super_mom(fun): + # If fit method is overridden only for MLE and doesn't specify what to do + # if method == 'mm' or with censored data, this decorator calls the generic + # implementation. + @wraps(fun) + def wrapper(self, data, *args, **kwds): + method = kwds.get('method', 'mle').lower() + censored = isinstance(data, CensoredData) + if method == 'mm' or (censored and data.num_censored() > 0): + return super(type(self), self).fit(data, *args, **kwds) + else: + if censored: + # data is an instance of CensoredData, but actually holds + # no censored values, so replace it with the array of + # uncensored values. + data = data._uncensored + return fun(self, data, *args, **kwds) + + return wrapper + + +def _get_left_bracket(fun, rbrack, lbrack=None): + # find left bracket for `root_scalar`. A guess for lbrack may be provided. + lbrack = lbrack or rbrack - 1 + diff = rbrack - lbrack + + # if there is no sign change in `fun` between the brackets, expand + # rbrack - lbrack until a sign change occurs + def interval_contains_root(lbrack, rbrack): + # return true if the signs disagree. + return np.sign(fun(lbrack)) != np.sign(fun(rbrack)) + + while not interval_contains_root(lbrack, rbrack): + diff *= 2 + lbrack = rbrack - diff + + msg = ("The solver could not find a bracket containing a " + "root to an MLE first order condition.") + if np.isinf(lbrack): + raise FitSolverError(msg) + + return lbrack + + +class ksone_gen(rv_continuous): + r"""Kolmogorov-Smirnov one-sided test statistic distribution. + + This is the distribution of the one-sided Kolmogorov-Smirnov (KS) + statistics :math:`D_n^+` and :math:`D_n^-` + for a finite sample size ``n >= 1`` (the shape parameter). + + %(before_notes)s + + See Also + -------- + kstwobign, kstwo, kstest + + Notes + ----- + :math:`D_n^+` and :math:`D_n^-` are given by + + .. math:: + + D_n^+ &= \text{sup}_x (F_n(x) - F(x)),\\ + D_n^- &= \text{sup}_x (F(x) - F_n(x)),\\ + + where :math:`F` is a continuous CDF and :math:`F_n` is an empirical CDF. + `ksone` describes the distribution under the null hypothesis of the KS test + that the empirical CDF corresponds to :math:`n` i.i.d. random variates + with CDF :math:`F`. + + %(after_notes)s + + References + ---------- + .. [1] Birnbaum, Z. W. and Tingey, F.H. 
"One-sided confidence contours + for probability distribution functions", The Annals of Mathematical + Statistics, 22(4), pp 592-596 (1951). + + %(example)s + + """ + def _argcheck(self, n): + return (n >= 1) & (n == np.round(n)) + + def _shape_info(self): + return [_ShapeInfo("n", True, (1, np.inf), (True, False))] + + def _pdf(self, x, n): + return -scu._smirnovp(n, x) + + def _cdf(self, x, n): + return scu._smirnovc(n, x) + + def _sf(self, x, n): + return sc.smirnov(n, x) + + def _ppf(self, q, n): + return scu._smirnovci(n, q) + + def _isf(self, q, n): + return sc.smirnovi(n, q) + + +ksone = ksone_gen(a=0.0, b=1.0, name='ksone') + + +class kstwo_gen(rv_continuous): + r"""Kolmogorov-Smirnov two-sided test statistic distribution. + + This is the distribution of the two-sided Kolmogorov-Smirnov (KS) + statistic :math:`D_n` for a finite sample size ``n >= 1`` + (the shape parameter). + + %(before_notes)s + + See Also + -------- + kstwobign, ksone, kstest + + Notes + ----- + :math:`D_n` is given by + + .. math:: + + D_n = \text{sup}_x |F_n(x) - F(x)| + + where :math:`F` is a (continuous) CDF and :math:`F_n` is an empirical CDF. + `kstwo` describes the distribution under the null hypothesis of the KS test + that the empirical CDF corresponds to :math:`n` i.i.d. random variates + with CDF :math:`F`. + + %(after_notes)s + + References + ---------- + .. [1] Simard, R., L'Ecuyer, P. "Computing the Two-Sided + Kolmogorov-Smirnov Distribution", Journal of Statistical Software, + Vol 39, 11, 1-18 (2011). + + %(example)s + + """ + def _argcheck(self, n): + return (n >= 1) & (n == np.round(n)) + + def _shape_info(self): + return [_ShapeInfo("n", True, (1, np.inf), (True, False))] + + def _get_support(self, n): + return (0.5/(n if not isinstance(n, Iterable) else np.asanyarray(n)), + 1.0) + + def _pdf(self, x, n): + return kolmognp(n, x) + + def _cdf(self, x, n): + return kolmogn(n, x) + + def _sf(self, x, n): + return kolmogn(n, x, cdf=False) + + def _ppf(self, q, n): + return kolmogni(n, q, cdf=True) + + def _isf(self, q, n): + return kolmogni(n, q, cdf=False) + + +# Use the pdf, (not the ppf) to compute moments +kstwo = kstwo_gen(momtype=0, a=0.0, b=1.0, name='kstwo') + + +class kstwobign_gen(rv_continuous): + r"""Limiting distribution of scaled Kolmogorov-Smirnov two-sided test statistic. + + This is the asymptotic distribution of the two-sided Kolmogorov-Smirnov + statistic :math:`\sqrt{n} D_n` that measures the maximum absolute + distance of the theoretical (continuous) CDF from the empirical CDF. + (see `kstest`). + + %(before_notes)s + + See Also + -------- + ksone, kstwo, kstest + + Notes + ----- + :math:`\sqrt{n} D_n` is given by + + .. math:: + + D_n = \text{sup}_x |F_n(x) - F(x)| + + where :math:`F` is a continuous CDF and :math:`F_n` is an empirical CDF. + `kstwobign` describes the asymptotic distribution (i.e. the limit of + :math:`\sqrt{n} D_n`) under the null hypothesis of the KS test that the + empirical CDF corresponds to i.i.d. random variates with CDF :math:`F`. + + %(after_notes)s + + References + ---------- + .. [1] Feller, W. "On the Kolmogorov-Smirnov Limit Theorems for Empirical + Distributions", Ann. Math. Statist. Vol 19, 177-189 (1948). 
+ + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + return -scu._kolmogp(x) + + def _cdf(self, x): + return scu._kolmogc(x) + + def _sf(self, x): + return sc.kolmogorov(x) + + def _ppf(self, q): + return scu._kolmogci(q) + + def _isf(self, q): + return sc.kolmogi(q) + + +kstwobign = kstwobign_gen(a=0.0, name='kstwobign') + + +## Normal distribution + +# loc = mu, scale = std +# Keep these implementations out of the class definition so they can be reused +# by other distributions. +_norm_pdf_C = np.sqrt(2*np.pi) +_norm_pdf_logC = np.log(_norm_pdf_C) + + +def _norm_pdf(x): + return np.exp(-x**2/2.0) / _norm_pdf_C + + +def _norm_logpdf(x): + return -x**2 / 2.0 - _norm_pdf_logC + + +def _norm_cdf(x): + return sc.ndtr(x) + + +def _norm_logcdf(x): + return sc.log_ndtr(x) + + +def _norm_ppf(q): + return sc.ndtri(q) + + +def _norm_sf(x): + return _norm_cdf(-x) + + +def _norm_logsf(x): + return _norm_logcdf(-x) + + +def _norm_isf(q): + return -_norm_ppf(q) + + +class norm_gen(rv_continuous): + r"""A normal continuous random variable. + + The location (``loc``) keyword specifies the mean. + The scale (``scale``) keyword specifies the standard deviation. + + %(before_notes)s + + Notes + ----- + The probability density function for `norm` is: + + .. math:: + + f(x) = \frac{\exp(-x^2/2)}{\sqrt{2\pi}} + + for a real number :math:`x`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return random_state.standard_normal(size) + + def _pdf(self, x): + # norm.pdf(x) = exp(-x**2/2)/sqrt(2*pi) + return _norm_pdf(x) + + def _logpdf(self, x): + return _norm_logpdf(x) + + def _cdf(self, x): + return _norm_cdf(x) + + def _logcdf(self, x): + return _norm_logcdf(x) + + def _sf(self, x): + return _norm_sf(x) + + def _logsf(self, x): + return _norm_logsf(x) + + def _ppf(self, q): + return _norm_ppf(q) + + def _isf(self, q): + return _norm_isf(q) + + def _stats(self): + return 0.0, 1.0, 0.0, 0.0 + + def _entropy(self): + return 0.5*(np.log(2*np.pi)+1) + + @_call_super_mom + @replace_notes_in_docstring(rv_continuous, notes="""\ + For the normal distribution, method of moments and maximum likelihood + estimation give identical fits, and explicit formulas for the estimates + are available. + This function uses these explicit formulas for the maximum likelihood + estimation of the normal distribution parameters, so the + `optimizer` and `method` arguments are ignored.\n\n""") + def fit(self, data, **kwds): + floc = kwds.pop('floc', None) + fscale = kwds.pop('fscale', None) + + _remove_optimizer_parameters(kwds) + + if floc is not None and fscale is not None: + # This check is for consistency with `rv_continuous.fit`. + # Without this check, this function would just return the + # parameters that were given. + raise ValueError("All parameters fixed. There is nothing to " + "optimize.") + + data = np.asarray(data) + + if not np.isfinite(data).all(): + raise ValueError("The data contains non-finite values.") + + if floc is None: + loc = data.mean() + else: + loc = floc + + if fscale is None: + scale = np.sqrt(((data - loc)**2).mean()) + else: + scale = fscale + + return loc, scale + + def _munp(self, n): + """ + @returns Moments of standard normal distribution for integer n >= 0 + + See eq. 16 of https://arxiv.org/abs/1209.4340v2 + """ + if n % 2 == 0: + return sc.factorial2(n - 1) + else: + return 0. 
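The closed form used in ``_munp`` above (zero for odd orders, the double factorial (n-1)!! for even orders) can be spot-checked against the public ``scipy.stats.norm.moment`` interface; a minimal illustrative sketch, not part of the diffed file:

    import numpy as np
    from scipy import stats
    from scipy.special import factorial2

    # Raw moments E[X**n] of the standard normal: 0 for odd n, (n-1)!! for even n.
    for n in range(1, 7):
        exact = 0.0 if n % 2 else float(factorial2(n - 1))
        numeric = stats.norm.moment(n)   # e.g. n=4 -> 3.0, n=6 -> 15.0
        assert np.isclose(exact, numeric)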
+ + +norm = norm_gen(name='norm') + + +class alpha_gen(rv_continuous): + r"""An alpha continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `alpha` ([1]_, [2]_) is: + + .. math:: + + f(x, a) = \frac{1}{x^2 \Phi(a) \sqrt{2\pi}} * + \exp(-\frac{1}{2} (a-1/x)^2) + + where :math:`\Phi` is the normal CDF, :math:`x > 0`, and :math:`a > 0`. + + `alpha` takes ``a`` as a shape parameter. + + %(after_notes)s + + References + ---------- + .. [1] Johnson, Kotz, and Balakrishnan, "Continuous Univariate + Distributions, Volume 1", Second Edition, John Wiley and Sons, + p. 173 (1994). + .. [2] Anthony A. Salvia, "Reliability applications of the Alpha + Distribution", IEEE Transactions on Reliability, Vol. R-34, + No. 3, pp. 251-252 (1985). + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [_ShapeInfo("a", False, (0, np.inf), (False, False))] + + def _pdf(self, x, a): + # alpha.pdf(x, a) = 1/(x**2*Phi(a)*sqrt(2*pi)) * exp(-1/2 * (a-1/x)**2) + return 1.0/(x**2)/_norm_cdf(a)*_norm_pdf(a-1.0/x) + + def _logpdf(self, x, a): + return -2*np.log(x) + _norm_logpdf(a-1.0/x) - np.log(_norm_cdf(a)) + + def _cdf(self, x, a): + return _norm_cdf(a-1.0/x) / _norm_cdf(a) + + def _ppf(self, q, a): + return 1.0/np.asarray(a - _norm_ppf(q*_norm_cdf(a))) + + def _stats(self, a): + return [np.inf]*2 + [np.nan]*2 + + +alpha = alpha_gen(a=0.0, name='alpha') + + +class anglit_gen(rv_continuous): + r"""An anglit continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `anglit` is: + + .. math:: + + f(x) = \sin(2x + \pi/2) = \cos(2x) + + for :math:`-\pi/4 \le x \le \pi/4`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + # anglit.pdf(x) = sin(2*x + \pi/2) = cos(2*x) + return np.cos(2*x) + + def _cdf(self, x): + return np.sin(x+np.pi/4)**2.0 + + def _sf(self, x): + return np.cos(x + np.pi / 4) ** 2.0 + + def _ppf(self, q): + return np.arcsin(np.sqrt(q))-np.pi/4 + + def _stats(self): + return 0.0, np.pi*np.pi/16-0.5, 0.0, -2*(np.pi**4 - 96)/(np.pi*np.pi-8)**2 + + def _entropy(self): + return 1-np.log(2) + + +anglit = anglit_gen(a=-np.pi/4, b=np.pi/4, name='anglit') + + +class arcsine_gen(rv_continuous): + r"""An arcsine continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `arcsine` is: + + .. math:: + + f(x) = \frac{1}{\pi \sqrt{x (1-x)}} + + for :math:`0 < x < 1`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + # arcsine.pdf(x) = 1/(pi*sqrt(x*(1-x))) + with np.errstate(divide='ignore'): + return 1.0/np.pi/np.sqrt(x*(1-x)) + + def _cdf(self, x): + return 2.0/np.pi*np.arcsin(np.sqrt(x)) + + def _ppf(self, q): + return np.sin(np.pi/2.0*q)**2.0 + + def _stats(self): + mu = 0.5 + mu2 = 1.0/8 + g1 = 0 + g2 = -3.0/2.0 + return mu, mu2, g1, g2 + + def _entropy(self): + return -0.24156447527049044468 + + +arcsine = arcsine_gen(a=0.0, b=1.0, name='arcsine') + + +class FitDataError(ValueError): + """Raised when input data is inconsistent with fixed parameters.""" + # This exception is raised by, for example, beta_gen.fit when both floc + # and fscale are fixed and there are values in the data not in the open + # interval (floc, floc+fscale). + def __init__(self, distr, lower, upper): + self.args = ( + "Invalid values in `data`. 
Maximum likelihood " + f"estimation with {distr!r} requires that {lower!r} < " + f"(x - loc)/scale < {upper!r} for each x in `data`.", + ) + + +class FitSolverError(FitError): + """ + Raised when a solver fails to converge while fitting a distribution. + """ + # This exception is raised by, for example, beta_gen.fit when + # optimize.fsolve returns with ier != 1. + def __init__(self, mesg): + emsg = "Solver for the MLE equations failed to converge: " + emsg += mesg.replace('\n', '') + self.args = (emsg,) + + +def _beta_mle_a(a, b, n, s1): + # The zeros of this function give the MLE for `a`, with + # `b`, `n` and `s1` given. `s1` is the sum of the logs of + # the data. `n` is the number of data points. + psiab = sc.psi(a + b) + func = s1 - n * (-psiab + sc.psi(a)) + return func + + +def _beta_mle_ab(theta, n, s1, s2): + # Zeros of this function are critical points of + # the maximum likelihood function. Solving this system + # for theta (which contains a and b) gives the MLE for a and b + # given `n`, `s1` and `s2`. `s1` is the sum of the logs of the data, + # and `s2` is the sum of the logs of 1 - data. `n` is the number + # of data points. + a, b = theta + psiab = sc.psi(a + b) + func = [s1 - n * (-psiab + sc.psi(a)), + s2 - n * (-psiab + sc.psi(b))] + return func + + +class beta_gen(rv_continuous): + r"""A beta continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `beta` is: + + .. math:: + + f(x, a, b) = \frac{\Gamma(a+b) x^{a-1} (1-x)^{b-1}} + {\Gamma(a) \Gamma(b)} + + for :math:`0 <= x <= 1`, :math:`a > 0`, :math:`b > 0`, where + :math:`\Gamma` is the gamma function (`scipy.special.gamma`). + + `beta` takes :math:`a` and :math:`b` as shape parameters. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + return [ia, ib] + + def _rvs(self, a, b, size=None, random_state=None): + return random_state.beta(a, b, size) + + def _pdf(self, x, a, b): + # gamma(a+b) * x**(a-1) * (1-x)**(b-1) + # beta.pdf(x, a, b) = ------------------------------------ + # gamma(a)*gamma(b) + with np.errstate(over='ignore'): + return _boost._beta_pdf(x, a, b) + + def _logpdf(self, x, a, b): + lPx = sc.xlog1py(b - 1.0, -x) + sc.xlogy(a - 1.0, x) + lPx -= sc.betaln(a, b) + return lPx + + def _cdf(self, x, a, b): + return _boost._beta_cdf(x, a, b) + + def _sf(self, x, a, b): + return _boost._beta_sf(x, a, b) + + def _isf(self, x, a, b): + with np.errstate(over='ignore'): # see gh-17432 + return _boost._beta_isf(x, a, b) + + def _ppf(self, q, a, b): + with np.errstate(over='ignore'): # see gh-17432 + return _boost._beta_ppf(q, a, b) + + def _stats(self, a, b): + return ( + _boost._beta_mean(a, b), + _boost._beta_variance(a, b), + _boost._beta_skewness(a, b), + _boost._beta_kurtosis_excess(a, b)) + + def _fitstart(self, data): + if isinstance(data, CensoredData): + data = data._uncensor() + + g1 = _skew(data) + g2 = _kurtosis(data) + + def func(x): + a, b = x + sk = 2*(b-a)*np.sqrt(a + b + 1) / (a + b + 2) / np.sqrt(a*b) + ku = a**3 - a**2*(2*b-1) + b**2*(b+1) - 2*a*b*(b+2) + ku /= a*b*(a+b+2)*(a+b+3) + ku *= 6 + return [sk-g1, ku-g2] + a, b = optimize.fsolve(func, (1.0, 1.0)) + return super()._fitstart(data, args=(a, b)) + + @_call_super_mom + @extend_notes_in_docstring(rv_continuous, notes="""\ + In the special case where `method="MLE"` and + both `floc` and `fscale` are given, a + `ValueError` is raised if any value `x` in `data` 
does not satisfy + `floc < x < floc + fscale`.\n\n""") + def fit(self, data, *args, **kwds): + # Override rv_continuous.fit, so we can more efficiently handle the + # case where floc and fscale are given. + + floc = kwds.get('floc', None) + fscale = kwds.get('fscale', None) + + if floc is None or fscale is None: + # do general fit + return super().fit(data, *args, **kwds) + + # We already got these from kwds, so just pop them. + kwds.pop('floc', None) + kwds.pop('fscale', None) + + f0 = _get_fixed_fit_value(kwds, ['f0', 'fa', 'fix_a']) + f1 = _get_fixed_fit_value(kwds, ['f1', 'fb', 'fix_b']) + + _remove_optimizer_parameters(kwds) + + if f0 is not None and f1 is not None: + # This check is for consistency with `rv_continuous.fit`. + raise ValueError("All parameters fixed. There is nothing to " + "optimize.") + + # Special case: loc and scale are constrained, so we are fitting + # just the shape parameters. This can be done much more efficiently + # than the method used in `rv_continuous.fit`. (See the subsection + # "Two unknown parameters" in the section "Maximum likelihood" of + # the Wikipedia article on the Beta distribution for the formulas.) + + if not np.isfinite(data).all(): + raise ValueError("The data contains non-finite values.") + + # Normalize the data to the interval [0, 1]. + data = (np.ravel(data) - floc) / fscale + if np.any(data <= 0) or np.any(data >= 1): + raise FitDataError("beta", lower=floc, upper=floc + fscale) + + xbar = data.mean() + + if f0 is not None or f1 is not None: + # One of the shape parameters is fixed. + + if f0 is not None: + # The shape parameter a is fixed, so swap the parameters + # and flip the data. We always solve for `a`. The result + # will be swapped back before returning. + b = f0 + data = 1 - data + xbar = 1 - xbar + else: + b = f1 + + # Initial guess for a. Use the formula for the mean of the beta + # distribution, E[x] = a / (a + b), to generate a reasonable + # starting point based on the mean of the data and the given + # value of b. + a = b * xbar / (1 - xbar) + + # Compute the MLE for `a` by solving _beta_mle_a. + theta, info, ier, mesg = optimize.fsolve( + _beta_mle_a, a, + args=(b, len(data), np.log(data).sum()), + full_output=True + ) + if ier != 1: + raise FitSolverError(mesg=mesg) + a = theta[0] + + if f0 is not None: + # The shape parameter a was fixed, so swap back the + # parameters. + a, b = b, a + + else: + # Neither of the shape parameters is fixed. + + # s1 and s2 are used in the extra arguments passed to _beta_mle_ab + # by optimize.fsolve. + s1 = np.log(data).sum() + s2 = sc.log1p(-data).sum() + + # Use the "method of moments" to estimate the initial + # guess for a and b. + fac = xbar * (1 - xbar) / data.var(ddof=0) - 1 + a = xbar * fac + b = (1 - xbar) * fac + + # Compute the MLE for a and b by solving _beta_mle_ab. 
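+            # The components of _beta_mle_ab are the score equations of the
+            # beta log-likelihood, i.e. the solution satisfies
+            #     s1/n = psi(a) - psi(a + b)
+            #     s2/n = psi(b) - psi(a + b)
+            # where psi is the digamma function.  fsolve is started from the
+            # method-of-moments guess computed above.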
+ theta, info, ier, mesg = optimize.fsolve( + _beta_mle_ab, [a, b], + args=(len(data), s1, s2), + full_output=True + ) + if ier != 1: + raise FitSolverError(mesg=mesg) + a, b = theta + + return a, b, floc, fscale + + def _entropy(self, a, b): + def regular(a, b): + return (sc.betaln(a, b) - (a - 1) * sc.psi(a) - + (b - 1) * sc.psi(b) + (a + b - 2) * sc.psi(a + b)) + + def asymptotic_ab_large(a, b): + sum_ab = a + b + log_term = 0.5 * ( + np.log(2*np.pi) + np.log(a) + np.log(b) - 3*np.log(sum_ab) + 1 + ) + t1 = 110/sum_ab + 20*sum_ab**-2.0 + sum_ab**-3.0 - 2*sum_ab**-4.0 + t2 = -50/a - 10*a**-2.0 - a**-3.0 + a**-4.0 + t3 = -50/b - 10*b**-2.0 - b**-3.0 + b**-4.0 + return log_term + (t1 + t2 + t3) / 120 + + def asymptotic_b_large(a, b): + sum_ab = a + b + t1 = sc.gammaln(a) - (a - 1) * sc.psi(a) + t2 = ( + - 1/(2*b) + 1/(12*b) - b**-2.0/12 - b**-3.0/120 + b**-4.0/120 + + b**-5.0/252 - b**-6.0/252 + 1/sum_ab - 1/(12*sum_ab) + + sum_ab**-2.0/6 + sum_ab**-3.0/120 - sum_ab**-4.0/60 + - sum_ab**-5.0/252 + sum_ab**-6.0/126 + ) + log_term = sum_ab*np.log1p(a/b) + np.log(b) - 2*np.log(sum_ab) + return t1 + t2 + log_term + + def threshold_large(v): + if v == 1.0: + return 1000 + + j = np.log10(v) + digits = int(j) + d = int(v / 10 ** digits) + 2 + return d*10**(7 + j) + + if a >= 4.96e6 and b >= 4.96e6: + return asymptotic_ab_large(a, b) + elif a <= 4.9e6 and b - a >= 1e6 and b >= threshold_large(a): + return asymptotic_b_large(a, b) + elif b <= 4.9e6 and a - b >= 1e6 and a >= threshold_large(b): + return asymptotic_b_large(b, a) + else: + return regular(a, b) + + +beta = beta_gen(a=0.0, b=1.0, name='beta') + + +class betaprime_gen(rv_continuous): + r"""A beta prime continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `betaprime` is: + + .. math:: + + f(x, a, b) = \frac{x^{a-1} (1+x)^{-a-b}}{\beta(a, b)} + + for :math:`x >= 0`, :math:`a > 0`, :math:`b > 0`, where + :math:`\beta(a, b)` is the beta function (see `scipy.special.beta`). + + `betaprime` takes ``a`` and ``b`` as shape parameters. + + The distribution is related to the `beta` distribution as follows: + If :math:`X` follows a beta distribution with parameters :math:`a, b`, + then :math:`Y = X/(1-X)` has a beta prime distribution with + parameters :math:`a, b` ([1]_). + + The beta prime distribution is a reparametrized version of the + F distribution. The beta prime distribution with shape parameters + ``a`` and ``b`` and ``scale = s`` is equivalent to the F distribution + with parameters ``d1 = 2*a``, ``d2 = 2*b`` and ``scale = (a/b)*s``. + For example, + + >>> from scipy.stats import betaprime, f + >>> x = [1, 2, 5, 10] + >>> a = 12 + >>> b = 5 + >>> betaprime.pdf(x, a, b, scale=2) + array([0.00541179, 0.08331299, 0.14669185, 0.03150079]) + >>> f.pdf(x, 2*a, 2*b, scale=(a/b)*2) + array([0.00541179, 0.08331299, 0.14669185, 0.03150079]) + + %(after_notes)s + + References + ---------- + .. 
[1] Beta prime distribution, Wikipedia, + https://en.wikipedia.org/wiki/Beta_prime_distribution + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + return [ia, ib] + + def _rvs(self, a, b, size=None, random_state=None): + u1 = gamma.rvs(a, size=size, random_state=random_state) + u2 = gamma.rvs(b, size=size, random_state=random_state) + return u1 / u2 + + def _pdf(self, x, a, b): + # betaprime.pdf(x, a, b) = x**(a-1) * (1+x)**(-a-b) / beta(a, b) + return np.exp(self._logpdf(x, a, b)) + + def _logpdf(self, x, a, b): + return sc.xlogy(a - 1.0, x) - sc.xlog1py(a + b, x) - sc.betaln(a, b) + + def _cdf(self, x, a, b): + # note: f2 is the direct way to compute the cdf if the relationship + # to the beta distribution is used. + # however, for very large x, x/(1+x) == 1. since the distribution + # has very fat tails if b is small, this can cause inaccurate results + # use the following relationship of the incomplete beta function: + # betainc(x, a, b) = 1 - betainc(1-x, b, a) + # see gh-17631 + return _lazywhere( + x > 1, [x, a, b], + lambda x_, a_, b_: beta._sf(1/(1+x_), b_, a_), + f2=lambda x_, a_, b_: beta._cdf(x_/(1+x_), a_, b_)) + + def _sf(self, x, a, b): + return _lazywhere( + x > 1, [x, a, b], + lambda x_, a_, b_: beta._cdf(1/(1+x_), b_, a_), + f2=lambda x_, a_, b_: beta._sf(x_/(1+x_), a_, b_) + ) + + def _ppf(self, p, a, b): + p, a, b = np.broadcast_arrays(p, a, b) + # by default, compute compute the ppf by solving the following: + # p = beta._cdf(x/(1+x), a, b). This implies x = r/(1-r) with + # r = beta._ppf(p, a, b). This can cause numerical issues if r is + # very close to 1. in that case, invert the alternative expression of + # the cdf: p = beta._sf(1/(1+x), b, a). + r = stats.beta._ppf(p, a, b) + with np.errstate(divide='ignore'): + out = r / (1 - r) + i = (r > 0.9999) + out[i] = 1/stats.beta._isf(p[i], b[i], a[i]) - 1 + return out + + def _munp(self, n, a, b): + return _lazywhere( + b > n, (a, b), + lambda a, b: np.prod([(a+i-1)/(b-i) for i in range(1, n+1)], axis=0), + fillvalue=np.inf) + + +betaprime = betaprime_gen(a=0.0, name='betaprime') + + +class bradford_gen(rv_continuous): + r"""A Bradford continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `bradford` is: + + .. math:: + + f(x, c) = \frac{c}{\log(1+c) (1+cx)} + + for :math:`0 <= x <= 1` and :math:`c > 0`. + + `bradford` takes ``c`` as a shape parameter for :math:`c`. 
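+
+    The corresponding cumulative distribution function is
+
+    .. math::
+
+        F(x, c) = \frac{\log(1+cx)}{\log(1+c)}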
+ + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # bradford.pdf(x, c) = c / (k * (1+c*x)) + return c / (c*x + 1.0) / sc.log1p(c) + + def _cdf(self, x, c): + return sc.log1p(c*x) / sc.log1p(c) + + def _ppf(self, q, c): + return sc.expm1(q * sc.log1p(c)) / c + + def _stats(self, c, moments='mv'): + k = np.log(1.0+c) + mu = (c-k)/(c*k) + mu2 = ((c+2.0)*k-2.0*c)/(2*c*k*k) + g1 = None + g2 = None + if 's' in moments: + g1 = np.sqrt(2)*(12*c*c-9*c*k*(c+2)+2*k*k*(c*(c+3)+3)) + g1 /= np.sqrt(c*(c*(k-2)+2*k))*(3*c*(k-2)+6*k) + if 'k' in moments: + g2 = (c**3*(k-3)*(k*(3*k-16)+24)+12*k*c*c*(k-4)*(k-3) + + 6*c*k*k*(3*k-14) + 12*k**3) + g2 /= 3*c*(c*(k-2)+2*k)**2 + return mu, mu2, g1, g2 + + def _entropy(self, c): + k = np.log(1+c) + return k/2.0 - np.log(c/k) + + +bradford = bradford_gen(a=0.0, b=1.0, name='bradford') + + +class burr_gen(rv_continuous): + r"""A Burr (Type III) continuous random variable. + + %(before_notes)s + + See Also + -------- + fisk : a special case of either `burr` or `burr12` with ``d=1`` + burr12 : Burr Type XII distribution + mielke : Mielke Beta-Kappa / Dagum distribution + + Notes + ----- + The probability density function for `burr` is: + + .. math:: + + f(x; c, d) = c d \frac{x^{-c - 1}} + {{(1 + x^{-c})}^{d + 1}} + + for :math:`x >= 0` and :math:`c, d > 0`. + + `burr` takes ``c`` and ``d`` as shape parameters for :math:`c` and + :math:`d`. + + This is the PDF corresponding to the third CDF given in Burr's list; + specifically, it is equation (11) in Burr's paper [1]_. The distribution + is also commonly referred to as the Dagum distribution [2]_. If the + parameter :math:`c < 1` then the mean of the distribution does not + exist and if :math:`c < 2` the variance does not exist [2]_. + The PDF is finite at the left endpoint :math:`x = 0` if :math:`c * d >= 1`. + + %(after_notes)s + + References + ---------- + .. [1] Burr, I. W. "Cumulative frequency functions", Annals of + Mathematical Statistics, 13(2), pp 215-232 (1942). + .. [2] https://en.wikipedia.org/wiki/Dagum_distribution + .. [3] Kleiber, Christian. "A guide to the Dagum distributions." + Modeling Income Distributions and Lorenz Curves pp 97-117 (2008). + + %(example)s + + """ + # Do not set _support_mask to rv_continuous._open_support_mask + # Whether the left-hand endpoint is suitable for pdf evaluation is dependent + # on the values of c and d: if c*d >= 1, the pdf is finite, otherwise infinite. 
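+    # For example, with c=2, d=1 (c*d = 2 >= 1) the pdf tends to a finite
+    # value as x -> 0+, while with c=0.5, d=1 (c*d = 0.5 < 1) it diverges;
+    # the x == 0 branches of _pdf and _logpdf below handle this boundary.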
+ + def _shape_info(self): + ic = _ShapeInfo("c", False, (0, np.inf), (False, False)) + id = _ShapeInfo("d", False, (0, np.inf), (False, False)) + return [ic, id] + + def _pdf(self, x, c, d): + # burr.pdf(x, c, d) = c * d * x**(-c-1) * (1+x**(-c))**(-d-1) + output = _lazywhere( + x == 0, [x, c, d], + lambda x_, c_, d_: c_ * d_ * (x_**(c_*d_-1)) / (1 + x_**c_), + f2=lambda x_, c_, d_: (c_ * d_ * (x_ ** (-c_ - 1.0)) / + ((1 + x_ ** (-c_)) ** (d_ + 1.0)))) + if output.ndim == 0: + return output[()] + return output + + def _logpdf(self, x, c, d): + output = _lazywhere( + x == 0, [x, c, d], + lambda x_, c_, d_: (np.log(c_) + np.log(d_) + sc.xlogy(c_*d_ - 1, x_) + - (d_+1) * sc.log1p(x_**(c_))), + f2=lambda x_, c_, d_: (np.log(c_) + np.log(d_) + + sc.xlogy(-c_ - 1, x_) + - sc.xlog1py(d_+1, x_**(-c_)))) + if output.ndim == 0: + return output[()] + return output + + def _cdf(self, x, c, d): + return (1 + x**(-c))**(-d) + + def _logcdf(self, x, c, d): + return sc.log1p(x**(-c)) * (-d) + + def _sf(self, x, c, d): + return np.exp(self._logsf(x, c, d)) + + def _logsf(self, x, c, d): + return np.log1p(- (1 + x**(-c))**(-d)) + + def _ppf(self, q, c, d): + return (q**(-1.0/d) - 1)**(-1.0/c) + + def _isf(self, q, c, d): + _q = sc.xlog1py(-1.0 / d, -q) + return sc.expm1(_q) ** (-1.0 / c) + + def _stats(self, c, d): + nc = np.arange(1, 5).reshape(4,1) / c + # ek is the kth raw moment, e1 is the mean e2-e1**2 variance etc. + e1, e2, e3, e4 = sc.beta(d + nc, 1. - nc) * d + mu = np.where(c > 1.0, e1, np.nan) + mu2_if_c = e2 - mu**2 + mu2 = np.where(c > 2.0, mu2_if_c, np.nan) + g1 = _lazywhere( + c > 3.0, + (c, e1, e2, e3, mu2_if_c), + lambda c, e1, e2, e3, mu2_if_c: ((e3 - 3*e2*e1 + 2*e1**3) + / np.sqrt((mu2_if_c)**3)), + fillvalue=np.nan) + g2 = _lazywhere( + c > 4.0, + (c, e1, e2, e3, e4, mu2_if_c), + lambda c, e1, e2, e3, e4, mu2_if_c: ( + ((e4 - 4*e3*e1 + 6*e2*e1**2 - 3*e1**4) / mu2_if_c**2) - 3), + fillvalue=np.nan) + if np.ndim(c) == 0: + return mu.item(), mu2.item(), g1.item(), g2.item() + return mu, mu2, g1, g2 + + def _munp(self, n, c, d): + def __munp(n, c, d): + nc = 1. * n / c + return d * sc.beta(1.0 - nc, d + nc) + n, c, d = np.asarray(n), np.asarray(c), np.asarray(d) + return _lazywhere((c > n) & (n == n) & (d == d), (c, d, n), + lambda c, d, n: __munp(n, c, d), + np.nan) + + +burr = burr_gen(a=0.0, name='burr') + + +class burr12_gen(rv_continuous): + r"""A Burr (Type XII) continuous random variable. + + %(before_notes)s + + See Also + -------- + fisk : a special case of either `burr` or `burr12` with ``d=1`` + burr : Burr Type III distribution + + Notes + ----- + The probability density function for `burr12` is: + + .. math:: + + f(x; c, d) = c d \frac{x^{c-1}} + {(1 + x^c)^{d + 1}} + + for :math:`x >= 0` and :math:`c, d > 0`. + + `burr12` takes ``c`` and ``d`` as shape parameters for :math:`c` + and :math:`d`. + + This is the PDF corresponding to the twelfth CDF given in Burr's list; + specifically, it is equation (20) in Burr's paper [1]_. + + %(after_notes)s + + The Burr type 12 distribution is also sometimes referred to as + the Singh-Maddala distribution from NIST [2]_. + + References + ---------- + .. [1] Burr, I. W. "Cumulative frequency functions", Annals of + Mathematical Statistics, 13(2), pp 215-232 (1942). + + .. [2] https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/b12pdf.htm + + .. 
[3] "Burr distribution", + https://en.wikipedia.org/wiki/Burr_distribution + + %(example)s + + """ + def _shape_info(self): + ic = _ShapeInfo("c", False, (0, np.inf), (False, False)) + id = _ShapeInfo("d", False, (0, np.inf), (False, False)) + return [ic, id] + + def _pdf(self, x, c, d): + # burr12.pdf(x, c, d) = c * d * x**(c-1) * (1+x**(c))**(-d-1) + return np.exp(self._logpdf(x, c, d)) + + def _logpdf(self, x, c, d): + return np.log(c) + np.log(d) + sc.xlogy(c - 1, x) + sc.xlog1py(-d-1, x**c) + + def _cdf(self, x, c, d): + return -sc.expm1(self._logsf(x, c, d)) + + def _logcdf(self, x, c, d): + return sc.log1p(-(1 + x**c)**(-d)) + + def _sf(self, x, c, d): + return np.exp(self._logsf(x, c, d)) + + def _logsf(self, x, c, d): + return sc.xlog1py(-d, x**c) + + def _ppf(self, q, c, d): + # The following is an implementation of + # ((1 - q)**(-1.0/d) - 1)**(1.0/c) + # that does a better job handling small values of q. + return sc.expm1(-1/d * sc.log1p(-q))**(1/c) + + def _munp(self, n, c, d): + def moment_if_exists(n, c, d): + nc = 1. * n / c + return d * sc.beta(1.0 + nc, d - nc) + + return _lazywhere(c * d > n, (n, c, d), moment_if_exists, + fillvalue=np.nan) + + +burr12 = burr12_gen(a=0.0, name='burr12') + + +class fisk_gen(burr_gen): + r"""A Fisk continuous random variable. + + The Fisk distribution is also known as the log-logistic distribution. + + %(before_notes)s + + See Also + -------- + burr + + Notes + ----- + The probability density function for `fisk` is: + + .. math:: + + f(x, c) = \frac{c x^{c-1}} + {(1 + x^c)^2} + + for :math:`x >= 0` and :math:`c > 0`. + + Please note that the above expression can be transformed into the following + one, which is also commonly used: + + .. math:: + + f(x, c) = \frac{c x^{-c-1}} + {(1 + x^{-c})^2} + + `fisk` takes ``c`` as a shape parameter for :math:`c`. + + `fisk` is a special case of `burr` or `burr12` with ``d=1``. + + Suppose ``X`` is a logistic random variable with location ``l`` + and scale ``s``. Then ``Y = exp(X)`` is a Fisk (log-logistic) + random variable with ``scale = exp(l)`` and shape ``c = 1/s``. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # fisk.pdf(x, c) = c * x**(-c-1) * (1 + x**(-c))**(-2) + return burr._pdf(x, c, 1.0) + + def _cdf(self, x, c): + return burr._cdf(x, c, 1.0) + + def _sf(self, x, c): + return burr._sf(x, c, 1.0) + + def _logpdf(self, x, c): + # fisk.pdf(x, c) = c * x**(-c-1) * (1 + x**(-c))**(-2) + return burr._logpdf(x, c, 1.0) + + def _logcdf(self, x, c): + return burr._logcdf(x, c, 1.0) + + def _logsf(self, x, c): + return burr._logsf(x, c, 1.0) + + def _ppf(self, x, c): + return burr._ppf(x, c, 1.0) + + def _isf(self, q, c): + return burr._isf(q, c, 1.0) + + def _munp(self, n, c): + return burr._munp(n, c, 1.0) + + def _stats(self, c): + return burr._stats(c, 1.0) + + def _entropy(self, c): + return 2 - np.log(c) + + +fisk = fisk_gen(a=0.0, name='fisk') + + +class cauchy_gen(rv_continuous): + r"""A Cauchy continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `cauchy` is + + .. math:: + + f(x) = \frac{1}{\pi (1 + x^2)} + + for a real number :math:`x`. 
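+
+    The cumulative distribution function is
+
+    .. math::
+
+        F(x) = \frac{1}{2} + \frac{\arctan(x)}{\pi}
+
+    with quantile function :math:`F^{-1}(q) = \tan(\pi(q - 1/2))`.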
+ + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + # cauchy.pdf(x) = 1 / (pi * (1 + x**2)) + return 1.0/np.pi/(1.0+x*x) + + def _cdf(self, x): + return 0.5 + 1.0/np.pi*np.arctan(x) + + def _ppf(self, q): + return np.tan(np.pi*q-np.pi/2.0) + + def _sf(self, x): + return 0.5 - 1.0/np.pi*np.arctan(x) + + def _isf(self, q): + return np.tan(np.pi/2.0-np.pi*q) + + def _stats(self): + return np.nan, np.nan, np.nan, np.nan + + def _entropy(self): + return np.log(4*np.pi) + + def _fitstart(self, data, args=None): + # Initialize ML guesses using quartiles instead of moments. + if isinstance(data, CensoredData): + data = data._uncensor() + p25, p50, p75 = np.percentile(data, [25, 50, 75]) + return p50, (p75 - p25)/2 + + +cauchy = cauchy_gen(name='cauchy') + + +class chi_gen(rv_continuous): + r"""A chi continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `chi` is: + + .. math:: + + f(x, k) = \frac{1}{2^{k/2-1} \Gamma \left( k/2 \right)} + x^{k-1} \exp \left( -x^2/2 \right) + + for :math:`x >= 0` and :math:`k > 0` (degrees of freedom, denoted ``df`` + in the implementation). :math:`\Gamma` is the gamma function + (`scipy.special.gamma`). + + Special cases of `chi` are: + + - ``chi(1, loc, scale)`` is equivalent to `halfnorm` + - ``chi(2, 0, scale)`` is equivalent to `rayleigh` + - ``chi(3, 0, scale)`` is equivalent to `maxwell` + + `chi` takes ``df`` as a shape parameter. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("df", False, (0, np.inf), (False, False))] + + def _rvs(self, df, size=None, random_state=None): + return np.sqrt(chi2.rvs(df, size=size, random_state=random_state)) + + def _pdf(self, x, df): + # x**(df-1) * exp(-x**2/2) + # chi.pdf(x, df) = ------------------------- + # 2**(df/2-1) * gamma(df/2) + return np.exp(self._logpdf(x, df)) + + def _logpdf(self, x, df): + l = np.log(2) - .5*np.log(2)*df - sc.gammaln(.5*df) + return l + sc.xlogy(df - 1., x) - .5*x**2 + + def _cdf(self, x, df): + return sc.gammainc(.5*df, .5*x**2) + + def _sf(self, x, df): + return sc.gammaincc(.5*df, .5*x**2) + + def _ppf(self, q, df): + return np.sqrt(2*sc.gammaincinv(.5*df, q)) + + def _isf(self, q, df): + return np.sqrt(2*sc.gammainccinv(.5*df, q)) + + def _stats(self, df): + # poch(df/2, 1/2) = gamma(df/2 + 1/2) / gamma(df/2) + mu = np.sqrt(2) * sc.poch(0.5 * df, 0.5) + mu2 = df - mu*mu + g1 = (2*mu**3.0 + mu*(1-2*df))/np.asarray(np.power(mu2, 1.5)) + g2 = 2*df*(1.0-df)-6*mu**4 + 4*mu**2 * (2*df-1) + g2 /= np.asarray(mu2**2.0) + return mu, mu2, g1, g2 + + def _entropy(self, df): + + def regular_formula(df): + return (sc.gammaln(.5 * df) + + 0.5 * (df - np.log(2) - (df - 1) * sc.digamma(0.5 * df))) + + def asymptotic_formula(df): + return (0.5 + np.log(np.pi)/2 - (df**-1)/6 - (df**-2)/6 + - 4/45*(df**-3) + (df**-4)/15) + + return _lazywhere(df < 3e2, (df, ), regular_formula, + f2=asymptotic_formula) + + +chi = chi_gen(a=0.0, name='chi') + + +class chi2_gen(rv_continuous): + r"""A chi-squared continuous random variable. + + For the noncentral chi-square distribution, see `ncx2`. + + %(before_notes)s + + See Also + -------- + ncx2 + + Notes + ----- + The probability density function for `chi2` is: + + .. math:: + + f(x, k) = \frac{1}{2^{k/2} \Gamma \left( k/2 \right)} + x^{k/2-1} \exp \left( -x/2 \right) + + for :math:`x > 0` and :math:`k > 0` (degrees of freedom, denoted ``df`` + in the implementation). + + `chi2` takes ``df`` as a shape parameter. 
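+
+    The cumulative distribution function is the regularized lower incomplete
+    gamma function,
+
+    .. math::
+
+        F(x, k) = P\left(\frac{k}{2}, \frac{x}{2}\right),
+
+    computed here with `scipy.special.chdtr`.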
+ + The chi-squared distribution is a special case of the gamma + distribution, with gamma parameters ``a = df/2``, ``loc = 0`` and + ``scale = 2``. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("df", False, (0, np.inf), (False, False))] + + def _rvs(self, df, size=None, random_state=None): + return random_state.chisquare(df, size) + + def _pdf(self, x, df): + # chi2.pdf(x, df) = 1 / (2*gamma(df/2)) * (x/2)**(df/2-1) * exp(-x/2) + return np.exp(self._logpdf(x, df)) + + def _logpdf(self, x, df): + return sc.xlogy(df/2.-1, x) - x/2. - sc.gammaln(df/2.) - (np.log(2)*df)/2. + + def _cdf(self, x, df): + return sc.chdtr(df, x) + + def _sf(self, x, df): + return sc.chdtrc(df, x) + + def _isf(self, p, df): + return sc.chdtri(df, p) + + def _ppf(self, p, df): + return 2*sc.gammaincinv(df/2, p) + + def _stats(self, df): + mu = df + mu2 = 2*df + g1 = 2*np.sqrt(2.0/df) + g2 = 12.0/df + return mu, mu2, g1, g2 + + def _entropy(self, df): + half_df = 0.5 * df + + def regular_formula(half_df): + return (half_df + np.log(2) + sc.gammaln(half_df) + + (1 - half_df) * sc.psi(half_df)) + + def asymptotic_formula(half_df): + # plug in the above formula the following asymptotic + # expansions: + # ln(gamma(a)) ~ (a - 0.5) * ln(a) - a + 0.5 * ln(2 * pi) + + # 1/(12 * a) - 1/(360 * a**3) + # psi(a) ~ ln(a) - 1/(2 * a) - 1/(3 * a**2) + 1/120 * a**4) + c = np.log(2) + 0.5*(1 + np.log(2*np.pi)) + h = 0.5/half_df + return (h*(-2/3 + h*(-1/3 + h*(-4/45 + h/7.5))) + + 0.5*np.log(half_df) + c) + + return _lazywhere(half_df < 125, (half_df, ), + regular_formula, + f2=asymptotic_formula) + + +chi2 = chi2_gen(a=0.0, name='chi2') + + +class cosine_gen(rv_continuous): + r"""A cosine continuous random variable. + + %(before_notes)s + + Notes + ----- + The cosine distribution is an approximation to the normal distribution. + The probability density function for `cosine` is: + + .. math:: + + f(x) = \frac{1}{2\pi} (1+\cos(x)) + + for :math:`-\pi \le x \le \pi`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + # cosine.pdf(x) = 1/(2*pi) * (1+cos(x)) + return 1.0/2/np.pi*(1+np.cos(x)) + + def _logpdf(self, x): + c = np.cos(x) + return _lazywhere(c != -1, (c,), + lambda c: np.log1p(c) - np.log(2*np.pi), + fillvalue=-np.inf) + + def _cdf(self, x): + return scu._cosine_cdf(x) + + def _sf(self, x): + return scu._cosine_cdf(-x) + + def _ppf(self, p): + return scu._cosine_invcdf(p) + + def _isf(self, p): + return -scu._cosine_invcdf(p) + + def _stats(self): + v = (np.pi * np.pi / 3.0) - 2.0 + k = -6.0 * (np.pi**4 - 90) / (5.0 * (np.pi * np.pi - 6)**2) + return 0.0, v, 0.0, k + + def _entropy(self): + return np.log(4*np.pi)-1.0 + + +cosine = cosine_gen(a=-np.pi, b=np.pi, name='cosine') + + +class dgamma_gen(rv_continuous): + r"""A double gamma continuous random variable. + + The double gamma distribution is also known as the reflected gamma + distribution [1]_. + + %(before_notes)s + + Notes + ----- + The probability density function for `dgamma` is: + + .. math:: + + f(x, a) = \frac{1}{2\Gamma(a)} |x|^{a-1} \exp(-|x|) + + for a real number :math:`x` and :math:`a > 0`. :math:`\Gamma` is the + gamma function (`scipy.special.gamma`). + + `dgamma` takes ``a`` as a shape parameter for :math:`a`. + + %(after_notes)s + + References + ---------- + .. [1] Johnson, Kotz, and Balakrishnan, "Continuous Univariate + Distributions, Volume 1", Second Edition, John Wiley and Sons + (1994). 
+ + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("a", False, (0, np.inf), (False, False))] + + def _rvs(self, a, size=None, random_state=None): + u = random_state.uniform(size=size) + gm = gamma.rvs(a, size=size, random_state=random_state) + return gm * np.where(u >= 0.5, 1, -1) + + def _pdf(self, x, a): + # dgamma.pdf(x, a) = 1 / (2*gamma(a)) * abs(x)**(a-1) * exp(-abs(x)) + ax = abs(x) + return 1.0/(2*sc.gamma(a))*ax**(a-1.0) * np.exp(-ax) + + def _logpdf(self, x, a): + ax = abs(x) + return sc.xlogy(a - 1.0, ax) - ax - np.log(2) - sc.gammaln(a) + + def _cdf(self, x, a): + return np.where(x > 0, + 0.5 + 0.5*sc.gammainc(a, x), + 0.5*sc.gammaincc(a, -x)) + + def _sf(self, x, a): + return np.where(x > 0, + 0.5*sc.gammaincc(a, x), + 0.5 + 0.5*sc.gammainc(a, -x)) + + def _entropy(self, a): + return stats.gamma._entropy(a) - np.log(0.5) + + def _ppf(self, q, a): + return np.where(q > 0.5, + sc.gammaincinv(a, 2*q - 1), + -sc.gammainccinv(a, 2*q)) + + def _isf(self, q, a): + return np.where(q > 0.5, + -sc.gammaincinv(a, 2*q - 1), + sc.gammainccinv(a, 2*q)) + + def _stats(self, a): + mu2 = a*(a+1.0) + return 0.0, mu2, 0.0, (a+2.0)*(a+3.0)/mu2-3.0 + + +dgamma = dgamma_gen(name='dgamma') + + +class dweibull_gen(rv_continuous): + r"""A double Weibull continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `dweibull` is given by + + .. math:: + + f(x, c) = c / 2 |x|^{c-1} \exp(-|x|^c) + + for a real number :math:`x` and :math:`c > 0`. + + `dweibull` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _rvs(self, c, size=None, random_state=None): + u = random_state.uniform(size=size) + w = weibull_min.rvs(c, size=size, random_state=random_state) + return w * (np.where(u >= 0.5, 1, -1)) + + def _pdf(self, x, c): + # dweibull.pdf(x, c) = c / 2 * abs(x)**(c-1) * exp(-abs(x)**c) + ax = abs(x) + Px = c / 2.0 * ax**(c-1.0) * np.exp(-ax**c) + return Px + + def _logpdf(self, x, c): + ax = abs(x) + return np.log(c) - np.log(2.0) + sc.xlogy(c - 1.0, ax) - ax**c + + def _cdf(self, x, c): + Cx1 = 0.5 * np.exp(-abs(x)**c) + return np.where(x > 0, 1 - Cx1, Cx1) + + def _ppf(self, q, c): + fac = 2. * np.where(q <= 0.5, q, 1. - q) + fac = np.power(-np.log(fac), 1.0 / c) + return np.where(q > 0.5, fac, -fac) + + def _sf(self, x, c): + half_weibull_min_sf = 0.5 * stats.weibull_min._sf(np.abs(x), c) + return np.where(x > 0, half_weibull_min_sf, 1 - half_weibull_min_sf) + + def _isf(self, q, c): + double_q = 2. * np.where(q <= 0.5, q, 1. - q) + weibull_min_isf = stats.weibull_min._isf(double_q, c) + return np.where(q > 0.5, -weibull_min_isf, weibull_min_isf) + + def _munp(self, n, c): + return (1 - (n % 2)) * sc.gamma(1.0 + 1.0 * n / c) + + # since we know that all odd moments are zeros, return them at once. + # returning Nones from _stats makes the public stats call _munp + # so overall we're saving one or two gamma function evaluations here. + def _stats(self, c): + return 0, None, 0, None + + def _entropy(self, c): + h = stats.weibull_min._entropy(c) - np.log(0.5) + return h + + +dweibull = dweibull_gen(name='dweibull') + + +class expon_gen(rv_continuous): + r"""An exponential continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `expon` is: + + .. math:: + + f(x) = \exp(-x) + + for :math:`x \ge 0`. 
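+
+    The corresponding cumulative distribution and survival functions are
+    :math:`F(x) = 1 - \exp(-x)` and :math:`S(x) = \exp(-x)`.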
+ + %(after_notes)s + + A common parameterization for `expon` is in terms of the rate parameter + ``lambda``, such that ``pdf = lambda * exp(-lambda * x)``. This + parameterization corresponds to using ``scale = 1 / lambda``. + + The exponential distribution is a special case of the gamma + distributions, with gamma shape parameter ``a = 1``. + + %(example)s + + """ + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return random_state.standard_exponential(size) + + def _pdf(self, x): + # expon.pdf(x) = exp(-x) + return np.exp(-x) + + def _logpdf(self, x): + return -x + + def _cdf(self, x): + return -sc.expm1(-x) + + def _ppf(self, q): + return -sc.log1p(-q) + + def _sf(self, x): + return np.exp(-x) + + def _logsf(self, x): + return -x + + def _isf(self, q): + return -np.log(q) + + def _stats(self): + return 1.0, 1.0, 2.0, 6.0 + + def _entropy(self): + return 1.0 + + @_call_super_mom + @replace_notes_in_docstring(rv_continuous, notes="""\ + When `method='MLE'`, + this function uses explicit formulas for the maximum likelihood + estimation of the exponential distribution parameters, so the + `optimizer`, `loc` and `scale` keyword arguments are + ignored.\n\n""") + def fit(self, data, *args, **kwds): + if len(args) > 0: + raise TypeError("Too many arguments.") + + floc = kwds.pop('floc', None) + fscale = kwds.pop('fscale', None) + + _remove_optimizer_parameters(kwds) + + if floc is not None and fscale is not None: + # This check is for consistency with `rv_continuous.fit`. + raise ValueError("All parameters fixed. There is nothing to " + "optimize.") + + data = np.asarray(data) + + if not np.isfinite(data).all(): + raise ValueError("The data contains non-finite values.") + + data_min = data.min() + + if floc is None: + # ML estimate of the location is the minimum of the data. + loc = data_min + else: + loc = floc + if data_min < loc: + # There are values that are less than the specified loc. + raise FitDataError("expon", lower=floc, upper=np.inf) + + if fscale is None: + # ML estimate of the scale is the shifted mean. + scale = data.mean() - loc + else: + scale = fscale + + # We expect the return values to be floating point, so ensure it + # by explicitly converting to float. + return float(loc), float(scale) + + +expon = expon_gen(a=0.0, name='expon') + + +class exponnorm_gen(rv_continuous): + r"""An exponentially modified Normal continuous random variable. + + Also known as the exponentially modified Gaussian distribution [1]_. + + %(before_notes)s + + Notes + ----- + The probability density function for `exponnorm` is: + + .. math:: + + f(x, K) = \frac{1}{2K} \exp\left(\frac{1}{2 K^2} - x / K \right) + \text{erfc}\left(-\frac{x - 1/K}{\sqrt{2}}\right) + + where :math:`x` is a real number and :math:`K > 0`. + + It can be thought of as the sum of a standard normal random variable + and an independent exponentially distributed random variable with rate + ``1/K``. + + %(after_notes)s + + An alternative parameterization of this distribution (for example, in + the Wikipedia article [1]_) involves three parameters, :math:`\mu`, + :math:`\lambda` and :math:`\sigma`. + + In the present parameterization this corresponds to having ``loc`` and + ``scale`` equal to :math:`\mu` and :math:`\sigma`, respectively, and + shape parameter :math:`K = 1/(\sigma\lambda)`. + + .. versionadded:: 0.16.0 + + References + ---------- + .. 
[1] Exponentially modified Gaussian distribution, Wikipedia, + https://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("K", False, (0, np.inf), (False, False))] + + def _rvs(self, K, size=None, random_state=None): + expval = random_state.standard_exponential(size) * K + gval = random_state.standard_normal(size) + return expval + gval + + def _pdf(self, x, K): + return np.exp(self._logpdf(x, K)) + + def _logpdf(self, x, K): + invK = 1.0 / K + exparg = invK * (0.5 * invK - x) + return exparg + _norm_logcdf(x - invK) - np.log(K) + + def _cdf(self, x, K): + invK = 1.0 / K + expval = invK * (0.5 * invK - x) + logprod = expval + _norm_logcdf(x - invK) + return _norm_cdf(x) - np.exp(logprod) + + def _sf(self, x, K): + invK = 1.0 / K + expval = invK * (0.5 * invK - x) + logprod = expval + _norm_logcdf(x - invK) + return _norm_cdf(-x) + np.exp(logprod) + + def _stats(self, K): + K2 = K * K + opK2 = 1.0 + K2 + skw = 2 * K**3 * opK2**(-1.5) + krt = 6.0 * K2 * K2 * opK2**(-2) + return K, opK2, skw, krt + + +exponnorm = exponnorm_gen(name='exponnorm') + + +def _pow1pm1(x, y): + """ + Compute (1 + x)**y - 1. + + Uses expm1 and xlog1py to avoid loss of precision when + (1 + x)**y is close to 1. + + Note that the inverse of this function with respect to x is + ``_pow1pm1(x, 1/y)``. That is, if + + t = _pow1pm1(x, y) + + then + + x = _pow1pm1(t, 1/y) + """ + return np.expm1(sc.xlog1py(y, x)) + + +class exponweib_gen(rv_continuous): + r"""An exponentiated Weibull continuous random variable. + + %(before_notes)s + + See Also + -------- + weibull_min, numpy.random.Generator.weibull + + Notes + ----- + The probability density function for `exponweib` is: + + .. math:: + + f(x, a, c) = a c [1-\exp(-x^c)]^{a-1} \exp(-x^c) x^{c-1} + + and its cumulative distribution function is: + + .. math:: + + F(x, a, c) = [1-\exp(-x^c)]^a + + for :math:`x > 0`, :math:`a > 0`, :math:`c > 0`. + + `exponweib` takes :math:`a` and :math:`c` as shape parameters: + + * :math:`a` is the exponentiation parameter, + with the special case :math:`a=1` corresponding to the + (non-exponentiated) Weibull distribution `weibull_min`. + * :math:`c` is the shape parameter of the non-exponentiated Weibull law. + + %(after_notes)s + + References + ---------- + https://en.wikipedia.org/wiki/Exponentiated_Weibull_distribution + + %(example)s + + """ + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ic = _ShapeInfo("c", False, (0, np.inf), (False, False)) + return [ia, ic] + + def _pdf(self, x, a, c): + # exponweib.pdf(x, a, c) = + # a * c * (1-exp(-x**c))**(a-1) * exp(-x**c)*x**(c-1) + return np.exp(self._logpdf(x, a, c)) + + def _logpdf(self, x, a, c): + negxc = -x**c + exm1c = -sc.expm1(negxc) + logp = (np.log(a) + np.log(c) + sc.xlogy(a - 1.0, exm1c) + + negxc + sc.xlogy(c - 1.0, x)) + return logp + + def _cdf(self, x, a, c): + exm1c = -sc.expm1(-x**c) + return exm1c**a + + def _ppf(self, q, a, c): + return (-sc.log1p(-q**(1.0/a)))**np.asarray(1.0/c) + + def _sf(self, x, a, c): + return -_pow1pm1(-np.exp(-x**c), a) + + def _isf(self, p, a, c): + return (-np.log(-_pow1pm1(-p, 1/a)))**(1/c) + + +exponweib = exponweib_gen(a=0.0, name='exponweib') + + +class exponpow_gen(rv_continuous): + r"""An exponential power continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `exponpow` is: + + .. 
math:: + + f(x, b) = b x^{b-1} \exp(1 + x^b - \exp(x^b)) + + for :math:`x \ge 0`, :math:`b > 0`. Note that this is a different + distribution from the exponential power distribution that is also known + under the names "generalized normal" or "generalized Gaussian". + + `exponpow` takes ``b`` as a shape parameter for :math:`b`. + + %(after_notes)s + + References + ---------- + http://www.math.wm.edu/~leemis/chart/UDR/PDFs/Exponentialpower.pdf + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("b", False, (0, np.inf), (False, False))] + + def _pdf(self, x, b): + # exponpow.pdf(x, b) = b * x**(b-1) * exp(1 + x**b - exp(x**b)) + return np.exp(self._logpdf(x, b)) + + def _logpdf(self, x, b): + xb = x**b + f = 1 + np.log(b) + sc.xlogy(b - 1.0, x) + xb - np.exp(xb) + return f + + def _cdf(self, x, b): + return -sc.expm1(-sc.expm1(x**b)) + + def _sf(self, x, b): + return np.exp(-sc.expm1(x**b)) + + def _isf(self, x, b): + return (sc.log1p(-np.log(x)))**(1./b) + + def _ppf(self, q, b): + return pow(sc.log1p(-sc.log1p(-q)), 1.0/b) + + +exponpow = exponpow_gen(a=0.0, name='exponpow') + + +class fatiguelife_gen(rv_continuous): + r"""A fatigue-life (Birnbaum-Saunders) continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `fatiguelife` is: + + .. math:: + + f(x, c) = \frac{x+1}{2c\sqrt{2\pi x^3}} \exp(-\frac{(x-1)^2}{2x c^2}) + + for :math:`x >= 0` and :math:`c > 0`. + + `fatiguelife` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + References + ---------- + .. [1] "Birnbaum-Saunders distribution", + https://en.wikipedia.org/wiki/Birnbaum-Saunders_distribution + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _rvs(self, c, size=None, random_state=None): + z = random_state.standard_normal(size) + x = 0.5*c*z + x2 = x*x + t = 1.0 + 2*x2 + 2*x*np.sqrt(1 + x2) + return t + + def _pdf(self, x, c): + # fatiguelife.pdf(x, c) = + # (x+1) / (2*c*sqrt(2*pi*x**3)) * exp(-(x-1)**2/(2*x*c**2)) + return np.exp(self._logpdf(x, c)) + + def _logpdf(self, x, c): + return (np.log(x+1) - (x-1)**2 / (2.0*x*c**2) - np.log(2*c) - + 0.5*(np.log(2*np.pi) + 3*np.log(x))) + + def _cdf(self, x, c): + return _norm_cdf(1.0 / c * (np.sqrt(x) - 1.0/np.sqrt(x))) + + def _ppf(self, q, c): + tmp = c * _norm_ppf(q) + return 0.25 * (tmp + np.sqrt(tmp**2 + 4))**2 + + def _sf(self, x, c): + return _norm_sf(1.0 / c * (np.sqrt(x) - 1.0/np.sqrt(x))) + + def _isf(self, q, c): + tmp = -c * _norm_ppf(q) + return 0.25 * (tmp + np.sqrt(tmp**2 + 4))**2 + + def _stats(self, c): + # NB: the formula for kurtosis in wikipedia seems to have an error: + # it's 40, not 41. At least it disagrees with the one from Wolfram + # Alpha. And the latter one, below, passes the tests, while the wiki + # one doesn't So far I didn't have the guts to actually check the + # coefficients from the expressions for the raw moments. + c2 = c*c + mu = c2 / 2.0 + 1.0 + den = 5.0 * c2 + 4.0 + mu2 = c2*den / 4.0 + g1 = 4 * c * (11*c2 + 6.0) / np.power(den, 1.5) + g2 = 6 * c2 * (93*c2 + 40.0) / den**2.0 + return mu, mu2, g1, g2 + + +fatiguelife = fatiguelife_gen(a=0.0, name='fatiguelife') + + +class foldcauchy_gen(rv_continuous): + r"""A folded Cauchy continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `foldcauchy` is: + + .. 
math:: + + f(x, c) = \frac{1}{\pi (1+(x-c)^2)} + \frac{1}{\pi (1+(x+c)^2)} + + for :math:`x \ge 0` and :math:`c \ge 0`. + + `foldcauchy` takes ``c`` as a shape parameter for :math:`c`. + + %(example)s + + """ + def _argcheck(self, c): + return c >= 0 + + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (True, False))] + + def _rvs(self, c, size=None, random_state=None): + return abs(cauchy.rvs(loc=c, size=size, + random_state=random_state)) + + def _pdf(self, x, c): + # foldcauchy.pdf(x, c) = 1/(pi*(1+(x-c)**2)) + 1/(pi*(1+(x+c)**2)) + return 1.0/np.pi*(1.0/(1+(x-c)**2) + 1.0/(1+(x+c)**2)) + + def _cdf(self, x, c): + return 1.0/np.pi*(np.arctan(x-c) + np.arctan(x+c)) + + def _sf(self, x, c): + # 1 - CDF(x, c) = 1 - (atan(x - c) + atan(x + c))/pi + # = ((pi/2 - atan(x - c)) + (pi/2 - atan(x + c)))/pi + # = (acot(x - c) + acot(x + c))/pi + # = (atan2(1, x - c) + atan2(1, x + c))/pi + return (np.arctan2(1, x - c) + np.arctan2(1, x + c))/np.pi + + def _stats(self, c): + return np.inf, np.inf, np.nan, np.nan + + +foldcauchy = foldcauchy_gen(a=0.0, name='foldcauchy') + + +class f_gen(rv_continuous): + r"""An F continuous random variable. + + For the noncentral F distribution, see `ncf`. + + %(before_notes)s + + See Also + -------- + ncf + + Notes + ----- + The F distribution with :math:`df_1 > 0` and :math:`df_2 > 0` degrees of freedom is + the distribution of the ratio of two independent chi-squared distributions with + :math:`df_1` and :math:`df_2` degrees of freedom, after rescaling by + :math:`df_2 / df_1`. + + The probability density function for `f` is: + + .. math:: + + f(x, df_1, df_2) = \frac{df_2^{df_2/2} df_1^{df_1/2} x^{df_1 / 2-1}} + {(df_2+df_1 x)^{(df_1+df_2)/2} + B(df_1/2, df_2/2)} + + for :math:`x > 0`. + + `f` accepts shape parameters ``dfn`` and ``dfd`` for :math:`df_1`, the degrees of + freedom of the chi-squared distribution in the numerator, and :math:`df_2`, the + degrees of freedom of the chi-squared distribution in the denominator, respectively. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + idfn = _ShapeInfo("dfn", False, (0, np.inf), (False, False)) + idfd = _ShapeInfo("dfd", False, (0, np.inf), (False, False)) + return [idfn, idfd] + + def _rvs(self, dfn, dfd, size=None, random_state=None): + return random_state.f(dfn, dfd, size) + + def _pdf(self, x, dfn, dfd): + # df2**(df2/2) * df1**(df1/2) * x**(df1/2-1) + # F.pdf(x, df1, df2) = -------------------------------------------- + # (df2+df1*x)**((df1+df2)/2) * B(df1/2, df2/2) + return np.exp(self._logpdf(x, dfn, dfd)) + + def _logpdf(self, x, dfn, dfd): + n = 1.0 * dfn + m = 1.0 * dfd + lPx = (m/2 * np.log(m) + n/2 * np.log(n) + sc.xlogy(n/2 - 1, x) + - (((n+m)/2) * np.log(m + n*x) + sc.betaln(n/2, m/2))) + return lPx + + def _cdf(self, x, dfn, dfd): + return sc.fdtr(dfn, dfd, x) + + def _sf(self, x, dfn, dfd): + return sc.fdtrc(dfn, dfd, x) + + def _ppf(self, q, dfn, dfd): + return sc.fdtri(dfn, dfd, q) + + def _stats(self, dfn, dfd): + v1, v2 = 1. * dfn, 1. * dfd + v2_2, v2_4, v2_6, v2_8 = v2 - 2., v2 - 4., v2 - 6., v2 - 8. + + mu = _lazywhere( + v2 > 2, (v2, v2_2), + lambda v2, v2_2: v2 / v2_2, + np.inf) + + mu2 = _lazywhere( + v2 > 4, (v1, v2, v2_2, v2_4), + lambda v1, v2, v2_2, v2_4: + 2 * v2 * v2 * (v1 + v2_2) / (v1 * v2_2**2 * v2_4), + np.inf) + + g1 = _lazywhere( + v2 > 6, (v1, v2_2, v2_4, v2_6), + lambda v1, v2_2, v2_4, v2_6: + (2 * v1 + v2_2) / v2_6 * np.sqrt(v2_4 / (v1 * (v1 + v2_2))), + np.nan) + g1 *= np.sqrt(8.) 
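+        # With the sqrt(8) factor, g1 equals the standard F-distribution
+        # skewness (2*v1 + v2 - 2)*sqrt(8*(v2 - 4)) /
+        # ((v2 - 6)*sqrt(v1*(v1 + v2 - 2))), defined only for v2 > 6,
+        # which is why the _lazywhere above is conditioned on v2 > 6.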
+ + g2 = _lazywhere( + v2 > 8, (g1, v2_6, v2_8), + lambda g1, v2_6, v2_8: (8 + g1 * g1 * v2_6) / v2_8, + np.nan) + g2 *= 3. / 2. + + return mu, mu2, g1, g2 + + def _entropy(self, dfn, dfd): + # the formula found in literature is incorrect. This one yields the + # same result as numerical integration using the generic entropy + # definition. This is also tested in tests/test_conntinous_basic + half_dfn = 0.5 * dfn + half_dfd = 0.5 * dfd + half_sum = 0.5 * (dfn + dfd) + + return (np.log(dfd) - np.log(dfn) + sc.betaln(half_dfn, half_dfd) + + (1 - half_dfn) * sc.psi(half_dfn) - (1 + half_dfd) * + sc.psi(half_dfd) + half_sum * sc.psi(half_sum)) + + +f = f_gen(a=0.0, name='f') + + +## Folded Normal +## abs(Z) where (Z is normal with mu=L and std=S so that c=abs(L)/S) +## +## note: regress docs have scale parameter correct, but first parameter +## he gives is a shape parameter A = c * scale + +## Half-normal is folded normal with shape-parameter c=0. + +class foldnorm_gen(rv_continuous): + r"""A folded normal continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `foldnorm` is: + + .. math:: + + f(x, c) = \sqrt{2/\pi} cosh(c x) \exp(-\frac{x^2+c^2}{2}) + + for :math:`x \ge 0` and :math:`c \ge 0`. + + `foldnorm` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, c): + return c >= 0 + + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (True, False))] + + def _rvs(self, c, size=None, random_state=None): + return abs(random_state.standard_normal(size) + c) + + def _pdf(self, x, c): + # foldnormal.pdf(x, c) = sqrt(2/pi) * cosh(c*x) * exp(-(x**2+c**2)/2) + return _norm_pdf(x + c) + _norm_pdf(x-c) + + def _cdf(self, x, c): + sqrt_two = np.sqrt(2) + return 0.5 * (sc.erf((x - c)/sqrt_two) + sc.erf((x + c)/sqrt_two)) + + def _sf(self, x, c): + return _norm_sf(x - c) + _norm_sf(x + c) + + def _stats(self, c): + # Regina C. Elandt, Technometrics 3, 551 (1961) + # https://www.jstor.org/stable/1266561 + # + c2 = c*c + expfac = np.exp(-0.5*c2) / np.sqrt(2.*np.pi) + + mu = 2.*expfac + c * sc.erf(c/np.sqrt(2)) + mu2 = c2 + 1 - mu*mu + + g1 = 2. * (mu*mu*mu - c2*mu - expfac) + g1 /= np.power(mu2, 1.5) + + g2 = c2 * (c2 + 6.) + 3 + 8.*expfac*mu + g2 += (2. * (c2 - 3.) - 3. * mu**2) * mu**2 + g2 = g2 / mu2**2.0 - 3. + + return mu, mu2, g1, g2 + + +foldnorm = foldnorm_gen(a=0.0, name='foldnorm') + + +class weibull_min_gen(rv_continuous): + r"""Weibull minimum continuous random variable. + + The Weibull Minimum Extreme Value distribution, from extreme value theory + (Fisher-Gnedenko theorem), is also often simply called the Weibull + distribution. It arises as the limiting distribution of the rescaled + minimum of iid random variables. + + %(before_notes)s + + See Also + -------- + weibull_max, numpy.random.Generator.weibull, exponweib + + Notes + ----- + The probability density function for `weibull_min` is: + + .. math:: + + f(x, c) = c x^{c-1} \exp(-x^c) + + for :math:`x > 0`, :math:`c > 0`. + + `weibull_min` takes ``c`` as a shape parameter for :math:`c`. + (named :math:`k` in Wikipedia article and :math:`a` in + ``numpy.random.weibull``). Special shape values are :math:`c=1` and + :math:`c=2` where Weibull distribution reduces to the `expon` and + `rayleigh` distributions respectively. + + Suppose ``X`` is an exponentially distributed random variable with + scale ``s``. Then ``Y = X**k`` is `weibull_min` distributed with shape + ``c = 1/k`` and scale ``s**k``. 
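+
+    The cumulative distribution function is
+
+    .. math::
+
+        F(x, c) = 1 - \exp(-x^c)
+
+    and the raw moments of the standardized distribution are
+    :math:`E[X^n] = \Gamma(1 + n/c)`.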
+ + %(after_notes)s + + References + ---------- + https://en.wikipedia.org/wiki/Weibull_distribution + + https://en.wikipedia.org/wiki/Fisher-Tippett-Gnedenko_theorem + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # weibull_min.pdf(x, c) = c * x**(c-1) * exp(-x**c) + return c*pow(x, c-1)*np.exp(-pow(x, c)) + + def _logpdf(self, x, c): + return np.log(c) + sc.xlogy(c - 1, x) - pow(x, c) + + def _cdf(self, x, c): + return -sc.expm1(-pow(x, c)) + + def _ppf(self, q, c): + return pow(-sc.log1p(-q), 1.0/c) + + def _sf(self, x, c): + return np.exp(self._logsf(x, c)) + + def _logsf(self, x, c): + return -pow(x, c) + + def _isf(self, q, c): + return (-np.log(q))**(1/c) + + def _munp(self, n, c): + return sc.gamma(1.0+n*1.0/c) + + def _entropy(self, c): + return -_EULER / c - np.log(c) + _EULER + 1 + + @extend_notes_in_docstring(rv_continuous, notes="""\ + If ``method='mm'``, parameters fixed by the user are respected, and the + remaining parameters are used to match distribution and sample moments + where possible. For example, if the user fixes the location with + ``floc``, the parameters will only match the distribution skewness and + variance to the sample skewness and variance; no attempt will be made + to match the means or minimize a norm of the errors. + \n\n""") + def fit(self, data, *args, **kwds): + + if isinstance(data, CensoredData): + if data.num_censored() == 0: + data = data._uncensor() + else: + return super().fit(data, *args, **kwds) + + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + + # this extracts fixed shape, location, and scale however they + # are specified, and also leaves them in `kwds` + data, fc, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + method = kwds.get("method", "mle").lower() + + # See https://en.wikipedia.org/wiki/Weibull_distribution#Moments for + # moment formulas. + def skew(c): + gamma1 = sc.gamma(1+1/c) + gamma2 = sc.gamma(1+2/c) + gamma3 = sc.gamma(1+3/c) + num = 2 * gamma1**3 - 3*gamma1*gamma2 + gamma3 + den = (gamma2 - gamma1**2)**(3/2) + return num/den + + # For c in [1e2, 3e4], population skewness appears to approach + # asymptote near -1.139, but past c > 3e4, skewness begins to vary + # wildly, and MoM won't provide a good guess. Get out early. + s = stats.skew(data) + max_c = 1e4 + s_min = skew(max_c) + if s < s_min and method != "mm" and fc is None and not args: + return super().fit(data, *args, **kwds) + + # If method is method of moments, we don't need the user's guesses. + # Otherwise, extract the guesses from args and kwds. + if method == "mm": + c, loc, scale = None, None, None + else: + c = args[0] if len(args) else None + loc = kwds.pop('loc', None) + scale = kwds.pop('scale', None) + + if fc is None and c is None: # not fixed and no guess: use MoM + # Solve for c that matches sample distribution skewness to sample + # skewness. + # we start having numerical issues with `weibull_min` with + # parameters outside this range - and not just in this method. + # We could probably improve the situation by doing everything + # in the log space, but that is for another time. 
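+            # skew(c) falls from very large values near c = 0.02 towards the
+            # asymptote near -1.139 as c -> max_c, so whenever the sample
+            # skewness s lies in that range the bracket contains a sign
+            # change of skew(c) - s and bisection can locate c.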
+ c = root_scalar(lambda c: skew(c) - s, bracket=[0.02, max_c], + method='bisect').root + elif fc is not None: # fixed: use it + c = fc + + if fscale is None and scale is None: + v = np.var(data) + scale = np.sqrt(v / (sc.gamma(1+2/c) - sc.gamma(1+1/c)**2)) + elif fscale is not None: + scale = fscale + + if floc is None and loc is None: + m = np.mean(data) + loc = m - scale*sc.gamma(1 + 1/c) + elif floc is not None: + loc = floc + + if method == 'mm': + return c, loc, scale + else: + # At this point, parameter "guesses" may equal the fixed parameters + # in kwds. No harm in passing them as guesses, too. + return super().fit(data, c, loc=loc, scale=scale, **kwds) + + +weibull_min = weibull_min_gen(a=0.0, name='weibull_min') + + +class truncweibull_min_gen(rv_continuous): + r"""A doubly truncated Weibull minimum continuous random variable. + + %(before_notes)s + + See Also + -------- + weibull_min, truncexpon + + Notes + ----- + The probability density function for `truncweibull_min` is: + + .. math:: + + f(x, a, b, c) = \frac{c x^{c-1} \exp(-x^c)}{\exp(-a^c) - \exp(-b^c)} + + for :math:`a < x <= b`, :math:`0 \le a < b` and :math:`c > 0`. + + `truncweibull_min` takes :math:`a`, :math:`b`, and :math:`c` as shape + parameters. + + Notice that the truncation values, :math:`a` and :math:`b`, are defined in + standardized form: + + .. math:: + + a = (u_l - loc)/scale + b = (u_r - loc)/scale + + where :math:`u_l` and :math:`u_r` are the specific left and right + truncation values, respectively. In other words, the support of the + distribution becomes :math:`(a*scale + loc) < x <= (b*scale + loc)` when + :math:`loc` and/or :math:`scale` are provided. + + %(after_notes)s + + References + ---------- + + .. [1] Rinne, H. "The Weibull Distribution: A Handbook". CRC Press (2009). + + %(example)s + + """ + def _argcheck(self, c, a, b): + return (a >= 0.) & (b > a) & (c > 0.) 
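+
+    # Example of the standardized truncation: with a=0.5, b=2.0, loc=1 and
+    # scale=3 the support of the distribution is
+    # (0.5*3 + 1, 2.0*3 + 1] = (2.5, 7.0].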
+ + def _shape_info(self): + ic = _ShapeInfo("c", False, (0, np.inf), (False, False)) + ia = _ShapeInfo("a", False, (0, np.inf), (True, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + return [ic, ia, ib] + + def _fitstart(self, data): + # Arbitrary, but default a=b=c=1 is not valid + return super()._fitstart(data, args=(1, 0, 1)) + + def _get_support(self, c, a, b): + return a, b + + def _pdf(self, x, c, a, b): + denum = (np.exp(-pow(a, c)) - np.exp(-pow(b, c))) + return (c * pow(x, c-1) * np.exp(-pow(x, c))) / denum + + def _logpdf(self, x, c, a, b): + logdenum = np.log(np.exp(-pow(a, c)) - np.exp(-pow(b, c))) + return np.log(c) + sc.xlogy(c - 1, x) - pow(x, c) - logdenum + + def _cdf(self, x, c, a, b): + num = (np.exp(-pow(a, c)) - np.exp(-pow(x, c))) + denum = (np.exp(-pow(a, c)) - np.exp(-pow(b, c))) + return num / denum + + def _logcdf(self, x, c, a, b): + lognum = np.log(np.exp(-pow(a, c)) - np.exp(-pow(x, c))) + logdenum = np.log(np.exp(-pow(a, c)) - np.exp(-pow(b, c))) + return lognum - logdenum + + def _sf(self, x, c, a, b): + num = (np.exp(-pow(x, c)) - np.exp(-pow(b, c))) + denum = (np.exp(-pow(a, c)) - np.exp(-pow(b, c))) + return num / denum + + def _logsf(self, x, c, a, b): + lognum = np.log(np.exp(-pow(x, c)) - np.exp(-pow(b, c))) + logdenum = np.log(np.exp(-pow(a, c)) - np.exp(-pow(b, c))) + return lognum - logdenum + + def _isf(self, q, c, a, b): + return pow( + -np.log((1 - q) * np.exp(-pow(b, c)) + q * np.exp(-pow(a, c))), 1/c + ) + + def _ppf(self, q, c, a, b): + return pow( + -np.log((1 - q) * np.exp(-pow(a, c)) + q * np.exp(-pow(b, c))), 1/c + ) + + def _munp(self, n, c, a, b): + gamma_fun = sc.gamma(n/c + 1.) * ( + sc.gammainc(n/c + 1., pow(b, c)) - sc.gammainc(n/c + 1., pow(a, c)) + ) + denum = (np.exp(-pow(a, c)) - np.exp(-pow(b, c))) + return gamma_fun / denum + + +truncweibull_min = truncweibull_min_gen(name='truncweibull_min') + + +class weibull_max_gen(rv_continuous): + r"""Weibull maximum continuous random variable. + + The Weibull Maximum Extreme Value distribution, from extreme value theory + (Fisher-Gnedenko theorem), is the limiting distribution of rescaled + maximum of iid random variables. This is the distribution of -X + if X is from the `weibull_min` function. + + %(before_notes)s + + See Also + -------- + weibull_min + + Notes + ----- + The probability density function for `weibull_max` is: + + .. math:: + + f(x, c) = c (-x)^{c-1} \exp(-(-x)^c) + + for :math:`x < 0`, :math:`c > 0`. + + `weibull_max` takes ``c`` as a shape parameter for :math:`c`. 
+ + %(after_notes)s + + References + ---------- + https://en.wikipedia.org/wiki/Weibull_distribution + + https://en.wikipedia.org/wiki/Fisher-Tippett-Gnedenko_theorem + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # weibull_max.pdf(x, c) = c * (-x)**(c-1) * exp(-(-x)**c) + return c*pow(-x, c-1)*np.exp(-pow(-x, c)) + + def _logpdf(self, x, c): + return np.log(c) + sc.xlogy(c-1, -x) - pow(-x, c) + + def _cdf(self, x, c): + return np.exp(-pow(-x, c)) + + def _logcdf(self, x, c): + return -pow(-x, c) + + def _sf(self, x, c): + return -sc.expm1(-pow(-x, c)) + + def _ppf(self, q, c): + return -pow(-np.log(q), 1.0/c) + + def _munp(self, n, c): + val = sc.gamma(1.0+n*1.0/c) + if int(n) % 2: + sgn = -1 + else: + sgn = 1 + return sgn * val + + def _entropy(self, c): + return -_EULER / c - np.log(c) + _EULER + 1 + + +weibull_max = weibull_max_gen(b=0.0, name='weibull_max') + + +class genlogistic_gen(rv_continuous): + r"""A generalized logistic continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `genlogistic` is: + + .. math:: + + f(x, c) = c \frac{\exp(-x)} + {(1 + \exp(-x))^{c+1}} + + for real :math:`x` and :math:`c > 0`. In literature, different + generalizations of the logistic distribution can be found. This is the type 1 + generalized logistic distribution according to [1]_. It is also referred to + as the skew-logistic distribution [2]_. + + `genlogistic` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + References + ---------- + .. [1] Johnson et al. "Continuous Univariate Distributions", Volume 2, + Wiley. 1995. + .. [2] "Generalized Logistic Distribution", Wikipedia, + https://en.wikipedia.org/wiki/Generalized_logistic_distribution + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # genlogistic.pdf(x, c) = c * exp(-x) / (1 + exp(-x))**(c+1) + return np.exp(self._logpdf(x, c)) + + def _logpdf(self, x, c): + # Two mathematically equivalent expressions for log(pdf(x, c)): + # log(pdf(x, c)) = log(c) - x - (c + 1)*log(1 + exp(-x)) + # = log(c) + c*x - (c + 1)*log(1 + exp(x)) + mult = -(c - 1) * (x < 0) - 1 + absx = np.abs(x) + return np.log(c) + mult*absx - (c+1) * sc.log1p(np.exp(-absx)) + + def _cdf(self, x, c): + Cx = (1+np.exp(-x))**(-c) + return Cx + + def _logcdf(self, x, c): + return -c * np.log1p(np.exp(-x)) + + def _ppf(self, q, c): + return -np.log(sc.powm1(q, -1.0/c)) + + def _sf(self, x, c): + return -sc.expm1(self._logcdf(x, c)) + + def _isf(self, q, c): + return self._ppf(1 - q, c) + + def _stats(self, c): + mu = _EULER + sc.psi(c) + mu2 = np.pi*np.pi/6.0 + sc.zeta(2, c) + g1 = -2*sc.zeta(3, c) + 2*_ZETA3 + g1 /= np.power(mu2, 1.5) + g2 = np.pi**4/15.0 + 6*sc.zeta(4, c) + g2 /= mu2**2.0 + return mu, mu2, g1, g2 + + def _entropy(self, c): + return _lazywhere(c < 8e6, (c, ), + lambda c: -np.log(c) + sc.psi(c + 1) + _EULER + 1, + # asymptotic expansion: psi(c) ~ log(c) - 1/(2 * c) + # a = -log(c) + psi(c + 1) + # = -log(c) + psi(c) + 1/c + # ~ -log(c) + log(c) - 1/(2 * c) + 1/c + # = 1/(2 * c) + f2=lambda c: 1/(2 * c) + _EULER + 1) + + +genlogistic = genlogistic_gen(name='genlogistic') + + +class genpareto_gen(rv_continuous): + r"""A generalized Pareto continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `genpareto` is: + + .. 
math:: + + f(x, c) = (1 + c x)^{-1 - 1/c} + + defined for :math:`x \ge 0` if :math:`c \ge 0`, and for + :math:`0 \le x \le -1/c` if :math:`c < 0`. + + `genpareto` takes ``c`` as a shape parameter for :math:`c`. + + For :math:`c=0`, `genpareto` reduces to the exponential + distribution, `expon`: + + .. math:: + + f(x, 0) = \exp(-x) + + For :math:`c=-1`, `genpareto` is uniform on ``[0, 1]``: + + .. math:: + + f(x, -1) = 1 + + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, c): + return np.isfinite(c) + + def _shape_info(self): + return [_ShapeInfo("c", False, (-np.inf, np.inf), (False, False))] + + def _get_support(self, c): + c = np.asarray(c) + b = _lazywhere(c < 0, (c,), + lambda c: -1. / c, + np.inf) + a = np.where(c >= 0, self.a, self.a) + return a, b + + def _pdf(self, x, c): + # genpareto.pdf(x, c) = (1 + c * x)**(-1 - 1/c) + return np.exp(self._logpdf(x, c)) + + def _logpdf(self, x, c): + return _lazywhere((x == x) & (c != 0), (x, c), + lambda x, c: -sc.xlog1py(c + 1., c*x) / c, + -x) + + def _cdf(self, x, c): + return -sc.inv_boxcox1p(-x, -c) + + def _sf(self, x, c): + return sc.inv_boxcox(-x, -c) + + def _logsf(self, x, c): + return _lazywhere((x == x) & (c != 0), (x, c), + lambda x, c: -sc.log1p(c*x) / c, + -x) + + def _ppf(self, q, c): + return -sc.boxcox1p(-q, -c) + + def _isf(self, q, c): + return -sc.boxcox(q, -c) + + def _stats(self, c, moments='mv'): + if 'm' not in moments: + m = None + else: + m = _lazywhere(c < 1, (c,), + lambda xi: 1/(1 - xi), + np.inf) + if 'v' not in moments: + v = None + else: + v = _lazywhere(c < 1/2, (c,), + lambda xi: 1 / (1 - xi)**2 / (1 - 2*xi), + np.nan) + if 's' not in moments: + s = None + else: + s = _lazywhere(c < 1/3, (c,), + lambda xi: (2 * (1 + xi) * np.sqrt(1 - 2*xi) / + (1 - 3*xi)), + np.nan) + if 'k' not in moments: + k = None + else: + k = _lazywhere(c < 1/4, (c,), + lambda xi: (3 * (1 - 2*xi) * (2*xi**2 + xi + 3) / + (1 - 3*xi) / (1 - 4*xi) - 3), + np.nan) + return m, v, s, k + + def _munp(self, n, c): + def __munp(n, c): + val = 0.0 + k = np.arange(0, n + 1) + for ki, cnk in zip(k, sc.comb(n, k)): + val = val + cnk * (-1) ** ki / (1.0 - c * ki) + return np.where(c * n < 1, val * (-1.0 / c) ** n, np.inf) + return _lazywhere(c != 0, (c,), + lambda c: __munp(n, c), + sc.gamma(n + 1)) + + def _entropy(self, c): + return 1. + c + + +genpareto = genpareto_gen(a=0.0, name='genpareto') + + +class genexpon_gen(rv_continuous): + r"""A generalized exponential continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `genexpon` is: + + .. math:: + + f(x, a, b, c) = (a + b (1 - \exp(-c x))) + \exp(-a x - b x + \frac{b}{c} (1-\exp(-c x))) + + for :math:`x \ge 0`, :math:`a, b, c > 0`. + + `genexpon` takes :math:`a`, :math:`b` and :math:`c` as shape parameters. + + %(after_notes)s + + References + ---------- + H.K. Ryu, "An Extension of Marshall and Olkin's Bivariate Exponential + Distribution", Journal of the American Statistical Association, 1993. + + N. Balakrishnan, Asit P. Basu (editors), *The Exponential Distribution: + Theory, Methods and Applications*, Gordon and Breach, 1995. 
+ ISBN 10: 2884491929 + + %(example)s + + """ + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + ic = _ShapeInfo("c", False, (0, np.inf), (False, False)) + return [ia, ib, ic] + + def _pdf(self, x, a, b, c): + # genexpon.pdf(x, a, b, c) = (a + b * (1 - exp(-c*x))) * \ + # exp(-a*x - b*x + b/c * (1-exp(-c*x))) + return (a + b*(-sc.expm1(-c*x)))*np.exp((-a-b)*x + + b*(-sc.expm1(-c*x))/c) + + def _logpdf(self, x, a, b, c): + return np.log(a+b*(-sc.expm1(-c*x))) + (-a-b)*x+b*(-sc.expm1(-c*x))/c + + def _cdf(self, x, a, b, c): + return -sc.expm1((-a-b)*x + b*(-sc.expm1(-c*x))/c) + + def _ppf(self, p, a, b, c): + s = a + b + t = (b - c*np.log1p(-p))/s + return (t + sc.lambertw(-b/s * np.exp(-t)).real)/c + + def _sf(self, x, a, b, c): + return np.exp((-a-b)*x + b*(-sc.expm1(-c*x))/c) + + def _isf(self, p, a, b, c): + s = a + b + t = (b - c*np.log(p))/s + return (t + sc.lambertw(-b/s * np.exp(-t)).real)/c + + +genexpon = genexpon_gen(a=0.0, name='genexpon') + + +class genextreme_gen(rv_continuous): + r"""A generalized extreme value continuous random variable. + + %(before_notes)s + + See Also + -------- + gumbel_r + + Notes + ----- + For :math:`c=0`, `genextreme` is equal to `gumbel_r` with + probability density function + + .. math:: + + f(x) = \exp(-\exp(-x)) \exp(-x), + + where :math:`-\infty < x < \infty`. + + For :math:`c \ne 0`, the probability density function for `genextreme` is: + + .. math:: + + f(x, c) = \exp(-(1-c x)^{1/c}) (1-c x)^{1/c-1}, + + where :math:`-\infty < x \le 1/c` if :math:`c > 0` and + :math:`1/c \le x < \infty` if :math:`c < 0`. + + Note that several sources and software packages use the opposite + convention for the sign of the shape parameter :math:`c`. + + `genextreme` takes ``c`` as a shape parameter for :math:`c`. 
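+
+    As a quick illustrative check (a minimal doctest, not an exhaustive
+    test), the :math:`c = 0` case can be compared against `gumbel_r`
+    numerically:
+
+    >>> import numpy as np
+    >>> from scipy.stats import genextreme, gumbel_r
+    >>> x = np.linspace(-2., 2., 5)
+    >>> bool(np.allclose(genextreme.pdf(x, 0), gumbel_r.pdf(x)))
+    True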
+ + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, c): + return np.isfinite(c) + + def _shape_info(self): + return [_ShapeInfo("c", False, (-np.inf, np.inf), (False, False))] + + def _get_support(self, c): + _b = np.where(c > 0, 1.0 / np.maximum(c, _XMIN), np.inf) + _a = np.where(c < 0, 1.0 / np.minimum(c, -_XMIN), -np.inf) + return _a, _b + + def _loglogcdf(self, x, c): + # Returns log(-log(cdf(x, c))) + return _lazywhere((x == x) & (c != 0), (x, c), + lambda x, c: sc.log1p(-c*x)/c, -x) + + def _pdf(self, x, c): + # genextreme.pdf(x, c) = + # exp(-exp(-x))*exp(-x), for c==0 + # exp(-(1-c*x)**(1/c))*(1-c*x)**(1/c-1), for x \le 1/c, c > 0 + return np.exp(self._logpdf(x, c)) + + def _logpdf(self, x, c): + cx = _lazywhere((x == x) & (c != 0), (x, c), lambda x, c: c*x, 0.0) + logex2 = sc.log1p(-cx) + logpex2 = self._loglogcdf(x, c) + pex2 = np.exp(logpex2) + # Handle special cases + np.putmask(logpex2, (c == 0) & (x == -np.inf), 0.0) + logpdf = _lazywhere(~((cx == 1) | (cx == -np.inf)), + (pex2, logpex2, logex2), + lambda pex2, lpex2, lex2: -pex2 + lpex2 - lex2, + fillvalue=-np.inf) + np.putmask(logpdf, (c == 1) & (x == 1), 0.0) + return logpdf + + def _logcdf(self, x, c): + return -np.exp(self._loglogcdf(x, c)) + + def _cdf(self, x, c): + return np.exp(self._logcdf(x, c)) + + def _sf(self, x, c): + return -sc.expm1(self._logcdf(x, c)) + + def _ppf(self, q, c): + x = -np.log(-np.log(q)) + return _lazywhere((x == x) & (c != 0), (x, c), + lambda x, c: -sc.expm1(-c * x) / c, x) + + def _isf(self, q, c): + x = -np.log(-sc.log1p(-q)) + return _lazywhere((x == x) & (c != 0), (x, c), + lambda x, c: -sc.expm1(-c * x) / c, x) + + def _stats(self, c): + def g(n): + return sc.gamma(n * c + 1) + g1 = g(1) + g2 = g(2) + g3 = g(3) + g4 = g(4) + g2mg12 = np.where(abs(c) < 1e-7, (c*np.pi)**2.0/6.0, g2-g1**2.0) + gam2k = np.where(abs(c) < 1e-7, np.pi**2.0/6.0, + sc.expm1(sc.gammaln(2.0*c+1.0)-2*sc.gammaln(c + 1.0))/c**2.0) + eps = 1e-14 + gamk = np.where(abs(c) < eps, -_EULER, sc.expm1(sc.gammaln(c + 1))/c) + + m = np.where(c < -1.0, np.nan, -gamk) + v = np.where(c < -0.5, np.nan, g1**2.0*gam2k) + + # skewness + sk1 = _lazywhere(c >= -1./3, + (c, g1, g2, g3, g2mg12), + lambda c, g1, g2, g3, g2mg12: + np.sign(c)*(-g3 + (g2 + 2*g2mg12)*g1)/g2mg12**1.5, + fillvalue=np.nan) + sk = np.where(abs(c) <= eps**0.29, 12*np.sqrt(6)*_ZETA3/np.pi**3, sk1) + + # kurtosis + ku1 = _lazywhere(c >= -1./4, + (g1, g2, g3, g4, g2mg12), + lambda g1, g2, g3, g4, g2mg12: + (g4 + (-4*g3 + 3*(g2 + g2mg12)*g1)*g1)/g2mg12**2, + fillvalue=np.nan) + ku = np.where(abs(c) <= (eps)**0.23, 12.0/5.0, ku1-3.0) + return m, v, sk, ku + + def _fitstart(self, data): + if isinstance(data, CensoredData): + data = data._uncensor() + # This is better than the default shape of (1,). + g = _skew(data) + if g < 0: + a = 0.5 + else: + a = -0.5 + return super()._fitstart(data, args=(a,)) + + def _munp(self, n, c): + k = np.arange(0, n+1) + vals = 1.0/c**n * np.sum( + sc.comb(n, k) * (-1)**k * sc.gamma(c*k + 1), + axis=0) + return np.where(c*n > -1, vals, np.inf) + + def _entropy(self, c): + return _EULER*(1 - c) + 1 + + +genextreme = genextreme_gen(name='genextreme') + + +def _digammainv(y): + """Inverse of the digamma function (real positive arguments only). + + This function is used in the `fit` method of `gamma_gen`. + The function uses either optimize.fsolve or optimize.newton + to solve `sc.digamma(x) - y = 0`. 
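+
+    A quick spot check (illustrative only; ``sc`` is the ``scipy.special``
+    namespace used in this module): since ``sc.digamma(1.0)`` is roughly
+    ``-0.5772``, inverting it should recover 1:
+
+    >>> bool(abs(_digammainv(sc.digamma(1.0)) - 1.0) < 1e-6)
+    True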
There is probably room for + improvement, but currently it works over a wide range of y: + + >>> import numpy as np + >>> rng = np.random.default_rng() + >>> y = 64*rng.standard_normal(1000000) + >>> y.min(), y.max() + (-311.43592651416662, 351.77388222276869) + >>> x = [_digammainv(t) for t in y] + >>> np.abs(sc.digamma(x) - y).max() + 1.1368683772161603e-13 + + """ + _em = 0.5772156649015328606065120 + + def func(x): + return sc.digamma(x) - y + + if y > -0.125: + x0 = np.exp(y) + 0.5 + if y < 10: + # Some experimentation shows that newton reliably converges + # must faster than fsolve in this y range. For larger y, + # newton sometimes fails to converge. + value = optimize.newton(func, x0, tol=1e-10) + return value + elif y > -3: + x0 = np.exp(y/2.332) + 0.08661 + else: + x0 = 1.0 / (-y - _em) + + value, info, ier, mesg = optimize.fsolve(func, x0, xtol=1e-11, + full_output=True) + if ier != 1: + raise RuntimeError("_digammainv: fsolve failed, y = %r" % y) + + return value[0] + + +## Gamma (Use MATLAB and MATHEMATICA (b=theta=scale, a=alpha=shape) definition) + +## gamma(a, loc, scale) with a an integer is the Erlang distribution +## gamma(1, loc, scale) is the Exponential distribution +## gamma(df/2, 0, 2) is the chi2 distribution with df degrees of freedom. + +class gamma_gen(rv_continuous): + r"""A gamma continuous random variable. + + %(before_notes)s + + See Also + -------- + erlang, expon + + Notes + ----- + The probability density function for `gamma` is: + + .. math:: + + f(x, a) = \frac{x^{a-1} e^{-x}}{\Gamma(a)} + + for :math:`x \ge 0`, :math:`a > 0`. Here :math:`\Gamma(a)` refers to the + gamma function. + + `gamma` takes ``a`` as a shape parameter for :math:`a`. + + When :math:`a` is an integer, `gamma` reduces to the Erlang + distribution, and when :math:`a=1` to the exponential distribution. + + Gamma distributions are sometimes parameterized with two variables, + with a probability density function of: + + .. math:: + + f(x, \alpha, \beta) = + \frac{\beta^\alpha x^{\alpha - 1} e^{-\beta x }}{\Gamma(\alpha)} + + Note that this parameterization is equivalent to the above, with + ``scale = 1 / beta``. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("a", False, (0, np.inf), (False, False))] + + def _rvs(self, a, size=None, random_state=None): + return random_state.standard_gamma(a, size) + + def _pdf(self, x, a): + # gamma.pdf(x, a) = x**(a-1) * exp(-x) / gamma(a) + return np.exp(self._logpdf(x, a)) + + def _logpdf(self, x, a): + return sc.xlogy(a-1.0, x) - x - sc.gammaln(a) + + def _cdf(self, x, a): + return sc.gammainc(a, x) + + def _sf(self, x, a): + return sc.gammaincc(a, x) + + def _ppf(self, q, a): + return sc.gammaincinv(a, q) + + def _isf(self, q, a): + return sc.gammainccinv(a, q) + + def _stats(self, a): + return a, a, 2.0/np.sqrt(a), 6.0/a + + def _entropy(self, a): + + def regular_formula(a): + return sc.psi(a) * (1-a) + a + sc.gammaln(a) + + def asymptotic_formula(a): + # plug in above formula the expansions: + # psi(a) ~ ln(a) - 1/2a - 1/12a^2 + 1/120a^4 + # gammaln(a) ~ a * ln(a) - a - 1/2 * ln(a) + 1/2 ln(2 * pi) + + # 1/12a - 1/360a^3 + return (0.5 * (1. + np.log(2*np.pi) + np.log(a)) - 1/(3 * a) + - (a**-2.)/12 - (a**-3.)/90 + (a**-4.)/120) + + return _lazywhere(a < 250, (a, ), regular_formula, + f2=asymptotic_formula) + + def _fitstart(self, data): + # The skewness of the gamma distribution is `2 / np.sqrt(a)`. + # We invert that to estimate the shape `a` using the skewness + # of the data. 
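+        # (Illustrative example only: a sample skewness of 1.0 corresponds
+        # to a starting guess of roughly a = 4, since 2 / sqrt(4) = 1.)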
The formula is regularized with 1e-8 in the + # denominator to allow for degenerate data where the skewness + # is close to 0. + if isinstance(data, CensoredData): + data = data._uncensor() + sk = _skew(data) + a = 4 / (1e-8 + sk**2) + return super()._fitstart(data, args=(a,)) + + @extend_notes_in_docstring(rv_continuous, notes="""\ + When the location is fixed by using the argument `floc` + and `method='MLE'`, this + function uses explicit formulas or solves a simpler numerical + problem than the full ML optimization problem. So in that case, + the `optimizer`, `loc` and `scale` arguments are ignored. + \n\n""") + def fit(self, data, *args, **kwds): + floc = kwds.get('floc', None) + method = kwds.get('method', 'mle') + + if (isinstance(data, CensoredData) or + floc is None and method.lower() != 'mm'): + # loc is not fixed or we're not doing standard MLE. + # Use the default fit method. + return super().fit(data, *args, **kwds) + + # We already have this value, so just pop it from kwds. + kwds.pop('floc', None) + + f0 = _get_fixed_fit_value(kwds, ['f0', 'fa', 'fix_a']) + fscale = kwds.pop('fscale', None) + + _remove_optimizer_parameters(kwds) + + if f0 is not None and floc is not None and fscale is not None: + # This check is for consistency with `rv_continuous.fit`. + # Without this check, this function would just return the + # parameters that were given. + raise ValueError("All parameters fixed. There is nothing to " + "optimize.") + + # Fixed location is handled by shifting the data. + data = np.asarray(data) + + if not np.isfinite(data).all(): + raise ValueError("The data contains non-finite values.") + + # Use explicit formulas for mm (gh-19884) + if method.lower() == 'mm': + m1 = np.mean(data) + m2 = np.var(data) + m3 = np.mean((data - m1) ** 3) + a, loc, scale = f0, floc, fscale + # Three unknowns + if a is None and loc is None and scale is None: + scale = m3 / (2 * m2) + # Two unknowns + if loc is None and scale is None: + scale = np.sqrt(m2 / a) + if a is None and scale is None: + scale = m2 / (m1 - loc) + if a is None and loc is None: + a = m2 / (scale ** 2) + # One unknown + if a is None: + a = (m1 - loc) / scale + if loc is None: + loc = m1 - a * scale + if scale is None: + scale = (m1 - loc) / a + return a, loc, scale + + # Special case: loc is fixed. + + # NB: data == loc is ok if a >= 1; the below check is more strict. + if np.any(data <= floc): + raise FitDataError("gamma", lower=floc, upper=np.inf) + + if floc != 0: + # Don't do the subtraction in-place, because `data` might be a + # view of the input array. + data = data - floc + xbar = data.mean() + + # Three cases to handle: + # * shape and scale both free + # * shape fixed, scale free + # * shape free, scale fixed + + if fscale is None: + # scale is free + if f0 is not None: + # shape is fixed + a = f0 + else: + # shape and scale are both free. + # The MLE for the shape parameter `a` is the solution to: + # np.log(a) - sc.digamma(a) - np.log(xbar) + + # np.log(data).mean() = 0 + s = np.log(xbar) - np.log(data).mean() + aest = (3-s + np.sqrt((s-3)**2 + 24*s)) / (12*s) + xa = aest*(1-0.4) + xb = aest*(1+0.4) + a = optimize.brentq(lambda a: np.log(a) - sc.digamma(a) - s, + xa, xb, disp=0) + + # The MLE for the scale parameter is just the data mean + # divided by the shape parameter. 
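+            # (For reference, an illustrative derivation: setting the
+            # derivative of the log-likelihood with respect to the scale to
+            # zero gives n*a*scale = sum(data), i.e. scale = data.mean() / a.)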
+ scale = xbar / a + else: + # scale is fixed, shape is free + # The MLE for the shape parameter `a` is the solution to: + # sc.digamma(a) - np.log(data).mean() + np.log(fscale) = 0 + c = np.log(data).mean() - np.log(fscale) + a = _digammainv(c) + scale = fscale + + return a, floc, scale + + +gamma = gamma_gen(a=0.0, name='gamma') + + +class erlang_gen(gamma_gen): + """An Erlang continuous random variable. + + %(before_notes)s + + See Also + -------- + gamma + + Notes + ----- + The Erlang distribution is a special case of the Gamma distribution, with + the shape parameter `a` an integer. Note that this restriction is not + enforced by `erlang`. It will, however, generate a warning the first time + a non-integer value is used for the shape parameter. + + Refer to `gamma` for examples. + + """ + + def _argcheck(self, a): + allint = np.all(np.floor(a) == a) + if not allint: + # An Erlang distribution shouldn't really have a non-integer + # shape parameter, so warn the user. + message = ('The shape parameter of the erlang distribution ' + f'has been given a non-integer value {a!r}.') + warnings.warn(message, RuntimeWarning, stacklevel=3) + return a > 0 + + def _shape_info(self): + return [_ShapeInfo("a", True, (1, np.inf), (True, False))] + + def _fitstart(self, data): + # Override gamma_gen_fitstart so that an integer initial value is + # used. (Also regularize the division, to avoid issues when + # _skew(data) is 0 or close to 0.) + if isinstance(data, CensoredData): + data = data._uncensor() + a = int(4.0 / (1e-8 + _skew(data)**2)) + return super(gamma_gen, self)._fitstart(data, args=(a,)) + + # Trivial override of the fit method, so we can monkey-patch its + # docstring. + @extend_notes_in_docstring(rv_continuous, notes="""\ + The Erlang distribution is generally defined to have integer values + for the shape parameter. This is not enforced by the `erlang` class. + When fitting the distribution, it will generally return a non-integer + value for the shape parameter. By using the keyword argument + `f0=`, the fit method can be constrained to fit the data to + a specific integer shape parameter.""") + def fit(self, data, *args, **kwds): + return super().fit(data, *args, **kwds) + + +erlang = erlang_gen(a=0.0, name='erlang') + + +class gengamma_gen(rv_continuous): + r"""A generalized gamma continuous random variable. + + %(before_notes)s + + See Also + -------- + gamma, invgamma, weibull_min + + Notes + ----- + The probability density function for `gengamma` is ([1]_): + + .. math:: + + f(x, a, c) = \frac{|c| x^{c a-1} \exp(-x^c)}{\Gamma(a)} + + for :math:`x \ge 0`, :math:`a > 0`, and :math:`c \ne 0`. + :math:`\Gamma` is the gamma function (`scipy.special.gamma`). + + `gengamma` takes :math:`a` and :math:`c` as shape parameters. + + %(after_notes)s + + References + ---------- + .. [1] E.W. Stacy, "A Generalization of the Gamma Distribution", + Annals of Mathematical Statistics, Vol 33(3), pp. 1187--1192. 
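+
+    As an informal consistency check (illustrative doctest only), `gengamma`
+    reduces to `gamma` for ``c = 1`` and to `weibull_min` for ``a = 1``:
+
+    >>> import numpy as np
+    >>> from scipy.stats import gengamma, gamma, weibull_min
+    >>> x = np.linspace(0.1, 5., 5)
+    >>> bool(np.allclose(gengamma.pdf(x, 3., 1.), gamma.pdf(x, 3.)))
+    True
+    >>> bool(np.allclose(gengamma.pdf(x, 1., 2.5), weibull_min.pdf(x, 2.5)))
+    True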
+ + %(example)s + + """ + def _argcheck(self, a, c): + return (a > 0) & (c != 0) + + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ic = _ShapeInfo("c", False, (-np.inf, np.inf), (False, False)) + return [ia, ic] + + def _pdf(self, x, a, c): + return np.exp(self._logpdf(x, a, c)) + + def _logpdf(self, x, a, c): + return _lazywhere((x != 0) | (c > 0), (x, c), + lambda x, c: (np.log(abs(c)) + sc.xlogy(c*a - 1, x) + - x**c - sc.gammaln(a)), + fillvalue=-np.inf) + + def _cdf(self, x, a, c): + xc = x**c + val1 = sc.gammainc(a, xc) + val2 = sc.gammaincc(a, xc) + return np.where(c > 0, val1, val2) + + def _rvs(self, a, c, size=None, random_state=None): + r = random_state.standard_gamma(a, size=size) + return r**(1./c) + + def _sf(self, x, a, c): + xc = x**c + val1 = sc.gammainc(a, xc) + val2 = sc.gammaincc(a, xc) + return np.where(c > 0, val2, val1) + + def _ppf(self, q, a, c): + val1 = sc.gammaincinv(a, q) + val2 = sc.gammainccinv(a, q) + return np.where(c > 0, val1, val2)**(1.0/c) + + def _isf(self, q, a, c): + val1 = sc.gammaincinv(a, q) + val2 = sc.gammainccinv(a, q) + return np.where(c > 0, val2, val1)**(1.0/c) + + def _munp(self, n, a, c): + # Pochhammer symbol: sc.pocha,n) = gamma(a+n)/gamma(a) + return sc.poch(a, n*1.0/c) + + def _entropy(self, a, c): + def regular(a, c): + val = sc.psi(a) + A = a * (1 - val) + val / c + B = sc.gammaln(a) - np.log(abs(c)) + h = A + B + return h + + def asymptotic(a, c): + # using asymptotic expansions for gammaln and psi (see gh-18093) + return (norm._entropy() - np.log(a)/2 + - np.log(np.abs(c)) + (a**-1.)/6 - (a**-3.)/90 + + (np.log(a) - (a**-1.)/2 - (a**-2.)/12 + (a**-4.)/120)/c) + + h = _lazywhere(a >= 2e2, (a, c), f=asymptotic, f2=regular) + return h + + +gengamma = gengamma_gen(a=0.0, name='gengamma') + + +class genhalflogistic_gen(rv_continuous): + r"""A generalized half-logistic continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `genhalflogistic` is: + + .. math:: + + f(x, c) = \frac{2 (1 - c x)^{1/(c-1)}}{[1 + (1 - c x)^{1/c}]^2} + + for :math:`0 \le x \le 1/c`, and :math:`c > 0`. + + `genhalflogistic` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _get_support(self, c): + return self.a, 1.0/c + + def _pdf(self, x, c): + # genhalflogistic.pdf(x, c) = + # 2 * (1-c*x)**(1/c-1) / (1+(1-c*x)**(1/c))**2 + limit = 1.0/c + tmp = np.asarray(1-c*x) + tmp0 = tmp**(limit-1) + tmp2 = tmp0*tmp + return 2*tmp0 / (1+tmp2)**2 + + def _cdf(self, x, c): + limit = 1.0/c + tmp = np.asarray(1-c*x) + tmp2 = tmp**(limit) + return (1.0-tmp2) / (1+tmp2) + + def _ppf(self, q, c): + return 1.0/c*(1-((1.0-q)/(1.0+q))**c) + + def _entropy(self, c): + return 2 - (2*c+1)*np.log(2) + + +genhalflogistic = genhalflogistic_gen(a=0.0, name='genhalflogistic') + + +class genhyperbolic_gen(rv_continuous): + r"""A generalized hyperbolic continuous random variable. + + %(before_notes)s + + See Also + -------- + t, norminvgauss, geninvgauss, laplace, cauchy + + Notes + ----- + The probability density function for `genhyperbolic` is: + + .. math:: + + f(x, p, a, b) = + \frac{(a^2 - b^2)^{p/2}} + {\sqrt{2\pi}a^{p-1/2} + K_p\Big(\sqrt{a^2 - b^2}\Big)} + e^{bx} \times \frac{K_{p - 1/2} + (a \sqrt{1 + x^2})} + {(\sqrt{1 + x^2})^{1/2 - p}} + + for :math:`x, p \in ( - \infty; \infty)`, + :math:`|b| < a` if :math:`p \ge 0`, + :math:`|b| \le a` if :math:`p < 0`. 
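+
+    As an illustrative numerical check (not exhaustive), the case
+    :math:`p = -1/2` coincides with `norminvgauss` with the same ``a``
+    and ``b``:
+
+    >>> import numpy as np
+    >>> from scipy.stats import genhyperbolic, norminvgauss
+    >>> x = np.linspace(-2., 2., 5)
+    >>> bool(np.allclose(genhyperbolic.pdf(x, -0.5, 1.5, 0.5),
+    ...                  norminvgauss.pdf(x, 1.5, 0.5)))
+    True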
+ :math:`K_{p}(.)` denotes the modified Bessel function of the second + kind and order :math:`p` (`scipy.special.kv`) + + `genhyperbolic` takes ``p`` as a tail parameter, + ``a`` as a shape parameter, + ``b`` as a skewness parameter. + + %(after_notes)s + + The original parameterization of the Generalized Hyperbolic Distribution + is found in [1]_ as follows + + .. math:: + + f(x, \lambda, \alpha, \beta, \delta, \mu) = + \frac{(\gamma/\delta)^\lambda}{\sqrt{2\pi}K_\lambda(\delta \gamma)} + e^{\beta (x - \mu)} \times \frac{K_{\lambda - 1/2} + (\alpha \sqrt{\delta^2 + (x - \mu)^2})} + {(\sqrt{\delta^2 + (x - \mu)^2} / \alpha)^{1/2 - \lambda}} + + for :math:`x \in ( - \infty; \infty)`, + :math:`\gamma := \sqrt{\alpha^2 - \beta^2}`, + :math:`\lambda, \mu \in ( - \infty; \infty)`, + :math:`\delta \ge 0, |\beta| < \alpha` if :math:`\lambda \ge 0`, + :math:`\delta > 0, |\beta| \le \alpha` if :math:`\lambda < 0`. + + The location-scale-based parameterization implemented in + SciPy is based on [2]_, where :math:`a = \alpha\delta`, + :math:`b = \beta\delta`, :math:`p = \lambda`, + :math:`scale=\delta` and :math:`loc=\mu` + + Moments are implemented based on [3]_ and [4]_. + + For the distributions that are a special case such as Student's t, + it is not recommended to rely on the implementation of genhyperbolic. + To avoid potential numerical problems and for performance reasons, + the methods of the specific distributions should be used. + + References + ---------- + .. [1] O. Barndorff-Nielsen, "Hyperbolic Distributions and Distributions + on Hyperbolae", Scandinavian Journal of Statistics, Vol. 5(3), + pp. 151-157, 1978. https://www.jstor.org/stable/4615705 + + .. [2] Eberlein E., Prause K. (2002) The Generalized Hyperbolic Model: + Financial Derivatives and Risk Measures. In: Geman H., Madan D., + Pliska S.R., Vorst T. (eds) Mathematical Finance - Bachelier + Congress 2000. Springer Finance. Springer, Berlin, Heidelberg. + :doi:`10.1007/978-3-662-12429-1_12` + + .. [3] Scott, David J, Würtz, Diethelm, Dong, Christine and Tran, + Thanh Tam, (2009), Moments of the generalized hyperbolic + distribution, MPRA Paper, University Library of Munich, Germany, + https://EconPapers.repec.org/RePEc:pra:mprapa:19081. + + .. [4] E. Eberlein and E. A. von Hammerstein. Generalized hyperbolic + and inverse Gaussian distributions: Limiting cases and approximation + of processes. FDM Preprint 80, April 2003. University of Freiburg. + https://freidok.uni-freiburg.de/fedora/objects/freidok:7974/datastreams/FILE1/content + + %(example)s + + """ + + def _argcheck(self, p, a, b): + return (np.logical_and(np.abs(b) < a, p >= 0) + | np.logical_and(np.abs(b) <= a, p < 0)) + + def _shape_info(self): + ip = _ShapeInfo("p", False, (-np.inf, np.inf), (False, False)) + ia = _ShapeInfo("a", False, (0, np.inf), (True, False)) + ib = _ShapeInfo("b", False, (-np.inf, np.inf), (False, False)) + return [ip, ia, ib] + + def _fitstart(self, data): + # Arbitrary, but the default p = a = b = 1 is not valid; the + # distribution requires |b| < a if p >= 0. 
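+        # (For reference: the starting values (p, a, b) = (1, 1, 0.5) used
+        # below satisfy |b| < a, so they are valid for p >= 0.)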
+        return super()._fitstart(data, args=(1, 1, 0.5))
+
+    def _logpdf(self, x, p, a, b):
+        # kve instead of kv works better for large values of p
+        # and smaller values of sqrt(a^2 - b^2)
+        @np.vectorize
+        def _logpdf_single(x, p, a, b):
+            return _stats.genhyperbolic_logpdf(x, p, a, b)
+
+        return _logpdf_single(x, p, a, b)
+
+    def _pdf(self, x, p, a, b):
+        # kve instead of kv works better for large values of p
+        # and smaller values of sqrt(a^2 - b^2)
+        @np.vectorize
+        def _pdf_single(x, p, a, b):
+            return _stats.genhyperbolic_pdf(x, p, a, b)
+
+        return _pdf_single(x, p, a, b)
+
+    # np.vectorize isn't currently designed to be used as a decorator,
+    # so use a lambda instead. This allows us to decorate the function
+    # with `np.vectorize` and still provide the `otypes` parameter.
+    # The first argument to `vectorize` is `func.__get__(object)` for
+    # compatibility with Python 3.9. In Python 3.10, this can be
+    # simplified to just `func`.
+    @lambda func: np.vectorize(func.__get__(object), otypes=[np.float64])
+    @staticmethod
+    def _integrate_pdf(x0, x1, p, a, b):
+        """
+        Integrate the pdf of the genhyperbolic distribution from x0 to x1.
+        This is a private function used by _cdf() and _sf() only; either x0
+        will be -inf or x1 will be inf.
+        """
+        user_data = np.array([p, a, b], float).ctypes.data_as(ctypes.c_void_p)
+        llc = LowLevelCallable.from_cython(_stats, '_genhyperbolic_pdf',
+                                           user_data)
+        d = np.sqrt((a + b)*(a - b))
+        mean = b/d * sc.kv(p + 1, d) / sc.kv(p, d)
+        epsrel = 1e-10
+        epsabs = 0
+        if x0 < mean < x1:
+            # If the interval includes the mean, integrate over the two
+            # intervals [x0, mean] and [mean, x1] and add. If we try to do
+            # the integral in one call of quad and the non-infinite endpoint
+            # is far in the tail, quad might return an incorrect result
+            # because it does not "see" the peak of the PDF.
+            intgrl = (integrate.quad(llc, x0, mean,
+                                     epsrel=epsrel, epsabs=epsabs)[0]
+                      + integrate.quad(llc, mean, x1,
+                                       epsrel=epsrel, epsabs=epsabs)[0])
+        else:
+            intgrl = integrate.quad(llc, x0, x1,
+                                    epsrel=epsrel, epsabs=epsabs)[0]
+        if np.isnan(intgrl):
+            msg = ("Infinite values encountered in scipy.special.kve. 
" + "Values replaced by NaN to avoid incorrect results.") + warnings.warn(msg, RuntimeWarning, stacklevel=3) + return max(0.0, min(1.0, intgrl)) + + def _cdf(self, x, p, a, b): + return self._integrate_pdf(-np.inf, x, p, a, b) + + def _sf(self, x, p, a, b): + return self._integrate_pdf(x, np.inf, p, a, b) + + def _rvs(self, p, a, b, size=None, random_state=None): + # note: X = b * V + sqrt(V) * X has a + # generalized hyperbolic distribution + # if X is standard normal and V is + # geninvgauss(p = p, b = t2, loc = loc, scale = t3) + t1 = np.float_power(a, 2) - np.float_power(b, 2) + # b in the GIG + t2 = np.float_power(t1, 0.5) + # scale in the GIG + t3 = np.float_power(t1, - 0.5) + gig = geninvgauss.rvs( + p=p, + b=t2, + scale=t3, + size=size, + random_state=random_state + ) + normst = norm.rvs(size=size, random_state=random_state) + + return b * gig + np.sqrt(gig) * normst + + def _stats(self, p, a, b): + # https://mpra.ub.uni-muenchen.de/19081/1/MPRA_paper_19081.pdf + # https://freidok.uni-freiburg.de/fedora/objects/freidok:7974/datastreams/FILE1/content + # standardized moments + p, a, b = np.broadcast_arrays(p, a, b) + t1 = np.float_power(a, 2) - np.float_power(b, 2) + t1 = np.float_power(t1, 0.5) + t2 = np.float_power(1, 2) * np.float_power(t1, - 1) + integers = np.linspace(0, 4, 5) + # make integers perpendicular to existing dimensions + integers = integers.reshape(integers.shape + (1,) * p.ndim) + b0, b1, b2, b3, b4 = sc.kv(p + integers, t1) + r1, r2, r3, r4 = (b / b0 for b in (b1, b2, b3, b4)) + + m = b * t2 * r1 + v = ( + t2 * r1 + np.float_power(b, 2) * np.float_power(t2, 2) * + (r2 - np.float_power(r1, 2)) + ) + m3e = ( + np.float_power(b, 3) * np.float_power(t2, 3) * + (r3 - 3 * b2 * b1 * np.float_power(b0, -2) + + 2 * np.float_power(r1, 3)) + + 3 * b * np.float_power(t2, 2) * + (r2 - np.float_power(r1, 2)) + ) + s = m3e * np.float_power(v, - 3 / 2) + m4e = ( + np.float_power(b, 4) * np.float_power(t2, 4) * + (r4 - 4 * b3 * b1 * np.float_power(b0, - 2) + + 6 * b2 * np.float_power(b1, 2) * np.float_power(b0, - 3) - + 3 * np.float_power(r1, 4)) + + np.float_power(b, 2) * np.float_power(t2, 3) * + (6 * r3 - 12 * b2 * b1 * np.float_power(b0, - 2) + + 6 * np.float_power(r1, 3)) + + 3 * np.float_power(t2, 2) * r2 + ) + k = m4e * np.float_power(v, -2) - 3 + + return m, v, s, k + + +genhyperbolic = genhyperbolic_gen(name='genhyperbolic') + + +class gompertz_gen(rv_continuous): + r"""A Gompertz (or truncated Gumbel) continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `gompertz` is: + + .. math:: + + f(x, c) = c \exp(x) \exp(-c (e^x-1)) + + for :math:`x \ge 0`, :math:`c > 0`. + + `gompertz` takes ``c`` as a shape parameter for :math:`c`. 
+ + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # gompertz.pdf(x, c) = c * exp(x) * exp(-c*(exp(x)-1)) + return np.exp(self._logpdf(x, c)) + + def _logpdf(self, x, c): + return np.log(c) + x - c * sc.expm1(x) + + def _cdf(self, x, c): + return -sc.expm1(-c * sc.expm1(x)) + + def _ppf(self, q, c): + return sc.log1p(-1.0 / c * sc.log1p(-q)) + + def _sf(self, x, c): + return np.exp(-c * sc.expm1(x)) + + def _isf(self, p, c): + return sc.log1p(-np.log(p)/c) + + def _entropy(self, c): + return 1.0 - np.log(c) - sc._ufuncs._scaled_exp1(c)/c + + +gompertz = gompertz_gen(a=0.0, name='gompertz') + + +def _average_with_log_weights(x, logweights): + x = np.asarray(x) + logweights = np.asarray(logweights) + maxlogw = logweights.max() + weights = np.exp(logweights - maxlogw) + return np.average(x, weights=weights) + + +class gumbel_r_gen(rv_continuous): + r"""A right-skewed Gumbel continuous random variable. + + %(before_notes)s + + See Also + -------- + gumbel_l, gompertz, genextreme + + Notes + ----- + The probability density function for `gumbel_r` is: + + .. math:: + + f(x) = \exp(-(x + e^{-x})) + + The Gumbel distribution is sometimes referred to as a type I Fisher-Tippett + distribution. It is also related to the extreme value distribution, + log-Weibull and Gompertz distributions. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + # gumbel_r.pdf(x) = exp(-(x + exp(-x))) + return np.exp(self._logpdf(x)) + + def _logpdf(self, x): + return -x - np.exp(-x) + + def _cdf(self, x): + return np.exp(-np.exp(-x)) + + def _logcdf(self, x): + return -np.exp(-x) + + def _ppf(self, q): + return -np.log(-np.log(q)) + + def _sf(self, x): + return -sc.expm1(-np.exp(-x)) + + def _isf(self, p): + return -np.log(-np.log1p(-p)) + + def _stats(self): + return _EULER, np.pi*np.pi/6.0, 12*np.sqrt(6)/np.pi**3 * _ZETA3, 12.0/5 + + def _entropy(self): + # https://en.wikipedia.org/wiki/Gumbel_distribution + return _EULER + 1. + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + data, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + + # By the method of maximum likelihood, the estimators of the + # location and scale are the roots of the equations defined in + # `func` and the value of the expression for `loc` that follows. + # The first `func` is a first order derivative of the log-likelihood + # equation and the second is from Source: Statistical Distributions, + # 3rd Edition. Evans, Hastings, and Peacock (2000), Page 101. + + def get_loc_from_scale(scale): + return -scale * (sc.logsumexp(-data / scale) - np.log(len(data))) + + if fscale is not None: + # if the scale is fixed, the location can be analytically + # determined. + scale = fscale + loc = get_loc_from_scale(scale) + else: + # A different function is solved depending on whether the location + # is fixed. + if floc is not None: + loc = floc + + # equation to use if the location is fixed. + # note that one cannot use the equation in Evans, Hastings, + # and Peacock (2000) (since it assumes that the derivative + # w.r.t. the log-likelihood is zero). 
however, it is easy to + # derive the MLE condition directly if loc is fixed + def func(scale): + term1 = (loc - data) * np.exp((loc - data) / scale) + data + term2 = len(data) * (loc + scale) + return term1.sum() - term2 + else: + + # equation to use if both location and scale are free + def func(scale): + sdata = -data / scale + wavg = _average_with_log_weights(data, logweights=sdata) + return data.mean() - wavg - scale + + # set brackets for `root_scalar` to use when optimizing over the + # scale such that a root is likely between them. Use user supplied + # guess or default 1. + brack_start = kwds.get('scale', 1) + lbrack, rbrack = brack_start / 2, brack_start * 2 + + # if a root is not between the brackets, iteratively expand them + # until they include a sign change, checking after each bracket is + # modified. + def interval_contains_root(lbrack, rbrack): + # return true if the signs disagree. + return (np.sign(func(lbrack)) != + np.sign(func(rbrack))) + while (not interval_contains_root(lbrack, rbrack) + and (lbrack > 0 or rbrack < np.inf)): + lbrack /= 2 + rbrack *= 2 + + res = optimize.root_scalar(func, bracket=(lbrack, rbrack), + rtol=1e-14, xtol=1e-14) + scale = res.root + loc = floc if floc is not None else get_loc_from_scale(scale) + return loc, scale + + +gumbel_r = gumbel_r_gen(name='gumbel_r') + + +class gumbel_l_gen(rv_continuous): + r"""A left-skewed Gumbel continuous random variable. + + %(before_notes)s + + See Also + -------- + gumbel_r, gompertz, genextreme + + Notes + ----- + The probability density function for `gumbel_l` is: + + .. math:: + + f(x) = \exp(x - e^x) + + The Gumbel distribution is sometimes referred to as a type I Fisher-Tippett + distribution. It is also related to the extreme value distribution, + log-Weibull and Gompertz distributions. + + %(after_notes)s + + %(example)s + + """ + + def _shape_info(self): + return [] + + def _pdf(self, x): + # gumbel_l.pdf(x) = exp(x - exp(x)) + return np.exp(self._logpdf(x)) + + def _logpdf(self, x): + return x - np.exp(x) + + def _cdf(self, x): + return -sc.expm1(-np.exp(x)) + + def _ppf(self, q): + return np.log(-sc.log1p(-q)) + + def _logsf(self, x): + return -np.exp(x) + + def _sf(self, x): + return np.exp(-np.exp(x)) + + def _isf(self, x): + return np.log(-np.log(x)) + + def _stats(self): + return -_EULER, np.pi*np.pi/6.0, \ + -12*np.sqrt(6)/np.pi**3 * _ZETA3, 12.0/5 + + def _entropy(self): + return _EULER + 1. + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + # The fit method of `gumbel_r` can be used for this distribution with + # small modifications. The process to do this is + # 1. pass the sign negated data into `gumbel_r.fit` + # - if the location is fixed, it should also be negated. + # 2. negate the sign of the resulting location, leaving the scale + # unmodified. + # `gumbel_r.fit` holds necessary input checks. + + if kwds.get('floc') is not None: + kwds['floc'] = -kwds['floc'] + loc_r, scale_r, = gumbel_r.fit(-np.asarray(data), *args, **kwds) + return -loc_r, scale_r + + +gumbel_l = gumbel_l_gen(name='gumbel_l') + + +class halfcauchy_gen(rv_continuous): + r"""A Half-Cauchy continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `halfcauchy` is: + + .. math:: + + f(x) = \frac{2}{\pi (1 + x^2)} + + for :math:`x \ge 0`. 
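+
+    For :math:`x \ge 0` this is twice the density of the standard Cauchy
+    distribution; a minimal illustrative check:
+
+    >>> import numpy as np
+    >>> from scipy.stats import halfcauchy, cauchy
+    >>> x = np.linspace(0., 10., 5)
+    >>> bool(np.allclose(halfcauchy.pdf(x), 2 * cauchy.pdf(x)))
+    True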
+ + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + # halfcauchy.pdf(x) = 2 / (pi * (1 + x**2)) + return 2.0/np.pi/(1.0+x*x) + + def _logpdf(self, x): + return np.log(2.0/np.pi) - sc.log1p(x*x) + + def _cdf(self, x): + return 2.0/np.pi*np.arctan(x) + + def _ppf(self, q): + return np.tan(np.pi/2*q) + + def _sf(self, x): + return 2.0/np.pi * np.arctan2(1, x) + + def _isf(self, p): + return 1.0/np.tan(np.pi*p/2) + + def _stats(self): + return np.inf, np.inf, np.nan, np.nan + + def _entropy(self): + return np.log(2*np.pi) + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + + data, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + + # location is independent from the scale + data_min = np.min(data) + if floc is not None: + if data_min < floc: + # There are values that are less than the specified loc. + raise FitDataError("halfcauchy", lower=floc, upper=np.inf) + loc = floc + else: + # if not provided, location MLE is the minimal data point + loc = data_min + + # find scale + def find_scale(loc, data): + shifted_data = data - loc + n = data.size + shifted_data_squared = np.square(shifted_data) + + def fun_to_solve(scale): + denominator = scale**2 + shifted_data_squared + return 2 * np.sum(shifted_data_squared/denominator) - n + + small = np.finfo(1.0).tiny**0.5 # avoid underflow + res = root_scalar(fun_to_solve, bracket=(small, np.max(shifted_data))) + return res.root + + if fscale is not None: + scale = fscale + else: + scale = find_scale(loc, data) + + return loc, scale + + +halfcauchy = halfcauchy_gen(a=0.0, name='halfcauchy') + + +class halflogistic_gen(rv_continuous): + r"""A half-logistic continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `halflogistic` is: + + .. math:: + + f(x) = \frac{ 2 e^{-x} }{ (1+e^{-x})^2 } + = \frac{1}{2} \text{sech}(x/2)^2 + + for :math:`x \ge 0`. + + %(after_notes)s + + References + ---------- + .. [1] Asgharzadeh et al (2011). "Comparisons of Methods of Estimation for the + Half-Logistic Distribution". Selcuk J. Appl. Math. 93-108. + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + # halflogistic.pdf(x) = 2 * exp(-x) / (1+exp(-x))**2 + # = 1/2 * sech(x/2)**2 + return np.exp(self._logpdf(x)) + + def _logpdf(self, x): + return np.log(2) - x - 2. 
* sc.log1p(np.exp(-x)) + + def _cdf(self, x): + return np.tanh(x/2.0) + + def _ppf(self, q): + return 2*np.arctanh(q) + + def _sf(self, x): + return 2 * sc.expit(-x) + + def _isf(self, q): + return _lazywhere(q < 0.5, (q, ), + lambda q: -sc.logit(0.5 * q), + f2=lambda q: 2*np.arctanh(1 - q)) + + def _munp(self, n): + if n == 1: + return 2*np.log(2) + if n == 2: + return np.pi*np.pi/3.0 + if n == 3: + return 9*_ZETA3 + if n == 4: + return 7*np.pi**4 / 15.0 + return 2*(1-pow(2.0, 1-n))*sc.gamma(n+1)*sc.zeta(n, 1) + + def _entropy(self): + return 2-np.log(2) + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + + data, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + + def find_scale(data, loc): + # scale is solution to a fix point problem ([1] 2.6) + # use approximate MLE as starting point ([1] 3.1) + n_observations = data.shape[0] + sorted_data = np.sort(data, axis=0) + p = np.arange(1, n_observations + 1)/(n_observations + 1) + q = 1 - p + pp1 = 1 + p + alpha = p - 0.5 * q * pp1 * np.log(pp1 / q) + beta = 0.5 * q * pp1 + sorted_data = sorted_data - loc + B = 2 * np.sum(alpha[1:] * sorted_data[1:]) + C = 2 * np.sum(beta[1:] * sorted_data[1:]**2) + # starting guess + scale = ((B + np.sqrt(B**2 + 8 * n_observations * C)) + /(4 * n_observations)) + + # relative tolerance of fix point iterator + rtol = 1e-8 + relative_residual = 1 + shifted_mean = sorted_data.mean() # y_mean - y_min + + # find fix point by repeated application of eq. (2.6) + # simplify as + # exp(-x) / (1 + exp(-x)) = 1 / (1 + exp(x)) + # = expit(-x)) + while relative_residual > rtol: + sum_term = sorted_data * sc.expit(-sorted_data/scale) + scale_new = shifted_mean - 2/n_observations * sum_term.sum() + relative_residual = abs((scale - scale_new)/scale) + scale = scale_new + return scale + + # location is independent from the scale + data_min = np.min(data) + if floc is not None: + if data_min < floc: + # There are values that are less than the specified loc. + raise FitDataError("halflogistic", lower=floc, upper=np.inf) + loc = floc + else: + # if not provided, location MLE is the minimal data point + loc = data_min + + # scale depends on location + scale = fscale if fscale is not None else find_scale(data, loc) + + return loc, scale + + +halflogistic = halflogistic_gen(a=0.0, name='halflogistic') + + +class halfnorm_gen(rv_continuous): + r"""A half-normal continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `halfnorm` is: + + .. math:: + + f(x) = \sqrt{2/\pi} \exp(-x^2 / 2) + + for :math:`x >= 0`. + + `halfnorm` is a special case of `chi` with ``df=1``. 
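+
+    A minimal illustrative check of this relationship:
+
+    >>> import numpy as np
+    >>> from scipy.stats import halfnorm, chi
+    >>> x = np.linspace(0., 3., 5)
+    >>> bool(np.allclose(halfnorm.pdf(x), chi.pdf(x, 1)))
+    True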
+ + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return abs(random_state.standard_normal(size=size)) + + def _pdf(self, x): + # halfnorm.pdf(x) = sqrt(2/pi) * exp(-x**2/2) + return np.sqrt(2.0/np.pi)*np.exp(-x*x/2.0) + + def _logpdf(self, x): + return 0.5 * np.log(2.0/np.pi) - x*x/2.0 + + def _cdf(self, x): + return sc.erf(x / np.sqrt(2)) + + def _ppf(self, q): + return _norm_ppf((1+q)/2.0) + + def _sf(self, x): + return 2 * _norm_sf(x) + + def _isf(self, p): + return _norm_isf(p/2) + + def _stats(self): + return (np.sqrt(2.0/np.pi), + 1-2.0/np.pi, + np.sqrt(2)*(4-np.pi)/(np.pi-2)**1.5, + 8*(np.pi-3)/(np.pi-2)**2) + + def _entropy(self): + return 0.5*np.log(np.pi/2.0)+0.5 + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + + data, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + + data_min = np.min(data) + + if floc is not None: + if data_min < floc: + # There are values that are less than the specified loc. + raise FitDataError("halfnorm", lower=floc, upper=np.inf) + loc = floc + else: + loc = data_min + + if fscale is not None: + scale = fscale + else: + scale = stats.moment(data, order=2, center=loc)**0.5 + + return loc, scale + + +halfnorm = halfnorm_gen(a=0.0, name='halfnorm') + + +class hypsecant_gen(rv_continuous): + r"""A hyperbolic secant continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `hypsecant` is: + + .. math:: + + f(x) = \frac{1}{\pi} \text{sech}(x) + + for a real number :math:`x`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + # hypsecant.pdf(x) = 1/pi * sech(x) + return 1.0/(np.pi*np.cosh(x)) + + def _cdf(self, x): + return 2.0/np.pi*np.arctan(np.exp(x)) + + def _ppf(self, q): + return np.log(np.tan(np.pi*q/2.0)) + + def _sf(self, x): + return 2.0/np.pi*np.arctan(np.exp(-x)) + + def _isf(self, q): + return -np.log(np.tan(np.pi*q/2.0)) + + def _stats(self): + return 0, np.pi*np.pi/4, 0, 2 + + def _entropy(self): + return np.log(2*np.pi) + + +hypsecant = hypsecant_gen(name='hypsecant') + + +class gausshyper_gen(rv_continuous): + r"""A Gauss hypergeometric continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `gausshyper` is: + + .. math:: + + f(x, a, b, c, z) = C x^{a-1} (1-x)^{b-1} (1+zx)^{-c} + + for :math:`0 \le x \le 1`, :math:`a,b > 0`, :math:`c` a real number, + :math:`z > -1`, and :math:`C = \frac{1}{B(a, b) F[2, 1](c, a; a+b; -z)}`. + :math:`F[2, 1]` is the Gauss hypergeometric function + `scipy.special.hyp2f1`. + + `gausshyper` takes :math:`a`, :math:`b`, :math:`c` and :math:`z` as shape + parameters. + + %(after_notes)s + + References + ---------- + .. [1] Armero, C., and M. J. Bayarri. "Prior Assessments for Prediction in + Queues." *Journal of the Royal Statistical Society*. Series D (The + Statistician) 43, no. 1 (1994): 139-53. 
doi:10.2307/2348939 + + %(example)s + + """ + + def _argcheck(self, a, b, c, z): + # z > -1 per gh-10134 + return (a > 0) & (b > 0) & (c == c) & (z > -1) + + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + ic = _ShapeInfo("c", False, (-np.inf, np.inf), (False, False)) + iz = _ShapeInfo("z", False, (-1, np.inf), (False, False)) + return [ia, ib, ic, iz] + + def _pdf(self, x, a, b, c, z): + normalization_constant = sc.beta(a, b) * sc.hyp2f1(c, a, a + b, -z) + return (1./normalization_constant * x**(a - 1.) * (1. - x)**(b - 1.0) + / (1.0 + z*x)**c) + + def _munp(self, n, a, b, c, z): + fac = sc.beta(n+a, b) / sc.beta(a, b) + num = sc.hyp2f1(c, a+n, a+b+n, -z) + den = sc.hyp2f1(c, a, a+b, -z) + return fac*num / den + + +gausshyper = gausshyper_gen(a=0.0, b=1.0, name='gausshyper') + + +class invgamma_gen(rv_continuous): + r"""An inverted gamma continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `invgamma` is: + + .. math:: + + f(x, a) = \frac{x^{-a-1}}{\Gamma(a)} \exp(-\frac{1}{x}) + + for :math:`x >= 0`, :math:`a > 0`. :math:`\Gamma` is the gamma function + (`scipy.special.gamma`). + + `invgamma` takes ``a`` as a shape parameter for :math:`a`. + + `invgamma` is a special case of `gengamma` with ``c=-1``, and it is a + different parameterization of the scaled inverse chi-squared distribution. + Specifically, if the scaled inverse chi-squared distribution is + parameterized with degrees of freedom :math:`\nu` and scaling parameter + :math:`\tau^2`, then it can be modeled using `invgamma` with + ``a=`` :math:`\nu/2` and ``scale=`` :math:`\nu \tau^2/2`. + + %(after_notes)s + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, a): + # invgamma.pdf(x, a) = x**(-a-1) / gamma(a) * exp(-1/x) + return np.exp(self._logpdf(x, a)) + + def _logpdf(self, x, a): + return -(a+1) * np.log(x) - sc.gammaln(a) - 1.0/x + + def _cdf(self, x, a): + return sc.gammaincc(a, 1.0 / x) + + def _ppf(self, q, a): + return 1.0 / sc.gammainccinv(a, q) + + def _sf(self, x, a): + return sc.gammainc(a, 1.0 / x) + + def _isf(self, q, a): + return 1.0 / sc.gammaincinv(a, q) + + def _stats(self, a, moments='mvsk'): + m1 = _lazywhere(a > 1, (a,), lambda x: 1. / (x - 1.), np.inf) + m2 = _lazywhere(a > 2, (a,), lambda x: 1. / (x - 1.)**2 / (x - 2.), + np.inf) + + g1, g2 = None, None + if 's' in moments: + g1 = _lazywhere( + a > 3, (a,), + lambda x: 4. * np.sqrt(x - 2.) / (x - 3.), np.nan) + if 'k' in moments: + g2 = _lazywhere( + a > 4, (a,), + lambda x: 6. * (5. * x - 11.) / (x - 3.) / (x - 4.), np.nan) + return m1, m2, g1, g2 + + def _entropy(self, a): + def regular(a): + h = a - (a + 1.0) * sc.psi(a) + sc.gammaln(a) + return h + + def asymptotic(a): + # gammaln(a) ~ a * ln(a) - a - 0.5 * ln(a) + 0.5 * ln(2 * pi) + # psi(a) ~ ln(a) - 1 / (2 * a) + h = ((1 - 3*np.log(a) + np.log(2) + np.log(np.pi))/2 + + 2/3*a**-1. + a**-2./12 - a**-3./90 - a**-4./120) + return h + + h = _lazywhere(a >= 2e2, (a,), f=asymptotic, f2=regular) + return h + + +invgamma = invgamma_gen(a=0.0, name='invgamma') + + +class invgauss_gen(rv_continuous): + r"""An inverse Gaussian continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `invgauss` is: + + .. 
math:: + + f(x; \mu) = \frac{1}{\sqrt{2 \pi x^3}} + \exp\left(-\frac{(x-\mu)^2}{2 \mu^2 x}\right) + + for :math:`x \ge 0` and :math:`\mu > 0`. + + `invgauss` takes ``mu`` as a shape parameter for :math:`\mu`. + + %(after_notes)s + + A common shape-scale parameterization of the inverse Gaussian distribution + has density + + .. math:: + + f(x; \nu, \lambda) = \sqrt{\frac{\lambda}{2 \pi x^3}} + \exp\left( -\frac{\lambda(x-\nu)^2}{2 \nu^2 x}\right) + + Using ``nu`` for :math:`\nu` and ``lam`` for :math:`\lambda`, this + parameterization is equivalent to the one above with ``mu = nu/lam``, + ``loc = 0``, and ``scale = lam``. + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [_ShapeInfo("mu", False, (0, np.inf), (False, False))] + + def _rvs(self, mu, size=None, random_state=None): + return random_state.wald(mu, 1.0, size=size) + + def _pdf(self, x, mu): + # invgauss.pdf(x, mu) = + # 1 / sqrt(2*pi*x**3) * exp(-(x-mu)**2/(2*x*mu**2)) + return 1.0/np.sqrt(2*np.pi*x**3.0)*np.exp(-1.0/(2*x)*((x-mu)/mu)**2) + + def _logpdf(self, x, mu): + return -0.5*np.log(2*np.pi) - 1.5*np.log(x) - ((x-mu)/mu)**2/(2*x) + + # approach adapted from equations in + # https://journal.r-project.org/archive/2016-1/giner-smyth.pdf, + # not R code. see gh-13616 + + def _logcdf(self, x, mu): + fac = 1 / np.sqrt(x) + a = _norm_logcdf(fac * ((x / mu) - 1)) + b = 2 / mu + _norm_logcdf(-fac * ((x / mu) + 1)) + return a + np.log1p(np.exp(b - a)) + + def _logsf(self, x, mu): + fac = 1 / np.sqrt(x) + a = _norm_logsf(fac * ((x / mu) - 1)) + b = 2 / mu + _norm_logcdf(-fac * (x + mu) / mu) + return a + np.log1p(-np.exp(b - a)) + + def _sf(self, x, mu): + return np.exp(self._logsf(x, mu)) + + def _cdf(self, x, mu): + return np.exp(self._logcdf(x, mu)) + + def _ppf(self, x, mu): + with np.errstate(divide='ignore', over='ignore', invalid='ignore'): + x, mu = np.broadcast_arrays(x, mu) + ppf = _boost._invgauss_ppf(x, mu, 1) + i_wt = x > 0.5 # "wrong tail" - sometimes too inaccurate + ppf[i_wt] = _boost._invgauss_isf(1-x[i_wt], mu[i_wt], 1) + i_nan = np.isnan(ppf) + ppf[i_nan] = super()._ppf(x[i_nan], mu[i_nan]) + return ppf + + def _isf(self, x, mu): + with np.errstate(divide='ignore', over='ignore', invalid='ignore'): + x, mu = np.broadcast_arrays(x, mu) + isf = _boost._invgauss_isf(x, mu, 1) + i_wt = x > 0.5 # "wrong tail" - sometimes too inaccurate + isf[i_wt] = _boost._invgauss_ppf(1-x[i_wt], mu[i_wt], 1) + i_nan = np.isnan(isf) + isf[i_nan] = super()._isf(x[i_nan], mu[i_nan]) + return isf + + def _stats(self, mu): + return mu, mu**3.0, 3*np.sqrt(mu), 15*mu + + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + method = kwds.get('method', 'mle') + + if (isinstance(data, CensoredData) or type(self) == wald_gen + or method.lower() == 'mm'): + return super().fit(data, *args, **kwds) + + data, fshape_s, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + ''' + Source: Statistical Distributions, 3rd Edition. Evans, Hastings, + and Peacock (2000), Page 121. Their shape parameter is equivalent to + SciPy's with the conversion `fshape_s = fshape / scale`. + + MLE formulas are not used in 3 conditions: + - `loc` is not fixed + - `mu` is fixed + These cases fall back on the superclass fit method. + - `loc` is fixed but translation results in negative data raises + a `FitDataError`. 
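+
+        For reference, an informal sketch of the formulas applied below:
+        with `loc` fixed and the data shifted by it, the estimate of the
+        mean is ``data.mean()``, the scale estimate is
+        ``len(data) / sum(1/x - 1/data.mean())``, and the reported shape is
+        ``data.mean() / scale``.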
+ ''' + if floc is None or fshape_s is not None: + return super().fit(data, *args, **kwds) + elif np.any(data - floc < 0): + raise FitDataError("invgauss", lower=0, upper=np.inf) + else: + data = data - floc + fshape_n = np.mean(data) + if fscale is None: + fscale = len(data) / (np.sum(data ** -1 - fshape_n ** -1)) + fshape_s = fshape_n / fscale + return fshape_s, floc, fscale + + def _entropy(self, mu): + """ + Ref.: https://moser-isi.ethz.ch/docs/papers/smos-2012-10.pdf (eq. 9) + """ + # a = log(2*pi*e*mu**3) + # = 1 + log(2*pi) + 3 * log(mu) + a = 1. + np.log(2 * np.pi) + 3 * np.log(mu) + # b = exp(2/mu) * exp1(2/mu) + # = _scaled_exp1(2/mu) / (2/mu) + r = 2/mu + b = sc._ufuncs._scaled_exp1(r)/r + return 0.5 * a - 1.5 * b + + +invgauss = invgauss_gen(a=0.0, name='invgauss') + + +class geninvgauss_gen(rv_continuous): + r"""A Generalized Inverse Gaussian continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `geninvgauss` is: + + .. math:: + + f(x, p, b) = x^{p-1} \exp(-b (x + 1/x) / 2) / (2 K_p(b)) + + where `x > 0`, `p` is a real number and `b > 0`\([1]_). + :math:`K_p` is the modified Bessel function of second kind of order `p` + (`scipy.special.kv`). + + %(after_notes)s + + The inverse Gaussian distribution `stats.invgauss(mu)` is a special case of + `geninvgauss` with `p = -1/2`, `b = 1 / mu` and `scale = mu`. + + Generating random variates is challenging for this distribution. The + implementation is based on [2]_. + + References + ---------- + .. [1] O. Barndorff-Nielsen, P. Blaesild, C. Halgreen, "First hitting time + models for the generalized inverse gaussian distribution", + Stochastic Processes and their Applications 7, pp. 49--54, 1978. + + .. [2] W. Hoermann and J. Leydold, "Generating generalized inverse Gaussian + random variates", Statistics and Computing, 24(4), p. 547--557, 2014. + + %(example)s + + """ + def _argcheck(self, p, b): + return (p == p) & (b > 0) + + def _shape_info(self): + ip = _ShapeInfo("p", False, (-np.inf, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + return [ip, ib] + + def _logpdf(self, x, p, b): + # kve instead of kv works better for large values of b + # warn if kve produces infinite values and replace by nan + # otherwise c = -inf and the results are often incorrect + def logpdf_single(x, p, b): + return _stats.geninvgauss_logpdf(x, p, b) + + logpdf_single = np.vectorize(logpdf_single, otypes=[np.float64]) + + z = logpdf_single(x, p, b) + if np.isnan(z).any(): + msg = ("Infinite values encountered in scipy.special.kve(p, b). 
" + "Values replaced by NaN to avoid incorrect results.") + warnings.warn(msg, RuntimeWarning, stacklevel=3) + return z + + def _pdf(self, x, p, b): + # relying on logpdf avoids overflow of x**(p-1) for large x and p + return np.exp(self._logpdf(x, p, b)) + + def _cdf(self, x, *args): + _a, _b = self._get_support(*args) + + def _cdf_single(x, *args): + p, b = args + user_data = np.array([p, b], float).ctypes.data_as(ctypes.c_void_p) + llc = LowLevelCallable.from_cython(_stats, '_geninvgauss_pdf', + user_data) + + return integrate.quad(llc, _a, x)[0] + + _cdf_single = np.vectorize(_cdf_single, otypes=[np.float64]) + + return _cdf_single(x, *args) + + def _logquasipdf(self, x, p, b): + # log of the quasi-density (w/o normalizing constant) used in _rvs + return _lazywhere(x > 0, (x, p, b), + lambda x, p, b: (p - 1)*np.log(x) - b*(x + 1/x)/2, + -np.inf) + + def _rvs(self, p, b, size=None, random_state=None): + # if p and b are scalar, use _rvs_scalar, otherwise need to create + # output by iterating over parameters + if np.isscalar(p) and np.isscalar(b): + out = self._rvs_scalar(p, b, size, random_state) + elif p.size == 1 and b.size == 1: + out = self._rvs_scalar(p.item(), b.item(), size, random_state) + else: + # When this method is called, size will be a (possibly empty) + # tuple of integers. It will not be None; if `size=None` is passed + # to `rvs()`, size will be the empty tuple (). + + p, b = np.broadcast_arrays(p, b) + # p and b now have the same shape. + + # `shp` is the shape of the blocks of random variates that are + # generated for each combination of parameters associated with + # broadcasting p and b. + # bc is a tuple the same length as size. The values + # in bc are bools. If bc[j] is True, it means that + # entire axis is filled in for a given combination of the + # broadcast arguments. + shp, bc = _check_shape(p.shape, size) + + # `numsamples` is the total number of variates to be generated + # for each combination of the input arguments. + numsamples = int(np.prod(shp)) + + # `out` is the array to be returned. It is filled in the + # loop below. + out = np.empty(size) + + it = np.nditer([p, b], + flags=['multi_index'], + op_flags=[['readonly'], ['readonly']]) + while not it.finished: + # Convert the iterator's multi_index into an index into the + # `out` array where the call to _rvs_scalar() will be stored. + # Where bc is True, we use a full slice; otherwise we use the + # index value from it.multi_index. len(it.multi_index) might + # be less than len(bc), and in that case we want to align these + # two sequences to the right, so the loop variable j runs from + # -len(size) to 0. This doesn't cause an IndexError, as + # bc[j] will be True in those cases where it.multi_index[j] + # would cause an IndexError. 
+ idx = tuple((it.multi_index[j] if not bc[j] else slice(None)) + for j in range(-len(size), 0)) + out[idx] = self._rvs_scalar(it[0], it[1], numsamples, + random_state).reshape(shp) + it.iternext() + + if size == (): + out = out.item() + return out + + def _rvs_scalar(self, p, b, numsamples, random_state): + # following [2], the quasi-pdf is used instead of the pdf for the + # generation of rvs + invert_res = False + if not numsamples: + numsamples = 1 + if p < 0: + # note: if X is geninvgauss(p, b), then 1/X is geninvgauss(-p, b) + p = -p + invert_res = True + m = self._mode(p, b) + + # determine method to be used following [2] + ratio_unif = True + if p >= 1 or b > 1: + # ratio of uniforms with mode shift below + mode_shift = True + elif b >= min(0.5, 2 * np.sqrt(1 - p) / 3): + # ratio of uniforms without mode shift below + mode_shift = False + else: + # new algorithm in [2] + ratio_unif = False + + # prepare sampling of rvs + size1d = tuple(np.atleast_1d(numsamples)) + N = np.prod(size1d) # number of rvs needed, reshape upon return + x = np.zeros(N) + simulated = 0 + + if ratio_unif: + # use ratio of uniforms method + if mode_shift: + a2 = -2 * (p + 1) / b - m + a1 = 2 * m * (p - 1) / b - 1 + # find roots of x**3 + a2*x**2 + a1*x + m (Cardano's formula) + p1 = a1 - a2**2 / 3 + q1 = 2 * a2**3 / 27 - a2 * a1 / 3 + m + phi = np.arccos(-q1 * np.sqrt(-27 / p1**3) / 2) + s1 = -np.sqrt(-4 * p1 / 3) + root1 = s1 * np.cos(phi / 3 + np.pi / 3) - a2 / 3 + root2 = -s1 * np.cos(phi / 3) - a2 / 3 + # root3 = s1 * np.cos(phi / 3 - np.pi / 3) - a2 / 3 + + # if g is the quasipdf, rescale: g(x) / g(m) which we can write + # as exp(log(g(x)) - log(g(m))). This is important + # since for large values of p and b, g cannot be evaluated. + # denote the rescaled quasipdf by h + lm = self._logquasipdf(m, p, b) + d1 = self._logquasipdf(root1, p, b) - lm + d2 = self._logquasipdf(root2, p, b) - lm + # compute the bounding rectangle w.r.t. h. Note that + # np.exp(0.5*d1) = np.sqrt(g(root1)/g(m)) = np.sqrt(h(root1)) + vmin = (root1 - m) * np.exp(0.5 * d1) + vmax = (root2 - m) * np.exp(0.5 * d2) + umax = 1 # umax = sqrt(h(m)) = 1 + + def logqpdf(x): + return self._logquasipdf(x, p, b) - lm + + c = m + else: + # ratio of uniforms without mode shift + # compute np.sqrt(quasipdf(m)) + umax = np.exp(0.5*self._logquasipdf(m, p, b)) + xplus = ((1 + p) + np.sqrt((1 + p)**2 + b**2))/b + vmin = 0 + # compute xplus * np.sqrt(quasipdf(xplus)) + vmax = xplus * np.exp(0.5 * self._logquasipdf(xplus, p, b)) + c = 0 + + def logqpdf(x): + return self._logquasipdf(x, p, b) + + if vmin >= vmax: + raise ValueError("vmin must be smaller than vmax.") + if umax <= 0: + raise ValueError("umax must be positive.") + + i = 1 + while simulated < N: + k = N - simulated + # simulate uniform rvs on [0, umax] and [vmin, vmax] + u = umax * random_state.uniform(size=k) + v = random_state.uniform(size=k) + v = vmin + (vmax - vmin) * v + rvs = v / u + c + # rewrite acceptance condition u**2 <= pdf(rvs) by taking logs + accept = (2*np.log(u) <= logqpdf(rvs)) + num_accept = np.sum(accept) + if num_accept > 0: + x[simulated:(simulated + num_accept)] = rvs[accept] + simulated += num_accept + + if (simulated == 0) and (i*N >= 50000): + msg = ("Not a single random variate could be generated " + f"in {i*N} attempts. 
Sampling does not appear to " + "work for the provided parameters.") + raise RuntimeError(msg) + i += 1 + else: + # use new algorithm in [2] + x0 = b / (1 - p) + xs = np.max((x0, 2 / b)) + k1 = np.exp(self._logquasipdf(m, p, b)) + A1 = k1 * x0 + if x0 < 2 / b: + k2 = np.exp(-b) + if p > 0: + A2 = k2 * ((2 / b)**p - x0**p) / p + else: + A2 = k2 * np.log(2 / b**2) + else: + k2, A2 = 0, 0 + k3 = xs**(p - 1) + A3 = 2 * k3 * np.exp(-xs * b / 2) / b + A = A1 + A2 + A3 + + # [2]: rejection constant is < 2.73; so expected runtime is finite + while simulated < N: + k = N - simulated + h, rvs = np.zeros(k), np.zeros(k) + # simulate uniform rvs on [x1, x2] and [0, y2] + u = random_state.uniform(size=k) + v = A * random_state.uniform(size=k) + cond1 = v <= A1 + cond2 = np.logical_not(cond1) & (v <= A1 + A2) + cond3 = np.logical_not(cond1 | cond2) + # subdomain (0, x0) + rvs[cond1] = x0 * v[cond1] / A1 + h[cond1] = k1 + # subdomain (x0, 2 / b) + if p > 0: + rvs[cond2] = (x0**p + (v[cond2] - A1) * p / k2)**(1 / p) + else: + rvs[cond2] = b * np.exp((v[cond2] - A1) * np.exp(b)) + h[cond2] = k2 * rvs[cond2]**(p - 1) + # subdomain (xs, infinity) + z = np.exp(-xs * b / 2) - b * (v[cond3] - A1 - A2) / (2 * k3) + rvs[cond3] = -2 / b * np.log(z) + h[cond3] = k3 * np.exp(-rvs[cond3] * b / 2) + # apply rejection method + accept = (np.log(u * h) <= self._logquasipdf(rvs, p, b)) + num_accept = sum(accept) + if num_accept > 0: + x[simulated:(simulated + num_accept)] = rvs[accept] + simulated += num_accept + + rvs = np.reshape(x, size1d) + if invert_res: + rvs = 1 / rvs + return rvs + + def _mode(self, p, b): + # distinguish cases to avoid catastrophic cancellation (see [2]) + if p < 1: + return b / (np.sqrt((p - 1)**2 + b**2) + 1 - p) + else: + return (np.sqrt((1 - p)**2 + b**2) - (1 - p)) / b + + def _munp(self, n, p, b): + num = sc.kve(p + n, b) + denom = sc.kve(p, b) + inf_vals = np.isinf(num) | np.isinf(denom) + if inf_vals.any(): + msg = ("Infinite values encountered in the moment calculation " + "involving scipy.special.kve. Values replaced by NaN to " + "avoid incorrect results.") + warnings.warn(msg, RuntimeWarning, stacklevel=3) + m = np.full_like(num, np.nan, dtype=np.float64) + m[~inf_vals] = num[~inf_vals] / denom[~inf_vals] + else: + m = num / denom + return m + + +geninvgauss = geninvgauss_gen(a=0.0, name="geninvgauss") + + +class norminvgauss_gen(rv_continuous): + r"""A Normal Inverse Gaussian continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `norminvgauss` is: + + .. math:: + + f(x, a, b) = \frac{a \, K_1(a \sqrt{1 + x^2})}{\pi \sqrt{1 + x^2}} \, + \exp(\sqrt{a^2 - b^2} + b x) + + where :math:`x` is a real number, the parameter :math:`a` is the tail + heaviness and :math:`b` is the asymmetry parameter satisfying + :math:`a > 0` and :math:`|b| <= a`. + :math:`K_1` is the modified Bessel function of second kind + (`scipy.special.k1`). + + %(after_notes)s + + A normal inverse Gaussian random variable `Y` with parameters `a` and `b` + can be expressed as a normal mean-variance mixture: + `Y = b * V + sqrt(V) * X` where `X` is `norm(0,1)` and `V` is + `invgauss(mu=1/sqrt(a**2 - b**2))`. This representation is used + to generate random variates. + + Another common parametrization of the distribution (see Equation 2.1 in + [2]_) is given by the following expression of the pdf: + + .. 
math:: + + g(x, \alpha, \beta, \delta, \mu) = + \frac{\alpha\delta K_1\left(\alpha\sqrt{\delta^2 + (x - \mu)^2}\right)} + {\pi \sqrt{\delta^2 + (x - \mu)^2}} \, + e^{\delta \sqrt{\alpha^2 - \beta^2} + \beta (x - \mu)} + + In SciPy, this corresponds to + `a = alpha * delta, b = beta * delta, loc = mu, scale=delta`. + + References + ---------- + .. [1] O. Barndorff-Nielsen, "Hyperbolic Distributions and Distributions on + Hyperbolae", Scandinavian Journal of Statistics, Vol. 5(3), + pp. 151-157, 1978. + + .. [2] O. Barndorff-Nielsen, "Normal Inverse Gaussian Distributions and + Stochastic Volatility Modelling", Scandinavian Journal of + Statistics, Vol. 24, pp. 1-13, 1997. + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _argcheck(self, a, b): + return (a > 0) & (np.absolute(b) < a) + + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (-np.inf, np.inf), (False, False)) + return [ia, ib] + + def _fitstart(self, data): + # Arbitrary, but the default a = b = 1 is not valid; the distribution + # requires |b| < a. + return super()._fitstart(data, args=(1, 0.5)) + + def _pdf(self, x, a, b): + gamma = np.sqrt(a**2 - b**2) + fac1 = a / np.pi + sq = np.hypot(1, x) # reduce overflows + return fac1 * sc.k1e(a * sq) * np.exp(b*x - a*sq + gamma) / sq + + def _sf(self, x, a, b): + if np.isscalar(x): + # If x is a scalar, then so are a and b. + return integrate.quad(self._pdf, x, np.inf, args=(a, b))[0] + else: + a = np.atleast_1d(a) + b = np.atleast_1d(b) + result = [] + for (x0, a0, b0) in zip(x, a, b): + result.append(integrate.quad(self._pdf, x0, np.inf, + args=(a0, b0))[0]) + return np.array(result) + + def _isf(self, q, a, b): + def _isf_scalar(q, a, b): + + def eq(x, a, b, q): + # Solve eq(x, a, b, q) = 0 to obtain isf(x, a, b) = q. + return self._sf(x, a, b) - q + + # Find a bracketing interval for the root. + # Start at the mean, and grow the length of the interval + # by 2 each iteration until there is a sign change in eq. + xm = self.mean(a, b) + em = eq(xm, a, b, q) + if em == 0: + # Unlikely, but might as well check. + return xm + if em > 0: + delta = 1 + left = xm + right = xm + delta + while eq(right, a, b, q) > 0: + delta = 2*delta + right = xm + delta + else: + # em < 0 + delta = 1 + right = xm + left = xm - delta + while eq(left, a, b, q) < 0: + delta = 2*delta + left = xm - delta + result = optimize.brentq(eq, left, right, args=(a, b, q), + xtol=self.xtol) + return result + + if np.isscalar(q): + return _isf_scalar(q, a, b) + else: + result = [] + for (q0, a0, b0) in zip(q, a, b): + result.append(_isf_scalar(q0, a0, b0)) + return np.array(result) + + def _rvs(self, a, b, size=None, random_state=None): + # note: X = b * V + sqrt(V) * X is norminvgaus(a,b) if X is standard + # normal and V is invgauss(mu=1/sqrt(a**2 - b**2)) + gamma = np.sqrt(a**2 - b**2) + ig = invgauss.rvs(mu=1/gamma, size=size, random_state=random_state) + return b * ig + np.sqrt(ig) * norm.rvs(size=size, + random_state=random_state) + + def _stats(self, a, b): + gamma = np.sqrt(a**2 - b**2) + mean = b / gamma + variance = a**2 / gamma**3 + skewness = 3.0 * b / (a * np.sqrt(gamma)) + kurtosis = 3.0 * (1 + 4 * b**2 / a**2) / gamma + return mean, variance, skewness, kurtosis + + +norminvgauss = norminvgauss_gen(name="norminvgauss") + + +class invweibull_gen(rv_continuous): + """An inverted Weibull continuous random variable. 
+ + This distribution is also known as the Fréchet distribution or the + type II extreme value distribution. + + %(before_notes)s + + Notes + ----- + The probability density function for `invweibull` is: + + .. math:: + + f(x, c) = c x^{-c-1} \\exp(-x^{-c}) + + for :math:`x > 0`, :math:`c > 0`. + + `invweibull` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + References + ---------- + F.R.S. de Gusmao, E.M.M Ortega and G.M. Cordeiro, "The generalized inverse + Weibull distribution", Stat. Papers, vol. 52, pp. 591-619, 2011. + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # invweibull.pdf(x, c) = c * x**(-c-1) * exp(-x**(-c)) + xc1 = np.power(x, -c - 1.0) + xc2 = np.power(x, -c) + xc2 = np.exp(-xc2) + return c * xc1 * xc2 + + def _cdf(self, x, c): + xc1 = np.power(x, -c) + return np.exp(-xc1) + + def _sf(self, x, c): + return -np.expm1(-x**-c) + + def _ppf(self, q, c): + return np.power(-np.log(q), -1.0/c) + + def _isf(self, p, c): + return (-np.log1p(-p))**(-1/c) + + def _munp(self, n, c): + return sc.gamma(1 - n / c) + + def _entropy(self, c): + return 1+_EULER + _EULER / c - np.log(c) + + def _fitstart(self, data, args=None): + # invweibull requires c > 1 for the first moment to exist, so use 2.0 + args = (2.0,) if args is None else args + return super()._fitstart(data, args=args) + + +invweibull = invweibull_gen(a=0, name='invweibull') + + +class jf_skew_t_gen(rv_continuous): + r"""Jones and Faddy skew-t distribution. + + %(before_notes)s + + Notes + ----- + The probability density function for `jf_skew_t` is: + + .. math:: + + f(x; a, b) = C_{a,b}^{-1} + \left(1+\frac{x}{\left(a+b+x^2\right)^{1/2}}\right)^{a+1/2} + \left(1-\frac{x}{\left(a+b+x^2\right)^{1/2}}\right)^{b+1/2} + + for real numbers :math:`a>0` and :math:`b>0`, where + :math:`C_{a,b} = 2^{a+b-1}B(a,b)(a+b)^{1/2}`, and :math:`B` denotes the + beta function (`scipy.special.beta`). + + When :math:`ab`, the distribution is positively skewed. If :math:`a=b`, then + we recover the `t` distribution with :math:`2a` degrees of freedom. + + `jf_skew_t` takes :math:`a` and :math:`b` as shape parameters. + + %(after_notes)s + + References + ---------- + .. [1] M.C. Jones and M.J. Faddy. "A skew extension of the t distribution, + with applications" *Journal of the Royal Statistical Society*. + Series B (Statistical Methodology) 65, no. 1 (2003): 159-174. + :doi:`10.1111/1467-9868.00378` + + %(example)s + + """ + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + return [ia, ib] + + def _pdf(self, x, a, b): + c = 2 ** (a + b - 1) * sc.beta(a, b) * np.sqrt(a + b) + d1 = (1 + x / np.sqrt(a + b + x ** 2)) ** (a + 0.5) + d2 = (1 - x / np.sqrt(a + b + x ** 2)) ** (b + 0.5) + return d1 * d2 / c + + def _rvs(self, a, b, size=None, random_state=None): + d1 = random_state.beta(a, b, size) + d2 = (2 * d1 - 1) * np.sqrt(a + b) + d3 = 2 * np.sqrt(d1 * (1 - d1)) + return d2 / d3 + + def _cdf(self, x, a, b): + y = (1 + x / np.sqrt(a + b + x ** 2)) * 0.5 + return sc.betainc(a, b, y) + + def _ppf(self, q, a, b): + d1 = beta.ppf(q, a, b) + d2 = (2 * d1 - 1) * np.sqrt(a + b) + d3 = 2 * np.sqrt(d1 * (1 - d1)) + return d2 / d3 + + def _munp(self, n, a, b): + """Returns the n-th moment(s) where all the following hold: + + - n >= 0 + - a > n / 2 + - b > n / 2 + + The result is np.nan in all other cases. 
+ """ + def nth_moment(n_k, a_k, b_k): + """Computes E[T^(n_k)] where T is skew-t distributed with + parameters a_k and b_k. + """ + num = (a_k + b_k) ** (0.5 * n_k) + denom = 2 ** n_k * sc.beta(a_k, b_k) + + indices = np.arange(n_k + 1) + sgn = np.where(indices % 2 > 0, -1, 1) + d = sc.beta(a_k + 0.5 * n_k - indices, b_k - 0.5 * n_k + indices) + sum_terms = sc.comb(n_k, indices) * sgn * d + + return num / denom * sum_terms.sum() + + nth_moment_valid = (a > 0.5 * n) & (b > 0.5 * n) & (n >= 0) + return _lazywhere( + nth_moment_valid, + (n, a, b), + np.vectorize(nth_moment, otypes=[np.float64]), + np.nan, + ) + + +jf_skew_t = jf_skew_t_gen(name='jf_skew_t') + + +class johnsonsb_gen(rv_continuous): + r"""A Johnson SB continuous random variable. + + %(before_notes)s + + See Also + -------- + johnsonsu + + Notes + ----- + The probability density function for `johnsonsb` is: + + .. math:: + + f(x, a, b) = \frac{b}{x(1-x)} \phi(a + b \log \frac{x}{1-x} ) + + where :math:`x`, :math:`a`, and :math:`b` are real scalars; :math:`b > 0` + and :math:`x \in [0,1]`. :math:`\phi` is the pdf of the normal + distribution. + + `johnsonsb` takes :math:`a` and :math:`b` as shape parameters. + + %(after_notes)s + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _argcheck(self, a, b): + return (b > 0) & (a == a) + + def _shape_info(self): + ia = _ShapeInfo("a", False, (-np.inf, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + return [ia, ib] + + def _pdf(self, x, a, b): + # johnsonsb.pdf(x, a, b) = b / (x*(1-x)) * phi(a + b * log(x/(1-x))) + trm = _norm_pdf(a + b*sc.logit(x)) + return b*1.0/(x*(1-x))*trm + + def _cdf(self, x, a, b): + return _norm_cdf(a + b*sc.logit(x)) + + def _ppf(self, q, a, b): + return sc.expit(1.0 / b * (_norm_ppf(q) - a)) + + def _sf(self, x, a, b): + return _norm_sf(a + b*sc.logit(x)) + + def _isf(self, q, a, b): + return sc.expit(1.0 / b * (_norm_isf(q) - a)) + + +johnsonsb = johnsonsb_gen(a=0.0, b=1.0, name='johnsonsb') + + +class johnsonsu_gen(rv_continuous): + r"""A Johnson SU continuous random variable. + + %(before_notes)s + + See Also + -------- + johnsonsb + + Notes + ----- + The probability density function for `johnsonsu` is: + + .. math:: + + f(x, a, b) = \frac{b}{\sqrt{x^2 + 1}} + \phi(a + b \log(x + \sqrt{x^2 + 1})) + + where :math:`x`, :math:`a`, and :math:`b` are real scalars; :math:`b > 0`. + :math:`\phi` is the pdf of the normal distribution. + + `johnsonsu` takes :math:`a` and :math:`b` as shape parameters. + + The first four central moments are calculated according to the formulas + in [1]_. + + %(after_notes)s + + References + ---------- + .. [1] Taylor Enterprises. "Johnson Family of Distributions". 
+ https://variation.com/wp-content/distribution_analyzer_help/hs126.htm + + %(example)s + + """ + def _argcheck(self, a, b): + return (b > 0) & (a == a) + + def _shape_info(self): + ia = _ShapeInfo("a", False, (-np.inf, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + return [ia, ib] + + def _pdf(self, x, a, b): + # johnsonsu.pdf(x, a, b) = b / sqrt(x**2 + 1) * + # phi(a + b * log(x + sqrt(x**2 + 1))) + x2 = x*x + trm = _norm_pdf(a + b * np.arcsinh(x)) + return b*1.0/np.sqrt(x2+1.0)*trm + + def _cdf(self, x, a, b): + return _norm_cdf(a + b * np.arcsinh(x)) + + def _ppf(self, q, a, b): + return np.sinh((_norm_ppf(q) - a) / b) + + def _sf(self, x, a, b): + return _norm_sf(a + b * np.arcsinh(x)) + + def _isf(self, x, a, b): + return np.sinh((_norm_isf(x) - a) / b) + + def _stats(self, a, b, moments='mv'): + # Naive implementation of first and second moment to address gh-18071. + # https://variation.com/wp-content/distribution_analyzer_help/hs126.htm + # Numerical improvements left to future enhancements. + mu, mu2, g1, g2 = None, None, None, None + + bn2 = b**-2. + expbn2 = np.exp(bn2) + a_b = a / b + + if 'm' in moments: + mu = -expbn2**0.5 * np.sinh(a_b) + if 'v' in moments: + mu2 = 0.5*sc.expm1(bn2)*(expbn2*np.cosh(2*a_b) + 1) + if 's' in moments: + t1 = expbn2**.5 * sc.expm1(bn2)**0.5 + t2 = 3*np.sinh(a_b) + t3 = expbn2 * (expbn2 + 2) * np.sinh(3*a_b) + denom = np.sqrt(2) * (1 + expbn2 * np.cosh(2*a_b))**(3/2) + g1 = -t1 * (t2 + t3) / denom + if 'k' in moments: + t1 = 3 + 6*expbn2 + t2 = 4*expbn2**2 * (expbn2 + 2) * np.cosh(2*a_b) + t3 = expbn2**2 * np.cosh(4*a_b) + t4 = -3 + 3*expbn2**2 + 2*expbn2**3 + expbn2**4 + denom = 2*(1 + expbn2*np.cosh(2*a_b))**2 + g2 = (t1 + t2 + t3*t4) / denom - 3 + return mu, mu2, g1, g2 + + +johnsonsu = johnsonsu_gen(name='johnsonsu') + + +class laplace_gen(rv_continuous): + r"""A Laplace continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `laplace` is + + .. math:: + + f(x) = \frac{1}{2} \exp(-|x|) + + for a real number :math:`x`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return random_state.laplace(0, 1, size=size) + + def _pdf(self, x): + # laplace.pdf(x) = 1/2 * exp(-abs(x)) + return 0.5*np.exp(-abs(x)) + + def _cdf(self, x): + with np.errstate(over='ignore'): + return np.where(x > 0, 1.0 - 0.5*np.exp(-x), 0.5*np.exp(x)) + + def _sf(self, x): + # By symmetry... + return self._cdf(-x) + + def _ppf(self, q): + return np.where(q > 0.5, -np.log(2*(1-q)), np.log(2*q)) + + def _isf(self, q): + # By symmetry... + return -self._ppf(q) + + def _stats(self): + return 0, 2, 0, 3 + + def _entropy(self): + return np.log(2)+1 + + @_call_super_mom + @replace_notes_in_docstring(rv_continuous, notes="""\ + This function uses explicit formulas for the maximum likelihood + estimation of the Laplace distribution parameters, so the keyword + arguments `loc`, `scale`, and `optimizer` are ignored.\n\n""") + def fit(self, data, *args, **kwds): + data, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + + # Source: Statistical Distributions, 3rd Edition. 
Evans, Hastings, + # and Peacock (2000), Page 124 + + if floc is None: + floc = np.median(data) + + if fscale is None: + fscale = (np.sum(np.abs(data - floc))) / len(data) + + return floc, fscale + + +laplace = laplace_gen(name='laplace') + + +class laplace_asymmetric_gen(rv_continuous): + r"""An asymmetric Laplace continuous random variable. + + %(before_notes)s + + See Also + -------- + laplace : Laplace distribution + + Notes + ----- + The probability density function for `laplace_asymmetric` is + + .. math:: + + f(x, \kappa) &= \frac{1}{\kappa+\kappa^{-1}}\exp(-x\kappa),\quad x\ge0\\ + &= \frac{1}{\kappa+\kappa^{-1}}\exp(x/\kappa),\quad x<0\\ + + for :math:`-\infty < x < \infty`, :math:`\kappa > 0`. + + `laplace_asymmetric` takes ``kappa`` as a shape parameter for + :math:`\kappa`. For :math:`\kappa = 1`, it is identical to a + Laplace distribution. + + %(after_notes)s + + Note that the scale parameter of some references is the reciprocal of + SciPy's ``scale``. For example, :math:`\lambda = 1/2` in the + parameterization of [1]_ is equivalent to ``scale = 2`` with + `laplace_asymmetric`. + + References + ---------- + .. [1] "Asymmetric Laplace distribution", Wikipedia + https://en.wikipedia.org/wiki/Asymmetric_Laplace_distribution + + .. [2] Kozubowski TJ and Podgórski K. A Multivariate and + Asymmetric Generalization of Laplace Distribution, + Computational Statistics 15, 531--540 (2000). + :doi:`10.1007/PL00022717` + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("kappa", False, (0, np.inf), (False, False))] + + def _pdf(self, x, kappa): + return np.exp(self._logpdf(x, kappa)) + + def _logpdf(self, x, kappa): + kapinv = 1/kappa + lPx = x * np.where(x >= 0, -kappa, kapinv) + lPx -= np.log(kappa+kapinv) + return lPx + + def _cdf(self, x, kappa): + kapinv = 1/kappa + kappkapinv = kappa+kapinv + return np.where(x >= 0, + 1 - np.exp(-x*kappa)*(kapinv/kappkapinv), + np.exp(x*kapinv)*(kappa/kappkapinv)) + + def _sf(self, x, kappa): + kapinv = 1/kappa + kappkapinv = kappa+kapinv + return np.where(x >= 0, + np.exp(-x*kappa)*(kapinv/kappkapinv), + 1 - np.exp(x*kapinv)*(kappa/kappkapinv)) + + def _ppf(self, q, kappa): + kapinv = 1/kappa + kappkapinv = kappa+kapinv + return np.where(q >= kappa/kappkapinv, + -np.log((1 - q)*kappkapinv*kappa)*kapinv, + np.log(q*kappkapinv/kappa)*kappa) + + def _isf(self, q, kappa): + kapinv = 1/kappa + kappkapinv = kappa+kapinv + return np.where(q <= kapinv/kappkapinv, + -np.log(q*kappkapinv*kappa)*kapinv, + np.log((1 - q)*kappkapinv/kappa)*kappa) + + def _stats(self, kappa): + kapinv = 1/kappa + mn = kapinv - kappa + var = kapinv*kapinv + kappa*kappa + g1 = 2.0*(1-np.power(kappa, 6))/np.power(1+np.power(kappa, 4), 1.5) + g2 = 6.0*(1+np.power(kappa, 8))/np.power(1+np.power(kappa, 4), 2) + return mn, var, g1, g2 + + def _entropy(self, kappa): + return 1 + np.log(kappa+1/kappa) + + +laplace_asymmetric = laplace_asymmetric_gen(name='laplace_asymmetric') + + +def _check_fit_input_parameters(dist, data, args, kwds): + if not isinstance(data, CensoredData): + data = np.asarray(data) + + floc = kwds.get('floc', None) + fscale = kwds.get('fscale', None) + + num_shapes = len(dist.shapes.split(",")) if dist.shapes else 0 + fshape_keys = [] + fshapes = [] + + # user has many options for fixing the shape, so here we standardize it + # into 'f' + the number of the shape. 
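As a quick illustration (separate from the library source, with arbitrary parameter values), `laplace.fit` should reproduce exactly the closed-form estimates quoted above: the sample median for the location and the mean absolute deviation about it for the scale.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = stats.laplace.rvs(loc=1.5, scale=2.0, size=10_000, random_state=rng)
loc_hat, scale_hat = stats.laplace.fit(data)
# closed-form MLE: median and mean absolute deviation about the median
assert np.isclose(loc_hat, np.median(data))
assert np.isclose(scale_hat, np.mean(np.abs(data - np.median(data))))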
+ # Adapted from `_reduce_func` in `_distn_infrastructure.py`: + if dist.shapes: + shapes = dist.shapes.replace(',', ' ').split() + for j, s in enumerate(shapes): + key = 'f' + str(j) + names = [key, 'f' + s, 'fix_' + s] + val = _get_fixed_fit_value(kwds, names) + fshape_keys.append(key) + fshapes.append(val) + if val is not None: + kwds[key] = val + + # determine if there are any unknown arguments in kwds + known_keys = {'loc', 'scale', 'optimizer', 'method', + 'floc', 'fscale', *fshape_keys} + unknown_keys = set(kwds).difference(known_keys) + if unknown_keys: + raise TypeError(f"Unknown keyword arguments: {unknown_keys}.") + + if len(args) > num_shapes: + raise TypeError("Too many positional arguments.") + + if None not in {floc, fscale, *fshapes}: + # This check is for consistency with `rv_continuous.fit`. + # Without this check, this function would just return the + # parameters that were given. + raise RuntimeError("All parameters fixed. There is nothing to " + "optimize.") + + uncensored = data._uncensor() if isinstance(data, CensoredData) else data + if not np.isfinite(uncensored).all(): + raise ValueError("The data contains non-finite values.") + + return (data, *fshapes, floc, fscale) + + +class levy_gen(rv_continuous): + r"""A Levy continuous random variable. + + %(before_notes)s + + See Also + -------- + levy_stable, levy_l + + Notes + ----- + The probability density function for `levy` is: + + .. math:: + + f(x) = \frac{1}{\sqrt{2\pi x^3}} \exp\left(-\frac{1}{2x}\right) + + for :math:`x > 0`. + + This is the same as the Levy-stable distribution with :math:`a=1/2` and + :math:`b=1`. + + %(after_notes)s + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import levy + >>> import matplotlib.pyplot as plt + >>> fig, ax = plt.subplots(1, 1) + + Calculate the first four moments: + + >>> mean, var, skew, kurt = levy.stats(moments='mvsk') + + Display the probability density function (``pdf``): + + >>> # `levy` is very heavy-tailed. + >>> # To show a nice plot, let's cut off the upper 40 percent. + >>> a, b = levy.ppf(0), levy.ppf(0.6) + >>> x = np.linspace(a, b, 100) + >>> ax.plot(x, levy.pdf(x), + ... 'r-', lw=5, alpha=0.6, label='levy pdf') + + Alternatively, the distribution object can be called (as a function) + to fix the shape, location and scale parameters. This returns a "frozen" + RV object holding the given parameters fixed. 
+ + Freeze the distribution and display the frozen ``pdf``: + + >>> rv = levy() + >>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf') + + Check accuracy of ``cdf`` and ``ppf``: + + >>> vals = levy.ppf([0.001, 0.5, 0.999]) + >>> np.allclose([0.001, 0.5, 0.999], levy.cdf(vals)) + True + + Generate random numbers: + + >>> r = levy.rvs(size=1000) + + And compare the histogram: + + >>> # manual binning to ignore the tail + >>> bins = np.concatenate((np.linspace(a, b, 20), [np.max(r)])) + >>> ax.hist(r, bins=bins, density=True, histtype='stepfilled', alpha=0.2) + >>> ax.set_xlim([x[0], x[-1]]) + >>> ax.legend(loc='best', frameon=False) + >>> plt.show() + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [] + + def _pdf(self, x): + # levy.pdf(x) = 1 / (x * sqrt(2*pi*x)) * exp(-1/(2*x)) + return 1 / np.sqrt(2*np.pi*x) / x * np.exp(-1/(2*x)) + + def _cdf(self, x): + # Equivalent to 2*norm.sf(np.sqrt(1/x)) + return sc.erfc(np.sqrt(0.5 / x)) + + def _sf(self, x): + return sc.erf(np.sqrt(0.5 / x)) + + def _ppf(self, q): + # Equivalent to 1.0/(norm.isf(q/2)**2) or 0.5/(erfcinv(q)**2) + val = _norm_isf(q/2) + return 1.0 / (val * val) + + def _isf(self, p): + return 1/(2*sc.erfinv(p)**2) + + def _stats(self): + return np.inf, np.inf, np.nan, np.nan + + +levy = levy_gen(a=0.0, name="levy") + + +class levy_l_gen(rv_continuous): + r"""A left-skewed Levy continuous random variable. + + %(before_notes)s + + See Also + -------- + levy, levy_stable + + Notes + ----- + The probability density function for `levy_l` is: + + .. math:: + f(x) = \frac{1}{|x| \sqrt{2\pi |x|}} \exp{ \left(-\frac{1}{2|x|} \right)} + + for :math:`x < 0`. + + This is the same as the Levy-stable distribution with :math:`a=1/2` and + :math:`b=-1`. + + %(after_notes)s + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import levy_l + >>> import matplotlib.pyplot as plt + >>> fig, ax = plt.subplots(1, 1) + + Calculate the first four moments: + + >>> mean, var, skew, kurt = levy_l.stats(moments='mvsk') + + Display the probability density function (``pdf``): + + >>> # `levy_l` is very heavy-tailed. + >>> # To show a nice plot, let's cut off the lower 40 percent. + >>> a, b = levy_l.ppf(0.4), levy_l.ppf(1) + >>> x = np.linspace(a, b, 100) + >>> ax.plot(x, levy_l.pdf(x), + ... 'r-', lw=5, alpha=0.6, label='levy_l pdf') + + Alternatively, the distribution object can be called (as a function) + to fix the shape, location and scale parameters. This returns a "frozen" + RV object holding the given parameters fixed. 
+ + Freeze the distribution and display the frozen ``pdf``: + + >>> rv = levy_l() + >>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf') + + Check accuracy of ``cdf`` and ``ppf``: + + >>> vals = levy_l.ppf([0.001, 0.5, 0.999]) + >>> np.allclose([0.001, 0.5, 0.999], levy_l.cdf(vals)) + True + + Generate random numbers: + + >>> r = levy_l.rvs(size=1000) + + And compare the histogram: + + >>> # manual binning to ignore the tail + >>> bins = np.concatenate(([np.min(r)], np.linspace(a, b, 20))) + >>> ax.hist(r, bins=bins, density=True, histtype='stepfilled', alpha=0.2) + >>> ax.set_xlim([x[0], x[-1]]) + >>> ax.legend(loc='best', frameon=False) + >>> plt.show() + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [] + + def _pdf(self, x): + # levy_l.pdf(x) = 1 / (abs(x) * sqrt(2*pi*abs(x))) * exp(-1/(2*abs(x))) + ax = abs(x) + return 1/np.sqrt(2*np.pi*ax)/ax*np.exp(-1/(2*ax)) + + def _cdf(self, x): + ax = abs(x) + return 2 * _norm_cdf(1 / np.sqrt(ax)) - 1 + + def _sf(self, x): + ax = abs(x) + return 2 * _norm_sf(1 / np.sqrt(ax)) + + def _ppf(self, q): + val = _norm_ppf((q + 1.0) / 2) + return -1.0 / (val * val) + + def _isf(self, p): + return -1/_norm_isf(p/2)**2 + + def _stats(self): + return np.inf, np.inf, np.nan, np.nan + + +levy_l = levy_l_gen(b=0.0, name="levy_l") + + +class logistic_gen(rv_continuous): + r"""A logistic (or Sech-squared) continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `logistic` is: + + .. math:: + + f(x) = \frac{\exp(-x)} + {(1+\exp(-x))^2} + + `logistic` is a special case of `genlogistic` with ``c=1``. + + Remark that the survival function (``logistic.sf``) is equal to the + Fermi-Dirac distribution describing fermionic statistics. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return random_state.logistic(size=size) + + def _pdf(self, x): + # logistic.pdf(x) = exp(-x) / (1+exp(-x))**2 + return np.exp(self._logpdf(x)) + + def _logpdf(self, x): + y = -np.abs(x) + return y - 2. * sc.log1p(np.exp(y)) + + def _cdf(self, x): + return sc.expit(x) + + def _logcdf(self, x): + return sc.log_expit(x) + + def _ppf(self, q): + return sc.logit(q) + + def _sf(self, x): + return sc.expit(-x) + + def _logsf(self, x): + return sc.log_expit(-x) + + def _isf(self, q): + return -sc.logit(q) + + def _stats(self): + return 0, np.pi*np.pi/3.0, 0, 6.0/5.0 + + def _entropy(self): + # https://en.wikipedia.org/wiki/Logistic_distribution + return 2.0 + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + + data, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + n = len(data) + + # rv_continuous provided guesses + loc, scale = self._fitstart(data) + # these are trumped by user-provided guesses + loc, scale = kwds.get('loc', loc), kwds.get('scale', scale) + + # the maximum likelihood estimators `a` and `b` of the location and + # scale parameters are roots of the two equations described in `func`. + # Source: Statistical Distributions, 3rd Edition. 
Evans, Hastings, and + # Peacock (2000), Page 130 + + def dl_dloc(loc, scale=fscale): + c = (data - loc) / scale + return np.sum(sc.expit(c)) - n/2 + + def dl_dscale(scale, loc=floc): + c = (data - loc) / scale + return np.sum(c*np.tanh(c/2)) - n + + def func(params): + loc, scale = params + return dl_dloc(loc, scale), dl_dscale(scale, loc) + + if fscale is not None and floc is None: + res = optimize.root(dl_dloc, (loc,)) + loc = res.x[0] + scale = fscale + elif floc is not None and fscale is None: + res = optimize.root(dl_dscale, (scale,)) + scale = res.x[0] + loc = floc + else: + res = optimize.root(func, (loc, scale)) + loc, scale = res.x + + # Note: gh-18176 reported data for which the reported MLE had + # `scale < 0`. To fix the bug, we return abs(scale). This is OK because + # `dl_dscale` and `dl_dloc` are even and odd functions of `scale`, + # respectively, so if `-scale` is a solution, so is `scale`. + scale = abs(scale) + return ((loc, scale) if res.success + else super().fit(data, *args, **kwds)) + + +logistic = logistic_gen(name='logistic') + + +class loggamma_gen(rv_continuous): + r"""A log gamma continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `loggamma` is: + + .. math:: + + f(x, c) = \frac{\exp(c x - \exp(x))} + {\Gamma(c)} + + for all :math:`x, c > 0`. Here, :math:`\Gamma` is the + gamma function (`scipy.special.gamma`). + + `loggamma` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + %(example)s + + """ + + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _rvs(self, c, size=None, random_state=None): + # Use the property of the gamma distribution Gamma(c) + # Gamma(c) ~ Gamma(c + 1)*U**(1/c), + # where U is uniform on [0, 1]. (See, e.g., + # G. Marsaglia and W.W. Tsang, "A simple method for generating gamma + # variables", https://doi.org/10.1145/358407.358414) + # So + # log(Gamma(c)) ~ log(Gamma(c + 1)) + log(U)/c + # Generating a sample with this formulation is a bit slower + # than the more obvious log(Gamma(c)), but it avoids loss + # of precision when c << 1. + return (np.log(random_state.gamma(c + 1, size=size)) + + np.log(random_state.uniform(size=size))/c) + + def _pdf(self, x, c): + # loggamma.pdf(x, c) = exp(c*x-exp(x)) / gamma(c) + return np.exp(c*x-np.exp(x)-sc.gammaln(c)) + + def _logpdf(self, x, c): + return c*x - np.exp(x) - sc.gammaln(c) + + def _cdf(self, x, c): + # This function is gammainc(c, exp(x)), where gammainc(c, z) is + # the regularized incomplete gamma function. + # The first term in a series expansion of gamminc(c, z) is + # z**c/Gamma(c+1); see 6.5.29 of Abramowitz & Stegun (and refer + # back to 6.5.1, 6.5.2 and 6.5.4 for the relevant notation). + # This can also be found in the wikipedia article + # https://en.wikipedia.org/wiki/Incomplete_gamma_function. + # Here we use that formula when x is sufficiently negative that + # exp(x) will result in subnormal numbers and lose precision. + # We evaluate the log of the expression first to allow the possible + # cancellation of the terms in the division, and then exponentiate. + # That is, + # exp(x)**c/Gamma(c+1) = exp(log(exp(x)**c/Gamma(c+1))) + # = exp(c*x - gammaln(c+1)) + return _lazywhere(x < _LOGXMIN, (x, c), + lambda x, c: np.exp(c*x - sc.gammaln(c+1)), + f2=lambda x, c: sc.gammainc(c, np.exp(x))) + + def _ppf(self, q, c): + # The expression used when g < _XMIN inverts the one term expansion + # given in the comments of _cdf(). 
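A quick numerical sanity check (illustrative only, not part of the library code) of the one-term expansion referred to above: for sufficiently negative x, `gammainc(c, exp(x))` is well approximated by `exp(c*x - gammaln(c + 1))`.

import numpy as np
from scipy import special as sc

c, x = 2.5, -20.0
exact = sc.gammainc(c, np.exp(x))
approx = np.exp(c * x - sc.gammaln(c + 1))
print(exact, approx)   # nearly identical (relative difference on the order of 1e-9)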
+ g = sc.gammaincinv(c, q) + return _lazywhere(g < _XMIN, (g, q, c), + lambda g, q, c: (np.log(q) + sc.gammaln(c+1))/c, + f2=lambda g, q, c: np.log(g)) + + def _sf(self, x, c): + # See the comments for _cdf() for how x < _LOGXMIN is handled. + return _lazywhere(x < _LOGXMIN, (x, c), + lambda x, c: -np.expm1(c*x - sc.gammaln(c+1)), + f2=lambda x, c: sc.gammaincc(c, np.exp(x))) + + def _isf(self, q, c): + # The expression used when g < _XMIN inverts the complement of + # the one term expansion given in the comments of _cdf(). + g = sc.gammainccinv(c, q) + return _lazywhere(g < _XMIN, (g, q, c), + lambda g, q, c: (np.log1p(-q) + sc.gammaln(c+1))/c, + f2=lambda g, q, c: np.log(g)) + + def _stats(self, c): + # See, for example, "A Statistical Study of Log-Gamma Distribution", by + # Ping Shing Chan (thesis, McMaster University, 1993). + mean = sc.digamma(c) + var = sc.polygamma(1, c) + skewness = sc.polygamma(2, c) / np.power(var, 1.5) + excess_kurtosis = sc.polygamma(3, c) / (var*var) + return mean, var, skewness, excess_kurtosis + + def _entropy(self, c): + def regular(c): + h = sc.gammaln(c) - c * sc.digamma(c) + c + return h + + def asymptotic(c): + # using asymptotic expansions for gammaln and psi (see gh-18093) + term = -0.5*np.log(c) + c**-1./6 - c**-3./90 + c**-5./210 + h = norm._entropy() + term + return h + + h = _lazywhere(c >= 45, (c, ), f=asymptotic, f2=regular) + return h + + +loggamma = loggamma_gen(name='loggamma') + + +class loglaplace_gen(rv_continuous): + r"""A log-Laplace continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `loglaplace` is: + + .. math:: + + f(x, c) = \begin{cases}\frac{c}{2} x^{ c-1} &\text{for } 0 < x < 1\\ + \frac{c}{2} x^{-c-1} &\text{for } x \ge 1 + \end{cases} + + for :math:`c > 0`. + + `loglaplace` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + Suppose a random variable ``X`` follows the Laplace distribution with + location ``a`` and scale ``b``. Then ``Y = exp(X)`` follows the + log-Laplace distribution with ``c = 1 / b`` and ``scale = exp(a)``. + + References + ---------- + T.J. Kozubowski and K. Podgorski, "A log-Laplace growth rate model", + The Mathematical Scientist, vol. 28, pp. 49-60, 2003. + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # loglaplace.pdf(x, c) = c / 2 * x**(c-1), for 0 < x < 1 + # = c / 2 * x**(-c-1), for x >= 1 + cd2 = c/2.0 + c = np.where(x < 1, c, -c) + return cd2*x**(c-1) + + def _cdf(self, x, c): + return np.where(x < 1, 0.5*x**c, 1-0.5*x**(-c)) + + def _sf(self, x, c): + return np.where(x < 1, 1 - 0.5*x**c, 0.5*x**(-c)) + + def _ppf(self, q, c): + return np.where(q < 0.5, (2.0*q)**(1.0/c), (2*(1.0-q))**(-1.0/c)) + + def _isf(self, q, c): + return np.where(q > 0.5, (2.0*(1.0 - q))**(1.0/c), (2*q)**(-1.0/c)) + + def _munp(self, n, c): + with np.errstate(divide='ignore'): + c2, n2 = c**2, n**2 + return np.where(n2 < c2, c2 / (c2 - n2), np.inf) + + def _entropy(self, c): + return np.log(2.0/c) + 1.0 + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + data, fc, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + + # Specialize MLE only when location is known. + if floc is None: + return super(type(self), self).fit(data, *args, **kwds) + + # Raise an error if any observation has zero likelihood. 
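The exp-of-Laplace relationship stated in the docstring above can be verified numerically. A small check with arbitrary illustrative values (separate from the fitting code):

import numpy as np
from scipy import stats

a, b = 0.3, 0.5   # Laplace location and scale (illustrative values)
y = np.linspace(0.1, 10, 7)
# If X ~ Laplace(loc=a, scale=b) then Y = exp(X) ~ loglaplace(c=1/b, scale=exp(a)),
# so P(Y <= y) = P(X <= log(y)) and the two CDFs must coincide.
lhs = stats.laplace.cdf(np.log(y), loc=a, scale=b)
rhs = stats.loglaplace.cdf(y, 1 / b, scale=np.exp(a))
print(np.allclose(lhs, rhs))   # True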
+ if np.any(data <= floc): + raise FitDataError("loglaplace", lower=floc, upper=np.inf) + + # Remove location from data. + if floc != 0: + data = data - floc + + # When location is zero, the log-Laplace distribution is related to + # the Laplace distribution in that if X ~ Laplace(loc=a, scale=b), + # then Y = exp(X) ~ LogLaplace(c=1/b, loc=0, scale=exp(a)). It can + # be shown that the MLE for Y is the same as the MLE for X = ln(Y). + # Therefore, we reuse the formulas from laplace.fit() and transform + # the result back into log-laplace's parameter space. + a, b = laplace.fit(np.log(data), + floc=np.log(fscale) if fscale is not None else None, + fscale=1/fc if fc is not None else None, + method='mle') + loc = floc + scale = np.exp(a) if fscale is None else fscale + c = 1 / b if fc is None else fc + return c, loc, scale + +loglaplace = loglaplace_gen(a=0.0, name='loglaplace') + + +def _lognorm_logpdf(x, s): + return _lazywhere(x != 0, (x, s), + lambda x, s: (-np.log(x)**2 / (2 * s**2) + - np.log(s * x * np.sqrt(2 * np.pi))), + -np.inf) + + +class lognorm_gen(rv_continuous): + r"""A lognormal continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `lognorm` is: + + .. math:: + + f(x, s) = \frac{1}{s x \sqrt{2\pi}} + \exp\left(-\frac{\log^2(x)}{2s^2}\right) + + for :math:`x > 0`, :math:`s > 0`. + + `lognorm` takes ``s`` as a shape parameter for :math:`s`. + + %(after_notes)s + + Suppose a normally distributed random variable ``X`` has mean ``mu`` and + standard deviation ``sigma``. Then ``Y = exp(X)`` is lognormally + distributed with ``s = sigma`` and ``scale = exp(mu)``. + + %(example)s + + The logarithm of a log-normally distributed random variable is + normally distributed: + + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy import stats + >>> fig, ax = plt.subplots(1, 1) + >>> mu, sigma = 2, 0.5 + >>> X = stats.norm(loc=mu, scale=sigma) + >>> Y = stats.lognorm(s=sigma, scale=np.exp(mu)) + >>> x = np.linspace(*X.interval(0.999)) + >>> y = Y.rvs(size=10000) + >>> ax.plot(x, X.pdf(x), label='X (pdf)') + >>> ax.hist(np.log(y), density=True, bins=x, label='log(Y) (histogram)') + >>> ax.legend() + >>> plt.show() + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [_ShapeInfo("s", False, (0, np.inf), (False, False))] + + def _rvs(self, s, size=None, random_state=None): + return np.exp(s * random_state.standard_normal(size)) + + def _pdf(self, x, s): + # lognorm.pdf(x, s) = 1 / (s*x*sqrt(2*pi)) * exp(-1/2*(log(x)/s)**2) + return np.exp(self._logpdf(x, s)) + + def _logpdf(self, x, s): + return _lognorm_logpdf(x, s) + + def _cdf(self, x, s): + return _norm_cdf(np.log(x) / s) + + def _logcdf(self, x, s): + return _norm_logcdf(np.log(x) / s) + + def _ppf(self, q, s): + return np.exp(s * _norm_ppf(q)) + + def _sf(self, x, s): + return _norm_sf(np.log(x) / s) + + def _logsf(self, x, s): + return _norm_logsf(np.log(x) / s) + + def _isf(self, q, s): + return np.exp(s * _norm_isf(q)) + + def _stats(self, s): + p = np.exp(s*s) + mu = np.sqrt(p) + mu2 = p*(p-1) + g1 = np.sqrt(p-1)*(2+p) + g2 = np.polyval([1, 2, 3, 0, -6.0], p) + return mu, mu2, g1, g2 + + def _entropy(self, s): + return 0.5 * (1 + np.log(2*np.pi) + 2 * np.log(s)) + + @_call_super_mom + @extend_notes_in_docstring(rv_continuous, notes="""\ + When `method='MLE'` and + the location parameter is fixed by using the `floc` argument, + this function uses explicit formulas for the maximum likelihood + estimation of the 
log-normal shape and scale parameters, so the + `optimizer`, `loc` and `scale` keyword arguments are ignored. + If the location is free, a likelihood maximum is found by + setting its partial derivative wrt to location to 0, and + solving by substituting the analytical expressions of shape + and scale (or provided parameters). + See, e.g., equation 3.1 in + A. Clifford Cohen & Betty Jones Whitten (1980) + Estimation in the Three-Parameter Lognormal Distribution, + Journal of the American Statistical Association, 75:370, 399-404 + https://doi.org/10.2307/2287466 + \n\n""") + def fit(self, data, *args, **kwds): + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + + parameters = _check_fit_input_parameters(self, data, args, kwds) + data, fshape, floc, fscale = parameters + data_min = np.min(data) + + def get_shape_scale(loc): + # Calculate maximum likelihood scale and shape with analytical + # formulas unless provided by the user + if fshape is None or fscale is None: + lndata = np.log(data - loc) + scale = fscale or np.exp(lndata.mean()) + shape = fshape or np.sqrt(np.mean((lndata - np.log(scale))**2)) + return shape, scale + + def dL_dLoc(loc): + # Derivative of (positive) LL w.r.t. loc + shape, scale = get_shape_scale(loc) + shifted = data - loc + return np.sum((1 + np.log(shifted/scale)/shape**2)/shifted) + + def ll(loc): + # (Positive) log-likelihood + shape, scale = get_shape_scale(loc) + return -self.nnlf((shape, loc, scale), data) + + if floc is None: + # The location must be less than the minimum of the data. + # Back off a bit to avoid numerical issues. + spacing = np.spacing(data_min) + rbrack = data_min - spacing + + # Find the right end of the bracket by successive doubling of the + # distance to data_min. We're interested in a maximum LL, so the + # slope dL_dLoc_rbrack should be negative at the right end. + # optimization for later: share shape, scale + dL_dLoc_rbrack = dL_dLoc(rbrack) + ll_rbrack = ll(rbrack) + delta = 2 * spacing # 2 * (data_min - rbrack) + while dL_dLoc_rbrack >= -1e-6: + rbrack = data_min - delta + dL_dLoc_rbrack = dL_dLoc(rbrack) + delta *= 2 + + if not np.isfinite(rbrack) or not np.isfinite(dL_dLoc_rbrack): + # If we never find a negative slope, either we missed it or the + # slope is always positive. It's usually the latter, + # which means + # loc = data_min - spacing + # But sometimes when shape and/or scale are fixed there are + # other issues, so be cautious. + return super().fit(data, *args, **kwds) + + # Now find the left end of the bracket. Guess is `rbrack-1` + # unless that is too small of a difference to resolve. Double + # the size of the interval until the left end is found. + lbrack = np.minimum(np.nextafter(rbrack, -np.inf), rbrack-1) + dL_dLoc_lbrack = dL_dLoc(lbrack) + delta = 2 * (rbrack - lbrack) + while (np.isfinite(lbrack) and np.isfinite(dL_dLoc_lbrack) + and np.sign(dL_dLoc_lbrack) == np.sign(dL_dLoc_rbrack)): + lbrack = rbrack - delta + dL_dLoc_lbrack = dL_dLoc(lbrack) + delta *= 2 + + # I don't recall observing this, but just in case... + if not np.isfinite(lbrack) or not np.isfinite(dL_dLoc_lbrack): + return super().fit(data, *args, **kwds) + + # If we have a valid bracket, find the root + res = root_scalar(dL_dLoc, bracket=(lbrack, rbrack)) + if not res.converged: + return super().fit(data, *args, **kwds) + + # If the slope was positive near the minimum of the data, + # the maximum LL could be there instead of at the root. Compare + # the LL of the two points to decide. 
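For the fixed-location branch handled below, here is a small illustrative check (arbitrary parameters, using only the public `scipy.stats` API) that the returned shape and scale are the analytic values computed by `get_shape_scale`, i.e. the standard deviation and exponentiated mean of `log(data)` when `floc=0`:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.lognorm.rvs(s=0.8, scale=np.exp(2.0), size=50_000, random_state=rng)
s_hat, loc_hat, scale_hat = stats.lognorm.fit(data, floc=0)
logd = np.log(data)
# analytic MLE with the location fixed at 0
assert loc_hat == 0
assert np.isclose(s_hat, logd.std())
assert np.isclose(scale_hat, np.exp(logd.mean()))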
+ ll_root = ll(res.root) + loc = res.root if ll_root > ll_rbrack else data_min-spacing + + else: + if floc >= data_min: + raise FitDataError("lognorm", lower=0., upper=np.inf) + loc = floc + + shape, scale = get_shape_scale(loc) + if not (self._argcheck(shape) and scale > 0): + return super().fit(data, *args, **kwds) + return shape, loc, scale + + +lognorm = lognorm_gen(a=0.0, name='lognorm') + + +class gibrat_gen(rv_continuous): + r"""A Gibrat continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `gibrat` is: + + .. math:: + + f(x) = \frac{1}{x \sqrt{2\pi}} \exp(-\frac{1}{2} (\log(x))^2) + + `gibrat` is a special case of `lognorm` with ``s=1``. + + %(after_notes)s + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return np.exp(random_state.standard_normal(size)) + + def _pdf(self, x): + # gibrat.pdf(x) = 1/(x*sqrt(2*pi)) * exp(-1/2*(log(x))**2) + return np.exp(self._logpdf(x)) + + def _logpdf(self, x): + return _lognorm_logpdf(x, 1.0) + + def _cdf(self, x): + return _norm_cdf(np.log(x)) + + def _ppf(self, q): + return np.exp(_norm_ppf(q)) + + def _sf(self, x): + return _norm_sf(np.log(x)) + + def _isf(self, p): + return np.exp(_norm_isf(p)) + + def _stats(self): + p = np.e + mu = np.sqrt(p) + mu2 = p * (p - 1) + g1 = np.sqrt(p - 1) * (2 + p) + g2 = np.polyval([1, 2, 3, 0, -6.0], p) + return mu, mu2, g1, g2 + + def _entropy(self): + return 0.5 * np.log(2 * np.pi) + 0.5 + + +gibrat = gibrat_gen(a=0.0, name='gibrat') + + +class maxwell_gen(rv_continuous): + r"""A Maxwell continuous random variable. + + %(before_notes)s + + Notes + ----- + A special case of a `chi` distribution, with ``df=3``, ``loc=0.0``, + and given ``scale = a``, where ``a`` is the parameter used in the + Mathworld description [1]_. + + The probability density function for `maxwell` is: + + .. math:: + + f(x) = \sqrt{2/\pi}x^2 \exp(-x^2/2) + + for :math:`x >= 0`. + + %(after_notes)s + + References + ---------- + .. [1] http://mathworld.wolfram.com/MaxwellDistribution.html + + %(example)s + """ + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return chi.rvs(3.0, size=size, random_state=random_state) + + def _pdf(self, x): + # maxwell.pdf(x) = sqrt(2/pi)x**2 * exp(-x**2/2) + return _SQRT_2_OVER_PI*x*x*np.exp(-x*x/2.0) + + def _logpdf(self, x): + # Allow x=0 without 'divide by zero' warnings + with np.errstate(divide='ignore'): + return _LOG_SQRT_2_OVER_PI + 2*np.log(x) - 0.5*x*x + + def _cdf(self, x): + return sc.gammainc(1.5, x*x/2.0) + + def _ppf(self, q): + return np.sqrt(2*sc.gammaincinv(1.5, q)) + + def _sf(self, x): + return sc.gammaincc(1.5, x*x/2.0) + + def _isf(self, q): + return np.sqrt(2*sc.gammainccinv(1.5, q)) + + def _stats(self): + val = 3*np.pi-8 + return (2*np.sqrt(2.0/np.pi), + 3-8/np.pi, + np.sqrt(2)*(32-10*np.pi)/val**1.5, + (-12*np.pi*np.pi + 160*np.pi - 384) / val**2.0) + + def _entropy(self): + return _EULER + 0.5*np.log(2*np.pi)-0.5 + + +maxwell = maxwell_gen(a=0.0, name='maxwell') + + +class mielke_gen(rv_continuous): + r"""A Mielke Beta-Kappa / Dagum continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `mielke` is: + + .. math:: + + f(x, k, s) = \frac{k x^{k-1}}{(1+x^s)^{1+k/s}} + + for :math:`x > 0` and :math:`k, s > 0`. The distribution is sometimes + called Dagum distribution ([2]_). 
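As noted just below, this density is the Burr Type III density reparameterized with ``c=s`` and ``d=k/s``; a quick numerical check with illustrative parameter values (not part of the library source):

import numpy as np
from scipy import stats

k, s = 3.0, 2.0
x = np.linspace(0.1, 5, 9)
# burr shape parameters: c = s, d = k / s
print(np.allclose(stats.mielke.pdf(x, k, s),
                  stats.burr.pdf(x, s, k / s)))   # True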
It was already defined in [3]_, called + a Burr Type III distribution (`burr` with parameters ``c=s`` and + ``d=k/s``). + + `mielke` takes ``k`` and ``s`` as shape parameters. + + %(after_notes)s + + References + ---------- + .. [1] Mielke, P.W., 1973 "Another Family of Distributions for Describing + and Analyzing Precipitation Data." J. Appl. Meteor., 12, 275-280 + .. [2] Dagum, C., 1977 "A new model for personal income distribution." + Economie Appliquee, 33, 327-367. + .. [3] Burr, I. W. "Cumulative frequency functions", Annals of + Mathematical Statistics, 13(2), pp 215-232 (1942). + + %(example)s + + """ + def _shape_info(self): + ik = _ShapeInfo("k", False, (0, np.inf), (False, False)) + i_s = _ShapeInfo("s", False, (0, np.inf), (False, False)) + return [ik, i_s] + + def _pdf(self, x, k, s): + return k*x**(k-1.0) / (1.0+x**s)**(1.0+k*1.0/s) + + def _logpdf(self, x, k, s): + # Allow x=0 without 'divide by zero' warnings. + with np.errstate(divide='ignore'): + return np.log(k) + np.log(x)*(k - 1) - np.log1p(x**s)*(1 + k/s) + + def _cdf(self, x, k, s): + return x**k / (1.0+x**s)**(k*1.0/s) + + def _ppf(self, q, k, s): + qsk = pow(q, s*1.0/k) + return pow(qsk/(1.0-qsk), 1.0/s) + + def _munp(self, n, k, s): + def nth_moment(n, k, s): + # n-th moment is defined for -k < n < s + return sc.gamma((k+n)/s)*sc.gamma(1-n/s)/sc.gamma(k/s) + + return _lazywhere(n < s, (n, k, s), nth_moment, np.inf) + + +mielke = mielke_gen(a=0.0, name='mielke') + + +class kappa4_gen(rv_continuous): + r"""Kappa 4 parameter distribution. + + %(before_notes)s + + Notes + ----- + The probability density function for kappa4 is: + + .. math:: + + f(x, h, k) = (1 - k x)^{1/k - 1} (1 - h (1 - k x)^{1/k})^{1/h-1} + + if :math:`h` and :math:`k` are not equal to 0. + + If :math:`h` or :math:`k` are zero then the pdf can be simplified: + + h = 0 and k != 0:: + + kappa4.pdf(x, h, k) = (1.0 - k*x)**(1.0/k - 1.0)* + exp(-(1.0 - k*x)**(1.0/k)) + + h != 0 and k = 0:: + + kappa4.pdf(x, h, k) = exp(-x)*(1.0 - h*exp(-x))**(1.0/h - 1.0) + + h = 0 and k = 0:: + + kappa4.pdf(x, h, k) = exp(-x)*exp(-exp(-x)) + + kappa4 takes :math:`h` and :math:`k` as shape parameters. + + The kappa4 distribution returns other distributions when certain + :math:`h` and :math:`k` values are used. + + +------+-------------+----------------+------------------+ + | h | k=0.0 | k=1.0 | -inf<=k<=inf | + +======+=============+================+==================+ + | -1.0 | Logistic | | Generalized | + | | | | Logistic(1) | + | | | | | + | | logistic(x) | | | + +------+-------------+----------------+------------------+ + | 0.0 | Gumbel | Reverse | Generalized | + | | | Exponential(2) | Extreme Value | + | | | | | + | | gumbel_r(x) | | genextreme(x, k) | + +------+-------------+----------------+------------------+ + | 1.0 | Exponential | Uniform | Generalized | + | | | | Pareto | + | | | | | + | | expon(x) | uniform(x) | genpareto(x, -k) | + +------+-------------+----------------+------------------+ + + (1) There are at least five generalized logistic distributions. + Four are described here: + https://en.wikipedia.org/wiki/Generalized_logistic_distribution + The "fifth" one is the one kappa4 should match which currently + isn't implemented in scipy: + https://en.wikipedia.org/wiki/Talk:Generalized_logistic_distribution + https://www.mathwave.com/help/easyfit/html/analyses/distributions/gen_logistic.html + (2) This distribution is currently not in scipy. + + References + ---------- + J.C. 
Finney, "Optimization of a Skewed Logistic Distribution With Respect + to the Kolmogorov-Smirnov Test", A Dissertation Submitted to the Graduate + Faculty of the Louisiana State University and Agricultural and Mechanical + College, (August, 2004), + https://digitalcommons.lsu.edu/gradschool_dissertations/3672 + + J.R.M. Hosking, "The four-parameter kappa distribution". IBM J. Res. + Develop. 38 (3), 25 1-258 (1994). + + B. Kumphon, A. Kaew-Man, P. Seenoi, "A Rainfall Distribution for the Lampao + Site in the Chi River Basin, Thailand", Journal of Water Resource and + Protection, vol. 4, 866-869, (2012). + :doi:`10.4236/jwarp.2012.410101` + + C. Winchester, "On Estimation of the Four-Parameter Kappa Distribution", A + Thesis Submitted to Dalhousie University, Halifax, Nova Scotia, (March + 2000). + http://www.nlc-bnc.ca/obj/s4/f2/dsk2/ftp01/MQ57336.pdf + + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, h, k): + shape = np.broadcast_arrays(h, k)[0].shape + return np.full(shape, fill_value=True) + + def _shape_info(self): + ih = _ShapeInfo("h", False, (-np.inf, np.inf), (False, False)) + ik = _ShapeInfo("k", False, (-np.inf, np.inf), (False, False)) + return [ih, ik] + + def _get_support(self, h, k): + condlist = [np.logical_and(h > 0, k > 0), + np.logical_and(h > 0, k == 0), + np.logical_and(h > 0, k < 0), + np.logical_and(h <= 0, k > 0), + np.logical_and(h <= 0, k == 0), + np.logical_and(h <= 0, k < 0)] + + def f0(h, k): + return (1.0 - np.float_power(h, -k))/k + + def f1(h, k): + return np.log(h) + + def f3(h, k): + a = np.empty(np.shape(h)) + a[:] = -np.inf + return a + + def f5(h, k): + return 1.0/k + + _a = _lazyselect(condlist, + [f0, f1, f0, f3, f3, f5], + [h, k], + default=np.nan) + + def f0(h, k): + return 1.0/k + + def f1(h, k): + a = np.empty(np.shape(h)) + a[:] = np.inf + return a + + _b = _lazyselect(condlist, + [f0, f1, f1, f0, f1, f1], + [h, k], + default=np.nan) + return _a, _b + + def _pdf(self, x, h, k): + # kappa4.pdf(x, h, k) = (1.0 - k*x)**(1.0/k - 1.0)* + # (1.0 - h*(1.0 - k*x)**(1.0/k))**(1.0/h-1) + return np.exp(self._logpdf(x, h, k)) + + def _logpdf(self, x, h, k): + condlist = [np.logical_and(h != 0, k != 0), + np.logical_and(h == 0, k != 0), + np.logical_and(h != 0, k == 0), + np.logical_and(h == 0, k == 0)] + + def f0(x, h, k): + '''pdf = (1.0 - k*x)**(1.0/k - 1.0)*( + 1.0 - h*(1.0 - k*x)**(1.0/k))**(1.0/h-1.0) + logpdf = ... + ''' + return (sc.xlog1py(1.0/k - 1.0, -k*x) + + sc.xlog1py(1.0/h - 1.0, -h*(1.0 - k*x)**(1.0/k))) + + def f1(x, h, k): + '''pdf = (1.0 - k*x)**(1.0/k - 1.0)*np.exp(-( + 1.0 - k*x)**(1.0/k)) + logpdf = ... + ''' + return sc.xlog1py(1.0/k - 1.0, -k*x) - (1.0 - k*x)**(1.0/k) + + def f2(x, h, k): + '''pdf = np.exp(-x)*(1.0 - h*np.exp(-x))**(1.0/h - 1.0) + logpdf = ... + ''' + return -x + sc.xlog1py(1.0/h - 1.0, -h*np.exp(-x)) + + def f3(x, h, k): + '''pdf = np.exp(-x-np.exp(-x)) + logpdf = ... + ''' + return -x - np.exp(-x) + + return _lazyselect(condlist, + [f0, f1, f2, f3], + [x, h, k], + default=np.nan) + + def _cdf(self, x, h, k): + return np.exp(self._logcdf(x, h, k)) + + def _logcdf(self, x, h, k): + condlist = [np.logical_and(h != 0, k != 0), + np.logical_and(h == 0, k != 0), + np.logical_and(h != 0, k == 0), + np.logical_and(h == 0, k == 0)] + + def f0(x, h, k): + '''cdf = (1.0 - h*(1.0 - k*x)**(1.0/k))**(1.0/h) + logcdf = ... + ''' + return (1.0/h)*sc.log1p(-h*(1.0 - k*x)**(1.0/k)) + + def f1(x, h, k): + '''cdf = np.exp(-(1.0 - k*x)**(1.0/k)) + logcdf = ... 
+ ''' + return -(1.0 - k*x)**(1.0/k) + + def f2(x, h, k): + '''cdf = (1.0 - h*np.exp(-x))**(1.0/h) + logcdf = ... + ''' + return (1.0/h)*sc.log1p(-h*np.exp(-x)) + + def f3(x, h, k): + '''cdf = np.exp(-np.exp(-x)) + logcdf = ... + ''' + return -np.exp(-x) + + return _lazyselect(condlist, + [f0, f1, f2, f3], + [x, h, k], + default=np.nan) + + def _ppf(self, q, h, k): + condlist = [np.logical_and(h != 0, k != 0), + np.logical_and(h == 0, k != 0), + np.logical_and(h != 0, k == 0), + np.logical_and(h == 0, k == 0)] + + def f0(q, h, k): + return 1.0/k*(1.0 - ((1.0 - (q**h))/h)**k) + + def f1(q, h, k): + return 1.0/k*(1.0 - (-np.log(q))**k) + + def f2(q, h, k): + '''ppf = -np.log((1.0 - (q**h))/h) + ''' + return -sc.log1p(-(q**h)) + np.log(h) + + def f3(q, h, k): + return -np.log(-np.log(q)) + + return _lazyselect(condlist, + [f0, f1, f2, f3], + [q, h, k], + default=np.nan) + + def _get_stats_info(self, h, k): + condlist = [ + np.logical_and(h < 0, k >= 0), + k < 0, + ] + + def f0(h, k): + return (-1.0/h*k).astype(int) + + def f1(h, k): + return (-1.0/k).astype(int) + + return _lazyselect(condlist, [f0, f1], [h, k], default=5) + + def _stats(self, h, k): + maxr = self._get_stats_info(h, k) + outputs = [None if np.any(r < maxr) else np.nan for r in range(1, 5)] + return outputs[:] + + def _mom1_sc(self, m, *args): + maxr = self._get_stats_info(args[0], args[1]) + if m >= maxr: + return np.nan + return integrate.quad(self._mom_integ1, 0, 1, args=(m,)+args)[0] + + +kappa4 = kappa4_gen(name='kappa4') + + +class kappa3_gen(rv_continuous): + r"""Kappa 3 parameter distribution. + + %(before_notes)s + + Notes + ----- + The probability density function for `kappa3` is: + + .. math:: + + f(x, a) = a (a + x^a)^{-(a + 1)/a} + + for :math:`x > 0` and :math:`a > 0`. + + `kappa3` takes ``a`` as a shape parameter for :math:`a`. + + References + ---------- + P.W. Mielke and E.S. Johnson, "Three-Parameter Kappa Distribution Maximum + Likelihood and Likelihood Ratio Tests", Methods in Weather Research, + 701-707, (September, 1973), + :doi:`10.1175/1520-0493(1973)101<0701:TKDMLE>2.3.CO;2` + + B. Kumphon, "Maximum Entropy and Maximum Likelihood Estimation for the + Three-Parameter Kappa Distribution", Open Journal of Statistics, vol 2, + 415-419 (2012), :doi:`10.4236/ojs.2012.24050` + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("a", False, (0, np.inf), (False, False))] + + def _pdf(self, x, a): + # kappa3.pdf(x, a) = a*(a + x**a)**(-(a + 1)/a), for x > 0 + return a*(a + x**a)**(-1.0/a-1) + + def _cdf(self, x, a): + return x*(a + x**a)**(-1.0/a) + + def _sf(self, x, a): + x, a = np.broadcast_arrays(x, a) # some code paths pass scalars + sf = super()._sf(x, a) + + # When the SF is small, another formulation is typically more accurate. + # However, it blows up for large `a`, so use it only if it also returns + # a small value of the SF. 
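The algebra behind that alternative formulation, sketched numerically with illustrative values (this is separate from the method body): the CDF ``x*(a + x**a)**(-1/a)`` equals ``(1 + a*x**-a)**(-1/a)``, so the survival function can be written with `expm1`/`xlog1py` and evaluated without cancellation when it is tiny.

import numpy as np
from scipy import special as sc

a, x = 2.0, 1e4
sf_naive = 1.0 - x * (a + x**a) ** (-1.0 / a)           # cancellation loses about half the digits
sf_stable = -sc.expm1(sc.xlog1py(-1.0 / a, a * x**-a))  # keeps full precision
print(sf_naive, sf_stable)   # both ~1e-8; only the stable form is fully accurate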
+ cutoff = 0.01 + i = sf < cutoff + sf2 = -sc.expm1(sc.xlog1py(-1.0 / a[i], a[i] * x[i]**-a[i])) + i2 = sf2 > cutoff + sf2[i2] = sf[i][i2] # replace bad values with original values + + sf[i] = sf2 + return sf + + def _ppf(self, q, a): + return (a/(q**-a - 1.0))**(1.0/a) + + def _isf(self, q, a): + lg = sc.xlog1py(-a, -q) + denom = sc.expm1(lg) + return (a / denom)**(1.0 / a) + + def _stats(self, a): + outputs = [None if np.any(i < a) else np.nan for i in range(1, 5)] + return outputs[:] + + def _mom1_sc(self, m, *args): + if np.any(m >= args[0]): + return np.nan + return integrate.quad(self._mom_integ1, 0, 1, args=(m,)+args)[0] + + +kappa3 = kappa3_gen(a=0.0, name='kappa3') + + +class moyal_gen(rv_continuous): + r"""A Moyal continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `moyal` is: + + .. math:: + + f(x) = \exp(-(x + \exp(-x))/2) / \sqrt{2\pi} + + for a real number :math:`x`. + + %(after_notes)s + + This distribution has utility in high-energy physics and radiation + detection. It describes the energy loss of a charged relativistic + particle due to ionization of the medium [1]_. It also provides an + approximation for the Landau distribution. For an in depth description + see [2]_. For additional description, see [3]_. + + References + ---------- + .. [1] J.E. Moyal, "XXX. Theory of ionization fluctuations", + The London, Edinburgh, and Dublin Philosophical Magazine + and Journal of Science, vol 46, 263-280, (1955). + :doi:`10.1080/14786440308521076` (gated) + .. [2] G. Cordeiro et al., "The beta Moyal: a useful skew distribution", + International Journal of Research and Reviews in Applied Sciences, + vol 10, 171-192, (2012). + http://www.arpapress.com/Volumes/Vol10Issue2/IJRRAS_10_2_02.pdf + .. [3] C. Walck, "Handbook on Statistical Distributions for + Experimentalists; International Report SUF-PFY/96-01", Chapter 26, + University of Stockholm: Stockholm, Sweden, (2007). + http://www.stat.rice.edu/~dobelman/textfiles/DistributionsHandbook.pdf + + .. versionadded:: 1.1.0 + + %(example)s + + """ + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + u1 = gamma.rvs(a=0.5, scale=2, size=size, + random_state=random_state) + return -np.log(u1) + + def _pdf(self, x): + return np.exp(-0.5 * (x + np.exp(-x))) / np.sqrt(2*np.pi) + + def _cdf(self, x): + return sc.erfc(np.exp(-0.5 * x) / np.sqrt(2)) + + def _sf(self, x): + return sc.erf(np.exp(-0.5 * x) / np.sqrt(2)) + + def _ppf(self, x): + return -np.log(2 * sc.erfcinv(x)**2) + + def _stats(self): + mu = np.log(2) + np.euler_gamma + mu2 = np.pi**2 / 2 + g1 = 28 * np.sqrt(2) * sc.zeta(3) / np.pi**3 + g2 = 4. + return mu, mu2, g1, g2 + + def _munp(self, n): + if n == 1.0: + return np.log(2) + np.euler_gamma + elif n == 2.0: + return np.pi**2 / 2 + (np.log(2) + np.euler_gamma)**2 + elif n == 3.0: + tmp1 = 1.5 * np.pi**2 * (np.log(2)+np.euler_gamma) + tmp2 = (np.log(2)+np.euler_gamma)**3 + tmp3 = 14 * sc.zeta(3) + return tmp1 + tmp2 + tmp3 + elif n == 4.0: + tmp1 = 4 * 14 * sc.zeta(3) * (np.log(2) + np.euler_gamma) + tmp2 = 3 * np.pi**2 * (np.log(2) + np.euler_gamma)**2 + tmp3 = (np.log(2) + np.euler_gamma)**4 + tmp4 = 7 * np.pi**4 / 4 + return tmp1 + tmp2 + tmp3 + tmp4 + else: + # return generic for higher moments + # return rv_continuous._mom1_sc(self, n, b) + return self._mom1_sc(n) + + +moyal = moyal_gen(name="moyal") + + +class nakagami_gen(rv_continuous): + r"""A Nakagami continuous random variable. 
+ + %(before_notes)s + + Notes + ----- + The probability density function for `nakagami` is: + + .. math:: + + f(x, \nu) = \frac{2 \nu^\nu}{\Gamma(\nu)} x^{2\nu-1} \exp(-\nu x^2) + + for :math:`x >= 0`, :math:`\nu > 0`. The distribution was introduced in + [2]_, see also [1]_ for further information. + + `nakagami` takes ``nu`` as a shape parameter for :math:`\nu`. + + %(after_notes)s + + References + ---------- + .. [1] "Nakagami distribution", Wikipedia + https://en.wikipedia.org/wiki/Nakagami_distribution + .. [2] M. Nakagami, "The m-distribution - A general formula of intensity + distribution of rapid fading", Statistical methods in radio wave + propagation, Pergamon Press, 1960, 3-36. + :doi:`10.1016/B978-0-08-009306-2.50005-4` + + %(example)s + + """ + def _argcheck(self, nu): + return nu > 0 + + def _shape_info(self): + return [_ShapeInfo("nu", False, (0, np.inf), (False, False))] + + def _pdf(self, x, nu): + return np.exp(self._logpdf(x, nu)) + + def _logpdf(self, x, nu): + # nakagami.pdf(x, nu) = 2 * nu**nu / gamma(nu) * + # x**(2*nu-1) * exp(-nu*x**2) + return (np.log(2) + sc.xlogy(nu, nu) - sc.gammaln(nu) + + sc.xlogy(2*nu - 1, x) - nu*x**2) + + def _cdf(self, x, nu): + return sc.gammainc(nu, nu*x*x) + + def _ppf(self, q, nu): + return np.sqrt(1.0/nu*sc.gammaincinv(nu, q)) + + def _sf(self, x, nu): + return sc.gammaincc(nu, nu*x*x) + + def _isf(self, p, nu): + return np.sqrt(1/nu * sc.gammainccinv(nu, p)) + + def _stats(self, nu): + mu = sc.poch(nu, 0.5)/np.sqrt(nu) + mu2 = 1.0-mu*mu + g1 = mu * (1 - 4*nu*mu2) / 2.0 / nu / np.power(mu2, 1.5) + g2 = -6*mu**4*nu + (8*nu-2)*mu**2-2*nu + 1 + g2 /= nu*mu2**2.0 + return mu, mu2, g1, g2 + + def _entropy(self, nu): + shape = np.shape(nu) + # because somehow this isn't taken care of by the infrastructure... + nu = np.atleast_1d(nu) + A = sc.gammaln(nu) + B = nu - (nu - 0.5) * sc.digamma(nu) + C = -0.5 * np.log(nu) - np.log(2) + h = A + B + C + # This is the asymptotic sum of A and B (see gh-17868) + norm_entropy = stats.norm._entropy() + # Above, this is lost to rounding error for large nu, so use the + # asymptotic sum when the approximation becomes accurate + i = nu > 5e4 # roundoff error ~ approximation error + # -1 / (12 * nu) is the O(1/nu) term; see gh-17929 + h[i] = C[i] + norm_entropy - 1/(12*nu[i]) + return h.reshape(shape)[()] + + def _rvs(self, nu, size=None, random_state=None): + # this relationship can be found in [1] or by a direct calculation + return np.sqrt(random_state.standard_gamma(nu, size=size) / nu) + + def _fitstart(self, data, args=None): + if isinstance(data, CensoredData): + data = data._uncensor() + if args is None: + args = (1.0,) * self.numargs + # Analytical justified estimates + # see: https://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous_nakagami.html + loc = np.min(data) + scale = np.sqrt(np.sum((data - loc)**2) / len(data)) + return args + (loc, scale) + + +nakagami = nakagami_gen(a=0.0, name="nakagami") + + +# The function name ncx2 is an abbreviation for noncentral chi squared. +def _ncx2_log_pdf(x, df, nc): + # We use (xs**2 + ns**2)/2 = (xs - ns)**2/2 + xs*ns, and include the + # factor of exp(-xs*ns) into the ive function to improve numerical + # stability at large values of xs. See also `rice.pdf`. 
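# Illustrative sketch (editor's check, assuming a recent NumPy/SciPy): `ive` is
# the exponentially scaled Bessel function, ive(v, z) = exp(-|z|) * iv(v, z), so
# folding exp(-xs*ns) into it avoids the overflow that plain `iv` hits first.
import numpy as np
from scipy import special as sc

v = 1.5
print(np.isclose(sc.ive(v, 30.0), np.exp(-30.0) * sc.iv(v, 30.0)))  # True
print(sc.iv(v, 800.0), sc.ive(v, 800.0))  # inf vs. a finite, well-scaled value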
+ df2 = df/2.0 - 1.0 + xs, ns = np.sqrt(x), np.sqrt(nc) + res = sc.xlogy(df2/2.0, x/nc) - 0.5*(xs - ns)**2 + corr = sc.ive(df2, xs*ns) / 2.0 + # Return res + np.log(corr) avoiding np.log(0) + return _lazywhere( + corr > 0, + (res, corr), + f=lambda r, c: r + np.log(c), + fillvalue=-np.inf) + + +class ncx2_gen(rv_continuous): + r"""A non-central chi-squared continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `ncx2` is: + + .. math:: + + f(x, k, \lambda) = \frac{1}{2} \exp(-(\lambda+x)/2) + (x/\lambda)^{(k-2)/4} I_{(k-2)/2}(\sqrt{\lambda x}) + + for :math:`x >= 0`, :math:`k > 0` and :math:`\lambda \ge 0`. + :math:`k` specifies the degrees of freedom (denoted ``df`` in the + implementation) and :math:`\lambda` is the non-centrality parameter + (denoted ``nc`` in the implementation). :math:`I_\nu` denotes the + modified Bessel function of first order of degree :math:`\nu` + (`scipy.special.iv`). + + `ncx2` takes ``df`` and ``nc`` as shape parameters. + + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, df, nc): + return (df > 0) & np.isfinite(df) & (nc >= 0) + + def _shape_info(self): + idf = _ShapeInfo("df", False, (0, np.inf), (False, False)) + inc = _ShapeInfo("nc", False, (0, np.inf), (True, False)) + return [idf, inc] + + def _rvs(self, df, nc, size=None, random_state=None): + return random_state.noncentral_chisquare(df, nc, size) + + def _logpdf(self, x, df, nc): + cond = np.ones_like(x, dtype=bool) & (nc != 0) + return _lazywhere(cond, (x, df, nc), f=_ncx2_log_pdf, + f2=lambda x, df, _: chi2._logpdf(x, df)) + + def _pdf(self, x, df, nc): + cond = np.ones_like(x, dtype=bool) & (nc != 0) + with np.errstate(over='ignore'): # see gh-17432 + return _lazywhere(cond, (x, df, nc), f=_boost._ncx2_pdf, + f2=lambda x, df, _: chi2._pdf(x, df)) + + def _cdf(self, x, df, nc): + cond = np.ones_like(x, dtype=bool) & (nc != 0) + with np.errstate(over='ignore'): # see gh-17432 + return _lazywhere(cond, (x, df, nc), f=_boost._ncx2_cdf, + f2=lambda x, df, _: chi2._cdf(x, df)) + + def _ppf(self, q, df, nc): + cond = np.ones_like(q, dtype=bool) & (nc != 0) + with np.errstate(over='ignore'): # see gh-17432 + return _lazywhere(cond, (q, df, nc), f=_boost._ncx2_ppf, + f2=lambda x, df, _: chi2._ppf(x, df)) + + def _sf(self, x, df, nc): + cond = np.ones_like(x, dtype=bool) & (nc != 0) + with np.errstate(over='ignore'): # see gh-17432 + return _lazywhere(cond, (x, df, nc), f=_boost._ncx2_sf, + f2=lambda x, df, _: chi2._sf(x, df)) + + def _isf(self, x, df, nc): + cond = np.ones_like(x, dtype=bool) & (nc != 0) + with np.errstate(over='ignore'): # see gh-17432 + return _lazywhere(cond, (x, df, nc), f=_boost._ncx2_isf, + f2=lambda x, df, _: chi2._isf(x, df)) + + def _stats(self, df, nc): + return ( + _boost._ncx2_mean(df, nc), + _boost._ncx2_variance(df, nc), + _boost._ncx2_skewness(df, nc), + _boost._ncx2_kurtosis_excess(df, nc), + ) + + +ncx2 = ncx2_gen(a=0.0, name='ncx2') + + +class ncf_gen(rv_continuous): + r"""A non-central F distribution continuous random variable. + + %(before_notes)s + + See Also + -------- + scipy.stats.f : Fisher distribution + + Notes + ----- + The probability density function for `ncf` is: + + .. 
math:: + + f(x, n_1, n_2, \lambda) = + \exp\left(\frac{\lambda}{2} + + \lambda n_1 \frac{x}{2(n_1 x + n_2)} + \right) + n_1^{n_1/2} n_2^{n_2/2} x^{n_1/2 - 1} \\ + (n_2 + n_1 x)^{-(n_1 + n_2)/2} + \gamma(n_1/2) \gamma(1 + n_2/2) \\ + \frac{L^{\frac{n_1}{2}-1}_{n_2/2} + \left(-\lambda n_1 \frac{x}{2(n_1 x + n_2)}\right)} + {B(n_1/2, n_2/2) + \gamma\left(\frac{n_1 + n_2}{2}\right)} + + for :math:`n_1, n_2 > 0`, :math:`\lambda \ge 0`. Here :math:`n_1` is the + degrees of freedom in the numerator, :math:`n_2` the degrees of freedom in + the denominator, :math:`\lambda` the non-centrality parameter, + :math:`\gamma` is the logarithm of the Gamma function, :math:`L_n^k` is a + generalized Laguerre polynomial and :math:`B` is the beta function. + + `ncf` takes ``df1``, ``df2`` and ``nc`` as shape parameters. If ``nc=0``, + the distribution becomes equivalent to the Fisher distribution. + + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, df1, df2, nc): + return (df1 > 0) & (df2 > 0) & (nc >= 0) + + def _shape_info(self): + idf1 = _ShapeInfo("df1", False, (0, np.inf), (False, False)) + idf2 = _ShapeInfo("df2", False, (0, np.inf), (False, False)) + inc = _ShapeInfo("nc", False, (0, np.inf), (True, False)) + return [idf1, idf2, inc] + + def _rvs(self, dfn, dfd, nc, size=None, random_state=None): + return random_state.noncentral_f(dfn, dfd, nc, size) + + def _pdf(self, x, dfn, dfd, nc): + # ncf.pdf(x, df1, df2, nc) = exp(nc/2 + nc*df1*x/(2*(df1*x+df2))) * + # df1**(df1/2) * df2**(df2/2) * x**(df1/2-1) * + # (df2+df1*x)**(-(df1+df2)/2) * + # gamma(df1/2)*gamma(1+df2/2) * + # L^{v1/2-1}^{v2/2}(-nc*v1*x/(2*(v1*x+v2))) / + # (B(v1/2, v2/2) * gamma((v1+v2)/2)) + return _boost._ncf_pdf(x, dfn, dfd, nc) + + def _cdf(self, x, dfn, dfd, nc): + return _boost._ncf_cdf(x, dfn, dfd, nc) + + def _ppf(self, q, dfn, dfd, nc): + with np.errstate(over='ignore'): # see gh-17432 + return _boost._ncf_ppf(q, dfn, dfd, nc) + + def _sf(self, x, dfn, dfd, nc): + return _boost._ncf_sf(x, dfn, dfd, nc) + + def _isf(self, x, dfn, dfd, nc): + with np.errstate(over='ignore'): # see gh-17432 + return _boost._ncf_isf(x, dfn, dfd, nc) + + def _munp(self, n, dfn, dfd, nc): + val = (dfn * 1.0/dfd)**n + term = sc.gammaln(n+0.5*dfn) + sc.gammaln(0.5*dfd-n) - sc.gammaln(dfd*0.5) + val *= np.exp(-nc / 2.0+term) + val *= sc.hyp1f1(n+0.5*dfn, 0.5*dfn, 0.5*nc) + return val + + def _stats(self, dfn, dfd, nc, moments='mv'): + mu = _boost._ncf_mean(dfn, dfd, nc) + mu2 = _boost._ncf_variance(dfn, dfd, nc) + g1 = _boost._ncf_skewness(dfn, dfd, nc) if 's' in moments else None + g2 = _boost._ncf_kurtosis_excess( + dfn, dfd, nc) if 'k' in moments else None + return mu, mu2, g1, g2 + + +ncf = ncf_gen(a=0.0, name='ncf') + + +class t_gen(rv_continuous): + r"""A Student's t continuous random variable. + + For the noncentral t distribution, see `nct`. + + %(before_notes)s + + See Also + -------- + nct + + Notes + ----- + The probability density function for `t` is: + + .. math:: + + f(x, \nu) = \frac{\Gamma((\nu+1)/2)} + {\sqrt{\pi \nu} \Gamma(\nu/2)} + (1+x^2/\nu)^{-(\nu+1)/2} + + where :math:`x` is a real number and the degrees of freedom parameter + :math:`\nu` (denoted ``df`` in the implementation) satisfies + :math:`\nu > 0`. :math:`\Gamma` is the gamma function + (`scipy.special.gamma`). 
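# Illustrative sketch (editor's check, assuming a recent SciPy): spot check of
# the docstring statement that `ncf` with ``nc=0`` coincides with the central
# F distribution.
import numpy as np
from scipy import stats

x = np.linspace(0.1, 5.0, 10)
print(np.allclose(stats.ncf.cdf(x, 3, 8, 0.0), stats.f.cdf(x, 3, 8)))  # True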
+ + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("df", False, (0, np.inf), (False, False))] + + def _rvs(self, df, size=None, random_state=None): + return random_state.standard_t(df, size=size) + + def _pdf(self, x, df): + return _lazywhere( + df == np.inf, (x, df), + f=lambda x, df: norm._pdf(x), + f2=lambda x, df: ( + np.exp(self._logpdf(x, df)) + ) + ) + + def _logpdf(self, x, df): + + def t_logpdf(x, df): + return (np.log(sc.poch(0.5 * df, 0.5)) + - 0.5 * (np.log(df) + np.log(np.pi)) + - (df + 1)/2*np.log1p(x * x/df)) + + def norm_logpdf(x, df): + return norm._logpdf(x) + + return _lazywhere(df == np.inf, (x, df, ), f=norm_logpdf, f2=t_logpdf) + + def _cdf(self, x, df): + return sc.stdtr(df, x) + + def _sf(self, x, df): + return sc.stdtr(df, -x) + + def _ppf(self, q, df): + return sc.stdtrit(df, q) + + def _isf(self, q, df): + return -sc.stdtrit(df, q) + + def _stats(self, df): + # infinite df -> normal distribution (0.0, 1.0, 0.0, 0.0) + infinite_df = np.isposinf(df) + + mu = np.where(df > 1, 0.0, np.inf) + + condlist = ((df > 1) & (df <= 2), + (df > 2) & np.isfinite(df), + infinite_df) + choicelist = (lambda df: np.broadcast_to(np.inf, df.shape), + lambda df: df / (df-2.0), + lambda df: np.broadcast_to(1, df.shape)) + mu2 = _lazyselect(condlist, choicelist, (df,), np.nan) + + g1 = np.where(df > 3, 0.0, np.nan) + + condlist = ((df > 2) & (df <= 4), + (df > 4) & np.isfinite(df), + infinite_df) + choicelist = (lambda df: np.broadcast_to(np.inf, df.shape), + lambda df: 6.0 / (df-4.0), + lambda df: np.broadcast_to(0, df.shape)) + g2 = _lazyselect(condlist, choicelist, (df,), np.nan) + + return mu, mu2, g1, g2 + + def _entropy(self, df): + if df == np.inf: + return norm._entropy() + + def regular(df): + half = df/2 + half1 = (df + 1)/2 + return (half1*(sc.digamma(half1) - sc.digamma(half)) + + np.log(np.sqrt(df)*sc.beta(half, 0.5))) + + def asymptotic(df): + # Formula from Wolfram Alpha: + # "asymptotic expansion (d+1)/2 * (digamma((d+1)/2) - digamma(d/2)) + # + log(sqrt(d) * beta(d/2, 1/2))" + h = (norm._entropy() + 1/df + (df**-2.)/4 - (df**-3.)/6 + - (df**-4.)/8 + 3/10*(df**-5.) + (df**-6.)/4) + return h + + h = _lazywhere(df >= 100, (df, ), f=asymptotic, f2=regular) + return h + + +t = t_gen(name='t') + + +class nct_gen(rv_continuous): + r"""A non-central Student's t continuous random variable. + + %(before_notes)s + + Notes + ----- + If :math:`Y` is a standard normal random variable and :math:`V` is + an independent chi-square random variable (`chi2`) with :math:`k` degrees + of freedom, then + + .. math:: + + X = \frac{Y + c}{\sqrt{V/k}} + + has a non-central Student's t distribution on the real line. + The degrees of freedom parameter :math:`k` (denoted ``df`` in the + implementation) satisfies :math:`k > 0` and the noncentrality parameter + :math:`c` (denoted ``nc`` in the implementation) is a real number. 
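# Illustrative sketch (editor's check, assuming a recent SciPy): the piecewise
# variance in t._stats above, df/(df - 2) for df > 2 and infinite for
# 1 < df <= 2.
from scipy import stats

print(stats.t.var(5))     # 5/3 ≈ 1.6667
print(stats.t.var(1.5))   # inf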
+ + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, df, nc): + return (df > 0) & (nc == nc) + + def _shape_info(self): + idf = _ShapeInfo("df", False, (0, np.inf), (False, False)) + inc = _ShapeInfo("nc", False, (-np.inf, np.inf), (False, False)) + return [idf, inc] + + def _rvs(self, df, nc, size=None, random_state=None): + n = norm.rvs(loc=nc, size=size, random_state=random_state) + c2 = chi2.rvs(df, size=size, random_state=random_state) + return n * np.sqrt(df) / np.sqrt(c2) + + def _pdf(self, x, df, nc): + # Boost version has accuracy issues in left tail; see gh-16591 + n = df*1.0 + nc = nc*1.0 + x2 = x*x + ncx2 = nc*nc*x2 + fac1 = n + x2 + trm1 = (n/2.*np.log(n) + sc.gammaln(n+1) + - (n*np.log(2) + nc*nc/2 + (n/2)*np.log(fac1) + + sc.gammaln(n/2))) + Px = np.exp(trm1) + valF = ncx2 / (2*fac1) + trm1 = (np.sqrt(2)*nc*x*sc.hyp1f1(n/2+1, 1.5, valF) + / np.asarray(fac1*sc.gamma((n+1)/2))) + trm2 = (sc.hyp1f1((n+1)/2, 0.5, valF) + / np.asarray(np.sqrt(fac1)*sc.gamma(n/2+1))) + Px *= trm1+trm2 + return np.clip(Px, 0, None) + + def _cdf(self, x, df, nc): + with np.errstate(over='ignore'): # see gh-17432 + return np.clip(_boost._nct_cdf(x, df, nc), 0, 1) + + def _ppf(self, q, df, nc): + with np.errstate(over='ignore'): # see gh-17432 + return _boost._nct_ppf(q, df, nc) + + def _sf(self, x, df, nc): + with np.errstate(over='ignore'): # see gh-17432 + return np.clip(_boost._nct_sf(x, df, nc), 0, 1) + + def _isf(self, x, df, nc): + with np.errstate(over='ignore'): # see gh-17432 + return _boost._nct_isf(x, df, nc) + + def _stats(self, df, nc, moments='mv'): + mu = _boost._nct_mean(df, nc) + mu2 = _boost._nct_variance(df, nc) + g1 = _boost._nct_skewness(df, nc) if 's' in moments else None + g2 = _boost._nct_kurtosis_excess(df, nc) if 'k' in moments else None + return mu, mu2, g1, g2 + + +nct = nct_gen(name="nct") + + +class pareto_gen(rv_continuous): + r"""A Pareto continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `pareto` is: + + .. math:: + + f(x, b) = \frac{b}{x^{b+1}} + + for :math:`x \ge 1`, :math:`b > 0`. + + `pareto` takes ``b`` as a shape parameter for :math:`b`. 
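# Illustrative sketch (editor's check, assuming a recent SciPy): with nc = 0 the
# noncentral t reduces to the central Student t, a handy sanity check for the
# Boost-backed CDF.
import numpy as np
from scipy import stats

x, df = np.linspace(-4.0, 4.0, 9), 7
print(np.allclose(stats.nct.cdf(x, df, 0.0), stats.t.cdf(x, df)))  # True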
+ + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("b", False, (0, np.inf), (False, False))] + + def _pdf(self, x, b): + # pareto.pdf(x, b) = b / x**(b+1) + return b * x**(-b-1) + + def _cdf(self, x, b): + return 1 - x**(-b) + + def _ppf(self, q, b): + return pow(1-q, -1.0/b) + + def _sf(self, x, b): + return x**(-b) + + def _isf(self, q, b): + return np.power(q, -1.0 / b) + + def _stats(self, b, moments='mv'): + mu, mu2, g1, g2 = None, None, None, None + if 'm' in moments: + mask = b > 1 + bt = np.extract(mask, b) + mu = np.full(np.shape(b), fill_value=np.inf) + np.place(mu, mask, bt / (bt-1.0)) + if 'v' in moments: + mask = b > 2 + bt = np.extract(mask, b) + mu2 = np.full(np.shape(b), fill_value=np.inf) + np.place(mu2, mask, bt / (bt-2.0) / (bt-1.0)**2) + if 's' in moments: + mask = b > 3 + bt = np.extract(mask, b) + g1 = np.full(np.shape(b), fill_value=np.nan) + vals = 2 * (bt + 1.0) * np.sqrt(bt - 2.0) / ((bt - 3.0) * np.sqrt(bt)) + np.place(g1, mask, vals) + if 'k' in moments: + mask = b > 4 + bt = np.extract(mask, b) + g2 = np.full(np.shape(b), fill_value=np.nan) + vals = (6.0*np.polyval([1.0, 1.0, -6, -2], bt) / + np.polyval([1.0, -7.0, 12.0, 0.0], bt)) + np.place(g2, mask, vals) + return mu, mu2, g1, g2 + + def _entropy(self, c): + return 1 + 1.0/c - np.log(c) + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + parameters = _check_fit_input_parameters(self, data, args, kwds) + data, fshape, floc, fscale = parameters + + # ensure that any fixed parameters don't violate constraints of the + # distribution before continuing. + if floc is not None and np.min(data) - floc < (fscale or 0): + raise FitDataError("pareto", lower=1, upper=np.inf) + + ndata = data.shape[0] + + def get_shape(scale, location): + # The first-order necessary condition on `shape` can be solved in + # closed form + return ndata / np.sum(np.log((data - location) / scale)) + + if floc is fscale is None: + # The support of the distribution is `(x - loc)/scale > 0`. + # The method of Lagrange multipliers turns this constraint + # into an equation that can be solved numerically. + # See gh-12545 for details. + + def dL_dScale(shape, scale): + # The partial derivative of the log-likelihood function w.r.t. + # the scale. + return ndata * shape / scale + + def dL_dLocation(shape, location): + # The partial derivative of the log-likelihood function w.r.t. + # the location. + return (shape + 1) * np.sum(1 / (data - location)) + + def fun_to_solve(scale): + # optimize the scale by setting the partial derivatives + # w.r.t. to location and scale equal and solving. + location = np.min(data) - scale + shape = fshape or get_shape(scale, location) + return dL_dLocation(shape, location) - dL_dScale(shape, scale) + + def interval_contains_root(lbrack, rbrack): + # return true if the signs disagree. + return (np.sign(fun_to_solve(lbrack)) != + np.sign(fun_to_solve(rbrack))) + + # set brackets for `root_scalar` to use when optimizing over the + # scale such that a root is likely between them. Use user supplied + # guess or default 1. + brack_start = float(kwds.get('scale', 1)) + lbrack, rbrack = brack_start / 2, brack_start * 2 + # if a root is not between the brackets, iteratively expand them + # until they include a sign change, checking after each bracket is + # modified. 
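# Illustrative sketch (editor's check, assuming a recent NumPy/SciPy): with both
# `floc` and `fscale` fixed, pareto.fit reduces to get_shape's closed form
# b_hat = n / sum(log((x - loc)/scale)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)
data = stats.pareto.rvs(2.5, size=10_000, random_state=rng)
b_hat, _, _ = stats.pareto.fit(data, floc=0, fscale=1)
print(b_hat, len(data) / np.log(data).sum())   # same value, both close to 2.5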
+ while (not interval_contains_root(lbrack, rbrack) + and (lbrack > 0 or rbrack < np.inf)): + lbrack /= 2 + rbrack *= 2 + res = root_scalar(fun_to_solve, bracket=[lbrack, rbrack]) + if res.converged: + scale = res.root + loc = np.min(data) - scale + shape = fshape or get_shape(scale, loc) + + # The Pareto distribution requires that its parameters satisfy + # the condition `fscale + floc <= min(data)`. However, to + # avoid numerical issues, we require that `fscale + floc` + # is strictly less than `min(data)`. If this condition + # is not satisfied, reduce the scale with `np.nextafter` to + # ensure that data does not fall outside of the support. + if not (scale + loc) < np.min(data): + scale = np.min(data) - loc + scale = np.nextafter(scale, 0) + return shape, loc, scale + else: + return super().fit(data, **kwds) + elif floc is None: + loc = np.min(data) - fscale + else: + loc = floc + # Source: Evans, Hastings, and Peacock (2000), Statistical + # Distributions, 3rd. Ed., John Wiley and Sons. Page 149. + scale = fscale or np.min(data) - loc + shape = fshape or get_shape(scale, loc) + return shape, loc, scale + + +pareto = pareto_gen(a=1.0, name="pareto") + + +class lomax_gen(rv_continuous): + r"""A Lomax (Pareto of the second kind) continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `lomax` is: + + .. math:: + + f(x, c) = \frac{c}{(1+x)^{c+1}} + + for :math:`x \ge 0`, :math:`c > 0`. + + `lomax` takes ``c`` as a shape parameter for :math:`c`. + + `lomax` is a special case of `pareto` with ``loc=-1.0``. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # lomax.pdf(x, c) = c / (1+x)**(c+1) + return c*1.0/(1.0+x)**(c+1.0) + + def _logpdf(self, x, c): + return np.log(c) - (c+1)*sc.log1p(x) + + def _cdf(self, x, c): + return -sc.expm1(-c*sc.log1p(x)) + + def _sf(self, x, c): + return np.exp(-c*sc.log1p(x)) + + def _logsf(self, x, c): + return -c*sc.log1p(x) + + def _ppf(self, q, c): + return sc.expm1(-sc.log1p(-q)/c) + + def _isf(self, q, c): + return q**(-1.0 / c) - 1 + + def _stats(self, c): + mu, mu2, g1, g2 = pareto.stats(c, loc=-1.0, moments='mvsk') + return mu, mu2, g1, g2 + + def _entropy(self, c): + return 1+1.0/c-np.log(c) + + +lomax = lomax_gen(a=0.0, name="lomax") + + +class pearson3_gen(rv_continuous): + r"""A pearson type III continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `pearson3` is: + + .. math:: + + f(x, \kappa) = \frac{|\beta|}{\Gamma(\alpha)} + (\beta (x - \zeta))^{\alpha - 1} + \exp(-\beta (x - \zeta)) + + where: + + .. math:: + + \beta = \frac{2}{\kappa} + + \alpha = \beta^2 = \frac{4}{\kappa^2} + + \zeta = -\frac{\alpha}{\beta} = -\beta + + :math:`\Gamma` is the gamma function (`scipy.special.gamma`). + Pass the skew :math:`\kappa` into `pearson3` as the shape parameter + ``skew``. + + %(after_notes)s + + %(example)s + + References + ---------- + R.W. Vogel and D.E. McMartin, "Probability Plot Goodness-of-Fit and + Skewness Estimation Procedures for the Pearson Type 3 Distribution", Water + Resources Research, Vol.27, 3149-3158 (1991). + + L.R. Salvosa, "Tables of Pearson's Type III Function", Ann. Math. Statist., + Vol.1, 191-198 (1930). + + "Using Modern Computing Tools to Fit the Pearson Type III Distribution to + Aviation Loads Data", Office of Aviation Research (2003). 
+ + """ + def _preprocess(self, x, skew): + # The real 'loc' and 'scale' are handled in the calling pdf(...). The + # local variables 'loc' and 'scale' within pearson3._pdf are set to + # the defaults just to keep them as part of the equations for + # documentation. + loc = 0.0 + scale = 1.0 + + # If skew is small, return _norm_pdf. The divide between pearson3 + # and norm was found by brute force and is approximately a skew of + # 0.000016. No one, I hope, would actually use a skew value even + # close to this small. + norm2pearson_transition = 0.000016 + + ans, x, skew = np.broadcast_arrays(1.0, x, skew) + ans = ans.copy() + + # mask is True where skew is small enough to use the normal approx. + mask = np.absolute(skew) < norm2pearson_transition + invmask = ~mask + + beta = 2.0 / (skew[invmask] * scale) + alpha = (scale * beta)**2 + zeta = loc - alpha / beta + + transx = beta * (x[invmask] - zeta) + return ans, x, transx, mask, invmask, beta, alpha, zeta + + def _argcheck(self, skew): + # The _argcheck function in rv_continuous only allows positive + # arguments. The skew argument for pearson3 can be zero (which I want + # to handle inside pearson3._pdf) or negative. So just return True + # for all skew args. + return np.isfinite(skew) + + def _shape_info(self): + return [_ShapeInfo("skew", False, (-np.inf, np.inf), (False, False))] + + def _stats(self, skew): + m = 0.0 + v = 1.0 + s = skew + k = 1.5*skew**2 + return m, v, s, k + + def _pdf(self, x, skew): + # pearson3.pdf(x, skew) = abs(beta) / gamma(alpha) * + # (beta * (x - zeta))**(alpha - 1) * exp(-beta*(x - zeta)) + # Do the calculation in _logpdf since helps to limit + # overflow/underflow problems + ans = np.exp(self._logpdf(x, skew)) + if ans.ndim == 0: + if np.isnan(ans): + return 0.0 + return ans + ans[np.isnan(ans)] = 0.0 + return ans + + def _logpdf(self, x, skew): + # PEARSON3 logpdf GAMMA logpdf + # np.log(abs(beta)) + # + (alpha - 1)*np.log(beta*(x - zeta)) + (a - 1)*np.log(x) + # - beta*(x - zeta) - x + # - sc.gammalnalpha) - sc.gammalna) + ans, x, transx, mask, invmask, beta, alpha, _ = ( + self._preprocess(x, skew)) + + ans[mask] = np.log(_norm_pdf(x[mask])) + # use logpdf instead of _logpdf to fix issue mentioned in gh-12640 + # (_logpdf does not return correct result for alpha = 1) + ans[invmask] = np.log(abs(beta)) + gamma.logpdf(transx, alpha) + return ans + + def _cdf(self, x, skew): + ans, x, transx, mask, invmask, _, alpha, _ = ( + self._preprocess(x, skew)) + + ans[mask] = _norm_cdf(x[mask]) + + skew = np.broadcast_to(skew, invmask.shape) + invmask1a = np.logical_and(invmask, skew > 0) + invmask1b = skew[invmask] > 0 + # use cdf instead of _cdf to fix issue mentioned in gh-12640 + # (_cdf produces NaNs for inputs outside support) + ans[invmask1a] = gamma.cdf(transx[invmask1b], alpha[invmask1b]) + + # The gamma._cdf approach wasn't working with negative skew. + # Note that multiplying the skew by -1 reflects about x=0. + # So instead of evaluating the CDF with negative skew at x, + # evaluate the SF with positive skew at -x. 
+ invmask2a = np.logical_and(invmask, skew < 0) + invmask2b = skew[invmask] < 0 + # gamma._sf produces NaNs when transx < 0, so use gamma.sf + ans[invmask2a] = gamma.sf(transx[invmask2b], alpha[invmask2b]) + + return ans + + def _sf(self, x, skew): + ans, x, transx, mask, invmask, _, alpha, _ = ( + self._preprocess(x, skew)) + + ans[mask] = _norm_sf(x[mask]) + + skew = np.broadcast_to(skew, invmask.shape) + invmask1a = np.logical_and(invmask, skew > 0) + invmask1b = skew[invmask] > 0 + ans[invmask1a] = gamma.sf(transx[invmask1b], alpha[invmask1b]) + + invmask2a = np.logical_and(invmask, skew < 0) + invmask2b = skew[invmask] < 0 + ans[invmask2a] = gamma.cdf(transx[invmask2b], alpha[invmask2b]) + + return ans + + def _rvs(self, skew, size=None, random_state=None): + skew = np.broadcast_to(skew, size) + ans, _, _, mask, invmask, beta, alpha, zeta = ( + self._preprocess([0], skew)) + + nsmall = mask.sum() + nbig = mask.size - nsmall + ans[mask] = random_state.standard_normal(nsmall) + ans[invmask] = random_state.standard_gamma(alpha, nbig)/beta + zeta + + if size == (): + ans = ans[0] + return ans + + def _ppf(self, q, skew): + ans, q, _, mask, invmask, beta, alpha, zeta = ( + self._preprocess(q, skew)) + ans[mask] = _norm_ppf(q[mask]) + q = q[invmask] + q[beta < 0] = 1 - q[beta < 0] # for negative skew; see gh-17050 + ans[invmask] = sc.gammaincinv(alpha, q)/beta + zeta + return ans + + @_call_super_mom + @extend_notes_in_docstring(rv_continuous, notes="""\ + Note that method of moments (`method='MM'`) is not + available for this distribution.\n\n""") + def fit(self, data, *args, **kwds): + if kwds.get("method", None) == 'MM': + raise NotImplementedError("Fit `method='MM'` is not available for " + "the Pearson3 distribution. Please try " + "the default `method='MLE'`.") + else: + return super(type(self), self).fit(data, *args, **kwds) + + +pearson3 = pearson3_gen(name="pearson3") + + +class powerlaw_gen(rv_continuous): + r"""A power-function continuous random variable. + + %(before_notes)s + + See Also + -------- + pareto + + Notes + ----- + The probability density function for `powerlaw` is: + + .. math:: + + f(x, a) = a x^{a-1} + + for :math:`0 \le x \le 1`, :math:`a > 0`. + + `powerlaw` takes ``a`` as a shape parameter for :math:`a`. + + %(after_notes)s + + For example, the support of `powerlaw` can be adjusted from the default + interval ``[0, 1]`` to the interval ``[c, c+d]`` by setting ``loc=c`` and + ``scale=d``. For a power-law distribution with infinite support, see + `pareto`. + + `powerlaw` is a special case of `beta` with ``b=1``. + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("a", False, (0, np.inf), (False, False))] + + def _pdf(self, x, a): + # powerlaw.pdf(x, a) = a * x**(a-1) + return a*x**(a-1.0) + + def _logpdf(self, x, a): + return np.log(a) + sc.xlogy(a - 1, x) + + def _cdf(self, x, a): + return x**(a*1.0) + + def _logcdf(self, x, a): + return a*np.log(x) + + def _ppf(self, q, a): + return pow(q, 1.0/a) + + def _sf(self, p, a): + return -sc.powm1(p, a) + + def _munp(self, n, a): + # The following expression is correct for all real n (provided a > 0). 
+ return a / (a + n) + + def _stats(self, a): + return (a / (a + 1.0), + a / (a + 2.0) / (a + 1.0) ** 2, + -2.0 * ((a - 1.0) / (a + 3.0)) * np.sqrt((a + 2.0) / a), + 6 * np.polyval([1, -1, -6, 2], a) / (a * (a + 3.0) * (a + 4))) + + def _entropy(self, a): + return 1 - 1.0/a - np.log(a) + + def _support_mask(self, x, a): + return (super()._support_mask(x, a) + & ((x != 0) | (a >= 1))) + + @_call_super_mom + @extend_notes_in_docstring(rv_continuous, notes="""\ + Notes specifically for ``powerlaw.fit``: If the location is a free + parameter and the value returned for the shape parameter is less than + one, the true maximum likelihood approaches infinity. This causes + numerical difficulties, and the resulting estimates are approximate. + \n\n""") + def fit(self, data, *args, **kwds): + # Summary of the strategy: + # + # 1) If the scale and location are fixed, return the shape according + # to a formula. + # + # 2) If the scale is fixed, there are two possibilities for the other + # parameters - one corresponding with shape less than one, and + # another with shape greater than one. Calculate both, and return + # whichever has the better log-likelihood. + # + # At this point, the scale is known to be free. + # + # 3) If the location is fixed, return the scale and shape according to + # formulas (or, if the shape is fixed, the fixed shape). + # + # At this point, the location and scale are both free. There are + # separate equations depending on whether the shape is less than one or + # greater than one. + # + # 4a) If the shape is less than one, there are formulas for shape, + # location, and scale. + # 4b) If the shape is greater than one, there are formulas for shape + # and scale, but there is a condition for location to be solved + # numerically. + # + # If the shape is fixed and less than one, we use 4a. + # If the shape is fixed and greater than one, we use 4b. + # If the shape is also free, we calculate fits using both 4a and 4b + # and choose the one that results a better log-likelihood. + # + # In many cases, the use of `np.nextafter` is used to avoid numerical + # issues. + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + + if len(np.unique(data)) == 1: + return super().fit(data, *args, **kwds) + + data, fshape, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + penalized_nllf_args = [data, (self._fitstart(data),)] + penalized_nllf = self._reduce_func(penalized_nllf_args, {})[1] + + # ensure that any fixed parameters don't violate constraints of the + # distribution before continuing. The support of the distribution + # is `0 < (x - loc)/scale < 1`. + if floc is not None: + if not data.min() > floc: + raise FitDataError('powerlaw', 0, 1) + if fscale is not None and not data.max() <= floc + fscale: + raise FitDataError('powerlaw', 0, 1) + + if fscale is not None: + if fscale <= 0: + raise ValueError("Negative or zero `fscale` is outside the " + "range allowed by the distribution.") + if fscale <= np.ptp(data): + msg = "`fscale` must be greater than the range of data." + raise ValueError(msg) + + def get_shape(data, loc, scale): + # The first-order necessary condition on `shape` can be solved in + # closed form. It can be used no matter the assumption of the + # value of the shape. + N = len(data) + return - N / (np.sum(np.log(data - loc)) - N*np.log(scale)) + + def get_scale(data, loc): + # analytical solution for `scale` based on the location. + # It can be used no matter the assumption of the value of the + # shape. 
+ return data.max() - loc + + # 1) The location and scale are both fixed. Analytically determine the + # shape. + if fscale is not None and floc is not None: + return get_shape(data, floc, fscale), floc, fscale + + # 2) The scale is fixed. There are two possibilities for the other + # parameters. Choose the option with better log-likelihood. + if fscale is not None: + # using `data.min()` as the optimal location + loc_lt1 = np.nextafter(data.min(), -np.inf) + shape_lt1 = fshape or get_shape(data, loc_lt1, fscale) + ll_lt1 = penalized_nllf((shape_lt1, loc_lt1, fscale), data) + + # using `data.max() - scale` as the optimal location + loc_gt1 = np.nextafter(data.max() - fscale, np.inf) + shape_gt1 = fshape or get_shape(data, loc_gt1, fscale) + ll_gt1 = penalized_nllf((shape_gt1, loc_gt1, fscale), data) + + if ll_lt1 < ll_gt1: + return shape_lt1, loc_lt1, fscale + else: + return shape_gt1, loc_gt1, fscale + + # 3) The location is fixed. Return the analytical scale and the + # analytical (or fixed) shape. + if floc is not None: + scale = get_scale(data, floc) + shape = fshape or get_shape(data, floc, scale) + return shape, floc, scale + + # 4) Location and scale are both free + # 4a) Use formulas that assume `shape <= 1`. + + def fit_loc_scale_w_shape_lt_1(): + loc = np.nextafter(data.min(), -np.inf) + if np.abs(loc) < np.finfo(loc.dtype).tiny: + loc = np.sign(loc) * np.finfo(loc.dtype).tiny + scale = np.nextafter(get_scale(data, loc), np.inf) + shape = fshape or get_shape(data, loc, scale) + return shape, loc, scale + + # 4b) Fit under the assumption that `shape > 1`. The support + # of the distribution is `(x - loc)/scale <= 1`. The method of Lagrange + # multipliers turns this constraint into the condition that + # dL_dScale - dL_dLocation must be zero, which is solved numerically. + # (Alternatively, substitute the constraint into the objective + # function before deriving the likelihood equation for location.) + + def dL_dScale(data, shape, scale): + # The partial derivative of the log-likelihood function w.r.t. + # the scale. + return -data.shape[0] * shape / scale + + def dL_dLocation(data, shape, loc): + # The partial derivative of the log-likelihood function w.r.t. + # the location. + return (shape - 1) * np.sum(1 / (loc - data)) # -1/(data-loc) + + def dL_dLocation_star(loc): + # The derivative of the log-likelihood function w.r.t. + # the location, given optimal shape and scale + scale = np.nextafter(get_scale(data, loc), -np.inf) + shape = fshape or get_shape(data, loc, scale) + return dL_dLocation(data, shape, loc) + + def fun_to_solve(loc): + # optimize the location by setting the partial derivatives + # w.r.t. to location and scale equal and solving. + scale = np.nextafter(get_scale(data, loc), -np.inf) + shape = fshape or get_shape(data, loc, scale) + return (dL_dScale(data, shape, scale) + - dL_dLocation(data, shape, loc)) + + def fit_loc_scale_w_shape_gt_1(): + # set brackets for `root_scalar` to use when optimizing over the + # location such that a root is likely between them. + rbrack = np.nextafter(data.min(), -np.inf) + + # if the sign of `dL_dLocation_star` is positive at rbrack, + # we're not going to find the root we're looking for + delta = (data.min() - rbrack) + while dL_dLocation_star(rbrack) > 0: + rbrack = data.min() - delta + delta *= 2 + + def interval_contains_root(lbrack, rbrack): + # Check if the interval (lbrack, rbrack) contains the root. 
+ return (np.sign(fun_to_solve(lbrack)) + != np.sign(fun_to_solve(rbrack))) + + lbrack = rbrack - 1 + + # if the sign doesn't change between the brackets, move the left + # bracket until it does. (The right bracket remains fixed at the + # maximum permissible value.) + i = 1.0 + while (not interval_contains_root(lbrack, rbrack) + and lbrack != -np.inf): + lbrack = (data.min() - i) + i *= 2 + + root = optimize.root_scalar(fun_to_solve, bracket=(lbrack, rbrack)) + + loc = np.nextafter(root.root, -np.inf) + scale = np.nextafter(get_scale(data, loc), np.inf) + shape = fshape or get_shape(data, loc, scale) + return shape, loc, scale + + # Shape is fixed - choose 4a or 4b accordingly. + if fshape is not None and fshape <= 1: + return fit_loc_scale_w_shape_lt_1() + elif fshape is not None and fshape > 1: + return fit_loc_scale_w_shape_gt_1() + + # Shape is free + fit_shape_lt1 = fit_loc_scale_w_shape_lt_1() + ll_lt1 = self.nnlf(fit_shape_lt1, data) + + fit_shape_gt1 = fit_loc_scale_w_shape_gt_1() + ll_gt1 = self.nnlf(fit_shape_gt1, data) + + if ll_lt1 <= ll_gt1 and fit_shape_lt1[0] <= 1: + return fit_shape_lt1 + elif ll_lt1 > ll_gt1 and fit_shape_gt1[0] > 1: + return fit_shape_gt1 + else: + return super().fit(data, *args, **kwds) + + +powerlaw = powerlaw_gen(a=0.0, b=1.0, name="powerlaw") + + +class powerlognorm_gen(rv_continuous): + r"""A power log-normal continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `powerlognorm` is: + + .. math:: + + f(x, c, s) = \frac{c}{x s} \phi(\log(x)/s) + (\Phi(-\log(x)/s))^{c-1} + + where :math:`\phi` is the normal pdf, and :math:`\Phi` is the normal cdf, + and :math:`x > 0`, :math:`s, c > 0`. + + `powerlognorm` takes :math:`c` and :math:`s` as shape parameters. + + %(after_notes)s + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + ic = _ShapeInfo("c", False, (0, np.inf), (False, False)) + i_s = _ShapeInfo("s", False, (0, np.inf), (False, False)) + return [ic, i_s] + + def _pdf(self, x, c, s): + return np.exp(self._logpdf(x, c, s)) + + def _logpdf(self, x, c, s): + return (np.log(c) - np.log(x) - np.log(s) + + _norm_logpdf(np.log(x) / s) + + _norm_logcdf(-np.log(x) / s) * (c - 1.)) + + def _cdf(self, x, c, s): + return -sc.expm1(self._logsf(x, c, s)) + + def _ppf(self, q, c, s): + return self._isf(1 - q, c, s) + + def _sf(self, x, c, s): + return np.exp(self._logsf(x, c, s)) + + def _logsf(self, x, c, s): + return _norm_logcdf(-np.log(x) / s) * c + + def _isf(self, q, c, s): + return np.exp(-_norm_ppf(q**(1/c)) * s) + + +powerlognorm = powerlognorm_gen(a=0.0, name="powerlognorm") + + +class powernorm_gen(rv_continuous): + r"""A power normal continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `powernorm` is: + + .. math:: + + f(x, c) = c \phi(x) (\Phi(-x))^{c-1} + + where :math:`\phi` is the normal pdf, :math:`\Phi` is the normal cdf, + :math:`x` is any real, and :math:`c > 0` [1]_. + + `powernorm` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + References + ---------- + .. 
[1] NIST Engineering Statistics Handbook, Section 1.3.6.6.13, + https://www.itl.nist.gov/div898/handbook//eda/section3/eda366d.htm + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + def _pdf(self, x, c): + # powernorm.pdf(x, c) = c * phi(x) * (Phi(-x))**(c-1) + return c*_norm_pdf(x) * (_norm_cdf(-x)**(c-1.0)) + + def _logpdf(self, x, c): + return np.log(c) + _norm_logpdf(x) + (c-1)*_norm_logcdf(-x) + + def _cdf(self, x, c): + return -sc.expm1(self._logsf(x, c)) + + def _ppf(self, q, c): + return -_norm_ppf(pow(1.0 - q, 1.0 / c)) + + def _sf(self, x, c): + return np.exp(self._logsf(x, c)) + + def _logsf(self, x, c): + return c * _norm_logcdf(-x) + + def _isf(self, q, c): + return -_norm_ppf(np.exp(np.log(q) / c)) + + +powernorm = powernorm_gen(name='powernorm') + + +class rdist_gen(rv_continuous): + r"""An R-distributed (symmetric beta) continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `rdist` is: + + .. math:: + + f(x, c) = \frac{(1-x^2)^{c/2-1}}{B(1/2, c/2)} + + for :math:`-1 \le x \le 1`, :math:`c > 0`. `rdist` is also called the + symmetric beta distribution: if B has a `beta` distribution with + parameters (c/2, c/2), then X = 2*B - 1 follows a R-distribution with + parameter c. + + `rdist` takes ``c`` as a shape parameter for :math:`c`. + + This distribution includes the following distribution kernels as + special cases:: + + c = 2: uniform + c = 3: `semicircular` + c = 4: Epanechnikov (parabolic) + c = 6: quartic (biweight) + c = 8: triweight + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("c", False, (0, np.inf), (False, False))] + + # use relation to the beta distribution for pdf, cdf, etc + def _pdf(self, x, c): + return np.exp(self._logpdf(x, c)) + + def _logpdf(self, x, c): + return -np.log(2) + beta._logpdf((x + 1)/2, c/2, c/2) + + def _cdf(self, x, c): + return beta._cdf((x + 1)/2, c/2, c/2) + + def _sf(self, x, c): + return beta._sf((x + 1)/2, c/2, c/2) + + def _ppf(self, q, c): + return 2*beta._ppf(q, c/2, c/2) - 1 + + def _rvs(self, c, size=None, random_state=None): + return 2 * random_state.beta(c/2, c/2, size) - 1 + + def _munp(self, n, c): + numerator = (1 - (n % 2)) * sc.beta((n + 1.0) / 2, c / 2.0) + return numerator / sc.beta(1. / 2, c / 2.) + + +rdist = rdist_gen(a=-1.0, b=1.0, name="rdist") + + +class rayleigh_gen(rv_continuous): + r"""A Rayleigh continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `rayleigh` is: + + .. math:: + + f(x) = x \exp(-x^2/2) + + for :math:`x \ge 0`. + + `rayleigh` is a special case of `chi` with ``df=2``. 
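# Illustrative sketch (editor's check, assuming a recent NumPy/SciPy): spot
# check of the statement above that rayleigh coincides with chi for df=2.
import numpy as np
from scipy import stats

x = np.linspace(0.0, 5.0, 11)
print(np.allclose(stats.rayleigh.cdf(x), stats.chi.cdf(x, 2)))  # True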
+ + %(after_notes)s + + %(example)s + + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return chi.rvs(2, size=size, random_state=random_state) + + def _pdf(self, r): + # rayleigh.pdf(r) = r * exp(-r**2/2) + return np.exp(self._logpdf(r)) + + def _logpdf(self, r): + return np.log(r) - 0.5 * r * r + + def _cdf(self, r): + return -sc.expm1(-0.5 * r**2) + + def _ppf(self, q): + return np.sqrt(-2 * sc.log1p(-q)) + + def _sf(self, r): + return np.exp(self._logsf(r)) + + def _logsf(self, r): + return -0.5 * r * r + + def _isf(self, q): + return np.sqrt(-2 * np.log(q)) + + def _stats(self): + val = 4 - np.pi + return (np.sqrt(np.pi/2), + val/2, + 2*(np.pi-3)*np.sqrt(np.pi)/val**1.5, + 6*np.pi/val-16/val**2) + + def _entropy(self): + return _EULER/2.0 + 1 - 0.5*np.log(2) + + @_call_super_mom + @extend_notes_in_docstring(rv_continuous, notes="""\ + Notes specifically for ``rayleigh.fit``: If the location is fixed with + the `floc` parameter, this method uses an analytical formula to find + the scale. Otherwise, this function uses a numerical root finder on + the first order conditions of the log-likelihood function to find the + MLE. Only the (optional) `loc` parameter is used as the initial guess + for the root finder; the `scale` parameter and any other parameters + for the optimizer are ignored.\n\n""") + def fit(self, data, *args, **kwds): + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + data, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + + def scale_mle(loc): + # Source: Statistical Distributions, 3rd Edition. Evans, Hastings, + # and Peacock (2000), Page 175 + return (np.sum((data - loc) ** 2) / (2 * len(data))) ** .5 + + def loc_mle(loc): + # This implicit equation for `loc` is used when + # both `loc` and `scale` are free. + xm = data - loc + s1 = xm.sum() + s2 = (xm**2).sum() + s3 = (1/xm).sum() + return s1 - s2/(2*len(data))*s3 + + def loc_mle_scale_fixed(loc, scale=fscale): + # This implicit equation for `loc` is used when + # `scale` is fixed but `loc` is not. + xm = data - loc + return xm.sum() - scale**2 * (1/xm).sum() + + if floc is not None: + # `loc` is fixed, analytically determine `scale`. + if np.any(data - floc <= 0): + raise FitDataError("rayleigh", lower=1, upper=np.inf) + else: + return floc, scale_mle(floc) + + # Account for user provided guess of `loc`. + loc0 = kwds.get('loc') + if loc0 is None: + # Use _fitstart to estimate loc; ignore the returned scale. + loc0 = self._fitstart(data)[0] + + fun = loc_mle if fscale is None else loc_mle_scale_fixed + rbrack = np.nextafter(np.min(data), -np.inf) + lbrack = _get_left_bracket(fun, rbrack) + res = optimize.root_scalar(fun, bracket=(lbrack, rbrack)) + if not res.converged: + raise FitSolverError(res.flag) + loc = res.root + scale = fscale or scale_mle(loc) + return loc, scale + + +rayleigh = rayleigh_gen(a=0.0, name="rayleigh") + + +class reciprocal_gen(rv_continuous): + r"""A loguniform or reciprocal continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for this class is: + + .. math:: + + f(x, a, b) = \frac{1}{x \log(b/a)} + + for :math:`a \le x \le b`, :math:`b > a > 0`. This class takes + :math:`a` and :math:`b` as shape parameters. + + %(after_notes)s + + %(example)s + + This doesn't show the equal probability of ``0.01``, ``0.1`` and + ``1``. 
This is best when the x-axis is log-scaled: + + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> fig, ax = plt.subplots(1, 1) + >>> ax.hist(np.log10(r)) + >>> ax.set_ylabel("Frequency") + >>> ax.set_xlabel("Value of random variable") + >>> ax.xaxis.set_major_locator(plt.FixedLocator([-2, -1, 0])) + >>> ticks = ["$10^{{ {} }}$".format(i) for i in [-2, -1, 0]] + >>> ax.set_xticklabels(ticks) # doctest: +SKIP + >>> plt.show() + + This random variable will be log-uniform regardless of the base chosen for + ``a`` and ``b``. Let's specify with base ``2`` instead: + + >>> rvs = %(name)s(2**-2, 2**0).rvs(size=1000) + + Values of ``1/4``, ``1/2`` and ``1`` are equally likely with this random + variable. Here's the histogram: + + >>> fig, ax = plt.subplots(1, 1) + >>> ax.hist(np.log2(rvs)) + >>> ax.set_ylabel("Frequency") + >>> ax.set_xlabel("Value of random variable") + >>> ax.xaxis.set_major_locator(plt.FixedLocator([-2, -1, 0])) + >>> ticks = ["$2^{{ {} }}$".format(i) for i in [-2, -1, 0]] + >>> ax.set_xticklabels(ticks) # doctest: +SKIP + >>> plt.show() + + """ + def _argcheck(self, a, b): + return (a > 0) & (b > a) + + def _shape_info(self): + ia = _ShapeInfo("a", False, (0, np.inf), (False, False)) + ib = _ShapeInfo("b", False, (0, np.inf), (False, False)) + return [ia, ib] + + def _fitstart(self, data): + if isinstance(data, CensoredData): + data = data._uncensor() + # Reasonable, since support is [a, b] + return super()._fitstart(data, args=(np.min(data), np.max(data))) + + def _get_support(self, a, b): + return a, b + + def _pdf(self, x, a, b): + # reciprocal.pdf(x, a, b) = 1 / (x*(log(b) - log(a))) + return np.exp(self._logpdf(x, a, b)) + + def _logpdf(self, x, a, b): + return -np.log(x) - np.log(np.log(b) - np.log(a)) + + def _cdf(self, x, a, b): + return (np.log(x)-np.log(a)) / (np.log(b) - np.log(a)) + + def _ppf(self, q, a, b): + return np.exp(np.log(a) + q*(np.log(b) - np.log(a))) + + def _munp(self, n, a, b): + t1 = 1 / (np.log(b) - np.log(a)) / n + t2 = np.real(np.exp(_log_diff(n * np.log(b), n*np.log(a)))) + return t1 * t2 + + def _entropy(self, a, b): + return 0.5*(np.log(a) + np.log(b)) + np.log(np.log(b) - np.log(a)) + + fit_note = """\ + `loguniform`/`reciprocal` is over-parameterized. `fit` automatically + fixes `scale` to 1 unless `fscale` is provided by the user.\n\n""" + + @extend_notes_in_docstring(rv_continuous, notes=fit_note) + def fit(self, data, *args, **kwds): + fscale = kwds.pop('fscale', 1) + return super().fit(data, *args, fscale=fscale, **kwds) + + # Details related to the decision of not defining + # the survival function for this distribution can be + # found in the PR: https://github.com/scipy/scipy/pull/18614 + + +loguniform = reciprocal_gen(name="loguniform") +reciprocal = reciprocal_gen(name="reciprocal") + + +class rice_gen(rv_continuous): + r"""A Rice continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `rice` is: + + .. math:: + + f(x, b) = x \exp(- \frac{x^2 + b^2}{2}) I_0(x b) + + for :math:`x >= 0`, :math:`b > 0`. :math:`I_0` is the modified Bessel + function of order zero (`scipy.special.i0`). + + `rice` takes ``b`` as a shape parameter for :math:`b`. + + %(after_notes)s + + The Rice distribution describes the length, :math:`r`, of a 2-D vector with + components :math:`(U+u, V+v)`, where :math:`U, V` are constant, :math:`u, + v` are independent Gaussian random variables with standard deviation + :math:`s`. Let :math:`R = \sqrt{U^2 + V^2}`. 
Then the pdf of :math:`r` is + ``rice.pdf(x, R/s, scale=s)``. + + %(example)s + + """ + def _argcheck(self, b): + return b >= 0 + + def _shape_info(self): + return [_ShapeInfo("b", False, (0, np.inf), (True, False))] + + def _rvs(self, b, size=None, random_state=None): + # https://en.wikipedia.org/wiki/Rice_distribution + t = b/np.sqrt(2) + random_state.standard_normal(size=(2,) + size) + return np.sqrt((t*t).sum(axis=0)) + + def _cdf(self, x, b): + return sc.chndtr(np.square(x), 2, np.square(b)) + + def _ppf(self, q, b): + return np.sqrt(sc.chndtrix(q, 2, np.square(b))) + + def _pdf(self, x, b): + # rice.pdf(x, b) = x * exp(-(x**2+b**2)/2) * I[0](x*b) + # + # We use (x**2 + b**2)/2 = ((x-b)**2)/2 + xb. + # The factor of np.exp(-xb) is then included in the i0e function + # in place of the modified Bessel function, i0, improving + # numerical stability for large values of xb. + return x * np.exp(-(x-b)*(x-b)/2.0) * sc.i0e(x*b) + + def _munp(self, n, b): + nd2 = n/2.0 + n1 = 1 + nd2 + b2 = b*b/2.0 + return (2.0**(nd2) * np.exp(-b2) * sc.gamma(n1) * + sc.hyp1f1(n1, 1, b2)) + + +rice = rice_gen(a=0.0, name="rice") + + +class recipinvgauss_gen(rv_continuous): + r"""A reciprocal inverse Gaussian continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `recipinvgauss` is: + + .. math:: + + f(x, \mu) = \frac{1}{\sqrt{2\pi x}} + \exp\left(\frac{-(1-\mu x)^2}{2\mu^2x}\right) + + for :math:`x \ge 0`. + + `recipinvgauss` takes ``mu`` as a shape parameter for :math:`\mu`. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("mu", False, (0, np.inf), (False, False))] + + def _pdf(self, x, mu): + # recipinvgauss.pdf(x, mu) = + # 1/sqrt(2*pi*x) * exp(-(1-mu*x)**2/(2*x*mu**2)) + return np.exp(self._logpdf(x, mu)) + + def _logpdf(self, x, mu): + return _lazywhere(x > 0, (x, mu), + lambda x, mu: (-(1 - mu*x)**2.0 / (2*x*mu**2.0) + - 0.5*np.log(2*np.pi*x)), + fillvalue=-np.inf) + + def _cdf(self, x, mu): + trm1 = 1.0/mu - x + trm2 = 1.0/mu + x + isqx = 1.0/np.sqrt(x) + return _norm_cdf(-isqx*trm1) - np.exp(2.0/mu)*_norm_cdf(-isqx*trm2) + + def _sf(self, x, mu): + trm1 = 1.0/mu - x + trm2 = 1.0/mu + x + isqx = 1.0/np.sqrt(x) + return _norm_cdf(isqx*trm1) + np.exp(2.0/mu)*_norm_cdf(-isqx*trm2) + + def _rvs(self, mu, size=None, random_state=None): + return 1.0/random_state.wald(mu, 1.0, size=size) + + +recipinvgauss = recipinvgauss_gen(a=0.0, name='recipinvgauss') + + +class semicircular_gen(rv_continuous): + r"""A semicircular continuous random variable. + + %(before_notes)s + + See Also + -------- + rdist + + Notes + ----- + The probability density function for `semicircular` is: + + .. math:: + + f(x) = \frac{2}{\pi} \sqrt{1-x^2} + + for :math:`-1 \le x \le 1`. + + The distribution is a special case of `rdist` with `c = 3`. + + %(after_notes)s + + References + ---------- + .. 
[1] "Wigner semicircle distribution", + https://en.wikipedia.org/wiki/Wigner_semicircle_distribution + + %(example)s + + """ + def _shape_info(self): + return [] + + def _pdf(self, x): + return 2.0/np.pi*np.sqrt(1-x*x) + + def _logpdf(self, x): + return np.log(2/np.pi) + 0.5*sc.log1p(-x*x) + + def _cdf(self, x): + return 0.5+1.0/np.pi*(x*np.sqrt(1-x*x) + np.arcsin(x)) + + def _ppf(self, q): + return rdist._ppf(q, 3) + + def _rvs(self, size=None, random_state=None): + # generate values uniformly distributed on the area under the pdf + # (semi-circle) by randomly generating the radius and angle + r = np.sqrt(random_state.uniform(size=size)) + a = np.cos(np.pi * random_state.uniform(size=size)) + return r * a + + def _stats(self): + return 0, 0.25, 0, -1.0 + + def _entropy(self): + return 0.64472988584940017414 + + +semicircular = semicircular_gen(a=-1.0, b=1.0, name="semicircular") + + +class skewcauchy_gen(rv_continuous): + r"""A skewed Cauchy random variable. + + %(before_notes)s + + See Also + -------- + cauchy : Cauchy distribution + + Notes + ----- + + The probability density function for `skewcauchy` is: + + .. math:: + + f(x) = \frac{1}{\pi \left(\frac{x^2}{\left(a\, \text{sign}(x) + 1 + \right)^2} + 1 \right)} + + for a real number :math:`x` and skewness parameter :math:`-1 < a < 1`. + + When :math:`a=0`, the distribution reduces to the usual Cauchy + distribution. + + %(after_notes)s + + References + ---------- + .. [1] "Skewed generalized *t* distribution", Wikipedia + https://en.wikipedia.org/wiki/Skewed_generalized_t_distribution#Skewed_Cauchy_distribution + + %(example)s + + """ + def _argcheck(self, a): + return np.abs(a) < 1 + + def _shape_info(self): + return [_ShapeInfo("a", False, (-1.0, 1.0), (False, False))] + + def _pdf(self, x, a): + return 1 / (np.pi * (x**2 / (a * np.sign(x) + 1)**2 + 1)) + + def _cdf(self, x, a): + return np.where(x <= 0, + (1 - a) / 2 + (1 - a) / np.pi * np.arctan(x / (1 - a)), + (1 - a) / 2 + (1 + a) / np.pi * np.arctan(x / (1 + a))) + + def _ppf(self, x, a): + i = x < self._cdf(0, a) + return np.where(i, + np.tan(np.pi / (1 - a) * (x - (1 - a) / 2)) * (1 - a), + np.tan(np.pi / (1 + a) * (x - (1 - a) / 2)) * (1 + a)) + + def _stats(self, a, moments='mvsk'): + return np.nan, np.nan, np.nan, np.nan + + def _fitstart(self, data): + # Use 0 as the initial guess of the skewness shape parameter. + # For the location and scale, estimate using the median and + # quartiles. + if isinstance(data, CensoredData): + data = data._uncensor() + p25, p50, p75 = np.percentile(data, [25, 50, 75]) + return 0.0, p50, (p75 - p25)/2 + + +skewcauchy = skewcauchy_gen(name='skewcauchy') + + +class skewnorm_gen(rv_continuous): + r"""A skew-normal random variable. + + %(before_notes)s + + Notes + ----- + The pdf is:: + + skewnorm.pdf(x, a) = 2 * norm.pdf(x) * norm.cdf(a*x) + + `skewnorm` takes a real number :math:`a` as a skewness parameter + When ``a = 0`` the distribution is identical to a normal distribution + (`norm`). `rvs` implements the method of [1]_. + + %(after_notes)s + + %(example)s + + References + ---------- + .. [1] A. Azzalini and A. Capitanio (1999). Statistical applications of + the multivariate skew-normal distribution. J. Roy. Statist. Soc., + B 61, 579-602. 
:arxiv:`0911.2093` + + """ + def _argcheck(self, a): + return np.isfinite(a) + + def _shape_info(self): + return [_ShapeInfo("a", False, (-np.inf, np.inf), (False, False))] + + def _pdf(self, x, a): + return _lazywhere( + a == 0, (x, a), lambda x, a: _norm_pdf(x), + f2=lambda x, a: 2.*_norm_pdf(x)*_norm_cdf(a*x) + ) + + def _logpdf(self, x, a): + return _lazywhere( + a == 0, (x, a), lambda x, a: _norm_logpdf(x), + f2=lambda x, a: np.log(2)+_norm_logpdf(x)+_norm_logcdf(a*x), + ) + + def _cdf(self, x, a): + a = np.atleast_1d(a) + cdf = _boost._skewnorm_cdf(x, 0, 1, a) + # for some reason, a isn't broadcasted if some of x are invalid + a = np.broadcast_to(a, cdf.shape) + # Boost is not accurate in left tail when a > 0 + i_small_cdf = (cdf < 1e-6) & (a > 0) + cdf[i_small_cdf] = super()._cdf(x[i_small_cdf], a[i_small_cdf]) + return np.clip(cdf, 0, 1) + + def _ppf(self, x, a): + return _boost._skewnorm_ppf(x, 0, 1, a) + + def _sf(self, x, a): + # Boost's SF is implemented this way. Use whatever customizations + # we made in the _cdf. + return self._cdf(-x, -a) + + def _isf(self, x, a): + return _boost._skewnorm_isf(x, 0, 1, a) + + def _rvs(self, a, size=None, random_state=None): + u0 = random_state.normal(size=size) + v = random_state.normal(size=size) + d = a/np.sqrt(1 + a**2) + u1 = d*u0 + v*np.sqrt(1 - d**2) + return np.where(u0 >= 0, u1, -u1) + + def _stats(self, a, moments='mvsk'): + output = [None, None, None, None] + const = np.sqrt(2/np.pi) * a/np.sqrt(1 + a**2) + + if 'm' in moments: + output[0] = const + if 'v' in moments: + output[1] = 1 - const**2 + if 's' in moments: + output[2] = ((4 - np.pi)/2) * (const/np.sqrt(1 - const**2))**3 + if 'k' in moments: + output[3] = (2*(np.pi - 3)) * (const**4/(1 - const**2)**2) + + return output + + # For odd order, the each noncentral moment of the skew-normal distribution + # with location 0 and scale 1 can be expressed as a polynomial in delta, + # where delta = a/sqrt(1 + a**2) and `a` is the skew-normal shape + # parameter. The dictionary _skewnorm_odd_moments defines those + # polynomials for orders up to 19. The dict is implemented as a cached + # property to reduce the impact of the creation of the dict on import time. + @cached_property + def _skewnorm_odd_moments(self): + skewnorm_odd_moments = { + 1: Polynomial([1]), + 3: Polynomial([3, -1]), + 5: Polynomial([15, -10, 3]), + 7: Polynomial([105, -105, 63, -15]), + 9: Polynomial([945, -1260, 1134, -540, 105]), + 11: Polynomial([10395, -17325, 20790, -14850, 5775, -945]), + 13: Polynomial([135135, -270270, 405405, -386100, 225225, -73710, + 10395]), + 15: Polynomial([2027025, -4729725, 8513505, -10135125, 7882875, + -3869775, 1091475, -135135]), + 17: Polynomial([34459425, -91891800, 192972780, -275675400, + 268017750, -175429800, 74220300, -18378360, + 2027025]), + 19: Polynomial([654729075, -1964187225, 4714049340, -7856748900, + 9166207050, -7499623950, 4230557100, -1571349780, + 346621275, -34459425]), + } + return skewnorm_odd_moments + + def _munp(self, order, a): + if order & 1: + if order > 19: + raise NotImplementedError("skewnorm noncentral moments not " + "implemented for odd orders greater " + "than 19.") + # Use the precomputed polynomials that were derived from the + # moment generating function. + delta = a/np.sqrt(1 + a**2) + return (delta * self._skewnorm_odd_moments[order](delta**2) + * _SQRT_2_OVER_PI) + else: + # For even order, the moment is just (order-1)!!, where !! is the + # notation for the double factorial; for an odd integer m, m!! is + # m*(m-2)*...*3*1. 
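# Illustrative sketch (editor's check, assuming a recent SciPy): for even order
# the standard skew-normal has the same moments as the standard normal (X**2 is
# chi-square with 1 dof for every shape a), i.e. the double factorial
# (order - 1)!! described in the comments here.
from scipy import stats

for a in (0.0, 1.5, -4.0):
    print(stats.skewnorm.moment(4, a))   # ≈ 3.0 = 3!! for every a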
+ # We could use special.factorial2, but we know the argument is odd, + # so avoid the overhead of that function and compute the result + # directly here. + return sc.gamma((order + 1)/2) * 2**(order/2) / _SQRT_PI + + @extend_notes_in_docstring(rv_continuous, notes="""\ + If ``method='mm'``, parameters fixed by the user are respected, and the + remaining parameters are used to match distribution and sample moments + where possible. For example, if the user fixes the location with + ``floc``, the parameters will only match the distribution skewness and + variance to the sample skewness and variance; no attempt will be made + to match the means or minimize a norm of the errors. + Note that the maximum possible skewness magnitude of a + `scipy.stats.skewnorm` distribution is approximately 0.9952717; if the + magnitude of the data's sample skewness exceeds this, the returned + shape parameter ``a`` will be infinite. + \n\n""") + def fit(self, data, *args, **kwds): + if kwds.pop("superfit", False): + return super().fit(data, *args, **kwds) + if isinstance(data, CensoredData): + if data.num_censored() == 0: + data = data._uncensor() + else: + return super().fit(data, *args, **kwds) + + # this extracts fixed shape, location, and scale however they + # are specified, and also leaves them in `kwds` + data, fa, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + method = kwds.get("method", "mle").lower() + + # See https://en.wikipedia.org/wiki/Skew_normal_distribution for + # moment formulas. + def skew_d(d): # skewness in terms of delta + return (4-np.pi)/2 * ((d * np.sqrt(2 / np.pi))**3 + / (1 - 2*d**2 / np.pi)**(3/2)) + def d_skew(skew): # delta in terms of skewness + s_23 = np.abs(skew)**(2/3) + return np.sign(skew) * np.sqrt( + np.pi/2 * s_23 / (s_23 + ((4 - np.pi)/2)**(2/3)) + ) + + # If method is method of moments, we don't need the user's guesses. + # Otherwise, extract the guesses from args and kwds. + if method == "mm": + a, loc, scale = None, None, None + else: + a = args[0] if len(args) else None + loc = kwds.pop('loc', None) + scale = kwds.pop('scale', None) + + if fa is None and a is None: # not fixed and no guess: use MoM + # Solve for a that matches sample distribution skewness to sample + # skewness. + s = stats.skew(data) + if method == 'mle': + # For MLE initial conditions, clip skewness to a large but + # reasonable value in case the data skewness is out-of-range. + s = np.clip(s, -0.99, 0.99) + else: + s_max = skew_d(1) + s = np.clip(s, -s_max, s_max) + d = d_skew(s) + with np.errstate(divide='ignore'): + a = np.sqrt(np.divide(d**2, (1-d**2)))*np.sign(s) + else: + a = fa if fa is not None else a + d = a / np.sqrt(1 + a**2) + + if fscale is None and scale is None: + v = np.var(data) + scale = np.sqrt(v / (1 - 2*d**2/np.pi)) + elif fscale is not None: + scale = fscale + + if floc is None and loc is None: + m = np.mean(data) + loc = m - scale*d*np.sqrt(2/np.pi) + elif floc is not None: + loc = floc + + if method == 'mm': + return a, loc, scale + else: + # At this point, parameter "guesses" may equal the fixed parameters + # in kwds. No harm in passing them as guesses, too. + return super().fit(data, a, loc=loc, scale=scale, **kwds) + + +skewnorm = skewnorm_gen(name='skewnorm') + + +class trapezoid_gen(rv_continuous): + r"""A trapezoidal continuous random variable. 
+ + %(before_notes)s + + Notes + ----- + The trapezoidal distribution can be represented with an up-sloping line + from ``loc`` to ``(loc + c*scale)``, then constant to ``(loc + d*scale)`` + and then downsloping from ``(loc + d*scale)`` to ``(loc+scale)``. This + defines the trapezoid base from ``loc`` to ``(loc+scale)`` and the flat + top from ``c`` to ``d`` proportional to the position along the base + with ``0 <= c <= d <= 1``. When ``c=d``, this is equivalent to `triang` + with the same values for `loc`, `scale` and `c`. + The method of [1]_ is used for computing moments. + + `trapezoid` takes :math:`c` and :math:`d` as shape parameters. + + %(after_notes)s + + The standard form is in the range [0, 1] with c the mode. + The location parameter shifts the start to `loc`. + The scale parameter changes the width from 1 to `scale`. + + %(example)s + + References + ---------- + .. [1] Kacker, R.N. and Lawrence, J.F. (2007). Trapezoidal and triangular + distributions for Type B evaluation of standard uncertainty. + Metrologia 44, 117-127. :doi:`10.1088/0026-1394/44/2/003` + + + """ + def _argcheck(self, c, d): + return (c >= 0) & (c <= 1) & (d >= 0) & (d <= 1) & (d >= c) + + def _shape_info(self): + ic = _ShapeInfo("c", False, (0, 1.0), (True, True)) + id = _ShapeInfo("d", False, (0, 1.0), (True, True)) + return [ic, id] + + def _pdf(self, x, c, d): + u = 2 / (d-c+1) + + return _lazyselect([x < c, + (c <= x) & (x <= d), + x > d], + [lambda x, c, d, u: u * x / c, + lambda x, c, d, u: u, + lambda x, c, d, u: u * (1-x) / (1-d)], + (x, c, d, u)) + + def _cdf(self, x, c, d): + return _lazyselect([x < c, + (c <= x) & (x <= d), + x > d], + [lambda x, c, d: x**2 / c / (d-c+1), + lambda x, c, d: (c + 2 * (x-c)) / (d-c+1), + lambda x, c, d: 1-((1-x) ** 2 + / (d-c+1) / (1-d))], + (x, c, d)) + + def _ppf(self, q, c, d): + qc, qd = self._cdf(c, c, d), self._cdf(d, c, d) + condlist = [q < qc, q <= qd, q > qd] + choicelist = [np.sqrt(q * c * (1 + d - c)), + 0.5 * q * (1 + d - c) + 0.5 * c, + 1 - np.sqrt((1 - q) * (d - c + 1) * (1 - d))] + return np.select(condlist, choicelist) + + def _munp(self, n, c, d): + # Using the parameterization from Kacker, 2007, with + # a=bottom left, c=top left, d=top right, b=bottom right, then + # E[X^n] = h/(n+1)/(n+2) [(b^{n+2}-d^{n+2})/(b-d) + # - ((c^{n+2} - a^{n+2})/(c-a)] + # with h = 2/((b-a) - (d-c)). The corresponding parameterization + # in scipy, has a'=loc, c'=loc+c*scale, d'=loc+d*scale, b'=loc+scale, + # which for standard form reduces to a'=0, b'=1, c'=c, d'=d. + # Substituting into E[X^n] gives the bd' term as (1 - d^{n+2})/(1 - d) + # and the ac' term as c^{n-1} for the standard form. The bd' term has + # numerical difficulties near d=1, so replace (1 - d^{n+2})/(1-d) + # with expm1((n+2)*log(d))/(d-1). + # Testing with n=18 for c=(1e-30,1-eps) shows that this is stable. + # We still require an explicit test for d=1 to prevent divide by zero, + # and now a test for d=0 to prevent log(0). + ab_term = c**(n+1) + dc_term = _lazyselect( + [d == 0.0, (0.0 < d) & (d < 1.0), d == 1.0], + [lambda d: 1.0, + lambda d: np.expm1((n+2) * np.log(d)) / (d-1.0), + lambda d: n+2], + [d]) + val = 2.0 / (1.0+d-c) * (dc_term - ab_term) / ((n+1) * (n+2)) + return val + + def _entropy(self, c, d): + # Using the parameterization from Wikipedia (van Dorp, 2003) + # with a=bottom left, c=top left, d=top right, b=bottom right + # gives a'=loc, b'=loc+c*scale, c'=loc+d*scale, d'=loc+scale, + # which for loc=0, scale=1 is a'=0, b'=c, c'=d, d'=1. 
+ # Substituting into the entropy formula from Wikipedia gives + # the following result. + return 0.5 * (1.0-d+c) / (1.0+d-c) + np.log(0.5 * (1.0+d-c)) + + +trapezoid = trapezoid_gen(a=0.0, b=1.0, name="trapezoid") +# Note: alias kept for backwards compatibility. Rename was done +# because trapz is a slur in colloquial English (see gh-12924). +trapz = trapezoid_gen(a=0.0, b=1.0, name="trapz") +if trapz.__doc__: + trapz.__doc__ = "trapz is an alias for `trapezoid`" + + +class triang_gen(rv_continuous): + r"""A triangular continuous random variable. + + %(before_notes)s + + Notes + ----- + The triangular distribution can be represented with an up-sloping line from + ``loc`` to ``(loc + c*scale)`` and then downsloping for ``(loc + c*scale)`` + to ``(loc + scale)``. + + `triang` takes ``c`` as a shape parameter for :math:`0 \le c \le 1`. + + %(after_notes)s + + The standard form is in the range [0, 1] with c the mode. + The location parameter shifts the start to `loc`. + The scale parameter changes the width from 1 to `scale`. + + %(example)s + + """ + def _rvs(self, c, size=None, random_state=None): + return random_state.triangular(0, c, 1, size) + + def _argcheck(self, c): + return (c >= 0) & (c <= 1) + + def _shape_info(self): + return [_ShapeInfo("c", False, (0, 1.0), (True, True))] + + def _pdf(self, x, c): + # 0: edge case where c=0 + # 1: generalised case for x < c, don't use x <= c, as it doesn't cope + # with c = 0. + # 2: generalised case for x >= c, but doesn't cope with c = 1 + # 3: edge case where c=1 + r = _lazyselect([c == 0, + x < c, + (x >= c) & (c != 1), + c == 1], + [lambda x, c: 2 - 2 * x, + lambda x, c: 2 * x / c, + lambda x, c: 2 * (1 - x) / (1 - c), + lambda x, c: 2 * x], + (x, c)) + return r + + def _cdf(self, x, c): + r = _lazyselect([c == 0, + x < c, + (x >= c) & (c != 1), + c == 1], + [lambda x, c: 2*x - x*x, + lambda x, c: x * x / c, + lambda x, c: (x*x - 2*x + c) / (c-1), + lambda x, c: x * x], + (x, c)) + return r + + def _ppf(self, q, c): + return np.where(q < c, np.sqrt(c * q), 1-np.sqrt((1-c) * (1-q))) + + def _stats(self, c): + return ((c+1.0)/3.0, + (1.0-c+c*c)/18, + np.sqrt(2)*(2*c-1)*(c+1)*(c-2) / (5*np.power((1.0-c+c*c), 1.5)), + -3.0/5.0) + + def _entropy(self, c): + return 0.5-np.log(2) + + +triang = triang_gen(a=0.0, b=1.0, name="triang") + + +class truncexpon_gen(rv_continuous): + r"""A truncated exponential continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `truncexpon` is: + + .. math:: + + f(x, b) = \frac{\exp(-x)}{1 - \exp(-b)} + + for :math:`0 <= x <= b`. + + `truncexpon` takes ``b`` as a shape parameter for :math:`b`. 
+ + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("b", False, (0, np.inf), (False, False))] + + def _get_support(self, b): + return self.a, b + + def _pdf(self, x, b): + # truncexpon.pdf(x, b) = exp(-x) / (1-exp(-b)) + return np.exp(-x)/(-sc.expm1(-b)) + + def _logpdf(self, x, b): + return -x - np.log(-sc.expm1(-b)) + + def _cdf(self, x, b): + return sc.expm1(-x)/sc.expm1(-b) + + def _ppf(self, q, b): + return -sc.log1p(q*sc.expm1(-b)) + + def _sf(self, x, b): + return (np.exp(-b) - np.exp(-x))/sc.expm1(-b) + + def _isf(self, q, b): + return -np.log(np.exp(-b) - q * sc.expm1(-b)) + + def _munp(self, n, b): + # wrong answer with formula, same as in continuous.pdf + # return sc.gamman+1)-sc.gammainc1+n, b) + if n == 1: + return (1-(b+1)*np.exp(-b))/(-sc.expm1(-b)) + elif n == 2: + return 2*(1-0.5*(b*b+2*b+2)*np.exp(-b))/(-sc.expm1(-b)) + else: + # return generic for higher moments + return super()._munp(n, b) + + def _entropy(self, b): + eB = np.exp(b) + return np.log(eB-1)+(1+eB*(b-1.0))/(1.0-eB) + + +truncexpon = truncexpon_gen(a=0.0, name='truncexpon') + + +# logsumexp trick for log(p + q) with only log(p) and log(q) +def _log_sum(log_p, log_q): + return sc.logsumexp([log_p, log_q], axis=0) + + +# same as above, but using -exp(x) = exp(x + πi) +def _log_diff(log_p, log_q): + return sc.logsumexp([log_p, log_q+np.pi*1j], axis=0) + + +def _log_gauss_mass(a, b): + """Log of Gaussian probability mass within an interval""" + a, b = np.broadcast_arrays(a, b) + + # Calculations in right tail are inaccurate, so we'll exploit the + # symmetry and work only in the left tail + case_left = b <= 0 + case_right = a > 0 + case_central = ~(case_left | case_right) + + def mass_case_left(a, b): + return _log_diff(_norm_logcdf(b), _norm_logcdf(a)) + + def mass_case_right(a, b): + return mass_case_left(-b, -a) + + def mass_case_central(a, b): + # Previously, this was implemented as: + # left_mass = mass_case_left(a, 0) + # right_mass = mass_case_right(0, b) + # return _log_sum(left_mass, right_mass) + # Catastrophic cancellation occurs as np.exp(log_mass) approaches 1. + # Correct for this with an alternative formulation. + # We're not concerned with underflow here: if only one term + # underflows, it was insignificant; if both terms underflow, + # the result can't accurately be represented in logspace anyway + # because sc.log1p(x) ~ x for small x. + return sc.log1p(-_norm_cdf(a) - _norm_cdf(-b)) + + # _lazyselect not working; don't care to debug it + out = np.full_like(a, fill_value=np.nan, dtype=np.complex128) + if a[case_left].size: + out[case_left] = mass_case_left(a[case_left], b[case_left]) + if a[case_right].size: + out[case_right] = mass_case_right(a[case_right], b[case_right]) + if a[case_central].size: + out[case_central] = mass_case_central(a[case_central], b[case_central]) + return np.real(out) # discard ~0j + + +class truncnorm_gen(rv_continuous): + r"""A truncated normal continuous random variable. + + %(before_notes)s + + Notes + ----- + This distribution is the normal distribution centered on ``loc`` (default + 0), with standard deviation ``scale`` (default 1), and truncated at ``a`` + and ``b`` *standard deviations* from ``loc``. For arbitrary ``loc`` and + ``scale``, ``a`` and ``b`` are *not* the abscissae at which the shifted + and scaled distribution is truncated. + + .. 
note:: + If ``a_trunc`` and ``b_trunc`` are the abscissae at which we wish + to truncate the distribution (as opposed to the number of standard + deviations from ``loc``), then we can calculate the distribution + parameters ``a`` and ``b`` as follows:: + + a, b = (a_trunc - loc) / scale, (b_trunc - loc) / scale + + This is a common point of confusion. For additional clarification, + please see the example below. + + %(example)s + + In the examples above, ``loc=0`` and ``scale=1``, so the plot is truncated + at ``a`` on the left and ``b`` on the right. However, suppose we were to + produce the same histogram with ``loc = 1`` and ``scale=0.5``. + + >>> loc, scale = 1, 0.5 + >>> rv = truncnorm(a, b, loc=loc, scale=scale) + >>> x = np.linspace(truncnorm.ppf(0.01, a, b), + ... truncnorm.ppf(0.99, a, b), 100) + >>> r = rv.rvs(size=1000) + + >>> fig, ax = plt.subplots(1, 1) + >>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf') + >>> ax.hist(r, density=True, bins='auto', histtype='stepfilled', alpha=0.2) + >>> ax.set_xlim(a, b) + >>> ax.legend(loc='best', frameon=False) + >>> plt.show() + + Note that the distribution is no longer appears to be truncated at + abscissae ``a`` and ``b``. That is because the *standard* normal + distribution is first truncated at ``a`` and ``b``, *then* the resulting + distribution is scaled by ``scale`` and shifted by ``loc``. If we instead + want the shifted and scaled distribution to be truncated at ``a`` and + ``b``, we need to transform these values before passing them as the + distribution parameters. + + >>> a_transformed, b_transformed = (a - loc) / scale, (b - loc) / scale + >>> rv = truncnorm(a_transformed, b_transformed, loc=loc, scale=scale) + >>> x = np.linspace(truncnorm.ppf(0.01, a, b), + ... truncnorm.ppf(0.99, a, b), 100) + >>> r = rv.rvs(size=10000) + + >>> fig, ax = plt.subplots(1, 1) + >>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf') + >>> ax.hist(r, density=True, bins='auto', histtype='stepfilled', alpha=0.2) + >>> ax.set_xlim(a-0.1, b+0.1) + >>> ax.legend(loc='best', frameon=False) + >>> plt.show() + """ + + def _argcheck(self, a, b): + return a < b + + def _shape_info(self): + ia = _ShapeInfo("a", False, (-np.inf, np.inf), (True, False)) + ib = _ShapeInfo("b", False, (-np.inf, np.inf), (False, True)) + return [ia, ib] + + def _fitstart(self, data): + # Reasonable, since support is [a, b] + if isinstance(data, CensoredData): + data = data._uncensor() + return super()._fitstart(data, args=(np.min(data), np.max(data))) + + def _get_support(self, a, b): + return a, b + + def _pdf(self, x, a, b): + return np.exp(self._logpdf(x, a, b)) + + def _logpdf(self, x, a, b): + return _norm_logpdf(x) - _log_gauss_mass(a, b) + + def _cdf(self, x, a, b): + return np.exp(self._logcdf(x, a, b)) + + def _logcdf(self, x, a, b): + x, a, b = np.broadcast_arrays(x, a, b) + logcdf = np.asarray(_log_gauss_mass(a, x) - _log_gauss_mass(a, b)) + i = logcdf > -0.1 # avoid catastrophic cancellation + if np.any(i): + logcdf[i] = np.log1p(-np.exp(self._logsf(x[i], a[i], b[i]))) + return logcdf + + def _sf(self, x, a, b): + return np.exp(self._logsf(x, a, b)) + + def _logsf(self, x, a, b): + x, a, b = np.broadcast_arrays(x, a, b) + logsf = np.asarray(_log_gauss_mass(x, b) - _log_gauss_mass(a, b)) + i = logsf > -0.1 # avoid catastrophic cancellation + if np.any(i): + logsf[i] = np.log1p(-np.exp(self._logcdf(x[i], a[i], b[i]))) + return logsf + + def _entropy(self, a, b): + A = _norm_cdf(a) + B = _norm_cdf(b) + Z = B - A + C = np.log(np.sqrt(2 * np.pi * np.e) * Z) + 
D = (a * _norm_pdf(a) - b * _norm_pdf(b)) / (2 * Z) + h = C + D + return h + + def _ppf(self, q, a, b): + q, a, b = np.broadcast_arrays(q, a, b) + + case_left = a < 0 + case_right = ~case_left + + def ppf_left(q, a, b): + log_Phi_x = _log_sum(_norm_logcdf(a), + np.log(q) + _log_gauss_mass(a, b)) + return sc.ndtri_exp(log_Phi_x) + + def ppf_right(q, a, b): + log_Phi_x = _log_sum(_norm_logcdf(-b), + np.log1p(-q) + _log_gauss_mass(a, b)) + return -sc.ndtri_exp(log_Phi_x) + + out = np.empty_like(q) + + q_left = q[case_left] + q_right = q[case_right] + + if q_left.size: + out[case_left] = ppf_left(q_left, a[case_left], b[case_left]) + if q_right.size: + out[case_right] = ppf_right(q_right, a[case_right], b[case_right]) + + return out + + def _isf(self, q, a, b): + # Mostly copy-paste of _ppf, but I think this is simpler than combining + q, a, b = np.broadcast_arrays(q, a, b) + + case_left = b < 0 + case_right = ~case_left + + def isf_left(q, a, b): + log_Phi_x = _log_diff(_norm_logcdf(b), + np.log(q) + _log_gauss_mass(a, b)) + return sc.ndtri_exp(np.real(log_Phi_x)) + + def isf_right(q, a, b): + log_Phi_x = _log_diff(_norm_logcdf(-a), + np.log1p(-q) + _log_gauss_mass(a, b)) + return -sc.ndtri_exp(np.real(log_Phi_x)) + + out = np.empty_like(q) + + q_left = q[case_left] + q_right = q[case_right] + + if q_left.size: + out[case_left] = isf_left(q_left, a[case_left], b[case_left]) + if q_right.size: + out[case_right] = isf_right(q_right, a[case_right], b[case_right]) + + return out + + def _munp(self, n, a, b): + def n_th_moment(n, a, b): + """ + Returns n-th moment. Defined only if n >= 0. + Function cannot broadcast due to the loop over n + """ + pA, pB = self._pdf(np.asarray([a, b]), a, b) + probs = [pA, -pB] + moments = [0, 1] + for k in range(1, n+1): + # a or b might be infinite, and the corresponding pdf value + # is 0 in that case, but nan is returned for the + # multiplication. However, as b->infinity, pdf(b)*b**k -> 0. + # So it is safe to use _lazywhere to avoid the nan. + vals = _lazywhere(probs, [probs, [a, b]], + lambda x, y: x * y**(k-1), fillvalue=0) + mk = np.sum(vals) + (k-1) * moments[-2] + moments.append(mk) + return moments[-1] + + return _lazywhere((n >= 0) & (a == a) & (b == b), (n, a, b), + np.vectorize(n_th_moment, otypes=[np.float64]), + np.nan) + + def _stats(self, a, b, moments='mv'): + pA, pB = self.pdf(np.array([a, b]), a, b) + + def _truncnorm_stats_scalar(a, b, pA, pB, moments): + m1 = pA - pB + mu = m1 + # use _lazywhere to avoid nan (See detailed comment in _munp) + probs = [pA, -pB] + vals = _lazywhere(probs, [probs, [a, b]], lambda x, y: x*y, + fillvalue=0) + m2 = 1 + np.sum(vals) + vals = _lazywhere(probs, [probs, [a-mu, b-mu]], lambda x, y: x*y, + fillvalue=0) + # mu2 = m2 - mu**2, but not as numerically stable as: + # mu2 = (a-mu)*pA - (b-mu)*pB + 1 + mu2 = 1 + np.sum(vals) + vals = _lazywhere(probs, [probs, [a, b]], lambda x, y: x*y**2, + fillvalue=0) + m3 = 2*m1 + np.sum(vals) + vals = _lazywhere(probs, [probs, [a, b]], lambda x, y: x*y**3, + fillvalue=0) + m4 = 3*m2 + np.sum(vals) + + mu3 = m3 + m1 * (-3*m2 + 2*m1**2) + g1 = mu3 / np.power(mu2, 1.5) + mu4 = m4 + m1*(-4*m3 + 3*m1*(2*m2 - m1**2)) + g2 = mu4 / mu2**2 - 3 + return mu, mu2, g1, g2 + + _truncnorm_stats = np.vectorize(_truncnorm_stats_scalar, + excluded=('moments',)) + return _truncnorm_stats(a, b, pA, pB, moments) + + +truncnorm = truncnorm_gen(name='truncnorm', momtype=1) + + +class truncpareto_gen(rv_continuous): + r"""An upper truncated Pareto continuous random variable. 
+ + %(before_notes)s + + See Also + -------- + pareto : Pareto distribution + + Notes + ----- + The probability density function for `truncpareto` is: + + .. math:: + + f(x, b, c) = \frac{b}{1 - c^{-b}} \frac{1}{x^{b+1}} + + for :math:`b > 0`, :math:`c > 1` and :math:`1 \le x \le c`. + + `truncpareto` takes `b` and `c` as shape parameters for :math:`b` and + :math:`c`. + + Notice that the upper truncation value :math:`c` is defined in + standardized form so that random values of an unscaled, unshifted variable + are within the range ``[1, c]``. + If ``u_r`` is the upper bound to a scaled and/or shifted variable, + then ``c = (u_r - loc) / scale``. In other words, the support of the + distribution becomes ``(scale + loc) <= x <= (c*scale + loc)`` when + `scale` and/or `loc` are provided. + + %(after_notes)s + + References + ---------- + .. [1] Burroughs, S. M., and Tebbens S. F. + "Upper-truncated power laws in natural systems." + Pure and Applied Geophysics 158.4 (2001): 741-757. + + %(example)s + + """ + + def _shape_info(self): + ib = _ShapeInfo("b", False, (0.0, np.inf), (False, False)) + ic = _ShapeInfo("c", False, (1.0, np.inf), (False, False)) + return [ib, ic] + + def _argcheck(self, b, c): + return (b > 0.) & (c > 1.) + + def _get_support(self, b, c): + return self.a, c + + def _pdf(self, x, b, c): + return b * x**-(b+1) / (1 - 1/c**b) + + def _logpdf(self, x, b, c): + return np.log(b) - np.log(-np.expm1(-b*np.log(c))) - (b+1)*np.log(x) + + def _cdf(self, x, b, c): + return (1 - x**-b) / (1 - 1/c**b) + + def _logcdf(self, x, b, c): + return np.log1p(-x**-b) - np.log1p(-1/c**b) + + def _ppf(self, q, b, c): + return pow(1 - (1 - 1/c**b)*q, -1/b) + + def _sf(self, x, b, c): + return (x**-b - 1/c**b) / (1 - 1/c**b) + + def _logsf(self, x, b, c): + return np.log(x**-b - 1/c**b) - np.log1p(-1/c**b) + + def _isf(self, q, b, c): + return pow(1/c**b + (1 - 1/c**b)*q, -1/b) + + def _entropy(self, b, c): + return -(np.log(b/(1 - 1/c**b)) + + (b+1)*(np.log(c)/(c**b - 1) - 1/b)) + + def _munp(self, n, b, c): + if (n == b).all(): + return b*np.log(c) / (1 - 1/c**b) + else: + return b / (b-n) * (c**b - c**n) / (c**b - 1) + + def _fitstart(self, data): + if isinstance(data, CensoredData): + data = data._uncensor() + b, loc, scale = pareto.fit(data) + c = (max(data) - loc)/scale + return b, c, loc, scale + + @_call_super_mom + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + if kwds.pop("superfit", False): + return super().fit(data, *args, **kwds) + + def log_mean(x): + return np.mean(np.log(x)) + + def harm_mean(x): + return 1/np.mean(1/x) + + def get_b(c, loc, scale): + u = (data-loc)/scale + harm_m = harm_mean(u) + log_m = log_mean(u) + quot = (harm_m-1)/log_m + return (1 - (quot-1) / (quot - (1 - 1/c)*harm_m/np.log(c)))/log_m + + def get_c(loc, scale): + return (mx - loc)/scale + + def get_loc(fc, fscale): + if fscale: # (fscale and fc) or (fscale and not fc) + loc = mn - fscale + return loc + if fc: + loc = (fc*mn - mx)/(fc - 1) + return loc + + def get_scale(loc): + return mn - loc + + # Functions used for optimisation; partial derivatives of + # the Lagrangian, set to equal 0. + + def dL_dLoc(loc, b_=None): + # Partial derivative wrt location. + # Optimised upon when no parameters, or only b, are fixed. 
+ scale = get_scale(loc) + c = get_c(loc, scale) + b = get_b(c, loc, scale) if b_ is None else b_ + harm_m = harm_mean((data - loc)/scale) + return 1 - (1 + (c - 1)/(c**(b+1) - c)) * (1 - 1/(b+1)) * harm_m + + def dL_dB(b, logc, logm): + # Partial derivative wrt b. + # Optimised upon whenever at least one parameter but b is fixed, + # and b is free. + return b - np.log1p(b*logc / (1 - b*logm)) / logc + + def fallback(data, *args, **kwargs): + # Should any issue arise, default to the general fit method. + return super(truncpareto_gen, self).fit(data, *args, **kwargs) + + parameters = _check_fit_input_parameters(self, data, args, kwds) + data, fb, fc, floc, fscale = parameters + mn, mx = data.min(), data.max() + mn_inf = np.nextafter(mn, -np.inf) + + if (fb is not None + and fc is not None + and floc is not None + and fscale is not None): + raise ValueError("All parameters fixed." + "There is nothing to optimize.") + elif fc is None and floc is None and fscale is None: + if fb is None: + def cond_b(loc): + # b is positive only if this function is positive + scale = get_scale(loc) + c = get_c(loc, scale) + harm_m = harm_mean((data - loc)/scale) + return (1 + 1/(c-1)) * np.log(c) / harm_m - 1 + + # This gives an upper bound on loc allowing for a positive b. + # Iteratively look for a bracket for root_scalar. + mn_inf = np.nextafter(mn, -np.inf) + rbrack = mn_inf + i = 0 + lbrack = rbrack - 1 + while ((lbrack > -np.inf) + and (cond_b(lbrack)*cond_b(rbrack) >= 0)): + i += 1 + lbrack = rbrack - np.power(2., i) + if not lbrack > -np.inf: + return fallback(data, *args, **kwds) + res = root_scalar(cond_b, bracket=(lbrack, rbrack)) + if not res.converged: + return fallback(data, *args, **kwds) + + # Determine the MLE for loc. + # Iteratively look for a bracket for root_scalar. + rbrack = res.root - 1e-3 # grad_loc is numerically ill-behaved + lbrack = rbrack - 1 + i = 0 + while ((lbrack > -np.inf) + and (dL_dLoc(lbrack)*dL_dLoc(rbrack) >= 0)): + i += 1 + lbrack = rbrack - np.power(2., i) + if not lbrack > -np.inf: + return fallback(data, *args, **kwds) + res = root_scalar(dL_dLoc, bracket=(lbrack, rbrack)) + if not res.converged: + return fallback(data, *args, **kwds) + loc = res.root + scale = get_scale(loc) + c = get_c(loc, scale) + b = get_b(c, loc, scale) + + std_data = (data - loc)/scale + # The expression of b relies on b being bounded above. + up_bound_b = min(1/log_mean(std_data), + 1/(harm_mean(std_data)-1)) + if not (b < up_bound_b): + return fallback(data, *args, **kwds) + else: + # We know b is positive (or a FitError will be triggered) + # so we let loc get close to min(data). + rbrack = mn_inf + lbrack = mn_inf - 1 + i = 0 + # Iteratively look for a bracket for root_scalar. + while (lbrack > -np.inf + and (dL_dLoc(lbrack, fb) + * dL_dLoc(rbrack, fb) >= 0)): + i += 1 + lbrack = rbrack - 2**i + if not lbrack > -np.inf: + return fallback(data, *args, **kwds) + res = root_scalar(dL_dLoc, (fb,), + bracket=(lbrack, rbrack)) + if not res.converged: + return fallback(data, *args, **kwds) + loc = res.root + scale = get_scale(loc) + c = get_c(loc, scale) + b = fb + else: + # At least one of the parameters determining the support is fixed; + # the others then have analytical expressions from the constraints. + # The completely determined case (fixed c, loc and scale) + # has to be checked for not overflowing the support. + # If not fixed, b has to be determined numerically. 
+ loc = floc if floc is not None else get_loc(fc, fscale) + scale = fscale or get_scale(loc) + c = fc or get_c(loc, scale) + + # Unscaled, translated values should be positive when the location + # is fixed. If it is not the case, we end up with negative `scale` + # and `c`, which would trigger a FitError before exiting the + # method. + if floc is not None and data.min() - floc < 0: + raise FitDataError("truncpareto", lower=1, upper=c) + + # Standardised values should be within the distribution support + # when all parameters controlling it are fixed. If it not the case, + # `fc` is overridden by `c` determined from `floc` and `fscale` when + # raising the exception. + if fc and (floc is not None) and fscale: + if data.max() > fc*fscale + floc: + raise FitDataError("truncpareto", lower=1, + upper=get_c(loc, scale)) + + # The other constraints should be automatically satisfied + # from the analytical expressions of the parameters. + # If fc or fscale are respectively less than one or less than 0, + # a FitError is triggered before exiting the method. + + if fb is None: + std_data = (data - loc)/scale + logm = log_mean(std_data) + logc = np.log(c) + # Condition for a positive root to exist. + if not (2*logm < logc): + return fallback(data, *args, **kwds) + + lbrack = 1/logm + 1/(logm - logc) + rbrack = np.nextafter(1/logm, 0) + try: + res = root_scalar(dL_dB, (logc, logm), + bracket=(lbrack, rbrack)) + # we should then never get there + if not res.converged: + return fallback(data, *args, **kwds) + b = res.root + except ValueError: + b = rbrack + else: + b = fb + + # The distribution requires that `scale+loc <= data <= c*scale+loc`. + # To avoid numerical issues, some tuning may be necessary. + # We adjust `scale` to satisfy the lower bound, and we adjust + # `c` to satisfy the upper bound. + if not (scale+loc) < mn: + if fscale: + loc = np.nextafter(loc, -np.inf) + else: + scale = get_scale(loc) + scale = np.nextafter(scale, 0) + if not (c*scale+loc) > mx: + c = get_c(loc, scale) + c = np.nextafter(c, np.inf) + + if not (np.all(self._argcheck(b, c)) and (scale > 0)): + return fallback(data, *args, **kwds) + + params_override = b, c, loc, scale + if floc is None and fscale is None: + # Based on testing in gh-16782, the following methods are only + # reliable if either `floc` or `fscale` are provided. They are + # fast, though, so might as well see if they are better than the + # generic method. + params_super = fallback(data, *args, **kwds) + nllf_override = self.nnlf(params_override, data) + nllf_super = self.nnlf(params_super, data) + if nllf_super < nllf_override: + return params_super + + return params_override + + +truncpareto = truncpareto_gen(a=1.0, name='truncpareto') + + +class tukeylambda_gen(rv_continuous): + r"""A Tukey-Lamdba continuous random variable. + + %(before_notes)s + + Notes + ----- + A flexible distribution, able to represent and interpolate between the + following distributions: + + - Cauchy (:math:`lambda = -1`) + - logistic (:math:`lambda = 0`) + - approx Normal (:math:`lambda = 0.14`) + - uniform from -1 to 1 (:math:`lambda = 1`) + + `tukeylambda` takes a real number :math:`lambda` (denoted ``lam`` + in the implementation) as a shape parameter. 
+ + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, lam): + return np.isfinite(lam) + + def _shape_info(self): + return [_ShapeInfo("lam", False, (-np.inf, np.inf), (False, False))] + + def _pdf(self, x, lam): + Fx = np.asarray(sc.tklmbda(x, lam)) + Px = Fx**(lam-1.0) + (np.asarray(1-Fx))**(lam-1.0) + Px = 1.0/np.asarray(Px) + return np.where((lam <= 0) | (abs(x) < 1.0/np.asarray(lam)), Px, 0.0) + + def _cdf(self, x, lam): + return sc.tklmbda(x, lam) + + def _ppf(self, q, lam): + return sc.boxcox(q, lam) - sc.boxcox1p(-q, lam) + + def _stats(self, lam): + return 0, _tlvar(lam), 0, _tlkurt(lam) + + def _entropy(self, lam): + def integ(p): + return np.log(pow(p, lam-1)+pow(1-p, lam-1)) + return integrate.quad(integ, 0, 1)[0] + + +tukeylambda = tukeylambda_gen(name='tukeylambda') + + +class FitUniformFixedScaleDataError(FitDataError): + def __init__(self, ptp, fscale): + self.args = ( + "Invalid values in `data`. Maximum likelihood estimation with " + "the uniform distribution and fixed scale requires that " + f"np.ptp(data) <= fscale, but np.ptp(data) = {ptp} and " + f"fscale = {fscale}." + ) + + +class uniform_gen(rv_continuous): + r"""A uniform continuous random variable. + + In the standard form, the distribution is uniform on ``[0, 1]``. Using + the parameters ``loc`` and ``scale``, one obtains the uniform distribution + on ``[loc, loc + scale]``. + + %(before_notes)s + + %(example)s + + """ + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return random_state.uniform(0.0, 1.0, size) + + def _pdf(self, x): + return 1.0*(x == x) + + def _cdf(self, x): + return x + + def _ppf(self, q): + return q + + def _stats(self): + return 0.5, 1.0/12, 0, -1.2 + + def _entropy(self): + return 0.0 + + @_call_super_mom + def fit(self, data, *args, **kwds): + """ + Maximum likelihood estimate for the location and scale parameters. + + `uniform.fit` uses only the following parameters. Because exact + formulas are used, the parameters related to optimization that are + available in the `fit` method of other distributions are ignored + here. The only positional argument accepted is `data`. + + Parameters + ---------- + data : array_like + Data to use in calculating the maximum likelihood estimate. + floc : float, optional + Hold the location parameter fixed to the specified value. + fscale : float, optional + Hold the scale parameter fixed to the specified value. + + Returns + ------- + loc, scale : float + Maximum likelihood estimates for the location and scale. + + Notes + ----- + An error is raised if `floc` is given and any values in `data` are + less than `floc`, or if `fscale` is given and `fscale` is less + than ``data.max() - data.min()``. An error is also raised if both + `floc` and `fscale` are given. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import uniform + + We'll fit the uniform distribution to `x`: + + >>> x = np.array([2, 2.5, 3.1, 9.5, 13.0]) + + For a uniform distribution MLE, the location is the minimum of the + data, and the scale is the maximum minus the minimum. 
+ + >>> loc, scale = uniform.fit(x) + >>> loc + 2.0 + >>> scale + 11.0 + + If we know the data comes from a uniform distribution where the support + starts at 0, we can use `floc=0`: + + >>> loc, scale = uniform.fit(x, floc=0) + >>> loc + 0.0 + >>> scale + 13.0 + + Alternatively, if we know the length of the support is 12, we can use + `fscale=12`: + + >>> loc, scale = uniform.fit(x, fscale=12) + >>> loc + 1.5 + >>> scale + 12.0 + + In that last example, the support interval is [1.5, 13.5]. This + solution is not unique. For example, the distribution with ``loc=2`` + and ``scale=12`` has the same likelihood as the one above. When + `fscale` is given and it is larger than ``data.max() - data.min()``, + the parameters returned by the `fit` method center the support over + the interval ``[data.min(), data.max()]``. + + """ + if len(args) > 0: + raise TypeError("Too many arguments.") + + floc = kwds.pop('floc', None) + fscale = kwds.pop('fscale', None) + + _remove_optimizer_parameters(kwds) + + if floc is not None and fscale is not None: + # This check is for consistency with `rv_continuous.fit`. + raise ValueError("All parameters fixed. There is nothing to " + "optimize.") + + data = np.asarray(data) + + if not np.isfinite(data).all(): + raise ValueError("The data contains non-finite values.") + + # MLE for the uniform distribution + # -------------------------------- + # The PDF is + # + # f(x, loc, scale) = {1/scale for loc <= x <= loc + scale + # {0 otherwise} + # + # The likelihood function is + # L(x, loc, scale) = (1/scale)**n + # where n is len(x), assuming loc <= x <= loc + scale for all x. + # The log-likelihood is + # l(x, loc, scale) = -n*log(scale) + # The log-likelihood is maximized by making scale as small as possible, + # while keeping loc <= x <= loc + scale. So if neither loc nor scale + # are fixed, the log-likelihood is maximized by choosing + # loc = x.min() + # scale = np.ptp(x) + # If loc is fixed, it must be less than or equal to x.min(), and then + # the scale is + # scale = x.max() - loc + # If scale is fixed, it must not be less than np.ptp(x). If scale is + # greater than np.ptp(x), the solution is not unique. Note that the + # likelihood does not depend on loc, except for the requirement that + # loc <= x <= loc + scale. All choices of loc for which + # x.max() - scale <= loc <= x.min() + # have the same log-likelihood. In this case, we choose loc such that + # the support is centered over the interval [data.min(), data.max()]: + # loc = x.min() = 0.5*(scale - np.ptp(x)) + + if fscale is None: + # scale is not fixed. + if floc is None: + # loc is not fixed, scale is not fixed. + loc = data.min() + scale = np.ptp(data) + else: + # loc is fixed, scale is not fixed. + loc = floc + scale = data.max() - loc + if data.min() < loc: + raise FitDataError("uniform", lower=loc, upper=loc + scale) + else: + # loc is not fixed, scale is fixed. + ptp = np.ptp(data) + if ptp > fscale: + raise FitUniformFixedScaleDataError(ptp=ptp, fscale=fscale) + # If ptp < fscale, the ML estimate is not unique; see the comments + # above. We choose the distribution for which the support is + # centered over the interval [data.min(), data.max()]. + loc = data.min() - 0.5*(fscale - ptp) + scale = fscale + + # We expect the return values to be floating point, so ensure it + # by explicitly converting to float. + return float(loc), float(scale) + + +uniform = uniform_gen(a=0.0, b=1.0, name='uniform') + + +class vonmises_gen(rv_continuous): + r"""A Von Mises continuous random variable. 
+ + %(before_notes)s + + See Also + -------- + scipy.stats.vonmises_fisher : Von-Mises Fisher distribution on a + hypersphere + + Notes + ----- + The probability density function for `vonmises` and `vonmises_line` is: + + .. math:: + + f(x, \kappa) = \frac{ \exp(\kappa \cos(x)) }{ 2 \pi I_0(\kappa) } + + for :math:`-\pi \le x \le \pi`, :math:`\kappa \ge 0`. :math:`I_0` is the + modified Bessel function of order zero (`scipy.special.i0`). + + `vonmises` is a circular distribution which does not restrict the + distribution to a fixed interval. Currently, there is no circular + distribution framework in SciPy. The ``cdf`` is implemented such that + ``cdf(x + 2*np.pi) == cdf(x) + 1``. + + `vonmises_line` is the same distribution, defined on :math:`[-\pi, \pi]` + on the real line. This is a regular (i.e. non-circular) distribution. + + Note about distribution parameters: `vonmises` and `vonmises_line` take + ``kappa`` as a shape parameter (concentration) and ``loc`` as the location + (circular mean). A ``scale`` parameter is accepted but does not have any + effect. + + Examples + -------- + Import the necessary modules. + + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy.stats import vonmises + + Define distribution parameters. + + >>> loc = 0.5 * np.pi # circular mean + >>> kappa = 1 # concentration + + Compute the probability density at ``x=0`` via the ``pdf`` method. + + >>> vonmises.pdf(0, loc=loc, kappa=kappa) + 0.12570826359722018 + + Verify that the percentile function ``ppf`` inverts the cumulative + distribution function ``cdf`` up to floating point accuracy. + + >>> x = 1 + >>> cdf_value = vonmises.cdf(x, loc=loc, kappa=kappa) + >>> ppf_value = vonmises.ppf(cdf_value, loc=loc, kappa=kappa) + >>> x, cdf_value, ppf_value + (1, 0.31489339900904967, 1.0000000000000004) + + Draw 1000 random variates by calling the ``rvs`` method. + + >>> sample_size = 1000 + >>> sample = vonmises(loc=loc, kappa=kappa).rvs(sample_size) + + Plot the von Mises density on a Cartesian and polar grid to emphasize + that it is a circular distribution. + + >>> fig = plt.figure(figsize=(12, 6)) + >>> left = plt.subplot(121) + >>> right = plt.subplot(122, projection='polar') + >>> x = np.linspace(-np.pi, np.pi, 500) + >>> vonmises_pdf = vonmises.pdf(x, loc=loc, kappa=kappa) + >>> ticks = [0, 0.15, 0.3] + + The left image contains the Cartesian plot. + + >>> left.plot(x, vonmises_pdf) + >>> left.set_yticks(ticks) + >>> number_of_bins = int(np.sqrt(sample_size)) + >>> left.hist(sample, density=True, bins=number_of_bins) + >>> left.set_title("Cartesian plot") + >>> left.set_xlim(-np.pi, np.pi) + >>> left.grid(True) + + The right image contains the polar plot. + + >>> right.plot(x, vonmises_pdf, label="PDF") + >>> right.set_yticks(ticks) + >>> right.hist(sample, density=True, bins=number_of_bins, + ... 
label="Histogram") + >>> right.set_title("Polar plot") + >>> right.legend(bbox_to_anchor=(0.15, 1.06)) + + """ + def _shape_info(self): + return [_ShapeInfo("kappa", False, (0, np.inf), (True, False))] + + def _argcheck(self, kappa): + return kappa >= 0 + + def _rvs(self, kappa, size=None, random_state=None): + return random_state.vonmises(0.0, kappa, size=size) + + @inherit_docstring_from(rv_continuous) + def rvs(self, *args, **kwds): + rvs = super().rvs(*args, **kwds) + return np.mod(rvs + np.pi, 2*np.pi) - np.pi + + def _pdf(self, x, kappa): + # vonmises.pdf(x, kappa) = exp(kappa * cos(x)) / (2*pi*I[0](kappa)) + # = exp(kappa * (cos(x) - 1)) / + # (2*pi*exp(-kappa)*I[0](kappa)) + # = exp(kappa * cosm1(x)) / (2*pi*i0e(kappa)) + return np.exp(kappa*sc.cosm1(x)) / (2*np.pi*sc.i0e(kappa)) + + def _logpdf(self, x, kappa): + # vonmises.pdf(x, kappa) = exp(kappa * cosm1(x)) / (2*pi*i0e(kappa)) + return kappa * sc.cosm1(x) - np.log(2*np.pi) - np.log(sc.i0e(kappa)) + + def _cdf(self, x, kappa): + return _stats.von_mises_cdf(kappa, x) + + def _stats_skip(self, kappa): + return 0, None, 0, None + + def _entropy(self, kappa): + # vonmises.entropy(kappa) = -kappa * I[1](kappa) / I[0](kappa) + + # log(2 * np.pi * I[0](kappa)) + # = -kappa * I[1](kappa) * exp(-kappa) / + # (I[0](kappa) * exp(-kappa)) + + # log(2 * np.pi * + # I[0](kappa) * exp(-kappa) / exp(-kappa)) + # = -kappa * sc.i1e(kappa) / sc.i0e(kappa) + + # log(2 * np.pi * i0e(kappa)) + kappa + return (-kappa * sc.i1e(kappa) / sc.i0e(kappa) + + np.log(2 * np.pi * sc.i0e(kappa)) + kappa) + + @extend_notes_in_docstring(rv_continuous, notes="""\ + The default limits of integration are endpoints of the interval + of width ``2*pi`` centered at `loc` (e.g. ``[-pi, pi]`` when + ``loc=0``).\n\n""") + def expect(self, func=None, args=(), loc=0, scale=1, lb=None, ub=None, + conditional=False, **kwds): + _a, _b = -np.pi, np.pi + + if lb is None: + lb = loc + _a + if ub is None: + ub = loc + _b + + return super().expect(func, args, loc, + scale, lb, ub, conditional, **kwds) + + @_call_super_mom + @extend_notes_in_docstring(rv_continuous, notes="""\ + Fit data is assumed to represent angles and will be wrapped onto the + unit circle. `f0` and `fscale` are ignored; the returned shape is + always the maximum likelihood estimate and the scale is always + 1. Initial guesses are ignored.\n\n""") + def fit(self, data, *args, **kwds): + if kwds.pop('superfit', False): + return super().fit(data, *args, **kwds) + + data, fshape, floc, fscale = _check_fit_input_parameters(self, data, + args, kwds) + if self.a == -np.pi: + # vonmises line case, here the default fit method will be used + return super().fit(data, *args, **kwds) + + # wrap data to interval [0, 2*pi] + data = np.mod(data, 2 * np.pi) + + def find_mu(data): + return stats.circmean(data) + + def find_kappa(data, loc): + # Usually, sources list the following as the equation to solve for + # the MLE of the shape parameter: + # r = I[1](kappa)/I[0](kappa), where r = mean resultant length + # This is valid when the location is the MLE of location. + # More generally, when the location may be fixed at an arbitrary + # value, r should be defined as follows: + r = np.sum(np.cos(loc - data))/len(data) + # See gh-18128 for more information. + + # The function r[0](kappa) := I[1](kappa)/I[0](kappa) is monotonic + # increasing from r[0](0) = 0 to r[0](+inf) = 1. The partial + # derivative of the log likelihood function with respect to kappa + # is monotonic decreasing in kappa. 
+ if r == 1: + # All observations are (almost) equal to the mean. Return + # some large kappa such that r[0](kappa) = 1.0 numerically. + return 1e16 + elif r > 0: + def solve_for_kappa(kappa): + return sc.i1e(kappa)/sc.i0e(kappa) - r + + # The bounds of the root of r[0](kappa) = r are derived from + # selected bounds of r[0](x) given in [1, Eq. 11 & 16]. See + # gh-20102 for details. + # + # [1] Amos, D. E. (1973). Computation of Modified Bessel + # Functions and Their Ratios. Mathematics of Computation, + # 28(125): 239-251. + lower_bound = r/(1-r)/(1+r) + upper_bound = 2*lower_bound + + # The bounds are violated numerically for certain values of r, + # where solve_for_kappa evaluated at the bounds have the same + # sign. This indicates numerical imprecision of i1e()/i0e(). + # Return the violated bound in this case as it's more accurate. + if solve_for_kappa(lower_bound) >= 0: + return lower_bound + elif solve_for_kappa(upper_bound) <= 0: + return upper_bound + else: + root_res = root_scalar(solve_for_kappa, method="brentq", + bracket=(lower_bound, upper_bound)) + return root_res.root + else: + # if the provided floc is very far from the circular mean, + # the mean resultant length r can become negative. + # In that case, the equation + # I[1](kappa)/I[0](kappa) = r does not have a solution. + # The maximum likelihood kappa is then 0 which practically + # results in the uniform distribution on the circle. As + # vonmises is defined for kappa > 0, return instead the + # smallest floating point value. + # See gh-18190 for more information + return np.finfo(float).tiny + + # location likelihood equation has a solution independent of kappa + loc = floc if floc is not None else find_mu(data) + # shape likelihood equation depends on location + shape = fshape if fshape is not None else find_kappa(data, loc) + + loc = np.mod(loc + np.pi, 2 * np.pi) - np.pi # ensure in [-pi, pi] + return shape, loc, 1 # scale is not handled + + +vonmises = vonmises_gen(name='vonmises') +vonmises_line = vonmises_gen(a=-np.pi, b=np.pi, name='vonmises_line') + + +class wald_gen(invgauss_gen): + r"""A Wald continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `wald` is: + + .. math:: + + f(x) = \frac{1}{\sqrt{2\pi x^3}} \exp(- \frac{ (x-1)^2 }{ 2x }) + + for :math:`x >= 0`. + + `wald` is a special case of `invgauss` with ``mu=1``. + + %(after_notes)s + + %(example)s + """ + _support_mask = rv_continuous._open_support_mask + + def _shape_info(self): + return [] + + def _rvs(self, size=None, random_state=None): + return random_state.wald(1.0, 1.0, size=size) + + def _pdf(self, x): + # wald.pdf(x) = 1/sqrt(2*pi*x**3) * exp(-(x-1)**2/(2*x)) + return invgauss._pdf(x, 1.0) + + def _cdf(self, x): + return invgauss._cdf(x, 1.0) + + def _sf(self, x): + return invgauss._sf(x, 1.0) + + def _ppf(self, x): + return invgauss._ppf(x, 1.0) + + def _isf(self, x): + return invgauss._isf(x, 1.0) + + def _logpdf(self, x): + return invgauss._logpdf(x, 1.0) + + def _logcdf(self, x): + return invgauss._logcdf(x, 1.0) + + def _logsf(self, x): + return invgauss._logsf(x, 1.0) + + def _stats(self): + return 1.0, 1.0, 3.0, 15.0 + + def _entropy(self): + return invgauss._entropy(1.0) + + +wald = wald_gen(a=0.0, name="wald") + + +class wrapcauchy_gen(rv_continuous): + r"""A wrapped Cauchy continuous random variable. + + %(before_notes)s + + Notes + ----- + The probability density function for `wrapcauchy` is: + + .. 
math:: + + f(x, c) = \frac{1-c^2}{2\pi (1+c^2 - 2c \cos(x))} + + for :math:`0 \le x \le 2\pi`, :math:`0 < c < 1`. + + `wrapcauchy` takes ``c`` as a shape parameter for :math:`c`. + + %(after_notes)s + + %(example)s + + """ + def _argcheck(self, c): + return (c > 0) & (c < 1) + + def _shape_info(self): + return [_ShapeInfo("c", False, (0, 1), (False, False))] + + def _pdf(self, x, c): + # wrapcauchy.pdf(x, c) = (1-c**2) / (2*pi*(1+c**2-2*c*cos(x))) + return (1.0-c*c)/(2*np.pi*(1+c*c-2*c*np.cos(x))) + + def _cdf(self, x, c): + + def f1(x, cr): + # CDF for 0 <= x < pi + return 1/np.pi * np.arctan(cr*np.tan(x/2)) + + def f2(x, cr): + # CDF for pi <= x <= 2*pi + return 1 - 1/np.pi * np.arctan(cr*np.tan((2*np.pi - x)/2)) + + cr = (1 + c)/(1 - c) + return _lazywhere(x < np.pi, (x, cr), f=f1, f2=f2) + + def _ppf(self, q, c): + val = (1.0-c)/(1.0+c) + rcq = 2*np.arctan(val*np.tan(np.pi*q)) + rcmq = 2*np.pi-2*np.arctan(val*np.tan(np.pi*(1-q))) + return np.where(q < 1.0/2, rcq, rcmq) + + def _entropy(self, c): + return np.log(2*np.pi*(1-c*c)) + + def _fitstart(self, data): + # Use 0.5 as the initial guess of the shape parameter. + # For the location and scale, use the minimum and + # peak-to-peak/(2*pi), respectively. + if isinstance(data, CensoredData): + data = data._uncensor() + return 0.5, np.min(data), np.ptp(data)/(2*np.pi) + + +wrapcauchy = wrapcauchy_gen(a=0.0, b=2*np.pi, name='wrapcauchy') + + +class gennorm_gen(rv_continuous): + r"""A generalized normal continuous random variable. + + %(before_notes)s + + See Also + -------- + laplace : Laplace distribution + norm : normal distribution + + Notes + ----- + The probability density function for `gennorm` is [1]_: + + .. math:: + + f(x, \beta) = \frac{\beta}{2 \Gamma(1/\beta)} \exp(-|x|^\beta), + + where :math:`x` is a real number, :math:`\beta > 0` and + :math:`\Gamma` is the gamma function (`scipy.special.gamma`). + + `gennorm` takes ``beta`` as a shape parameter for :math:`\beta`. + For :math:`\beta = 1`, it is identical to a Laplace distribution. + For :math:`\beta = 2`, it is identical to a normal distribution + (with ``scale=1/sqrt(2)``). + + References + ---------- + + .. [1] "Generalized normal distribution, Version 1", + https://en.wikipedia.org/wiki/Generalized_normal_distribution#Version_1 + + .. [2] Nardon, Martina, and Paolo Pianca. "Simulation techniques for + generalized Gaussian densities." Journal of Statistical + Computation and Simulation 79.11 (2009): 1317-1329 + + .. [3] Wicklin, Rick. "Simulate data from a generalized Gaussian + distribution" in The DO Loop blog, September 21, 2016, + https://blogs.sas.com/content/iml/2016/09/21/simulate-generalized-gaussian-sas.html + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("beta", False, (0, np.inf), (False, False))] + + def _pdf(self, x, beta): + return np.exp(self._logpdf(x, beta)) + + def _logpdf(self, x, beta): + return np.log(0.5*beta) - sc.gammaln(1.0/beta) - abs(x)**beta + + def _cdf(self, x, beta): + c = 0.5 * np.sign(x) + # evaluating (.5 + c) first prevents numerical cancellation + return (0.5 + c) - c * sc.gammaincc(1.0/beta, abs(x)**beta) + + def _ppf(self, x, beta): + c = np.sign(x - 0.5) + # evaluating (1. 
+ c) first prevents numerical cancellation + return c * sc.gammainccinv(1.0/beta, (1.0 + c) - 2.0*c*x)**(1.0/beta) + + def _sf(self, x, beta): + return self._cdf(-x, beta) + + def _isf(self, x, beta): + return -self._ppf(x, beta) + + def _stats(self, beta): + c1, c3, c5 = sc.gammaln([1.0/beta, 3.0/beta, 5.0/beta]) + return 0., np.exp(c3 - c1), 0., np.exp(c5 + c1 - 2.0*c3) - 3. + + def _entropy(self, beta): + return 1. / beta - np.log(.5 * beta) + sc.gammaln(1. / beta) + + def _rvs(self, beta, size=None, random_state=None): + # see [2]_ for the algorithm + # see [3]_ for reference implementation in SAS + z = random_state.gamma(1/beta, size=size) + y = z ** (1/beta) + # convert y to array to ensure masking support + y = np.asarray(y) + mask = random_state.random(size=y.shape) < 0.5 + y[mask] = -y[mask] + return y + + +gennorm = gennorm_gen(name='gennorm') + + +class halfgennorm_gen(rv_continuous): + r"""The upper half of a generalized normal continuous random variable. + + %(before_notes)s + + See Also + -------- + gennorm : generalized normal distribution + expon : exponential distribution + halfnorm : half normal distribution + + Notes + ----- + The probability density function for `halfgennorm` is: + + .. math:: + + f(x, \beta) = \frac{\beta}{\Gamma(1/\beta)} \exp(-|x|^\beta) + + for :math:`x, \beta > 0`. :math:`\Gamma` is the gamma function + (`scipy.special.gamma`). + + `halfgennorm` takes ``beta`` as a shape parameter for :math:`\beta`. + For :math:`\beta = 1`, it is identical to an exponential distribution. + For :math:`\beta = 2`, it is identical to a half normal distribution + (with ``scale=1/sqrt(2)``). + + References + ---------- + + .. [1] "Generalized normal distribution, Version 1", + https://en.wikipedia.org/wiki/Generalized_normal_distribution#Version_1 + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("beta", False, (0, np.inf), (False, False))] + + def _pdf(self, x, beta): + # beta + # halfgennorm.pdf(x, beta) = ------------- exp(-|x|**beta) + # gamma(1/beta) + return np.exp(self._logpdf(x, beta)) + + def _logpdf(self, x, beta): + return np.log(beta) - sc.gammaln(1.0/beta) - x**beta + + def _cdf(self, x, beta): + return sc.gammainc(1.0/beta, x**beta) + + def _ppf(self, x, beta): + return sc.gammaincinv(1.0/beta, x)**(1.0/beta) + + def _sf(self, x, beta): + return sc.gammaincc(1.0/beta, x**beta) + + def _isf(self, x, beta): + return sc.gammainccinv(1.0/beta, x)**(1.0/beta) + + def _entropy(self, beta): + return 1.0/beta - np.log(beta) + sc.gammaln(1.0/beta) + + +halfgennorm = halfgennorm_gen(a=0, name='halfgennorm') + + +class crystalball_gen(rv_continuous): + r""" + Crystalball distribution + + %(before_notes)s + + Notes + ----- + The probability density function for `crystalball` is: + + .. math:: + + f(x, \beta, m) = \begin{cases} + N \exp(-x^2 / 2), &\text{for } x > -\beta\\ + N A (B - x)^{-m} &\text{for } x \le -\beta + \end{cases} + + where :math:`A = (m / |\beta|)^m \exp(-\beta^2 / 2)`, + :math:`B = m/|\beta| - |\beta|` and :math:`N` is a normalisation constant. + + `crystalball` takes :math:`\beta > 0` and :math:`m > 1` as shape + parameters. :math:`\beta` defines the point where the pdf changes + from a power-law to a Gaussian distribution. :math:`m` is the power + of the power-law tail. + + %(after_notes)s + + .. versionadded:: 0.19.0 + + References + ---------- + .. 
[1] "Crystal Ball Function", + https://en.wikipedia.org/wiki/Crystal_Ball_function + + %(example)s + """ + def _argcheck(self, beta, m): + """ + Shape parameter bounds are m > 1 and beta > 0. + """ + return (m > 1) & (beta > 0) + + def _shape_info(self): + ibeta = _ShapeInfo("beta", False, (0, np.inf), (False, False)) + im = _ShapeInfo("m", False, (1, np.inf), (False, False)) + return [ibeta, im] + + def _fitstart(self, data): + # Arbitrary, but the default m=1 is not valid + return super()._fitstart(data, args=(1, 1.5)) + + def _pdf(self, x, beta, m): + """ + Return PDF of the crystalball function. + + -- + | exp(-x**2 / 2), for x > -beta + crystalball.pdf(x, beta, m) = N * | + | A * (B - x)**(-m), for x <= -beta + -- + """ + N = 1.0 / (m/beta / (m-1) * np.exp(-beta**2 / 2.0) + + _norm_pdf_C * _norm_cdf(beta)) + + def rhs(x, beta, m): + return np.exp(-x**2 / 2) + + def lhs(x, beta, m): + return ((m/beta)**m * np.exp(-beta**2 / 2.0) * + (m/beta - beta - x)**(-m)) + + return N * _lazywhere(x > -beta, (x, beta, m), f=rhs, f2=lhs) + + def _logpdf(self, x, beta, m): + """ + Return the log of the PDF of the crystalball function. + """ + N = 1.0 / (m/beta / (m-1) * np.exp(-beta**2 / 2.0) + + _norm_pdf_C * _norm_cdf(beta)) + + def rhs(x, beta, m): + return -x**2/2 + + def lhs(x, beta, m): + return m*np.log(m/beta) - beta**2/2 - m*np.log(m/beta - beta - x) + + return np.log(N) + _lazywhere(x > -beta, (x, beta, m), f=rhs, f2=lhs) + + def _cdf(self, x, beta, m): + """ + Return CDF of the crystalball function + """ + N = 1.0 / (m/beta / (m-1) * np.exp(-beta**2 / 2.0) + + _norm_pdf_C * _norm_cdf(beta)) + + def rhs(x, beta, m): + return ((m/beta) * np.exp(-beta**2 / 2.0) / (m-1) + + _norm_pdf_C * (_norm_cdf(x) - _norm_cdf(-beta))) + + def lhs(x, beta, m): + return ((m/beta)**m * np.exp(-beta**2 / 2.0) * + (m/beta - beta - x)**(-m+1) / (m-1)) + + return N * _lazywhere(x > -beta, (x, beta, m), f=rhs, f2=lhs) + + def _ppf(self, p, beta, m): + N = 1.0 / (m/beta / (m-1) * np.exp(-beta**2 / 2.0) + + _norm_pdf_C * _norm_cdf(beta)) + pbeta = N * (m/beta) * np.exp(-beta**2/2) / (m - 1) + + def ppf_less(p, beta, m): + eb2 = np.exp(-beta**2/2) + C = (m/beta) * eb2 / (m-1) + N = 1/(C + _norm_pdf_C * _norm_cdf(beta)) + return (m/beta - beta - + ((m - 1)*(m/beta)**(-m)/eb2*p/N)**(1/(1-m))) + + def ppf_greater(p, beta, m): + eb2 = np.exp(-beta**2/2) + C = (m/beta) * eb2 / (m-1) + N = 1/(C + _norm_pdf_C * _norm_cdf(beta)) + return _norm_ppf(_norm_cdf(-beta) + (1/_norm_pdf_C)*(p/N - C)) + + return _lazywhere(p < pbeta, (p, beta, m), f=ppf_less, f2=ppf_greater) + + def _munp(self, n, beta, m): + """ + Returns the n-th non-central moment of the crystalball function. + """ + N = 1.0 / (m/beta / (m-1) * np.exp(-beta**2 / 2.0) + + _norm_pdf_C * _norm_cdf(beta)) + + def n_th_moment(n, beta, m): + """ + Returns n-th moment. 
Defined only if n+1 < m + Function cannot broadcast due to the loop over n + """ + A = (m/beta)**m * np.exp(-beta**2 / 2.0) + B = m/beta - beta + rhs = (2**((n-1)/2.0) * sc.gamma((n+1)/2) * + (1.0 + (-1)**n * sc.gammainc((n+1)/2, beta**2 / 2))) + lhs = np.zeros(rhs.shape) + for k in range(n + 1): + lhs += (sc.binom(n, k) * B**(n-k) * (-1)**k / (m - k - 1) * + (m/beta)**(-m + k + 1)) + return A * lhs + rhs + + return N * _lazywhere(n + 1 < m, (n, beta, m), + np.vectorize(n_th_moment, otypes=[np.float64]), + np.inf) + + +crystalball = crystalball_gen(name='crystalball', longname="A Crystalball Function") + + +def _argus_phi(chi): + """ + Utility function for the argus distribution used in the pdf, sf and + moment calculation. + Note that for all x > 0: + gammainc(1.5, x**2/2) = 2 * (_norm_cdf(x) - x * _norm_pdf(x) - 0.5). + This can be verified directly by noting that the cdf of Gamma(1.5) can + be written as erf(sqrt(x)) - 2*sqrt(x)*exp(-x)/sqrt(Pi). + We use gammainc instead of the usual definition because it is more precise + for small chi. + """ + return sc.gammainc(1.5, chi**2/2) / 2 + + +class argus_gen(rv_continuous): + r""" + Argus distribution + + %(before_notes)s + + Notes + ----- + The probability density function for `argus` is: + + .. math:: + + f(x, \chi) = \frac{\chi^3}{\sqrt{2\pi} \Psi(\chi)} x \sqrt{1-x^2} + \exp(-\chi^2 (1 - x^2)/2) + + for :math:`0 < x < 1` and :math:`\chi > 0`, where + + .. math:: + + \Psi(\chi) = \Phi(\chi) - \chi \phi(\chi) - 1/2 + + with :math:`\Phi` and :math:`\phi` being the CDF and PDF of a standard + normal distribution, respectively. + + `argus` takes :math:`\chi` as shape a parameter. Details about sampling + from the ARGUS distribution can be found in [2]_. + + %(after_notes)s + + References + ---------- + .. [1] "ARGUS distribution", + https://en.wikipedia.org/wiki/ARGUS_distribution + .. [2] Christoph Baumgarten "Random variate generation by fast numerical + inversion in the varying parameter case." Research in Statistics, + vol. 1, 2023, doi:10.1080/27684520.2023.2279060. + + .. versionadded:: 0.19.0 + + %(example)s + """ + def _shape_info(self): + return [_ShapeInfo("chi", False, (0, np.inf), (False, False))] + + def _logpdf(self, x, chi): + # for x = 0 or 1, logpdf returns -np.inf + with np.errstate(divide='ignore'): + y = 1.0 - x*x + A = 3*np.log(chi) - _norm_pdf_logC - np.log(_argus_phi(chi)) + return A + np.log(x) + 0.5*np.log1p(-x*x) - chi**2 * y / 2 + + def _pdf(self, x, chi): + return np.exp(self._logpdf(x, chi)) + + def _cdf(self, x, chi): + return 1.0 - self._sf(x, chi) + + def _sf(self, x, chi): + return _argus_phi(chi * np.sqrt(1 - x**2)) / _argus_phi(chi) + + def _rvs(self, chi, size=None, random_state=None): + chi = np.asarray(chi) + if chi.size == 1: + out = self._rvs_scalar(chi, numsamples=size, + random_state=random_state) + else: + shp, bc = _check_shape(chi.shape, size) + numsamples = int(np.prod(shp)) + out = np.empty(size) + it = np.nditer([chi], + flags=['multi_index'], + op_flags=[['readonly']]) + while not it.finished: + idx = tuple((it.multi_index[j] if not bc[j] else slice(None)) + for j in range(-len(size), 0)) + r = self._rvs_scalar(it[0], numsamples=numsamples, + random_state=random_state) + out[idx] = r.reshape(shp) + it.iternext() + + if size == (): + out = out[()] + return out + + def _rvs_scalar(self, chi, numsamples=None, random_state=None): + # if chi <= 1.8: + # use rejection method, see Devroye: + # Non-Uniform Random Variate Generation, 1986, section II.3.2. 
+ # write: PDF f(x) = c * g(x) * h(x), where + # h is [0,1]-valued and g is a density + # we use two ways to write f + # + # Case 1: + # write g(x) = 3*x*sqrt(1-x**2), h(x) = exp(-chi**2 (1-x**2) / 2) + # If X has a distribution with density g its ppf G_inv is given by: + # G_inv(u) = np.sqrt(1 - u**(2/3)) + # + # Case 2: + # g(x) = chi**2 * x * exp(-chi**2 * (1-x**2)/2) / (1 - exp(-chi**2 /2)) + # h(x) = sqrt(1 - x**2), 0 <= x <= 1 + # one can show that + # G_inv(u) = np.sqrt(2*np.log(u*(np.exp(chi**2/2)-1)+1))/chi + # = np.sqrt(1 + 2*np.log(np.exp(-chi**2/2)*(1-u)+u)/chi**2) + # the latter expression is used for precision with small chi + # + # In both cases, the inverse cdf of g can be written analytically, and + # we can apply the rejection method: + # + # REPEAT + # Generate U uniformly distributed on [0, 1] + # Generate X with density g (e.g. via inverse transform sampling: + # X = G_inv(V) with V uniformly distributed on [0, 1]) + # UNTIL X <= h(X) + # RETURN X + # + # We use case 1 for chi <= 0.5 as it maintains precision for small chi + # and case 2 for 0.5 < chi <= 1.8 due to its speed for moderate chi. + # + # if chi > 1.8: + # use relation to the Gamma distribution: if X is ARGUS with parameter + # chi), then Y = chi**2 * (1 - X**2) / 2 has density proportional to + # sqrt(u) * exp(-u) on [0, chi**2 / 2], i.e. a Gamma(3/2) distribution + # conditioned on [0, chi**2 / 2]). Therefore, to sample X from the + # ARGUS distribution, we sample Y from the gamma distribution, keeping + # only samples on [0, chi**2 / 2], and apply the inverse + # transformation X = (1 - 2*Y/chi**2)**(1/2). Since we only + # look at chi > 1.8, gamma(1.5).cdf(chi**2/2) is large enough such + # Y falls in the interval [0, chi**2 / 2] with a high probability: + # stats.gamma(1.5).cdf(1.8**2/2) = 0.644... + # + # The points to switch between the different methods are determined + # by a comparison of the runtime of the different methods. However, + # the runtime is platform-dependent. The implemented values should + # ensure a good overall performance and are supported by an analysis + # of the rejection constants of different methods. + + size1d = tuple(np.atleast_1d(numsamples)) + N = int(np.prod(size1d)) + x = np.zeros(N) + simulated = 0 + chi2 = chi * chi + if chi <= 0.5: + d = -chi2 / 2 + while simulated < N: + k = N - simulated + u = random_state.uniform(size=k) + v = random_state.uniform(size=k) + z = v**(2/3) + # acceptance condition: u <= h(G_inv(v)). 
This simplifies to + accept = (np.log(u) <= d * z) + num_accept = np.sum(accept) + if num_accept > 0: + # we still need to transform z=v**(2/3) to X = G_inv(v) + rvs = np.sqrt(1 - z[accept]) + x[simulated:(simulated + num_accept)] = rvs + simulated += num_accept + elif chi <= 1.8: + echi = np.exp(-chi2 / 2) + while simulated < N: + k = N - simulated + u = random_state.uniform(size=k) + v = random_state.uniform(size=k) + z = 2 * np.log(echi * (1 - v) + v) / chi2 + # as in case one, simplify u <= h(G_inv(v)) and then transform + # z to the target distribution X = G_inv(v) + accept = (u**2 + z <= 0) + num_accept = np.sum(accept) + if num_accept > 0: + rvs = np.sqrt(1 + z[accept]) + x[simulated:(simulated + num_accept)] = rvs + simulated += num_accept + else: + # conditional Gamma for chi > 1.8 + while simulated < N: + k = N - simulated + g = random_state.standard_gamma(1.5, size=k) + accept = (g <= chi2 / 2) + num_accept = np.sum(accept) + if num_accept > 0: + x[simulated:(simulated + num_accept)] = g[accept] + simulated += num_accept + x = np.sqrt(1 - 2 * x / chi2) + + return np.reshape(x, size1d) + + def _stats(self, chi): + # need to ensure that dtype is float + # otherwise the mask below does not work for integers + chi = np.asarray(chi, dtype=float) + phi = _argus_phi(chi) + m = np.sqrt(np.pi/8) * chi * sc.ive(1, chi**2/4) / phi + # compute second moment, use Taylor expansion for small chi (<= 0.1) + mu2 = np.empty_like(chi) + mask = chi > 0.1 + c = chi[mask] + mu2[mask] = 1 - 3 / c**2 + c * _norm_pdf(c) / phi[mask] + c = chi[~mask] + coef = [-358/65690625, 0, -94/1010625, 0, 2/2625, 0, 6/175, 0, 0.4] + mu2[~mask] = np.polyval(coef, c) + return m, mu2 - m**2, None, None + + +argus = argus_gen(name='argus', longname="An Argus Function", a=0.0, b=1.0) + + +class rv_histogram(rv_continuous): + """ + Generates a distribution given by a histogram. + This is useful to generate a template distribution from a binned + datasample. + + As a subclass of the `rv_continuous` class, `rv_histogram` inherits from it + a collection of generic methods (see `rv_continuous` for the full list), + and implements them based on the properties of the provided binned + datasample. + + Parameters + ---------- + histogram : tuple of array_like + Tuple containing two array_like objects. + The first containing the content of n bins, + the second containing the (n+1) bin boundaries. + In particular, the return value of `numpy.histogram` is accepted. + + density : bool, optional + If False, assumes the histogram is proportional to counts per bin; + otherwise, assumes it is proportional to a density. + For constant bin widths, these are equivalent, but the distinction + is important when bin widths vary (see Notes). + If None (default), sets ``density=True`` for backwards compatibility, + but warns if the bin widths are variable. Set `density` explicitly + to silence the warning. + + .. versionadded:: 1.10.0 + + Notes + ----- + When a histogram has unequal bin widths, there is a distinction between + histograms that are proportional to counts per bin and histograms that are + proportional to probability density over a bin. If `numpy.histogram` is + called with its default ``density=False``, the resulting histogram is the + number of counts per bin, so ``density=False`` should be passed to + `rv_histogram`. If `numpy.histogram` is called with ``density=True``, the + resulting histogram is in terms of probability density, so ``density=True`` + should be passed to `rv_histogram`. 
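+
+    For example, to keep the two conventions consistent, the same
+    ``density`` flag can simply be forwarded to both calls (an
+    illustrative sketch; the data and bin edges are arbitrary):
+
+    >>> import numpy as np
+    >>> import scipy.stats
+    >>> data = scipy.stats.norm.rvs(size=1000, random_state=123)
+    >>> counts, bins = np.histogram(data, bins=[-5, -1, 0, 0.5, 5],
+    ...                             density=True)
+    >>> hist_dist = scipy.stats.rv_histogram((counts, bins), density=True)
+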
To avoid warnings, always pass + ``density`` explicitly when the input histogram has unequal bin widths. + + There are no additional shape parameters except for the loc and scale. + The pdf is defined as a stepwise function from the provided histogram. + The cdf is a linear interpolation of the pdf. + + .. versionadded:: 0.19.0 + + Examples + -------- + + Create a scipy.stats distribution from a numpy histogram + + >>> import scipy.stats + >>> import numpy as np + >>> data = scipy.stats.norm.rvs(size=100000, loc=0, scale=1.5, + ... random_state=123) + >>> hist = np.histogram(data, bins=100) + >>> hist_dist = scipy.stats.rv_histogram(hist, density=False) + + Behaves like an ordinary scipy rv_continuous distribution + + >>> hist_dist.pdf(1.0) + 0.20538577847618705 + >>> hist_dist.cdf(2.0) + 0.90818568543056499 + + PDF is zero above (below) the highest (lowest) bin of the histogram, + defined by the max (min) of the original dataset + + >>> hist_dist.pdf(np.max(data)) + 0.0 + >>> hist_dist.cdf(np.max(data)) + 1.0 + >>> hist_dist.pdf(np.min(data)) + 7.7591907244498314e-05 + >>> hist_dist.cdf(np.min(data)) + 0.0 + + PDF and CDF follow the histogram + + >>> import matplotlib.pyplot as plt + >>> X = np.linspace(-5.0, 5.0, 100) + >>> fig, ax = plt.subplots() + >>> ax.set_title("PDF from Template") + >>> ax.hist(data, density=True, bins=100) + >>> ax.plot(X, hist_dist.pdf(X), label='PDF') + >>> ax.plot(X, hist_dist.cdf(X), label='CDF') + >>> ax.legend() + >>> fig.show() + + """ + _support_mask = rv_continuous._support_mask + + def __init__(self, histogram, *args, density=None, **kwargs): + """ + Create a new distribution using the given histogram + + Parameters + ---------- + histogram : tuple of array_like + Tuple containing two array_like objects. + The first containing the content of n bins, + the second containing the (n+1) bin boundaries. + In particular, the return value of np.histogram is accepted. + density : bool, optional + If False, assumes the histogram is proportional to counts per bin; + otherwise, assumes it is proportional to a density. + For constant bin widths, these are equivalent. + If None (default), sets ``density=True`` for backward + compatibility, but warns if the bin widths are variable. Set + `density` explicitly to silence the warning. + """ + self._histogram = histogram + self._density = density + if len(histogram) != 2: + raise ValueError("Expected length 2 for parameter histogram") + self._hpdf = np.asarray(histogram[0]) + self._hbins = np.asarray(histogram[1]) + if len(self._hpdf) + 1 != len(self._hbins): + raise ValueError("Number of elements in histogram content " + "and histogram boundaries do not match, " + "expected n and n+1.") + self._hbin_widths = self._hbins[1:] - self._hbins[:-1] + bins_vary = not np.allclose(self._hbin_widths, self._hbin_widths[0]) + if density is None and bins_vary: + message = ("Bin widths are not constant. Assuming `density=True`." 
+ "Specify `density` explicitly to silence this warning.") + warnings.warn(message, RuntimeWarning, stacklevel=2) + density = True + elif not density: + self._hpdf = self._hpdf / self._hbin_widths + + self._hpdf = self._hpdf / float(np.sum(self._hpdf * self._hbin_widths)) + self._hcdf = np.cumsum(self._hpdf * self._hbin_widths) + self._hpdf = np.hstack([0.0, self._hpdf, 0.0]) + self._hcdf = np.hstack([0.0, self._hcdf]) + # Set support + kwargs['a'] = self.a = self._hbins[0] + kwargs['b'] = self.b = self._hbins[-1] + super().__init__(*args, **kwargs) + + def _pdf(self, x): + """ + PDF of the histogram + """ + return self._hpdf[np.searchsorted(self._hbins, x, side='right')] + + def _cdf(self, x): + """ + CDF calculated from the histogram + """ + return np.interp(x, self._hbins, self._hcdf) + + def _ppf(self, x): + """ + Percentile function calculated from the histogram + """ + return np.interp(x, self._hcdf, self._hbins) + + def _munp(self, n): + """Compute the n-th non-central moment.""" + integrals = (self._hbins[1:]**(n+1) - self._hbins[:-1]**(n+1)) / (n+1) + return np.sum(self._hpdf[1:-1] * integrals) + + def _entropy(self): + """Compute entropy of distribution""" + res = _lazywhere(self._hpdf[1:-1] > 0.0, + (self._hpdf[1:-1],), + np.log, + 0.0) + return -np.sum(self._hpdf[1:-1] * res * self._hbin_widths) + + def _updated_ctor_param(self): + """ + Set the histogram as additional constructor argument + """ + dct = super()._updated_ctor_param() + dct['histogram'] = self._histogram + dct['density'] = self._density + return dct + + +class studentized_range_gen(rv_continuous): + r"""A studentized range continuous random variable. + + %(before_notes)s + + See Also + -------- + t: Student's t distribution + + Notes + ----- + The probability density function for `studentized_range` is: + + .. math:: + + f(x; k, \nu) = \frac{k(k-1)\nu^{\nu/2}}{\Gamma(\nu/2) + 2^{\nu/2-1}} \int_{0}^{\infty} \int_{-\infty}^{\infty} + s^{\nu} e^{-\nu s^2/2} \phi(z) \phi(sx + z) + [\Phi(sx + z) - \Phi(z)]^{k-2} \,dz \,ds + + for :math:`x ≥ 0`, :math:`k > 1`, and :math:`\nu > 0`. + + `studentized_range` takes ``k`` for :math:`k` and ``df`` for :math:`\nu` + as shape parameters. + + When :math:`\nu` exceeds 100,000, an asymptotic approximation (infinite + degrees of freedom) is used to compute the cumulative distribution + function [4]_ and probability distribution function. + + %(after_notes)s + + References + ---------- + + .. [1] "Studentized range distribution", + https://en.wikipedia.org/wiki/Studentized_range_distribution + .. [2] Batista, Ben Dêivide, et al. "Externally Studentized Normal Midrange + Distribution." Ciência e Agrotecnologia, vol. 41, no. 4, 2017, pp. + 378-389., doi:10.1590/1413-70542017414047716. + .. [3] Harter, H. Leon. "Tables of Range and Studentized Range." The Annals + of Mathematical Statistics, vol. 31, no. 4, 1960, pp. 1122-1147. + JSTOR, www.jstor.org/stable/2237810. Accessed 18 Feb. 2021. + .. [4] Lund, R. E., and J. R. Lund. "Algorithm AS 190: Probabilities and + Upper Quantiles for the Studentized Range." Journal of the Royal + Statistical Society. Series C (Applied Statistics), vol. 32, no. 2, + 1983, pp. 204-210. JSTOR, www.jstor.org/stable/2347300. Accessed 18 + Feb. 2021. 
+ + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import studentized_range + >>> import matplotlib.pyplot as plt + >>> fig, ax = plt.subplots(1, 1) + + Calculate the first four moments: + + >>> k, df = 3, 10 + >>> mean, var, skew, kurt = studentized_range.stats(k, df, moments='mvsk') + + Display the probability density function (``pdf``): + + >>> x = np.linspace(studentized_range.ppf(0.01, k, df), + ... studentized_range.ppf(0.99, k, df), 100) + >>> ax.plot(x, studentized_range.pdf(x, k, df), + ... 'r-', lw=5, alpha=0.6, label='studentized_range pdf') + + Alternatively, the distribution object can be called (as a function) + to fix the shape, location and scale parameters. This returns a "frozen" + RV object holding the given parameters fixed. + + Freeze the distribution and display the frozen ``pdf``: + + >>> rv = studentized_range(k, df) + >>> ax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf') + + Check accuracy of ``cdf`` and ``ppf``: + + >>> vals = studentized_range.ppf([0.001, 0.5, 0.999], k, df) + >>> np.allclose([0.001, 0.5, 0.999], studentized_range.cdf(vals, k, df)) + True + + Rather than using (``studentized_range.rvs``) to generate random variates, + which is very slow for this distribution, we can approximate the inverse + CDF using an interpolator, and then perform inverse transform sampling + with this approximate inverse CDF. + + This distribution has an infinite but thin right tail, so we focus our + attention on the leftmost 99.9 percent. + + >>> a, b = studentized_range.ppf([0, .999], k, df) + >>> a, b + 0, 7.41058083802274 + + >>> from scipy.interpolate import interp1d + >>> rng = np.random.default_rng() + >>> xs = np.linspace(a, b, 50) + >>> cdf = studentized_range.cdf(xs, k, df) + # Create an interpolant of the inverse CDF + >>> ppf = interp1d(cdf, xs, fill_value='extrapolate') + # Perform inverse transform sampling using the interpolant + >>> r = ppf(rng.uniform(size=1000)) + + And compare the histogram: + + >>> ax.hist(r, density=True, histtype='stepfilled', alpha=0.2) + >>> ax.legend(loc='best', frameon=False) + >>> plt.show() + + """ + + def _argcheck(self, k, df): + return (k > 1) & (df > 0) + + def _shape_info(self): + ik = _ShapeInfo("k", False, (1, np.inf), (False, False)) + idf = _ShapeInfo("df", False, (0, np.inf), (False, False)) + return [ik, idf] + + def _fitstart(self, data): + # Default is k=1, but that is not a valid value of the parameter. + return super()._fitstart(data, args=(2, 1)) + + def _munp(self, K, k, df): + cython_symbol = '_studentized_range_moment' + _a, _b = self._get_support() + # all three of these are used to create a numpy array so they must + # be the same shape. + + def _single_moment(K, k, df): + log_const = _stats._studentized_range_pdf_logconst(k, df) + arg = [K, k, df, log_const] + usr_data = np.array(arg, float).ctypes.data_as(ctypes.c_void_p) + + llc = LowLevelCallable.from_cython(_stats, cython_symbol, usr_data) + + ranges = [(-np.inf, np.inf), (0, np.inf), (_a, _b)] + opts = dict(epsabs=1e-11, epsrel=1e-12) + + return integrate.nquad(llc, ranges=ranges, opts=opts)[0] + + ufunc = np.frompyfunc(_single_moment, 3, 1) + return np.asarray(ufunc(K, k, df), dtype=np.float64)[()] + + def _pdf(self, x, k, df): + + def _single_pdf(q, k, df): + # The infinite form of the PDF is derived from the infinite + # CDF. 
+ if df < 100000: + cython_symbol = '_studentized_range_pdf' + log_const = _stats._studentized_range_pdf_logconst(k, df) + arg = [q, k, df, log_const] + usr_data = np.array(arg, float).ctypes.data_as(ctypes.c_void_p) + ranges = [(-np.inf, np.inf), (0, np.inf)] + + else: + cython_symbol = '_studentized_range_pdf_asymptotic' + arg = [q, k] + usr_data = np.array(arg, float).ctypes.data_as(ctypes.c_void_p) + ranges = [(-np.inf, np.inf)] + + llc = LowLevelCallable.from_cython(_stats, cython_symbol, usr_data) + opts = dict(epsabs=1e-11, epsrel=1e-12) + return integrate.nquad(llc, ranges=ranges, opts=opts)[0] + + ufunc = np.frompyfunc(_single_pdf, 3, 1) + return np.asarray(ufunc(x, k, df), dtype=np.float64)[()] + + def _cdf(self, x, k, df): + + def _single_cdf(q, k, df): + # "When the degrees of freedom V are infinite the probability + # integral takes [on a] simpler form," and a single asymptotic + # integral is evaluated rather than the standard double integral. + # (Lund, Lund, page 205) + if df < 100000: + cython_symbol = '_studentized_range_cdf' + log_const = _stats._studentized_range_cdf_logconst(k, df) + arg = [q, k, df, log_const] + usr_data = np.array(arg, float).ctypes.data_as(ctypes.c_void_p) + ranges = [(-np.inf, np.inf), (0, np.inf)] + + else: + cython_symbol = '_studentized_range_cdf_asymptotic' + arg = [q, k] + usr_data = np.array(arg, float).ctypes.data_as(ctypes.c_void_p) + ranges = [(-np.inf, np.inf)] + + llc = LowLevelCallable.from_cython(_stats, cython_symbol, usr_data) + opts = dict(epsabs=1e-11, epsrel=1e-12) + return integrate.nquad(llc, ranges=ranges, opts=opts)[0] + + ufunc = np.frompyfunc(_single_cdf, 3, 1) + + # clip p-values to ensure they are in [0, 1]. + return np.clip(np.asarray(ufunc(x, k, df), dtype=np.float64)[()], 0, 1) + + +studentized_range = studentized_range_gen(name='studentized_range', a=0, + b=np.inf) + + +class rel_breitwigner_gen(rv_continuous): + r"""A relativistic Breit-Wigner random variable. + + %(before_notes)s + + See Also + -------- + cauchy: Cauchy distribution, also known as the Breit-Wigner distribution. + + Notes + ----- + + The probability density function for `rel_breitwigner` is + + .. math:: + + f(x, \rho) = \frac{k}{(x^2 - \rho^2)^2 + \rho^2} + + where + + .. math:: + k = \frac{2\sqrt{2}\rho^2\sqrt{\rho^2 + 1}} + {\pi\sqrt{\rho^2 + \rho\sqrt{\rho^2 + 1}}} + + The relativistic Breit-Wigner distribution is used in high energy physics + to model resonances [1]_. It gives the uncertainty in the invariant mass, + :math:`M` [2]_, of a resonance with characteristic mass :math:`M_0` and + decay-width :math:`\Gamma`, where :math:`M`, :math:`M_0` and :math:`\Gamma` + are expressed in natural units. In SciPy's parametrization, the shape + parameter :math:`\rho` is equal to :math:`M_0/\Gamma` and takes values in + :math:`(0, \infty)`. + + Equivalently, the relativistic Breit-Wigner distribution is said to give + the uncertainty in the center-of-mass energy :math:`E_{\text{cm}}`. In + natural units, the speed of light :math:`c` is equal to 1 and the invariant + mass :math:`M` is equal to the rest energy :math:`Mc^2`. In the + center-of-mass frame, the rest energy is equal to the total energy [3]_. + + %(after_notes)s + + :math:`\rho = M/\Gamma` and :math:`\Gamma` is the scale parameter. For + example, if one seeks to model the :math:`Z^0` boson with :math:`M_0 + \approx 91.1876 \text{ GeV}` and :math:`\Gamma \approx 2.4952\text{ GeV}` + [4]_ one can set ``rho=91.1876/2.4952`` and ``scale=2.4952``. 
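+
+    For instance, a minimal sketch of such a :math:`Z^0` model, using only
+    the values quoted above (the variable names and the sample size are
+    illustrative), is:
+
+    >>> import numpy as np
+    >>> from scipy.stats import rel_breitwigner
+    >>> rng = np.random.default_rng()
+    >>> rho, gamma = 91.1876 / 2.4952, 2.4952
+    >>> invariant_masses = rel_breitwigner.rvs(rho, scale=gamma, size=1000,
+    ...                                        random_state=rng)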
+ + To ensure a physically meaningful result when using the `fit` method, one + should set ``floc=0`` to fix the location parameter to 0. + + References + ---------- + .. [1] Relativistic Breit-Wigner distribution, Wikipedia, + https://en.wikipedia.org/wiki/Relativistic_Breit-Wigner_distribution + .. [2] Invariant mass, Wikipedia, + https://en.wikipedia.org/wiki/Invariant_mass + .. [3] Center-of-momentum frame, Wikipedia, + https://en.wikipedia.org/wiki/Center-of-momentum_frame + .. [4] M. Tanabashi et al. (Particle Data Group) Phys. Rev. D 98, 030001 - + Published 17 August 2018 + + %(example)s + + """ + def _argcheck(self, rho): + return rho > 0 + + def _shape_info(self): + return [_ShapeInfo("rho", False, (0, np.inf), (False, False))] + + def _pdf(self, x, rho): + # C = k / rho**2 + C = np.sqrt( + 2 * (1 + 1/rho**2) / (1 + np.sqrt(1 + 1/rho**2)) + ) * 2 / np.pi + with np.errstate(over='ignore'): + return C / (((x - rho)*(x + rho)/rho)**2 + 1) + + def _cdf(self, x, rho): + # C = k / (2 * rho**2) / np.sqrt(1 + 1/rho**2) + C = np.sqrt(2/(1 + np.sqrt(1 + 1/rho**2)))/np.pi + result = ( + np.sqrt(-1 + 1j/rho) + * np.arctan(x/np.sqrt(-rho*(rho + 1j))) + ) + result = C * 2 * np.imag(result) + # Sometimes above formula produces values greater than 1. + return np.clip(result, None, 1) + + def _munp(self, n, rho): + if n == 1: + # C = k / (2 * rho) + C = np.sqrt( + 2 * (1 + 1/rho**2) / (1 + np.sqrt(1 + 1/rho**2)) + ) / np.pi * rho + return C * (np.pi/2 + np.arctan(rho)) + if n == 2: + # C = pi * k / (4 * rho) + C = np.sqrt( + (1 + 1/rho**2) / (2 * (1 + np.sqrt(1 + 1/rho**2))) + ) * rho + result = (1 - rho * 1j) / np.sqrt(-1 - 1j/rho) + return 2 * C * np.real(result) + else: + return np.inf + + def _stats(self, rho): + # Returning None from stats makes public stats use _munp. + # nan values will be omitted from public stats. Skew and + # kurtosis are actually infinite. + return None, None, np.nan, np.nan + + @inherit_docstring_from(rv_continuous) + def fit(self, data, *args, **kwds): + # Override rv_continuous.fit to better handle case where floc is set. + data, _, floc, fscale = _check_fit_input_parameters( + self, data, args, kwds + ) + + censored = isinstance(data, CensoredData) + if censored: + if data.num_censored() == 0: + # There are no censored values in data, so replace the + # CensoredData instance with a regular array. + data = data._uncensored + censored = False + + if floc is None or censored: + return super().fit(data, *args, **kwds) + + if fscale is None: + # The interquartile range approximates the scale parameter gamma. + # The median approximates rho * gamma. + p25, p50, p75 = np.quantile(data - floc, [0.25, 0.5, 0.75]) + scale_0 = p75 - p25 + rho_0 = p50 / scale_0 + if not args: + args = [rho_0] + if "scale" not in kwds: + kwds["scale"] = scale_0 + else: + M_0 = np.median(data - floc) + rho_0 = M_0 / fscale + if not args: + args = [rho_0] + return super().fit(data, *args, **kwds) + + +rel_breitwigner = rel_breitwigner_gen(a=0.0, name="rel_breitwigner") + + +# Collect names of classes and objects in this module. 
+pairs = list(globals().copy().items()) +_distn_names, _distn_gen_names = get_distribution_names(pairs, rv_continuous) + +__all__ = _distn_names + _distn_gen_names + ['rv_histogram'] diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_covariance.py b/venv/lib/python3.10/site-packages/scipy/stats/_covariance.py new file mode 100644 index 0000000000000000000000000000000000000000..812a3ec62eff46a06ea4e058670081874e82c021 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_covariance.py @@ -0,0 +1,633 @@ +from functools import cached_property + +import numpy as np +from scipy import linalg +from scipy.stats import _multivariate + + +__all__ = ["Covariance"] + + +class Covariance: + """ + Representation of a covariance matrix + + Calculations involving covariance matrices (e.g. data whitening, + multivariate normal function evaluation) are often performed more + efficiently using a decomposition of the covariance matrix instead of the + covariance matrix itself. This class allows the user to construct an + object representing a covariance matrix using any of several + decompositions and perform calculations using a common interface. + + .. note:: + + The `Covariance` class cannot be instantiated directly. Instead, use + one of the factory methods (e.g. `Covariance.from_diagonal`). + + Examples + -------- + The `Covariance` class is is used by calling one of its + factory methods to create a `Covariance` object, then pass that + representation of the `Covariance` matrix as a shape parameter of a + multivariate distribution. + + For instance, the multivariate normal distribution can accept an array + representing a covariance matrix: + + >>> from scipy import stats + >>> import numpy as np + >>> d = [1, 2, 3] + >>> A = np.diag(d) # a diagonal covariance matrix + >>> x = [4, -2, 5] # a point of interest + >>> dist = stats.multivariate_normal(mean=[0, 0, 0], cov=A) + >>> dist.pdf(x) + 4.9595685102808205e-08 + + but the calculations are performed in a very generic way that does not + take advantage of any special properties of the covariance matrix. Because + our covariance matrix is diagonal, we can use ``Covariance.from_diagonal`` + to create an object representing the covariance matrix, and + `multivariate_normal` can use this to compute the probability density + function more efficiently. + + >>> cov = stats.Covariance.from_diagonal(d) + >>> dist = stats.multivariate_normal(mean=[0, 0, 0], cov=cov) + >>> dist.pdf(x) + 4.9595685102808205e-08 + + """ + def __init__(self): + message = ("The `Covariance` class cannot be instantiated directly. " + "Please use one of the factory methods " + "(e.g. `Covariance.from_diagonal`).") + raise NotImplementedError(message) + + @staticmethod + def from_diagonal(diagonal): + r""" + Return a representation of a covariance matrix from its diagonal. + + Parameters + ---------- + diagonal : array_like + The diagonal elements of a diagonal matrix. + + Notes + ----- + Let the diagonal elements of a diagonal covariance matrix :math:`D` be + stored in the vector :math:`d`. + + When all elements of :math:`d` are strictly positive, whitening of a + data point :math:`x` is performed by computing + :math:`x \cdot d^{-1/2}`, where the inverse square root can be taken + element-wise. + :math:`\log\det{D}` is calculated as :math:`-2 \sum(\log{d})`, + where the :math:`\log` operation is performed element-wise. + + This `Covariance` class supports singular covariance matrices. 
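+
+        For instance, a diagonal with a zero entry is accepted; the way the
+        zero is handled is described just below (an illustrative sketch, the
+        numbers are arbitrary):
+
+        >>> import numpy as np
+        >>> from scipy import stats
+        >>> cov = stats.Covariance.from_diagonal([1.0, 0.0, 4.0])
+        >>> int(cov.rank)
+        2
+        >>> np.allclose(cov.log_pdet, np.log(4.0))
+        True
+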
When + computing ``_log_pdet``, non-positive elements of :math:`d` are + ignored. Whitening is not well defined when the point to be whitened + does not lie in the span of the columns of the covariance matrix. The + convention taken here is to treat the inverse square root of + non-positive elements of :math:`d` as zeros. + + Examples + -------- + Prepare a symmetric positive definite covariance matrix ``A`` and a + data point ``x``. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> n = 5 + >>> A = np.diag(rng.random(n)) + >>> x = rng.random(size=n) + + Extract the diagonal from ``A`` and create the `Covariance` object. + + >>> d = np.diag(A) + >>> cov = stats.Covariance.from_diagonal(d) + + Compare the functionality of the `Covariance` object against a + reference implementations. + + >>> res = cov.whiten(x) + >>> ref = np.diag(d**-0.5) @ x + >>> np.allclose(res, ref) + True + >>> res = cov.log_pdet + >>> ref = np.linalg.slogdet(A)[-1] + >>> np.allclose(res, ref) + True + + """ + return CovViaDiagonal(diagonal) + + @staticmethod + def from_precision(precision, covariance=None): + r""" + Return a representation of a covariance from its precision matrix. + + Parameters + ---------- + precision : array_like + The precision matrix; that is, the inverse of a square, symmetric, + positive definite covariance matrix. + covariance : array_like, optional + The square, symmetric, positive definite covariance matrix. If not + provided, this may need to be calculated (e.g. to evaluate the + cumulative distribution function of + `scipy.stats.multivariate_normal`) by inverting `precision`. + + Notes + ----- + Let the covariance matrix be :math:`A`, its precision matrix be + :math:`P = A^{-1}`, and :math:`L` be the lower Cholesky factor such + that :math:`L L^T = P`. + Whitening of a data point :math:`x` is performed by computing + :math:`x^T L`. :math:`\log\det{A}` is calculated as + :math:`-2tr(\log{L})`, where the :math:`\log` operation is performed + element-wise. + + This `Covariance` class does not support singular covariance matrices + because the precision matrix does not exist for a singular covariance + matrix. + + Examples + -------- + Prepare a symmetric positive definite precision matrix ``P`` and a + data point ``x``. (If the precision matrix is not already available, + consider the other factory methods of the `Covariance` class.) + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> n = 5 + >>> P = rng.random(size=(n, n)) + >>> P = P @ P.T # a precision matrix must be positive definite + >>> x = rng.random(size=n) + + Create the `Covariance` object. + + >>> cov = stats.Covariance.from_precision(P) + + Compare the functionality of the `Covariance` object against + reference implementations. + + >>> res = cov.whiten(x) + >>> ref = x @ np.linalg.cholesky(P) + >>> np.allclose(res, ref) + True + >>> res = cov.log_pdet + >>> ref = -np.linalg.slogdet(P)[-1] + >>> np.allclose(res, ref) + True + + """ + return CovViaPrecision(precision, covariance) + + @staticmethod + def from_cholesky(cholesky): + r""" + Representation of a covariance provided via the (lower) Cholesky factor + + Parameters + ---------- + cholesky : array_like + The lower triangular Cholesky factor of the covariance matrix. + + Notes + ----- + Let the covariance matrix be :math:`A` and :math:`L` be the lower + Cholesky factor such that :math:`L L^T = A`. + Whitening of a data point :math:`x` is performed by computing + :math:`L^{-1} x`. 
:math:`\log\det{A}` is calculated as + :math:`2tr(\log{L})`, where the :math:`\log` operation is performed + element-wise. + + This `Covariance` class does not support singular covariance matrices + because the Cholesky decomposition does not exist for a singular + covariance matrix. + + Examples + -------- + Prepare a symmetric positive definite covariance matrix ``A`` and a + data point ``x``. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> n = 5 + >>> A = rng.random(size=(n, n)) + >>> A = A @ A.T # make the covariance symmetric positive definite + >>> x = rng.random(size=n) + + Perform the Cholesky decomposition of ``A`` and create the + `Covariance` object. + + >>> L = np.linalg.cholesky(A) + >>> cov = stats.Covariance.from_cholesky(L) + + Compare the functionality of the `Covariance` object against + reference implementation. + + >>> from scipy.linalg import solve_triangular + >>> res = cov.whiten(x) + >>> ref = solve_triangular(L, x, lower=True) + >>> np.allclose(res, ref) + True + >>> res = cov.log_pdet + >>> ref = np.linalg.slogdet(A)[-1] + >>> np.allclose(res, ref) + True + + """ + return CovViaCholesky(cholesky) + + @staticmethod + def from_eigendecomposition(eigendecomposition): + r""" + Representation of a covariance provided via eigendecomposition + + Parameters + ---------- + eigendecomposition : sequence + A sequence (nominally a tuple) containing the eigenvalue and + eigenvector arrays as computed by `scipy.linalg.eigh` or + `numpy.linalg.eigh`. + + Notes + ----- + Let the covariance matrix be :math:`A`, let :math:`V` be matrix of + eigenvectors, and let :math:`W` be the diagonal matrix of eigenvalues + such that `V W V^T = A`. + + When all of the eigenvalues are strictly positive, whitening of a + data point :math:`x` is performed by computing + :math:`x^T (V W^{-1/2})`, where the inverse square root can be taken + element-wise. + :math:`\log\det{A}` is calculated as :math:`tr(\log{W})`, + where the :math:`\log` operation is performed element-wise. + + This `Covariance` class supports singular covariance matrices. When + computing ``_log_pdet``, non-positive eigenvalues are ignored. + Whitening is not well defined when the point to be whitened + does not lie in the span of the columns of the covariance matrix. The + convention taken here is to treat the inverse square root of + non-positive eigenvalues as zeros. + + Examples + -------- + Prepare a symmetric positive definite covariance matrix ``A`` and a + data point ``x``. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> n = 5 + >>> A = rng.random(size=(n, n)) + >>> A = A @ A.T # make the covariance symmetric positive definite + >>> x = rng.random(size=n) + + Perform the eigendecomposition of ``A`` and create the `Covariance` + object. + + >>> w, v = np.linalg.eigh(A) + >>> cov = stats.Covariance.from_eigendecomposition((w, v)) + + Compare the functionality of the `Covariance` object against + reference implementations. + + >>> res = cov.whiten(x) + >>> ref = x @ (v @ np.diag(w**-0.5)) + >>> np.allclose(res, ref) + True + >>> res = cov.log_pdet + >>> ref = np.linalg.slogdet(A)[-1] + >>> np.allclose(res, ref) + True + + """ + return CovViaEigendecomposition(eigendecomposition) + + def whiten(self, x): + """ + Perform a whitening transformation on data. 
+ + "Whitening" ("white" as in "white noise", in which each frequency has + equal magnitude) transforms a set of random variables into a new set of + random variables with unit-diagonal covariance. When a whitening + transform is applied to a sample of points distributed according to + a multivariate normal distribution with zero mean, the covariance of + the transformed sample is approximately the identity matrix. + + Parameters + ---------- + x : array_like + An array of points. The last dimension must correspond with the + dimensionality of the space, i.e., the number of columns in the + covariance matrix. + + Returns + ------- + x_ : array_like + The transformed array of points. + + References + ---------- + .. [1] "Whitening Transformation". Wikipedia. + https://en.wikipedia.org/wiki/Whitening_transformation + .. [2] Novak, Lukas, and Miroslav Vorechovsky. "Generalization of + coloring linear transformation". Transactions of VSB 18.2 + (2018): 31-35. :doi:`10.31490/tces-2018-0013` + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> n = 3 + >>> A = rng.random(size=(n, n)) + >>> cov_array = A @ A.T # make matrix symmetric positive definite + >>> precision = np.linalg.inv(cov_array) + >>> cov_object = stats.Covariance.from_precision(precision) + >>> x = rng.multivariate_normal(np.zeros(n), cov_array, size=(10000)) + >>> x_ = cov_object.whiten(x) + >>> np.cov(x_, rowvar=False) # near-identity covariance + array([[0.97862122, 0.00893147, 0.02430451], + [0.00893147, 0.96719062, 0.02201312], + [0.02430451, 0.02201312, 0.99206881]]) + + """ + return self._whiten(np.asarray(x)) + + def colorize(self, x): + """ + Perform a colorizing transformation on data. + + "Colorizing" ("color" as in "colored noise", in which different + frequencies may have different magnitudes) transforms a set of + uncorrelated random variables into a new set of random variables with + the desired covariance. When a coloring transform is applied to a + sample of points distributed according to a multivariate normal + distribution with identity covariance and zero mean, the covariance of + the transformed sample is approximately the covariance matrix used + in the coloring transform. + + Parameters + ---------- + x : array_like + An array of points. The last dimension must correspond with the + dimensionality of the space, i.e., the number of columns in the + covariance matrix. + + Returns + ------- + x_ : array_like + The transformed array of points. + + References + ---------- + .. [1] "Whitening Transformation". Wikipedia. + https://en.wikipedia.org/wiki/Whitening_transformation + .. [2] Novak, Lukas, and Miroslav Vorechovsky. "Generalization of + coloring linear transformation". Transactions of VSB 18.2 + (2018): 31-35. 
:doi:`10.31490/tces-2018-0013` + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng(1638083107694713882823079058616272161) + >>> n = 3 + >>> A = rng.random(size=(n, n)) + >>> cov_array = A @ A.T # make matrix symmetric positive definite + >>> cholesky = np.linalg.cholesky(cov_array) + >>> cov_object = stats.Covariance.from_cholesky(cholesky) + >>> x = rng.multivariate_normal(np.zeros(n), np.eye(n), size=(10000)) + >>> x_ = cov_object.colorize(x) + >>> cov_data = np.cov(x_, rowvar=False) + >>> np.allclose(cov_data, cov_array, rtol=3e-2) + True + """ + return self._colorize(np.asarray(x)) + + @property + def log_pdet(self): + """ + Log of the pseudo-determinant of the covariance matrix + """ + return np.array(self._log_pdet, dtype=float)[()] + + @property + def rank(self): + """ + Rank of the covariance matrix + """ + return np.array(self._rank, dtype=int)[()] + + @property + def covariance(self): + """ + Explicit representation of the covariance matrix + """ + return self._covariance + + @property + def shape(self): + """ + Shape of the covariance array + """ + return self._shape + + def _validate_matrix(self, A, name): + A = np.atleast_2d(A) + m, n = A.shape[-2:] + if m != n or A.ndim != 2 or not (np.issubdtype(A.dtype, np.integer) or + np.issubdtype(A.dtype, np.floating)): + message = (f"The input `{name}` must be a square, " + "two-dimensional array of real numbers.") + raise ValueError(message) + return A + + def _validate_vector(self, A, name): + A = np.atleast_1d(A) + if A.ndim != 1 or not (np.issubdtype(A.dtype, np.integer) or + np.issubdtype(A.dtype, np.floating)): + message = (f"The input `{name}` must be a one-dimensional array " + "of real numbers.") + raise ValueError(message) + return A + + +class CovViaPrecision(Covariance): + + def __init__(self, precision, covariance=None): + precision = self._validate_matrix(precision, 'precision') + if covariance is not None: + covariance = self._validate_matrix(covariance, 'covariance') + message = "`precision.shape` must equal `covariance.shape`." + if precision.shape != covariance.shape: + raise ValueError(message) + + self._chol_P = np.linalg.cholesky(precision) + self._log_pdet = -2*np.log(np.diag(self._chol_P)).sum(axis=-1) + self._rank = precision.shape[-1] # must be full rank if invertible + self._precision = precision + self._cov_matrix = covariance + self._shape = precision.shape + self._allow_singular = False + + def _whiten(self, x): + return x @ self._chol_P + + @cached_property + def _covariance(self): + n = self._shape[-1] + return (linalg.cho_solve((self._chol_P, True), np.eye(n)) + if self._cov_matrix is None else self._cov_matrix) + + def _colorize(self, x): + return linalg.solve_triangular(self._chol_P.T, x.T, lower=False).T + + +def _dot_diag(x, d): + # If d were a full diagonal matrix, x @ d would always do what we want. + # Special treatment is needed for n-dimensional `d` in which each row + # includes only the diagonal elements of a covariance matrix. 
+ return x * d if x.ndim < 2 else x * np.expand_dims(d, -2) + + +class CovViaDiagonal(Covariance): + + def __init__(self, diagonal): + diagonal = self._validate_vector(diagonal, 'diagonal') + + i_zero = diagonal <= 0 + positive_diagonal = np.array(diagonal, dtype=np.float64) + + positive_diagonal[i_zero] = 1 # ones don't affect determinant + self._log_pdet = np.sum(np.log(positive_diagonal), axis=-1) + + psuedo_reciprocals = 1 / np.sqrt(positive_diagonal) + psuedo_reciprocals[i_zero] = 0 + + self._sqrt_diagonal = np.sqrt(diagonal) + self._LP = psuedo_reciprocals + self._rank = positive_diagonal.shape[-1] - i_zero.sum(axis=-1) + self._covariance = np.apply_along_axis(np.diag, -1, diagonal) + self._i_zero = i_zero + self._shape = self._covariance.shape + self._allow_singular = True + + def _whiten(self, x): + return _dot_diag(x, self._LP) + + def _colorize(self, x): + return _dot_diag(x, self._sqrt_diagonal) + + def _support_mask(self, x): + """ + Check whether x lies in the support of the distribution. + """ + return ~np.any(_dot_diag(x, self._i_zero), axis=-1) + + +class CovViaCholesky(Covariance): + + def __init__(self, cholesky): + L = self._validate_matrix(cholesky, 'cholesky') + + self._factor = L + self._log_pdet = 2*np.log(np.diag(self._factor)).sum(axis=-1) + self._rank = L.shape[-1] # must be full rank for cholesky + self._shape = L.shape + self._allow_singular = False + + @cached_property + def _covariance(self): + return self._factor @ self._factor.T + + def _whiten(self, x): + res = linalg.solve_triangular(self._factor, x.T, lower=True).T + return res + + def _colorize(self, x): + return x @ self._factor.T + + +class CovViaEigendecomposition(Covariance): + + def __init__(self, eigendecomposition): + eigenvalues, eigenvectors = eigendecomposition + eigenvalues = self._validate_vector(eigenvalues, 'eigenvalues') + eigenvectors = self._validate_matrix(eigenvectors, 'eigenvectors') + message = ("The shapes of `eigenvalues` and `eigenvectors` " + "must be compatible.") + try: + eigenvalues = np.expand_dims(eigenvalues, -2) + eigenvectors, eigenvalues = np.broadcast_arrays(eigenvectors, + eigenvalues) + eigenvalues = eigenvalues[..., 0, :] + except ValueError: + raise ValueError(message) + + i_zero = eigenvalues <= 0 + positive_eigenvalues = np.array(eigenvalues, dtype=np.float64) + + positive_eigenvalues[i_zero] = 1 # ones don't affect determinant + self._log_pdet = np.sum(np.log(positive_eigenvalues), axis=-1) + + psuedo_reciprocals = 1 / np.sqrt(positive_eigenvalues) + psuedo_reciprocals[i_zero] = 0 + + self._LP = eigenvectors * psuedo_reciprocals + self._LA = eigenvectors * np.sqrt(eigenvalues) + self._rank = positive_eigenvalues.shape[-1] - i_zero.sum(axis=-1) + self._w = eigenvalues + self._v = eigenvectors + self._shape = eigenvectors.shape + self._null_basis = eigenvectors * i_zero + # This is only used for `_support_mask`, not to decide whether + # the covariance is singular or not. + self._eps = _multivariate._eigvalsh_to_eps(eigenvalues) * 10**3 + self._allow_singular = True + + def _whiten(self, x): + return x @ self._LP + + def _colorize(self, x): + return x @ self._LA.T + + @cached_property + def _covariance(self): + return (self._v * self._w) @ self._v.T + + def _support_mask(self, x): + """ + Check whether x lies in the support of the distribution. 
+ """ + residual = np.linalg.norm(x @ self._null_basis, axis=-1) + in_support = residual < self._eps + return in_support + + +class CovViaPSD(Covariance): + """ + Representation of a covariance provided via an instance of _PSD + """ + + def __init__(self, psd): + self._LP = psd.U + self._log_pdet = psd.log_pdet + self._rank = psd.rank + self._covariance = psd._M + self._shape = psd._M.shape + self._psd = psd + self._allow_singular = False # by default + + def _whiten(self, x): + return x @ self._LP + + def _support_mask(self, x): + return self._psd._support_mask(x) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_crosstab.py b/venv/lib/python3.10/site-packages/scipy/stats/_crosstab.py new file mode 100644 index 0000000000000000000000000000000000000000..ee762a2700bf3e13bc251c5287630c4a237aa2ae --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_crosstab.py @@ -0,0 +1,204 @@ +import numpy as np +from scipy.sparse import coo_matrix +from scipy._lib._bunch import _make_tuple_bunch + + +CrosstabResult = _make_tuple_bunch( + "CrosstabResult", ["elements", "count"] +) + + +def crosstab(*args, levels=None, sparse=False): + """ + Return table of counts for each possible unique combination in ``*args``. + + When ``len(args) > 1``, the array computed by this function is + often referred to as a *contingency table* [1]_. + + The arguments must be sequences with the same length. The second return + value, `count`, is an integer array with ``len(args)`` dimensions. If + `levels` is None, the shape of `count` is ``(n0, n1, ...)``, where ``nk`` + is the number of unique elements in ``args[k]``. + + Parameters + ---------- + *args : sequences + A sequence of sequences whose unique aligned elements are to be + counted. The sequences in args must all be the same length. + levels : sequence, optional + If `levels` is given, it must be a sequence that is the same length as + `args`. Each element in `levels` is either a sequence or None. If it + is a sequence, it gives the values in the corresponding sequence in + `args` that are to be counted. If any value in the sequences in `args` + does not occur in the corresponding sequence in `levels`, that value + is ignored and not counted in the returned array `count`. The default + value of `levels` for ``args[i]`` is ``np.unique(args[i])`` + sparse : bool, optional + If True, return a sparse matrix. The matrix will be an instance of + the `scipy.sparse.coo_matrix` class. Because SciPy's sparse matrices + must be 2-d, only two input sequences are allowed when `sparse` is + True. Default is False. + + Returns + ------- + res : CrosstabResult + An object containing the following attributes: + + elements : tuple of numpy.ndarrays. + Tuple of length ``len(args)`` containing the arrays of elements + that are counted in `count`. These can be interpreted as the + labels of the corresponding dimensions of `count`. If `levels` was + given, then if ``levels[i]`` is not None, ``elements[i]`` will + hold the values given in ``levels[i]``. + count : numpy.ndarray or scipy.sparse.coo_matrix + Counts of the unique elements in ``zip(*args)``, stored in an + array. Also known as a *contingency table* when ``len(args) > 1``. + + See Also + -------- + numpy.unique + + Notes + ----- + .. versionadded:: 1.7.0 + + References + ---------- + .. 
[1] "Contingency table", http://en.wikipedia.org/wiki/Contingency_table + + Examples + -------- + >>> from scipy.stats.contingency import crosstab + + Given the lists `a` and `x`, create a contingency table that counts the + frequencies of the corresponding pairs. + + >>> a = ['A', 'B', 'A', 'A', 'B', 'B', 'A', 'A', 'B', 'B'] + >>> x = ['X', 'X', 'X', 'Y', 'Z', 'Z', 'Y', 'Y', 'Z', 'Z'] + >>> res = crosstab(a, x) + >>> avals, xvals = res.elements + >>> avals + array(['A', 'B'], dtype='>> xvals + array(['X', 'Y', 'Z'], dtype='>> res.count + array([[2, 3, 0], + [1, 0, 4]]) + + So `('A', 'X')` occurs twice, `('A', 'Y')` occurs three times, etc. + + Higher dimensional contingency tables can be created. + + >>> p = [0, 0, 0, 0, 1, 1, 1, 0, 0, 1] + >>> res = crosstab(a, x, p) + >>> res.count + array([[[2, 0], + [2, 1], + [0, 0]], + [[1, 0], + [0, 0], + [1, 3]]]) + >>> res.count.shape + (2, 3, 2) + + The values to be counted can be set by using the `levels` argument. + It allows the elements of interest in each input sequence to be + given explicitly instead finding the unique elements of the sequence. + + For example, suppose one of the arguments is an array containing the + answers to a survey question, with integer values 1 to 4. Even if the + value 1 does not occur in the data, we want an entry for it in the table. + + >>> q1 = [2, 3, 3, 2, 4, 4, 2, 3, 4, 4, 4, 3, 3, 3, 4] # 1 does not occur. + >>> q2 = [4, 4, 2, 2, 2, 4, 1, 1, 2, 2, 4, 2, 2, 2, 4] # 3 does not occur. + >>> options = [1, 2, 3, 4] + >>> res = crosstab(q1, q2, levels=(options, options)) + >>> res.count + array([[0, 0, 0, 0], + [1, 1, 0, 1], + [1, 4, 0, 1], + [0, 3, 0, 3]]) + + If `levels` is given, but an element of `levels` is None, the unique values + of the corresponding argument are used. For example, + + >>> res = crosstab(q1, q2, levels=(None, options)) + >>> res.elements + [array([2, 3, 4]), [1, 2, 3, 4]] + >>> res.count + array([[1, 1, 0, 1], + [1, 4, 0, 1], + [0, 3, 0, 3]]) + + If we want to ignore the pairs where 4 occurs in ``q2``, we can + give just the values [1, 2] to `levels`, and the 4 will be ignored: + + >>> res = crosstab(q1, q2, levels=(None, [1, 2])) + >>> res.elements + [array([2, 3, 4]), [1, 2]] + >>> res.count + array([[1, 1], + [1, 4], + [0, 3]]) + + Finally, let's repeat the first example, but return a sparse matrix: + + >>> res = crosstab(a, x, sparse=True) + >>> res.count + <2x3 sparse matrix of type '' + with 4 stored elements in COOrdinate format> + >>> res.count.A + array([[2, 3, 0], + [1, 0, 4]]) + + """ + nargs = len(args) + if nargs == 0: + raise TypeError("At least one input sequence is required.") + + len0 = len(args[0]) + if not all(len(a) == len0 for a in args[1:]): + raise ValueError("All input sequences must have the same length.") + + if sparse and nargs != 2: + raise ValueError("When `sparse` is True, only two input sequences " + "are allowed.") + + if levels is None: + # Call np.unique with return_inverse=True on each argument. + actual_levels, indices = zip(*[np.unique(a, return_inverse=True) + for a in args]) + else: + # `levels` is not None... 
+ if len(levels) != nargs: + raise ValueError('len(levels) must equal the number of input ' + 'sequences') + + args = [np.asarray(arg) for arg in args] + mask = np.zeros((nargs, len0), dtype=np.bool_) + inv = np.zeros((nargs, len0), dtype=np.intp) + actual_levels = [] + for k, (levels_list, arg) in enumerate(zip(levels, args)): + if levels_list is None: + levels_list, inv[k, :] = np.unique(arg, return_inverse=True) + mask[k, :] = True + else: + q = arg == np.asarray(levels_list).reshape(-1, 1) + mask[k, :] = np.any(q, axis=0) + qnz = q.T.nonzero() + inv[k, qnz[0]] = qnz[1] + actual_levels.append(levels_list) + + mask_all = mask.all(axis=0) + indices = tuple(inv[:, mask_all]) + + if sparse: + count = coo_matrix((np.ones(len(indices[0]), dtype=int), + (indices[0], indices[1]))) + count.sum_duplicates() + else: + shape = [len(u) for u in actual_levels] + count = np.zeros(shape, dtype=int) + np.add.at(count, indices, 1) + + return CrosstabResult(actual_levels, count) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_discrete_distns.py b/venv/lib/python3.10/site-packages/scipy/stats/_discrete_distns.py new file mode 100644 index 0000000000000000000000000000000000000000..169222855fddba03dc39248cd5e441e981719758 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_discrete_distns.py @@ -0,0 +1,1952 @@ +# +# Author: Travis Oliphant 2002-2011 with contributions from +# SciPy Developers 2004-2011 +# +from functools import partial + +from scipy import special +from scipy.special import entr, logsumexp, betaln, gammaln as gamln, zeta +from scipy._lib._util import _lazywhere, rng_integers +from scipy.interpolate import interp1d + +from numpy import floor, ceil, log, exp, sqrt, log1p, expm1, tanh, cosh, sinh + +import numpy as np + +from ._distn_infrastructure import (rv_discrete, get_distribution_names, + _check_shape, _ShapeInfo) +import scipy.stats._boost as _boost +from ._biasedurn import (_PyFishersNCHypergeometric, + _PyWalleniusNCHypergeometric, + _PyStochasticLib3) + + +def _isintegral(x): + return x == np.round(x) + + +class binom_gen(rv_discrete): + r"""A binomial discrete random variable. + + %(before_notes)s + + Notes + ----- + The probability mass function for `binom` is: + + .. math:: + + f(k) = \binom{n}{k} p^k (1-p)^{n-k} + + for :math:`k \in \{0, 1, \dots, n\}`, :math:`0 \leq p \leq 1` + + `binom` takes :math:`n` and :math:`p` as shape parameters, + where :math:`p` is the probability of a single success + and :math:`1-p` is the probability of a single failure. 
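+
+    As a quick numerical check of the mass function above (an illustrative
+    sketch; the parameter values are arbitrary), the closed form can be
+    compared against ``binom.pmf`` directly:
+
+    >>> import numpy as np
+    >>> from scipy.special import comb
+    >>> from scipy.stats import binom
+    >>> n, p, k = 7, 0.4, 3
+    >>> np.allclose(binom.pmf(k, n, p), comb(n, k) * p**k * (1 - p)**(n - k))
+    True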
+ + %(after_notes)s + + %(example)s + + See Also + -------- + hypergeom, nbinom, nhypergeom + + """ + def _shape_info(self): + return [_ShapeInfo("n", True, (0, np.inf), (True, False)), + _ShapeInfo("p", False, (0, 1), (True, True))] + + def _rvs(self, n, p, size=None, random_state=None): + return random_state.binomial(n, p, size) + + def _argcheck(self, n, p): + return (n >= 0) & _isintegral(n) & (p >= 0) & (p <= 1) + + def _get_support(self, n, p): + return self.a, n + + def _logpmf(self, x, n, p): + k = floor(x) + combiln = (gamln(n+1) - (gamln(k+1) + gamln(n-k+1))) + return combiln + special.xlogy(k, p) + special.xlog1py(n-k, -p) + + def _pmf(self, x, n, p): + # binom.pmf(k) = choose(n, k) * p**k * (1-p)**(n-k) + return _boost._binom_pdf(x, n, p) + + def _cdf(self, x, n, p): + k = floor(x) + return _boost._binom_cdf(k, n, p) + + def _sf(self, x, n, p): + k = floor(x) + return _boost._binom_sf(k, n, p) + + def _isf(self, x, n, p): + return _boost._binom_isf(x, n, p) + + def _ppf(self, q, n, p): + return _boost._binom_ppf(q, n, p) + + def _stats(self, n, p, moments='mv'): + mu = _boost._binom_mean(n, p) + var = _boost._binom_variance(n, p) + g1, g2 = None, None + if 's' in moments: + g1 = _boost._binom_skewness(n, p) + if 'k' in moments: + g2 = _boost._binom_kurtosis_excess(n, p) + return mu, var, g1, g2 + + def _entropy(self, n, p): + k = np.r_[0:n + 1] + vals = self._pmf(k, n, p) + return np.sum(entr(vals), axis=0) + + +binom = binom_gen(name='binom') + + +class bernoulli_gen(binom_gen): + r"""A Bernoulli discrete random variable. + + %(before_notes)s + + Notes + ----- + The probability mass function for `bernoulli` is: + + .. math:: + + f(k) = \begin{cases}1-p &\text{if } k = 0\\ + p &\text{if } k = 1\end{cases} + + for :math:`k` in :math:`\{0, 1\}`, :math:`0 \leq p \leq 1` + + `bernoulli` takes :math:`p` as shape parameter, + where :math:`p` is the probability of a single success + and :math:`1-p` is the probability of a single failure. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("p", False, (0, 1), (True, True))] + + def _rvs(self, p, size=None, random_state=None): + return binom_gen._rvs(self, 1, p, size=size, random_state=random_state) + + def _argcheck(self, p): + return (p >= 0) & (p <= 1) + + def _get_support(self, p): + # Overrides binom_gen._get_support!x + return self.a, self.b + + def _logpmf(self, x, p): + return binom._logpmf(x, 1, p) + + def _pmf(self, x, p): + # bernoulli.pmf(k) = 1-p if k = 0 + # = p if k = 1 + return binom._pmf(x, 1, p) + + def _cdf(self, x, p): + return binom._cdf(x, 1, p) + + def _sf(self, x, p): + return binom._sf(x, 1, p) + + def _isf(self, x, p): + return binom._isf(x, 1, p) + + def _ppf(self, q, p): + return binom._ppf(q, 1, p) + + def _stats(self, p): + return binom._stats(1, p) + + def _entropy(self, p): + return entr(p) + entr(1-p) + + +bernoulli = bernoulli_gen(b=1, name='bernoulli') + + +class betabinom_gen(rv_discrete): + r"""A beta-binomial discrete random variable. + + %(before_notes)s + + Notes + ----- + The beta-binomial distribution is a binomial distribution with a + probability of success `p` that follows a beta distribution. + + The probability mass function for `betabinom` is: + + .. math:: + + f(k) = \binom{n}{k} \frac{B(k + a, n - k + b)}{B(a, b)} + + for :math:`k \in \{0, 1, \dots, n\}`, :math:`n \geq 0`, :math:`a > 0`, + :math:`b > 0`, where :math:`B(a, b)` is the beta function. + + `betabinom` takes :math:`n`, :math:`a`, and :math:`b` as shape parameters. 
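+
+    For example, with :math:`a = b = 1` the mixing beta distribution is
+    uniform on :math:`(0, 1)`, and the beta-binomial reduces to a discrete
+    uniform distribution on :math:`\{0, 1, \dots, n\}` (an illustrative
+    check; ``n = 4`` is arbitrary):
+
+    >>> import numpy as np
+    >>> from scipy.stats import betabinom
+    >>> n = 4
+    >>> np.allclose(betabinom.pmf(np.arange(n + 1), n, 1, 1), 1 / (n + 1))
+    True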
+ + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Beta-binomial_distribution + + %(after_notes)s + + .. versionadded:: 1.4.0 + + See Also + -------- + beta, binom + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("n", True, (0, np.inf), (True, False)), + _ShapeInfo("a", False, (0, np.inf), (False, False)), + _ShapeInfo("b", False, (0, np.inf), (False, False))] + + def _rvs(self, n, a, b, size=None, random_state=None): + p = random_state.beta(a, b, size) + return random_state.binomial(n, p, size) + + def _get_support(self, n, a, b): + return 0, n + + def _argcheck(self, n, a, b): + return (n >= 0) & _isintegral(n) & (a > 0) & (b > 0) + + def _logpmf(self, x, n, a, b): + k = floor(x) + combiln = -log(n + 1) - betaln(n - k + 1, k + 1) + return combiln + betaln(k + a, n - k + b) - betaln(a, b) + + def _pmf(self, x, n, a, b): + return exp(self._logpmf(x, n, a, b)) + + def _stats(self, n, a, b, moments='mv'): + e_p = a / (a + b) + e_q = 1 - e_p + mu = n * e_p + var = n * (a + b + n) * e_p * e_q / (a + b + 1) + g1, g2 = None, None + if 's' in moments: + g1 = 1.0 / sqrt(var) + g1 *= (a + b + 2 * n) * (b - a) + g1 /= (a + b + 2) * (a + b) + if 'k' in moments: + g2 = (a + b).astype(e_p.dtype) + g2 *= (a + b - 1 + 6 * n) + g2 += 3 * a * b * (n - 2) + g2 += 6 * n ** 2 + g2 -= 3 * e_p * b * n * (6 - n) + g2 -= 18 * e_p * e_q * n ** 2 + g2 *= (a + b) ** 2 * (1 + a + b) + g2 /= (n * a * b * (a + b + 2) * (a + b + 3) * (a + b + n)) + g2 -= 3 + return mu, var, g1, g2 + + +betabinom = betabinom_gen(name='betabinom') + + +class nbinom_gen(rv_discrete): + r"""A negative binomial discrete random variable. + + %(before_notes)s + + Notes + ----- + Negative binomial distribution describes a sequence of i.i.d. Bernoulli + trials, repeated until a predefined, non-random number of successes occurs. + + The probability mass function of the number of failures for `nbinom` is: + + .. math:: + + f(k) = \binom{k+n-1}{n-1} p^n (1-p)^k + + for :math:`k \ge 0`, :math:`0 < p \leq 1` + + `nbinom` takes :math:`n` and :math:`p` as shape parameters where :math:`n` + is the number of successes, :math:`p` is the probability of a single + success, and :math:`1-p` is the probability of a single failure. + + Another common parameterization of the negative binomial distribution is + in terms of the mean number of failures :math:`\mu` to achieve :math:`n` + successes. The mean :math:`\mu` is related to the probability of success + as + + .. math:: + + p = \frac{n}{n + \mu} + + The number of successes :math:`n` may also be specified in terms of a + "dispersion", "heterogeneity", or "aggregation" parameter :math:`\alpha`, + which relates the mean :math:`\mu` to the variance :math:`\sigma^2`, + e.g. :math:`\sigma^2 = \mu + \alpha \mu^2`. Regardless of the convention + used for :math:`\alpha`, + + .. 
math:: + + p &= \frac{\mu}{\sigma^2} \\ + n &= \frac{\mu^2}{\sigma^2 - \mu} + + %(after_notes)s + + %(example)s + + See Also + -------- + hypergeom, binom, nhypergeom + + """ + def _shape_info(self): + return [_ShapeInfo("n", True, (0, np.inf), (True, False)), + _ShapeInfo("p", False, (0, 1), (True, True))] + + def _rvs(self, n, p, size=None, random_state=None): + return random_state.negative_binomial(n, p, size) + + def _argcheck(self, n, p): + return (n > 0) & (p > 0) & (p <= 1) + + def _pmf(self, x, n, p): + # nbinom.pmf(k) = choose(k+n-1, n-1) * p**n * (1-p)**k + return _boost._nbinom_pdf(x, n, p) + + def _logpmf(self, x, n, p): + coeff = gamln(n+x) - gamln(x+1) - gamln(n) + return coeff + n*log(p) + special.xlog1py(x, -p) + + def _cdf(self, x, n, p): + k = floor(x) + return _boost._nbinom_cdf(k, n, p) + + def _logcdf(self, x, n, p): + k = floor(x) + k, n, p = np.broadcast_arrays(k, n, p) + cdf = self._cdf(k, n, p) + cond = cdf > 0.5 + def f1(k, n, p): + return np.log1p(-special.betainc(k + 1, n, 1 - p)) + + # do calc in place + logcdf = cdf + with np.errstate(divide='ignore'): + logcdf[cond] = f1(k[cond], n[cond], p[cond]) + logcdf[~cond] = np.log(cdf[~cond]) + return logcdf + + def _sf(self, x, n, p): + k = floor(x) + return _boost._nbinom_sf(k, n, p) + + def _isf(self, x, n, p): + with np.errstate(over='ignore'): # see gh-17432 + return _boost._nbinom_isf(x, n, p) + + def _ppf(self, q, n, p): + with np.errstate(over='ignore'): # see gh-17432 + return _boost._nbinom_ppf(q, n, p) + + def _stats(self, n, p): + return ( + _boost._nbinom_mean(n, p), + _boost._nbinom_variance(n, p), + _boost._nbinom_skewness(n, p), + _boost._nbinom_kurtosis_excess(n, p), + ) + + +nbinom = nbinom_gen(name='nbinom') + + +class betanbinom_gen(rv_discrete): + r"""A beta-negative-binomial discrete random variable. + + %(before_notes)s + + Notes + ----- + The beta-negative-binomial distribution is a negative binomial + distribution with a probability of success `p` that follows a + beta distribution. + + The probability mass function for `betanbinom` is: + + .. math:: + + f(k) = \binom{n + k - 1}{k} \frac{B(a + n, b + k)}{B(a, b)} + + for :math:`k \ge 0`, :math:`n \geq 0`, :math:`a > 0`, + :math:`b > 0`, where :math:`B(a, b)` is the beta function. + + `betanbinom` takes :math:`n`, :math:`a`, and :math:`b` as shape parameters. + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Beta_negative_binomial_distribution + + %(after_notes)s + + .. versionadded:: 1.12.0 + + See Also + -------- + betabinom : Beta binomial distribution + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("n", True, (0, np.inf), (True, False)), + _ShapeInfo("a", False, (0, np.inf), (False, False)), + _ShapeInfo("b", False, (0, np.inf), (False, False))] + + def _rvs(self, n, a, b, size=None, random_state=None): + p = random_state.beta(a, b, size) + return random_state.negative_binomial(n, p, size) + + def _argcheck(self, n, a, b): + return (n >= 0) & _isintegral(n) & (a > 0) & (b > 0) + + def _logpmf(self, x, n, a, b): + k = floor(x) + combiln = -np.log(n + k) - betaln(n, k + 1) + return combiln + betaln(a + n, b + k) - betaln(a, b) + + def _pmf(self, x, n, a, b): + return exp(self._logpmf(x, n, a, b)) + + def _stats(self, n, a, b, moments='mv'): + # reference: Wolfram Alpha input + # BetaNegativeBinomialDistribution[a, b, n] + def mean(n, a, b): + return n * b / (a - 1.) + mu = _lazywhere(a > 1, (n, a, b), f=mean, fillvalue=np.inf) + def var(n, a, b): + return (n * b * (n + a - 1.) * (a + b - 1.) 
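# Editorial sketch (not part of the patch) of the alternative parameterization
# of nbinom described in the Notes above: given a mean mu and a variance
# sigma2 > mu, the shape parameters are p = mu / sigma2 and
# n = mu**2 / (sigma2 - mu).  mu and sigma2 are arbitrary example values.
import numpy as np
from scipy.stats import nbinom

mu, sigma2 = 4.0, 10.0
p = mu / sigma2
n = mu**2 / (sigma2 - mu)
mean, var = nbinom.stats(n, p, moments='mv')
assert np.isclose(mean, mu) and np.isclose(var, sigma2)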
+ / ((a - 2.) * (a - 1.)**2.)) + var = _lazywhere(a > 2, (n, a, b), f=var, fillvalue=np.inf) + g1, g2 = None, None + def skew(n, a, b): + return ((2 * n + a - 1.) * (2 * b + a - 1.) + / (a - 3.) / sqrt(n * b * (n + a - 1.) * (b + a - 1.) + / (a - 2.))) + if 's' in moments: + g1 = _lazywhere(a > 3, (n, a, b), f=skew, fillvalue=np.inf) + def kurtosis(n, a, b): + term = (a - 2.) + term_2 = ((a - 1.)**2. * (a**2. + a * (6 * b - 1.) + + 6. * (b - 1.) * b) + + 3. * n**2. * ((a + 5.) * b**2. + (a + 5.) + * (a - 1.) * b + 2. * (a - 1.)**2) + + 3 * (a - 1.) * n + * ((a + 5.) * b**2. + (a + 5.) * (a - 1.) * b + + 2. * (a - 1.)**2.)) + denominator = ((a - 4.) * (a - 3.) * b * n + * (a + b - 1.) * (a + n - 1.)) + # Wolfram Alpha uses Pearson kurtosis, so we substract 3 to get + # scipy's Fisher kurtosis + return term * term_2 / denominator - 3. + if 'k' in moments: + g2 = _lazywhere(a > 4, (n, a, b), f=kurtosis, fillvalue=np.inf) + return mu, var, g1, g2 + + +betanbinom = betanbinom_gen(name='betanbinom') + + +class geom_gen(rv_discrete): + r"""A geometric discrete random variable. + + %(before_notes)s + + Notes + ----- + The probability mass function for `geom` is: + + .. math:: + + f(k) = (1-p)^{k-1} p + + for :math:`k \ge 1`, :math:`0 < p \leq 1` + + `geom` takes :math:`p` as shape parameter, + where :math:`p` is the probability of a single success + and :math:`1-p` is the probability of a single failure. + + %(after_notes)s + + See Also + -------- + planck + + %(example)s + + """ + + def _shape_info(self): + return [_ShapeInfo("p", False, (0, 1), (True, True))] + + def _rvs(self, p, size=None, random_state=None): + return random_state.geometric(p, size=size) + + def _argcheck(self, p): + return (p <= 1) & (p > 0) + + def _pmf(self, k, p): + return np.power(1-p, k-1) * p + + def _logpmf(self, k, p): + return special.xlog1py(k - 1, -p) + log(p) + + def _cdf(self, x, p): + k = floor(x) + return -expm1(log1p(-p)*k) + + def _sf(self, x, p): + return np.exp(self._logsf(x, p)) + + def _logsf(self, x, p): + k = floor(x) + return k*log1p(-p) + + def _ppf(self, q, p): + vals = ceil(log1p(-q) / log1p(-p)) + temp = self._cdf(vals-1, p) + return np.where((temp >= q) & (vals > 0), vals-1, vals) + + def _stats(self, p): + mu = 1.0/p + qr = 1.0-p + var = qr / p / p + g1 = (2.0-p) / sqrt(qr) + g2 = np.polyval([1, -6, 6], p)/(1.0-p) + return mu, var, g1, g2 + + def _entropy(self, p): + return -np.log(p) - np.log1p(-p) * (1.0-p) / p + + +geom = geom_gen(a=1, name='geom', longname="A geometric") + + +class hypergeom_gen(rv_discrete): + r"""A hypergeometric discrete random variable. + + The hypergeometric distribution models drawing objects from a bin. + `M` is the total number of objects, `n` is total number of Type I objects. + The random variate represents the number of Type I objects in `N` drawn + without replacement from the total population. + + %(before_notes)s + + Notes + ----- + The symbols used to denote the shape parameters (`M`, `n`, and `N`) are not + universally accepted. See the Examples for a clarification of the + definitions used here. + + The probability mass function is defined as, + + .. math:: p(k, M, n, N) = \frac{\binom{n}{k} \binom{M - n}{N - k}} + {\binom{M}{N}} + + for :math:`k \in [\max(0, N - M + n), \min(n, N)]`, where the binomial + coefficients are defined as, + + .. math:: \binom{n}{k} \equiv \frac{n!}{k! (n - k)!}. 
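# Illustrative check (not part of the patch): the hypergeom support stated
# above is [max(0, N - (M - n)), min(n, N)], and the PMF sums to one over it.
# M, n and N are arbitrary example values.
import numpy as np
from scipy.stats import hypergeom

M, n, N = 20, 7, 12
lo, hi = max(0, N - (M - n)), min(n, N)
a, b = hypergeom.support(M, n, N)
assert (int(a), int(b)) == (lo, hi)
k = np.arange(lo, hi + 1)
assert np.isclose(hypergeom.pmf(k, M, n, N).sum(), 1.0)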
+ + %(after_notes)s + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import hypergeom + >>> import matplotlib.pyplot as plt + + Suppose we have a collection of 20 animals, of which 7 are dogs. Then if + we want to know the probability of finding a given number of dogs if we + choose at random 12 of the 20 animals, we can initialize a frozen + distribution and plot the probability mass function: + + >>> [M, n, N] = [20, 7, 12] + >>> rv = hypergeom(M, n, N) + >>> x = np.arange(0, n+1) + >>> pmf_dogs = rv.pmf(x) + + >>> fig = plt.figure() + >>> ax = fig.add_subplot(111) + >>> ax.plot(x, pmf_dogs, 'bo') + >>> ax.vlines(x, 0, pmf_dogs, lw=2) + >>> ax.set_xlabel('# of dogs in our group of chosen animals') + >>> ax.set_ylabel('hypergeom PMF') + >>> plt.show() + + Instead of using a frozen distribution we can also use `hypergeom` + methods directly. To for example obtain the cumulative distribution + function, use: + + >>> prb = hypergeom.cdf(x, M, n, N) + + And to generate random numbers: + + >>> R = hypergeom.rvs(M, n, N, size=10) + + See Also + -------- + nhypergeom, binom, nbinom + + """ + def _shape_info(self): + return [_ShapeInfo("M", True, (0, np.inf), (True, False)), + _ShapeInfo("n", True, (0, np.inf), (True, False)), + _ShapeInfo("N", True, (0, np.inf), (True, False))] + + def _rvs(self, M, n, N, size=None, random_state=None): + return random_state.hypergeometric(n, M-n, N, size=size) + + def _get_support(self, M, n, N): + return np.maximum(N-(M-n), 0), np.minimum(n, N) + + def _argcheck(self, M, n, N): + cond = (M > 0) & (n >= 0) & (N >= 0) + cond &= (n <= M) & (N <= M) + cond &= _isintegral(M) & _isintegral(n) & _isintegral(N) + return cond + + def _logpmf(self, k, M, n, N): + tot, good = M, n + bad = tot - good + result = (betaln(good+1, 1) + betaln(bad+1, 1) + betaln(tot-N+1, N+1) - + betaln(k+1, good-k+1) - betaln(N-k+1, bad-N+k+1) - + betaln(tot+1, 1)) + return result + + def _pmf(self, k, M, n, N): + return _boost._hypergeom_pdf(k, n, N, M) + + def _cdf(self, k, M, n, N): + return _boost._hypergeom_cdf(k, n, N, M) + + def _stats(self, M, n, N): + M, n, N = 1. * M, 1. * n, 1. * N + m = M - n + + # Boost kurtosis_excess doesn't return the same as the value + # computed here. + g2 = M * (M + 1) - 6. * N * (M - N) - 6. * n * m + g2 *= (M - 1) * M * M + g2 += 6. * n * N * (M - N) * m * (5. * M - 6) + g2 /= n * N * (M - N) * m * (M - 2.) * (M - 3.) 
+ return ( + _boost._hypergeom_mean(n, N, M), + _boost._hypergeom_variance(n, N, M), + _boost._hypergeom_skewness(n, N, M), + g2, + ) + + def _entropy(self, M, n, N): + k = np.r_[N - (M - n):min(n, N) + 1] + vals = self.pmf(k, M, n, N) + return np.sum(entr(vals), axis=0) + + def _sf(self, k, M, n, N): + return _boost._hypergeom_sf(k, n, N, M) + + def _logsf(self, k, M, n, N): + res = [] + for quant, tot, good, draw in zip(*np.broadcast_arrays(k, M, n, N)): + if (quant + 0.5) * (tot + 0.5) < (good - 0.5) * (draw - 0.5): + # Fewer terms to sum if we calculate log(1-cdf) + res.append(log1p(-exp(self.logcdf(quant, tot, good, draw)))) + else: + # Integration over probability mass function using logsumexp + k2 = np.arange(quant + 1, draw + 1) + res.append(logsumexp(self._logpmf(k2, tot, good, draw))) + return np.asarray(res) + + def _logcdf(self, k, M, n, N): + res = [] + for quant, tot, good, draw in zip(*np.broadcast_arrays(k, M, n, N)): + if (quant + 0.5) * (tot + 0.5) > (good - 0.5) * (draw - 0.5): + # Fewer terms to sum if we calculate log(1-sf) + res.append(log1p(-exp(self.logsf(quant, tot, good, draw)))) + else: + # Integration over probability mass function using logsumexp + k2 = np.arange(0, quant + 1) + res.append(logsumexp(self._logpmf(k2, tot, good, draw))) + return np.asarray(res) + + + hypergeom = hypergeom_gen(name='hypergeom') + + + class nhypergeom_gen(rv_discrete): + r"""A negative hypergeometric discrete random variable. + + Consider a box containing :math:`M` balls: :math:`n` red and + :math:`M-n` blue. We randomly sample balls from the box, one + at a time and *without* replacement, until we have picked :math:`r` + blue balls. `nhypergeom` is the distribution of the number of + red balls :math:`k` we have picked. + + %(before_notes)s + + Notes + ----- + The symbols used to denote the shape parameters (`M`, `n`, and `r`) are not + universally accepted. See the Examples for a clarification of the + definitions used here. + + The probability mass function is defined as, + + .. math:: f(k; M, n, r) = \frac{{{k+r-1}\choose{k}}{{M-r-k}\choose{n-k}}} + {{M \choose n}} + + for :math:`k \in [0, n]`, :math:`n \in [0, M]`, :math:`r \in [0, M-n]`, + and the binomial coefficient is: + + .. math:: \binom{n}{k} \equiv \frac{n!}{k! (n - k)!}. + + It is equivalent to observing :math:`k` successes in :math:`k+r-1` + samples with the :math:`(k+r)`-th sample being a failure. The former + can be modelled as a hypergeometric distribution. The probability + of the latter is simply the number of failures remaining + :math:`M-n-(r-1)` divided by the size of the remaining population + :math:`M-(k+r-1)`. This relationship can be shown as: + + .. math:: NHG(k;M,n,r) = HG(k;M,n,k+r-1)\frac{(M-n-(r-1))}{(M-(k+r-1))} + + where :math:`NHG` is the probability mass function (PMF) of the + negative hypergeometric distribution and :math:`HG` is the + PMF of the hypergeometric distribution. + + %(after_notes)s + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import nhypergeom + >>> import matplotlib.pyplot as plt + + Suppose we have a collection of 20 animals, of which 7 are dogs.
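# Editorial sketch of the tail-summation strategy used in _logsf/_logcdf above:
# when the survival probability is not vanishingly small, log(sf) can be taken
# as log1p(-exp(logcdf)); otherwise the log-PMF values over the upper tail are
# combined with logsumexp to avoid underflow.  The parameter values and the
# quantile below are arbitrary example choices.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import hypergeom

M, n, N, quant = 60, 20, 15, 8
k_tail = np.arange(quant + 1, min(n, N) + 1)
logsf_by_sum = logsumexp(hypergeom.logpmf(k_tail, M, n, N))
logsf_direct = np.log1p(-np.exp(hypergeom.logcdf(quant, M, n, N)))
assert np.isclose(logsf_by_sum, logsf_direct)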
+ Then if we want to know the probability of finding a given number + of dogs (successes) in a sample with exactly 12 animals that + aren't dogs (failures), we can initialize a frozen distribution + and plot the probability mass function: + + >>> M, n, r = [20, 7, 12] + >>> rv = nhypergeom(M, n, r) + >>> x = np.arange(0, n+2) + >>> pmf_dogs = rv.pmf(x) + + >>> fig = plt.figure() + >>> ax = fig.add_subplot(111) + >>> ax.plot(x, pmf_dogs, 'bo') + >>> ax.vlines(x, 0, pmf_dogs, lw=2) + >>> ax.set_xlabel('# of dogs in our group with given 12 failures') + >>> ax.set_ylabel('nhypergeom PMF') + >>> plt.show() + + Instead of using a frozen distribution we can also use `nhypergeom` + methods directly. To for example obtain the probability mass + function, use: + + >>> prb = nhypergeom.pmf(x, M, n, r) + + And to generate random numbers: + + >>> R = nhypergeom.rvs(M, n, r, size=10) + + To verify the relationship between `hypergeom` and `nhypergeom`, use: + + >>> from scipy.stats import hypergeom, nhypergeom + >>> M, n, r = 45, 13, 8 + >>> k = 6 + >>> nhypergeom.pmf(k, M, n, r) + 0.06180776620271643 + >>> hypergeom.pmf(k, M, n, k+r-1) * (M - n - (r-1)) / (M - (k+r-1)) + 0.06180776620271644 + + See Also + -------- + hypergeom, binom, nbinom + + References + ---------- + .. [1] Negative Hypergeometric Distribution on Wikipedia + https://en.wikipedia.org/wiki/Negative_hypergeometric_distribution + + .. [2] Negative Hypergeometric Distribution from + http://www.math.wm.edu/~leemis/chart/UDR/PDFs/Negativehypergeometric.pdf + + """ + + def _shape_info(self): + return [_ShapeInfo("M", True, (0, np.inf), (True, False)), + _ShapeInfo("n", True, (0, np.inf), (True, False)), + _ShapeInfo("r", True, (0, np.inf), (True, False))] + + def _get_support(self, M, n, r): + return 0, n + + def _argcheck(self, M, n, r): + cond = (n >= 0) & (n <= M) & (r >= 0) & (r <= M-n) + cond &= _isintegral(M) & _isintegral(n) & _isintegral(r) + return cond + + def _rvs(self, M, n, r, size=None, random_state=None): + + @_vectorize_rvs_over_shapes + def _rvs1(M, n, r, size, random_state): + # invert cdf by calculating all values in support, scalar M, n, r + a, b = self.support(M, n, r) + ks = np.arange(a, b+1) + cdf = self.cdf(ks, M, n, r) + ppf = interp1d(cdf, ks, kind='next', fill_value='extrapolate') + rvs = ppf(random_state.uniform(size=size)).astype(int) + if size is None: + return rvs.item() + return rvs + + return _rvs1(M, n, r, size=size, random_state=random_state) + + def _logpmf(self, k, M, n, r): + cond = ((r == 0) & (k == 0)) + result = _lazywhere(~cond, (k, M, n, r), + lambda k, M, n, r: + (-betaln(k+1, r) + betaln(k+r, 1) - + betaln(n-k+1, M-r-n+1) + betaln(M-r-k+1, 1) + + betaln(n+1, M-n+1) - betaln(M+1, 1)), + fillvalue=0.0) + return result + + def _pmf(self, k, M, n, r): + # same as the following but numerically more precise + # return comb(k+r-1, k) * comb(M-r-k, n-k) / comb(M, n) + return exp(self._logpmf(k, M, n, r)) + + def _stats(self, M, n, r): + # Promote the datatype to at least float + # mu = rn / (M-n+1) + M, n, r = 1.*M, 1.*n, 1.*r + mu = r*n / (M-n+1) + + var = r*(M+1)*n / ((M-n+1)*(M-n+2)) * (1 - r / (M-n+1)) + + # The skew and kurtosis are mathematically + # intractable so return `None`. See [2]_. + g1, g2 = None, None + return mu, var, g1, g2 + + +nhypergeom = nhypergeom_gen(name='nhypergeom') + + +# FIXME: Fails _cdfvec +class logser_gen(rv_discrete): + r"""A Logarithmic (Log-Series, Series) discrete random variable. 
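# Editorial sketch of the inverse-CDF sampling used in nhypergeom._rvs above:
# the CDF is evaluated on the whole (finite) support and inverted with a
# 'next'-kind interpolant, so a uniform draw u maps to the smallest k with
# cdf(k) >= u.  M, n, r, the seed and the sample size are arbitrary examples.
import numpy as np
from scipy.interpolate import interp1d
from scipy.stats import nhypergeom

M, n, r = 20, 7, 12
rng = np.random.default_rng(1234)
ks = np.arange(0, n + 1)               # support of nhypergeom is [0, n]
ppf = interp1d(nhypergeom.cdf(ks, M, n, r), ks,
               kind='next', fill_value='extrapolate')
samples = ppf(rng.uniform(size=1000)).astype(int)
assert samples.min() >= 0 and samples.max() <= n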
+ + %(before_notes)s + + Notes + ----- + The probability mass function for `logser` is: + + .. math:: + + f(k) = - \frac{p^k}{k \log(1-p)} + + for :math:`k \ge 1`, :math:`0 < p < 1` + + `logser` takes :math:`p` as shape parameter, + where :math:`p` is the probability of a single success + and :math:`1-p` is the probability of a single failure. + + %(after_notes)s + + %(example)s + + """ + + def _shape_info(self): + return [_ShapeInfo("p", False, (0, 1), (True, True))] + + def _rvs(self, p, size=None, random_state=None): + # looks wrong for p>0.5, too few k=1 + # trying to use generic is worse, no k=1 at all + return random_state.logseries(p, size=size) + + def _argcheck(self, p): + return (p > 0) & (p < 1) + + def _pmf(self, k, p): + # logser.pmf(k) = - p**k / (k*log(1-p)) + return -np.power(p, k) * 1.0 / k / special.log1p(-p) + + def _stats(self, p): + r = special.log1p(-p) + mu = p / (p - 1.0) / r + mu2p = -p / r / (p - 1.0)**2 + var = mu2p - mu*mu + mu3p = -p / r * (1.0+p) / (1.0 - p)**3 + mu3 = mu3p - 3*mu*mu2p + 2*mu**3 + g1 = mu3 / np.power(var, 1.5) + + mu4p = -p / r * ( + 1.0 / (p-1)**2 - 6*p / (p - 1)**3 + 6*p*p / (p-1)**4) + mu4 = mu4p - 4*mu3p*mu + 6*mu2p*mu*mu - 3*mu**4 + g2 = mu4 / var**2 - 3.0 + return mu, var, g1, g2 + + +logser = logser_gen(a=1, name='logser', longname='A logarithmic') + + +class poisson_gen(rv_discrete): + r"""A Poisson discrete random variable. + + %(before_notes)s + + Notes + ----- + The probability mass function for `poisson` is: + + .. math:: + + f(k) = \exp(-\mu) \frac{\mu^k}{k!} + + for :math:`k \ge 0`. + + `poisson` takes :math:`\mu \geq 0` as shape parameter. + When :math:`\mu = 0`, the ``pmf`` method + returns ``1.0`` at quantile :math:`k = 0`. + + %(after_notes)s + + %(example)s + + """ + + def _shape_info(self): + return [_ShapeInfo("mu", False, (0, np.inf), (True, False))] + + # Override rv_discrete._argcheck to allow mu=0. + def _argcheck(self, mu): + return mu >= 0 + + def _rvs(self, mu, size=None, random_state=None): + return random_state.poisson(mu, size) + + def _logpmf(self, k, mu): + Pk = special.xlogy(k, mu) - gamln(k + 1) - mu + return Pk + + def _pmf(self, k, mu): + # poisson.pmf(k) = exp(-mu) * mu**k / k! + return exp(self._logpmf(k, mu)) + + def _cdf(self, x, mu): + k = floor(x) + return special.pdtr(k, mu) + + def _sf(self, x, mu): + k = floor(x) + return special.pdtrc(k, mu) + + def _ppf(self, q, mu): + vals = ceil(special.pdtrik(q, mu)) + vals1 = np.maximum(vals - 1, 0) + temp = special.pdtr(vals1, mu) + return np.where(temp >= q, vals1, vals) + + def _stats(self, mu): + var = mu + tmp = np.asarray(mu) + mu_nonzero = tmp > 0 + g1 = _lazywhere(mu_nonzero, (tmp,), lambda x: sqrt(1.0/x), np.inf) + g2 = _lazywhere(mu_nonzero, (tmp,), lambda x: 1.0/x, np.inf) + return mu, var, g1, g2 + + +poisson = poisson_gen(name="poisson", longname='A Poisson') + + +class planck_gen(rv_discrete): + r"""A Planck discrete exponential random variable. + + %(before_notes)s + + Notes + ----- + The probability mass function for `planck` is: + + .. math:: + + f(k) = (1-\exp(-\lambda)) \exp(-\lambda k) + + for :math:`k \ge 0` and :math:`\lambda > 0`. + + `planck` takes :math:`\lambda` as shape parameter. The Planck distribution + can be written as a geometric distribution (`geom`) with + :math:`p = 1 - \exp(-\lambda)` shifted by ``loc = -1``. 
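# Illustrative check (not part of the patch) of the relationship stated above:
# planck with parameter lambda coincides with geom using p = 1 - exp(-lambda)
# shifted by loc = -1.  lambda_ is an arbitrary example value.
import numpy as np
from scipy.stats import geom, planck

lambda_ = 0.51
k = np.arange(10)
assert np.allclose(planck.pmf(k, lambda_),
                   geom.pmf(k, 1 - np.exp(-lambda_), loc=-1))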
+ + %(after_notes)s + + See Also + -------- + geom + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("lambda", False, (0, np.inf), (False, False))] + + def _argcheck(self, lambda_): + return lambda_ > 0 + + def _pmf(self, k, lambda_): + return -expm1(-lambda_)*exp(-lambda_*k) + + def _cdf(self, x, lambda_): + k = floor(x) + return -expm1(-lambda_*(k+1)) + + def _sf(self, x, lambda_): + return exp(self._logsf(x, lambda_)) + + def _logsf(self, x, lambda_): + k = floor(x) + return -lambda_*(k+1) + + def _ppf(self, q, lambda_): + vals = ceil(-1.0/lambda_ * log1p(-q)-1) + vals1 = (vals-1).clip(*(self._get_support(lambda_))) + temp = self._cdf(vals1, lambda_) + return np.where(temp >= q, vals1, vals) + + def _rvs(self, lambda_, size=None, random_state=None): + # use relation to geometric distribution for sampling + p = -expm1(-lambda_) + return random_state.geometric(p, size=size) - 1.0 + + def _stats(self, lambda_): + mu = 1/expm1(lambda_) + var = exp(-lambda_)/(expm1(-lambda_))**2 + g1 = 2*cosh(lambda_/2.0) + g2 = 4+2*cosh(lambda_) + return mu, var, g1, g2 + + def _entropy(self, lambda_): + C = -expm1(-lambda_) + return lambda_*exp(-lambda_)/C - log(C) + + +planck = planck_gen(a=0, name='planck', longname='A discrete exponential ') + + +class boltzmann_gen(rv_discrete): + r"""A Boltzmann (Truncated Discrete Exponential) random variable. + + %(before_notes)s + + Notes + ----- + The probability mass function for `boltzmann` is: + + .. math:: + + f(k) = (1-\exp(-\lambda)) \exp(-\lambda k) / (1-\exp(-\lambda N)) + + for :math:`k = 0,..., N-1`. + + `boltzmann` takes :math:`\lambda > 0` and :math:`N > 0` as shape parameters. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("lambda_", False, (0, np.inf), (False, False)), + _ShapeInfo("N", True, (0, np.inf), (False, False))] + + def _argcheck(self, lambda_, N): + return (lambda_ > 0) & (N > 0) & _isintegral(N) + + def _get_support(self, lambda_, N): + return self.a, N - 1 + + def _pmf(self, k, lambda_, N): + # boltzmann.pmf(k) = + # (1-exp(-lambda_)*exp(-lambda_*k)/(1-exp(-lambda_*N)) + fact = (1-exp(-lambda_))/(1-exp(-lambda_*N)) + return fact*exp(-lambda_*k) + + def _cdf(self, x, lambda_, N): + k = floor(x) + return (1-exp(-lambda_*(k+1)))/(1-exp(-lambda_*N)) + + def _ppf(self, q, lambda_, N): + qnew = q*(1-exp(-lambda_*N)) + vals = ceil(-1.0/lambda_ * log(1-qnew)-1) + vals1 = (vals-1).clip(0.0, np.inf) + temp = self._cdf(vals1, lambda_, N) + return np.where(temp >= q, vals1, vals) + + def _stats(self, lambda_, N): + z = exp(-lambda_) + zN = exp(-lambda_*N) + mu = z/(1.0-z)-N*zN/(1-zN) + var = z/(1.0-z)**2 - N*N*zN/(1-zN)**2 + trm = (1-zN)/(1-z) + trm2 = (z*trm**2 - N*N*zN) + g1 = z*(1+z)*trm**3 - N**3*zN*(1+zN) + g1 = g1 / trm2**(1.5) + g2 = z*(1+4*z+z*z)*trm**4 - N**4 * zN*(1+4*zN+zN*zN) + g2 = g2 / trm2 / trm2 + return mu, var, g1, g2 + + +boltzmann = boltzmann_gen(name='boltzmann', a=0, + longname='A truncated discrete exponential ') + + +class randint_gen(rv_discrete): + r"""A uniform discrete random variable. + + %(before_notes)s + + Notes + ----- + The probability mass function for `randint` is: + + .. math:: + + f(k) = \frac{1}{\texttt{high} - \texttt{low}} + + for :math:`k \in \{\texttt{low}, \dots, \texttt{high} - 1\}`. + + `randint` takes :math:`\texttt{low}` and :math:`\texttt{high}` as shape + parameters. 
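# Illustrative check (not part of the patch): per the boltzmann PMF above, the
# distribution is the Planck (discrete exponential) distribution truncated to
# k = 0, ..., N-1 and renormalized.  lambda_ and N are arbitrary examples.
import numpy as np
from scipy.stats import boltzmann, planck

lambda_, N = 1.4, 19
k = np.arange(N)
truncated = planck.pmf(k, lambda_) / planck.cdf(N - 1, lambda_)
assert np.allclose(truncated, boltzmann.pmf(k, lambda_, N))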
+ + %(after_notes)s + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import randint + >>> import matplotlib.pyplot as plt + >>> fig, ax = plt.subplots(1, 1) + + Calculate the first four moments: + + >>> low, high = 7, 31 + >>> mean, var, skew, kurt = randint.stats(low, high, moments='mvsk') + + Display the probability mass function (``pmf``): + + >>> x = np.arange(low - 5, high + 5) + >>> ax.plot(x, randint.pmf(x, low, high), 'bo', ms=8, label='randint pmf') + >>> ax.vlines(x, 0, randint.pmf(x, low, high), colors='b', lw=5, alpha=0.5) + + Alternatively, the distribution object can be called (as a function) to + fix the shape and location. This returns a "frozen" RV object holding the + given parameters fixed. + + Freeze the distribution and display the frozen ``pmf``: + + >>> rv = randint(low, high) + >>> ax.vlines(x, 0, rv.pmf(x), colors='k', linestyles='-', + ... lw=1, label='frozen pmf') + >>> ax.legend(loc='lower center') + >>> plt.show() + + Check the relationship between the cumulative distribution function + (``cdf``) and its inverse, the percent point function (``ppf``): + + >>> q = np.arange(low, high) + >>> p = randint.cdf(q, low, high) + >>> np.allclose(q, randint.ppf(p, low, high)) + True + + Generate random numbers: + + >>> r = randint.rvs(low, high, size=1000) + + """ + + def _shape_info(self): + return [_ShapeInfo("low", True, (-np.inf, np.inf), (False, False)), + _ShapeInfo("high", True, (-np.inf, np.inf), (False, False))] + + def _argcheck(self, low, high): + return (high > low) & _isintegral(low) & _isintegral(high) + + def _get_support(self, low, high): + return low, high-1 + + def _pmf(self, k, low, high): + # randint.pmf(k) = 1./(high - low) + p = np.ones_like(k) / (high - low) + return np.where((k >= low) & (k < high), p, 0.) + + def _cdf(self, x, low, high): + k = floor(x) + return (k - low + 1.) / (high - low) + + def _ppf(self, q, low, high): + vals = ceil(q * (high - low) + low) - 1 + vals1 = (vals - 1).clip(low, high) + temp = self._cdf(vals1, low, high) + return np.where(temp >= q, vals1, vals) + + def _stats(self, low, high): + m2, m1 = np.asarray(high), np.asarray(low) + mu = (m2 + m1 - 1.0) / 2 + d = m2 - m1 + var = (d*d - 1) / 12.0 + g1 = 0.0 + g2 = -6.0/5.0 * (d*d + 1.0) / (d*d - 1.0) + return mu, var, g1, g2 + + def _rvs(self, low, high, size=None, random_state=None): + """An array of *size* random integers >= ``low`` and < ``high``.""" + if np.asarray(low).size == 1 and np.asarray(high).size == 1: + # no need to vectorize in that case + return rng_integers(random_state, low, high, size=size) + + if size is not None: + # NumPy's RandomState.randint() doesn't broadcast its arguments. + # Use `broadcast_to()` to extend the shapes of low and high + # up to size. Then we can use the numpy.vectorize'd + # randint without needing to pass it a `size` argument. + low = np.broadcast_to(low, size) + high = np.broadcast_to(high, size) + randint = np.vectorize(partial(rng_integers, random_state), + otypes=[np.dtype(int)]) + return randint(low, high) + + def _entropy(self, low, high): + return log(high - low) + + +randint = randint_gen(name='randint', longname='A discrete uniform ' + '(random integer)') + + +# FIXME: problems sampling. +class zipf_gen(rv_discrete): + r"""A Zipf (Zeta) discrete random variable. + + %(before_notes)s + + See Also + -------- + zipfian + + Notes + ----- + The probability mass function for `zipf` is: + + .. math:: + + f(k, a) = \frac{1}{\zeta(a) k^a} + + for :math:`k \ge 1`, :math:`a > 1`. 
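# Illustrative check (not part of the patch) of the zipf PMF above: the
# normalizing constant is the Riemann zeta function, so
# zeta(a) * k**a * pmf(k) == 1 for every k in the support.  The value of a is
# an arbitrary example choice.
import numpy as np
from scipy.special import zeta
from scipy.stats import zipf

a = 6.5
k = np.arange(1, 20)
assert np.allclose(zeta(a) * k**a * zipf.pmf(k, a), 1.0)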
+ + `zipf` takes :math:`a > 1` as shape parameter. :math:`\zeta` is the + Riemann zeta function (`scipy.special.zeta`) + + The Zipf distribution is also known as the zeta distribution, which is + a special case of the Zipfian distribution (`zipfian`). + + %(after_notes)s + + References + ---------- + .. [1] "Zeta Distribution", Wikipedia, + https://en.wikipedia.org/wiki/Zeta_distribution + + %(example)s + + Confirm that `zipf` is the large `n` limit of `zipfian`. + + >>> import numpy as np + >>> from scipy.stats import zipf, zipfian + >>> k = np.arange(11) + >>> np.allclose(zipf.pmf(k, a), zipfian.pmf(k, a, n=10000000)) + True + + """ + + def _shape_info(self): + return [_ShapeInfo("a", False, (1, np.inf), (False, False))] + + def _rvs(self, a, size=None, random_state=None): + return random_state.zipf(a, size=size) + + def _argcheck(self, a): + return a > 1 + + def _pmf(self, k, a): + # zipf.pmf(k, a) = 1/(zeta(a) * k**a) + Pk = 1.0 / special.zeta(a, 1) / k**a + return Pk + + def _munp(self, n, a): + return _lazywhere( + a > n + 1, (a, n), + lambda a, n: special.zeta(a - n, 1) / special.zeta(a, 1), + np.inf) + + +zipf = zipf_gen(a=1, name='zipf', longname='A Zipf') + + +def _gen_harmonic_gt1(n, a): + """Generalized harmonic number, a > 1""" + # See https://en.wikipedia.org/wiki/Harmonic_number; search for "hurwitz" + return zeta(a, 1) - zeta(a, n+1) + + +def _gen_harmonic_leq1(n, a): + """Generalized harmonic number, a <= 1""" + if not np.size(n): + return n + n_max = np.max(n) # loop starts at maximum of all n + out = np.zeros_like(a, dtype=float) + # add terms of harmonic series; starting from smallest to avoid roundoff + for i in np.arange(n_max, 0, -1, dtype=float): + mask = i <= n # don't add terms after nth + out[mask] += 1/i**a[mask] + return out + + +def _gen_harmonic(n, a): + """Generalized harmonic number""" + n, a = np.broadcast_arrays(n, a) + return _lazywhere(a > 1, (n, a), + f=_gen_harmonic_gt1, f2=_gen_harmonic_leq1) + + +class zipfian_gen(rv_discrete): + r"""A Zipfian discrete random variable. + + %(before_notes)s + + See Also + -------- + zipf + + Notes + ----- + The probability mass function for `zipfian` is: + + .. math:: + + f(k, a, n) = \frac{1}{H_{n,a} k^a} + + for :math:`k \in \{1, 2, \dots, n-1, n\}`, :math:`a \ge 0`, + :math:`n \in \{1, 2, 3, \dots\}`. + + `zipfian` takes :math:`a` and :math:`n` as shape parameters. + :math:`H_{n,a}` is the :math:`n`:sup:`th` generalized harmonic + number of order :math:`a`. + + The Zipfian distribution reduces to the Zipf (zeta) distribution as + :math:`n \rightarrow \infty`. + + %(after_notes)s + + References + ---------- + .. [1] "Zipf's Law", Wikipedia, https://en.wikipedia.org/wiki/Zipf's_law + .. [2] Larry Leemis, "Zipf Distribution", Univariate Distribution + Relationships. http://www.math.wm.edu/~leemis/chart/UDR/PDFs/Zipf.pdf + + %(example)s + + Confirm that `zipfian` reduces to `zipf` for large `n`, `a > 1`. 
+ + >>> import numpy as np + >>> from scipy.stats import zipf, zipfian + >>> k = np.arange(11) + >>> np.allclose(zipfian.pmf(k, a=3.5, n=10000000), zipf.pmf(k, a=3.5)) + True + + """ + + def _shape_info(self): + return [_ShapeInfo("a", False, (0, np.inf), (True, False)), + _ShapeInfo("n", True, (0, np.inf), (False, False))] + + def _argcheck(self, a, n): + # we need np.asarray here because moment (maybe others) don't convert + return (a >= 0) & (n > 0) & (n == np.asarray(n, dtype=int)) + + def _get_support(self, a, n): + return 1, n + + def _pmf(self, k, a, n): + return 1.0 / _gen_harmonic(n, a) / k**a + + def _cdf(self, k, a, n): + return _gen_harmonic(k, a) / _gen_harmonic(n, a) + + def _sf(self, k, a, n): + k = k + 1 # # to match SciPy convention + # see http://www.math.wm.edu/~leemis/chart/UDR/PDFs/Zipf.pdf + return ((k**a*(_gen_harmonic(n, a) - _gen_harmonic(k, a)) + 1) + / (k**a*_gen_harmonic(n, a))) + + def _stats(self, a, n): + # see # see http://www.math.wm.edu/~leemis/chart/UDR/PDFs/Zipf.pdf + Hna = _gen_harmonic(n, a) + Hna1 = _gen_harmonic(n, a-1) + Hna2 = _gen_harmonic(n, a-2) + Hna3 = _gen_harmonic(n, a-3) + Hna4 = _gen_harmonic(n, a-4) + mu1 = Hna1/Hna + mu2n = (Hna2*Hna - Hna1**2) + mu2d = Hna**2 + mu2 = mu2n / mu2d + g1 = (Hna3/Hna - 3*Hna1*Hna2/Hna**2 + 2*Hna1**3/Hna**3)/mu2**(3/2) + g2 = (Hna**3*Hna4 - 4*Hna**2*Hna1*Hna3 + 6*Hna*Hna1**2*Hna2 + - 3*Hna1**4) / mu2n**2 + g2 -= 3 + return mu1, mu2, g1, g2 + + +zipfian = zipfian_gen(a=1, name='zipfian', longname='A Zipfian') + + +class dlaplace_gen(rv_discrete): + r"""A Laplacian discrete random variable. + + %(before_notes)s + + Notes + ----- + The probability mass function for `dlaplace` is: + + .. math:: + + f(k) = \tanh(a/2) \exp(-a |k|) + + for integers :math:`k` and :math:`a > 0`. + + `dlaplace` takes :math:`a` as shape parameter. + + %(after_notes)s + + %(example)s + + """ + + def _shape_info(self): + return [_ShapeInfo("a", False, (0, np.inf), (False, False))] + + def _pmf(self, k, a): + # dlaplace.pmf(k) = tanh(a/2) * exp(-a*abs(k)) + return tanh(a/2.0) * exp(-a * abs(k)) + + def _cdf(self, x, a): + k = floor(x) + + def f(k, a): + return 1.0 - exp(-a * k) / (exp(a) + 1) + + def f2(k, a): + return exp(a * (k + 1)) / (exp(a) + 1) + + return _lazywhere(k >= 0, (k, a), f=f, f2=f2) + + def _ppf(self, q, a): + const = 1 + exp(a) + vals = ceil(np.where(q < 1.0 / (1 + exp(-a)), + log(q*const) / a - 1, + -log((1-q) * const) / a)) + vals1 = vals - 1 + return np.where(self._cdf(vals1, a) >= q, vals1, vals) + + def _stats(self, a): + ea = exp(a) + mu2 = 2.*ea/(ea-1.)**2 + mu4 = 2.*ea*(ea**2+10.*ea+1.) / (ea-1.)**4 + return 0., mu2, 0., mu4/mu2**2 - 3. + + def _entropy(self, a): + return a / sinh(a) - log(tanh(a/2.0)) + + def _rvs(self, a, size=None, random_state=None): + # The discrete Laplace is equivalent to the two-sided geometric + # distribution with PMF: + # f(k) = (1 - alpha)/(1 + alpha) * alpha^abs(k) + # Reference: + # https://www.sciencedirect.com/science/ + # article/abs/pii/S0378375804003519 + # Furthermore, the two-sided geometric distribution is + # equivalent to the difference between two iid geometric + # distributions. 
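# Editorial sketch of the sampling idea described in the comments above: the
# difference of two iid geometric variables with success probability
# 1 - exp(-a) follows the discrete Laplace PMF tanh(a/2) * exp(-a*|k|).
# a, the seed and the sample size are arbitrary example values.
import numpy as np
from scipy.stats import dlaplace

a = 0.8
rng = np.random.default_rng(0)
p = -np.expm1(-a)                  # success probability 1 - exp(-a)
diff = rng.geometric(p, size=100_000) - rng.geometric(p, size=100_000)
# empirical frequency of k = 0 versus dlaplace.pmf(0, a) = tanh(a/2)
assert abs(np.mean(diff == 0) - dlaplace.pmf(0, a)) < 0.01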
+ # Reference (page 179): + # https://pdfs.semanticscholar.org/61b3/ + # b99f466815808fd0d03f5d2791eea8b541a1.pdf + # Thus, we can leverage the following: + # 1) alpha = e^-a + # 2) probability_of_success = 1 - alpha (Bernoulli trial) + probOfSuccess = -np.expm1(-np.asarray(a)) + x = random_state.geometric(probOfSuccess, size=size) + y = random_state.geometric(probOfSuccess, size=size) + return x - y + + +dlaplace = dlaplace_gen(a=-np.inf, + name='dlaplace', longname='A discrete Laplacian') + + +class skellam_gen(rv_discrete): + r"""A Skellam discrete random variable. + + %(before_notes)s + + Notes + ----- + Probability distribution of the difference of two correlated or + uncorrelated Poisson random variables. + + Let :math:`k_1` and :math:`k_2` be two Poisson-distributed r.v. with + expected values :math:`\lambda_1` and :math:`\lambda_2`. Then, + :math:`k_1 - k_2` follows a Skellam distribution with parameters + :math:`\mu_1 = \lambda_1 - \rho \sqrt{\lambda_1 \lambda_2}` and + :math:`\mu_2 = \lambda_2 - \rho \sqrt{\lambda_1 \lambda_2}`, where + :math:`\rho` is the correlation coefficient between :math:`k_1` and + :math:`k_2`. If the two Poisson-distributed r.v. are independent then + :math:`\rho = 0`. + + Parameters :math:`\mu_1` and :math:`\mu_2` must be strictly positive. + + For details see: https://en.wikipedia.org/wiki/Skellam_distribution + + `skellam` takes :math:`\mu_1` and :math:`\mu_2` as shape parameters. + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("mu1", False, (0, np.inf), (False, False)), + _ShapeInfo("mu2", False, (0, np.inf), (False, False))] + + def _rvs(self, mu1, mu2, size=None, random_state=None): + n = size + return (random_state.poisson(mu1, n) - + random_state.poisson(mu2, n)) + + def _pmf(self, x, mu1, mu2): + with np.errstate(over='ignore'): # see gh-17432 + px = np.where(x < 0, + _boost._ncx2_pdf(2*mu2, 2*(1-x), 2*mu1)*2, + _boost._ncx2_pdf(2*mu1, 2*(1+x), 2*mu2)*2) + # ncx2.pdf() returns nan's for extremely low probabilities + return px + + def _cdf(self, x, mu1, mu2): + x = floor(x) + with np.errstate(over='ignore'): # see gh-17432 + px = np.where(x < 0, + _boost._ncx2_cdf(2*mu2, -2*x, 2*mu1), + 1 - _boost._ncx2_cdf(2*mu1, 2*(x+1), 2*mu2)) + return px + + def _stats(self, mu1, mu2): + mean = mu1 - mu2 + var = mu1 + mu2 + g1 = mean / sqrt((var)**3) + g2 = 1 / var + return mean, var, g1, g2 + + +skellam = skellam_gen(a=-np.inf, name="skellam", longname='A Skellam') + + +class yulesimon_gen(rv_discrete): + r"""A Yule-Simon discrete random variable. + + %(before_notes)s + + Notes + ----- + + The probability mass function for the `yulesimon` is: + + .. math:: + + f(k) = \alpha B(k, \alpha+1) + + for :math:`k=1,2,3,...`, where :math:`\alpha>0`. + Here :math:`B` refers to the `scipy.special.beta` function. + + The sampling of random variates is based on pg 553, Section 6.3 of [1]_. + Our notation maps to the referenced logic via :math:`\alpha=a-1`. + + For details see the wikipedia entry [2]_. + + References + ---------- + .. [1] Devroye, Luc. "Non-uniform Random Variate Generation", + (1986) Springer, New York. + + .. 
[2] https://en.wikipedia.org/wiki/Yule-Simon_distribution + + %(after_notes)s + + %(example)s + + """ + def _shape_info(self): + return [_ShapeInfo("alpha", False, (0, np.inf), (False, False))] + + def _rvs(self, alpha, size=None, random_state=None): + E1 = random_state.standard_exponential(size) + E2 = random_state.standard_exponential(size) + ans = ceil(-E1 / log1p(-exp(-E2 / alpha))) + return ans + + def _pmf(self, x, alpha): + return alpha * special.beta(x, alpha + 1) + + def _argcheck(self, alpha): + return (alpha > 0) + + def _logpmf(self, x, alpha): + return log(alpha) + special.betaln(x, alpha + 1) + + def _cdf(self, x, alpha): + return 1 - x * special.beta(x, alpha + 1) + + def _sf(self, x, alpha): + return x * special.beta(x, alpha + 1) + + def _logsf(self, x, alpha): + return log(x) + special.betaln(x, alpha + 1) + + def _stats(self, alpha): + mu = np.where(alpha <= 1, np.inf, alpha / (alpha - 1)) + mu2 = np.where(alpha > 2, + alpha**2 / ((alpha - 2.0) * (alpha - 1)**2), + np.inf) + mu2 = np.where(alpha <= 1, np.nan, mu2) + g1 = np.where(alpha > 3, + sqrt(alpha - 2) * (alpha + 1)**2 / (alpha * (alpha - 3)), + np.inf) + g1 = np.where(alpha <= 2, np.nan, g1) + g2 = np.where(alpha > 4, + alpha + 3 + ((alpha**3 - 49 * alpha - 22) / + (alpha * (alpha - 4) * (alpha - 3))), + np.inf) + g2 = np.where(alpha <= 2, np.nan, g2) + return mu, mu2, g1, g2 + + +yulesimon = yulesimon_gen(name='yulesimon', a=1) + + +def _vectorize_rvs_over_shapes(_rvs1): + """Decorator that vectorizes _rvs method to work on ndarray shapes""" + # _rvs1 must be a _function_ that accepts _scalar_ args as positional + # arguments, `size` and `random_state` as keyword arguments. + # _rvs1 must return a random variate array with shape `size`. If `size` is + # None, _rvs1 must return a scalar. + # When applied to _rvs1, this decorator broadcasts ndarray args + # and loops over them, calling _rvs1 for each set of scalar args. + # For usage example, see _nchypergeom_gen + def _rvs(*args, size, random_state): + _rvs1_size, _rvs1_indices = _check_shape(args[0].shape, size) + + size = np.array(size) + _rvs1_size = np.array(_rvs1_size) + _rvs1_indices = np.array(_rvs1_indices) + + if np.all(_rvs1_indices): # all args are scalars + return _rvs1(*args, size, random_state) + + out = np.empty(size) + + # out.shape can mix dimensions associated with arg_shape and _rvs1_size + # Sort them to arg_shape + _rvs1_size for easy indexing of dimensions + # corresponding with the different sets of scalar args + j0 = np.arange(out.ndim) + j1 = np.hstack((j0[~_rvs1_indices], j0[_rvs1_indices])) + out = np.moveaxis(out, j1, j0) + + for i in np.ndindex(*size[~_rvs1_indices]): + # arg can be squeezed because singleton dimensions will be + # associated with _rvs1_size, not arg_shape per _check_shape + out[i] = _rvs1(*[np.squeeze(arg)[i] for arg in args], + _rvs1_size, random_state) + + return np.moveaxis(out, j0, j1) # move axes back before returning + return _rvs + + +class _nchypergeom_gen(rv_discrete): + r"""A noncentral hypergeometric discrete random variable. + + For subclassing by nchypergeom_fisher_gen and nchypergeom_wallenius_gen. 
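# Illustrative check (not part of the patch): the Yule-Simon PMF above is
# alpha * B(k, alpha + 1) while its survival function is k * B(k, alpha + 1),
# so pmf(k) == (alpha / k) * sf(k) on the support.  alpha is an arbitrary
# example value.
import numpy as np
from scipy.stats import yulesimon

alpha = 11.0
k = np.arange(1, 15)
assert np.allclose(yulesimon.pmf(k, alpha), alpha / k * yulesimon.sf(k, alpha))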
+ + """ + + rvs_name = None + dist = None + + def _shape_info(self): + return [_ShapeInfo("M", True, (0, np.inf), (True, False)), + _ShapeInfo("n", True, (0, np.inf), (True, False)), + _ShapeInfo("N", True, (0, np.inf), (True, False)), + _ShapeInfo("odds", False, (0, np.inf), (False, False))] + + def _get_support(self, M, n, N, odds): + N, m1, n = M, n, N # follow Wikipedia notation + m2 = N - m1 + x_min = np.maximum(0, n - m2) + x_max = np.minimum(n, m1) + return x_min, x_max + + def _argcheck(self, M, n, N, odds): + M, n = np.asarray(M), np.asarray(n), + N, odds = np.asarray(N), np.asarray(odds) + cond1 = (M.astype(int) == M) & (M >= 0) + cond2 = (n.astype(int) == n) & (n >= 0) + cond3 = (N.astype(int) == N) & (N >= 0) + cond4 = odds > 0 + cond5 = N <= M + cond6 = n <= M + return cond1 & cond2 & cond3 & cond4 & cond5 & cond6 + + def _rvs(self, M, n, N, odds, size=None, random_state=None): + + @_vectorize_rvs_over_shapes + def _rvs1(M, n, N, odds, size, random_state): + length = np.prod(size) + urn = _PyStochasticLib3() + rv_gen = getattr(urn, self.rvs_name) + rvs = rv_gen(N, n, M, odds, length, random_state) + rvs = rvs.reshape(size) + return rvs + + return _rvs1(M, n, N, odds, size=size, random_state=random_state) + + def _pmf(self, x, M, n, N, odds): + + x, M, n, N, odds = np.broadcast_arrays(x, M, n, N, odds) + if x.size == 0: # np.vectorize doesn't work with zero size input + return np.empty_like(x) + + @np.vectorize + def _pmf1(x, M, n, N, odds): + urn = self.dist(N, n, M, odds, 1e-12) + return urn.probability(x) + + return _pmf1(x, M, n, N, odds) + + def _stats(self, M, n, N, odds, moments): + + @np.vectorize + def _moments1(M, n, N, odds): + urn = self.dist(N, n, M, odds, 1e-12) + return urn.moments() + + m, v = (_moments1(M, n, N, odds) if ("m" in moments or "v" in moments) + else (None, None)) + s, k = None, None + return m, v, s, k + + +class nchypergeom_fisher_gen(_nchypergeom_gen): + r"""A Fisher's noncentral hypergeometric discrete random variable. + + Fisher's noncentral hypergeometric distribution models drawing objects of + two types from a bin. `M` is the total number of objects, `n` is the + number of Type I objects, and `odds` is the odds ratio: the odds of + selecting a Type I object rather than a Type II object when there is only + one object of each type. + The random variate represents the number of Type I objects drawn if we + take a handful of objects from the bin at once and find out afterwards + that we took `N` objects. + + %(before_notes)s + + See Also + -------- + nchypergeom_wallenius, hypergeom, nhypergeom + + Notes + ----- + Let mathematical symbols :math:`N`, :math:`n`, and :math:`M` correspond + with parameters `N`, `n`, and `M` (respectively) as defined above. + + The probability mass function is defined as + + .. math:: + + p(x; M, n, N, \omega) = + \frac{\binom{n}{x}\binom{M - n}{N-x}\omega^x}{P_0}, + + for + :math:`x \in [x_l, x_u]`, + :math:`M \in {\mathbb N}`, + :math:`n \in [0, M]`, + :math:`N \in [0, M]`, + :math:`\omega > 0`, + where + :math:`x_l = \max(0, N - (M - n))`, + :math:`x_u = \min(N, n)`, + + .. math:: + + P_0 = \sum_{y=x_l}^{x_u} \binom{n}{y}\binom{M - n}{N-y}\omega^y, + + and the binomial coefficients are defined as + + .. math:: \binom{n}{k} \equiv \frac{n!}{k! (n - k)!}. + + `nchypergeom_fisher` uses the BiasedUrn package by Agner Fog with + permission for it to be distributed under SciPy's license. 
+ + The symbols used to denote the shape parameters (`N`, `n`, and `M`) are not + universally accepted; they are chosen for consistency with `hypergeom`. + + Note that Fisher's noncentral hypergeometric distribution is distinct + from Wallenius' noncentral hypergeometric distribution, which models + drawing a pre-determined `N` objects from a bin one by one. + When the odds ratio is unity, however, both distributions reduce to the + ordinary hypergeometric distribution. + + %(after_notes)s + + References + ---------- + .. [1] Agner Fog, "Biased Urn Theory". + https://cran.r-project.org/web/packages/BiasedUrn/vignettes/UrnTheory.pdf + + .. [2] "Fisher's noncentral hypergeometric distribution", Wikipedia, + https://en.wikipedia.org/wiki/Fisher's_noncentral_hypergeometric_distribution + + %(example)s + + """ + + rvs_name = "rvs_fisher" + dist = _PyFishersNCHypergeometric + + +nchypergeom_fisher = nchypergeom_fisher_gen( + name='nchypergeom_fisher', + longname="A Fisher's noncentral hypergeometric") + + +class nchypergeom_wallenius_gen(_nchypergeom_gen): + r"""A Wallenius' noncentral hypergeometric discrete random variable. + + Wallenius' noncentral hypergeometric distribution models drawing objects of + two types from a bin. `M` is the total number of objects, `n` is the + number of Type I objects, and `odds` is the odds ratio: the odds of + selecting a Type I object rather than a Type II object when there is only + one object of each type. + The random variate represents the number of Type I objects drawn if we + draw a pre-determined `N` objects from a bin one by one. + + %(before_notes)s + + See Also + -------- + nchypergeom_fisher, hypergeom, nhypergeom + + Notes + ----- + Let mathematical symbols :math:`N`, :math:`n`, and :math:`M` correspond + with parameters `N`, `n`, and `M` (respectively) as defined above. + + The probability mass function is defined as + + .. math:: + + p(x; N, n, M) = \binom{n}{x} \binom{M - n}{N-x} + \int_0^1 \left(1-t^{\omega/D}\right)^x\left(1-t^{1/D}\right)^{N-x} dt + + for + :math:`x \in [x_l, x_u]`, + :math:`M \in {\mathbb N}`, + :math:`n \in [0, M]`, + :math:`N \in [0, M]`, + :math:`\omega > 0`, + where + :math:`x_l = \max(0, N - (M - n))`, + :math:`x_u = \min(N, n)`, + + .. math:: + + D = \omega(n - x) + ((M - n)-(N-x)), + + and the binomial coefficients are defined as + + .. math:: \binom{n}{k} \equiv \frac{n!}{k! (n - k)!}. + + `nchypergeom_wallenius` uses the BiasedUrn package by Agner Fog with + permission for it to be distributed under SciPy's license. + + The symbols used to denote the shape parameters (`N`, `n`, and `M`) are not + universally accepted; they are chosen for consistency with `hypergeom`. + + Note that Wallenius' noncentral hypergeometric distribution is distinct + from Fisher's noncentral hypergeometric distribution, which models + take a handful of objects from the bin at once, finding out afterwards + that `N` objects were taken. + When the odds ratio is unity, however, both distributions reduce to the + ordinary hypergeometric distribution. + + %(after_notes)s + + References + ---------- + .. [1] Agner Fog, "Biased Urn Theory". + https://cran.r-project.org/web/packages/BiasedUrn/vignettes/UrnTheory.pdf + + .. 
[2] "Wallenius' noncentral hypergeometric distribution", Wikipedia, + https://en.wikipedia.org/wiki/Wallenius'_noncentral_hypergeometric_distribution + + %(example)s + + """ + + rvs_name = "rvs_wallenius" + dist = _PyWalleniusNCHypergeometric + + +nchypergeom_wallenius = nchypergeom_wallenius_gen( + name='nchypergeom_wallenius', + longname="A Wallenius' noncentral hypergeometric") + + +# Collect names of classes and objects in this module. +pairs = list(globals().copy().items()) +_distn_names, _distn_gen_names = get_distribution_names(pairs, rv_discrete) + +__all__ = _distn_names + _distn_gen_names diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_distr_params.py b/venv/lib/python3.10/site-packages/scipy/stats/_distr_params.py new file mode 100644 index 0000000000000000000000000000000000000000..c70299a5abdb1fa22ed689ece8311a814c60f270 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_distr_params.py @@ -0,0 +1,288 @@ +""" +Sane parameters for stats.distributions. +""" +import numpy as np + +distcont = [ + ['alpha', (3.5704770516650459,)], + ['anglit', ()], + ['arcsine', ()], + ['argus', (1.0,)], + ['beta', (2.3098496451481823, 0.62687954300963677)], + ['betaprime', (5, 6)], + ['bradford', (0.29891359763170633,)], + ['burr', (10.5, 4.3)], + ['burr12', (10, 4)], + ['cauchy', ()], + ['chi', (78,)], + ['chi2', (55,)], + ['cosine', ()], + ['crystalball', (2.0, 3.0)], + ['dgamma', (1.1023326088288166,)], + ['dweibull', (2.0685080649914673,)], + ['erlang', (10,)], + ['expon', ()], + ['exponnorm', (1.5,)], + ['exponpow', (2.697119160358469,)], + ['exponweib', (2.8923945291034436, 1.9505288745913174)], + ['f', (29, 18)], + ['fatiguelife', (29,)], # correction numargs = 1 + ['fisk', (3.0857548622253179,)], + ['foldcauchy', (4.7164673455831894,)], + ['foldnorm', (1.9521253373555869,)], + ['gamma', (1.9932305483800778,)], + ['gausshyper', (13.763771604130699, 3.1189636648681431, + 2.5145980350183019, 5.1811649903971615)], # veryslow + ['genexpon', (9.1325976465418908, 16.231956600590632, 3.2819552690843983)], + ['genextreme', (-0.1,)], + ['gengamma', (4.4162385429431925, 3.1193091679242761)], + ['gengamma', (4.4162385429431925, -3.1193091679242761)], + ['genhalflogistic', (0.77274727809929322,)], + ['genhyperbolic', (0.5, 1.5, -0.5,)], + ['geninvgauss', (2.3, 1.5)], + ['genlogistic', (0.41192440799679475,)], + ['gennorm', (1.2988442399460265,)], + ['halfgennorm', (0.6748054997000371,)], + ['genpareto', (0.1,)], # use case with finite moments + ['gibrat', ()], + ['gompertz', (0.94743713075105251,)], + ['gumbel_l', ()], + ['gumbel_r', ()], + ['halfcauchy', ()], + ['halflogistic', ()], + ['halfnorm', ()], + ['hypsecant', ()], + ['invgamma', (4.0668996136993067,)], + ['invgauss', (0.14546264555347513,)], + ['invweibull', (10.58,)], + ['jf_skew_t', (8, 4)], + ['johnsonsb', (4.3172675099141058, 3.1837781130785063)], + ['johnsonsu', (2.554395574161155, 2.2482281679651965)], + ['kappa4', (0.0, 0.0)], + ['kappa4', (-0.1, 0.1)], + ['kappa4', (0.0, 0.1)], + ['kappa4', (0.1, 0.0)], + ['kappa3', (1.0,)], + ['ksone', (1000,)], # replace 22 by 100 to avoid failing range, ticket 956 + ['kstwo', (10,)], + ['kstwobign', ()], + ['laplace', ()], + ['laplace_asymmetric', (2,)], + ['levy', ()], + ['levy_l', ()], + ['levy_stable', (1.8, -0.5)], + ['loggamma', (0.41411931826052117,)], + ['logistic', ()], + ['loglaplace', (3.2505926592051435,)], + ['lognorm', (0.95368226960575331,)], + ['loguniform', (0.01, 1.25)], + ['lomax', (1.8771398388773268,)], + ['maxwell', ()], + ['mielke', (10.4, 4.6)], + 
['moyal', ()], + ['nakagami', (4.9673794866666237,)], + ['ncf', (27, 27, 0.41578441799226107)], + ['nct', (14, 0.24045031331198066)], + ['ncx2', (21, 1.0560465975116415)], + ['norm', ()], + ['norminvgauss', (1.25, 0.5)], + ['pareto', (2.621716532144454,)], + ['pearson3', (0.1,)], + ['pearson3', (-2,)], + ['powerlaw', (1.6591133289905851,)], + ['powerlaw', (0.6591133289905851,)], + ['powerlognorm', (2.1413923530064087, 0.44639540782048337)], + ['powernorm', (4.4453652254590779,)], + ['rayleigh', ()], + ['rdist', (1.6,)], + ['recipinvgauss', (0.63004267809369119,)], + ['reciprocal', (0.01, 1.25)], + ['rel_breitwigner', (36.545206797050334, )], + ['rice', (0.7749725210111873,)], + ['semicircular', ()], + ['skewcauchy', (0.5,)], + ['skewnorm', (4.0,)], + ['studentized_range', (3.0, 10.0)], + ['t', (2.7433514990818093,)], + ['trapezoid', (0.2, 0.8)], + ['triang', (0.15785029824528218,)], + ['truncexpon', (4.6907725456810478,)], + ['truncnorm', (-1.0978730080013919, 2.7306754109031979)], + ['truncnorm', (0.1, 2.)], + ['truncpareto', (1.8, 5.3)], + ['truncpareto', (2, 5)], + ['truncweibull_min', (2.5, 0.25, 1.75)], + ['tukeylambda', (3.1321477856738267,)], + ['uniform', ()], + ['vonmises', (3.9939042581071398,)], + ['vonmises_line', (3.9939042581071398,)], + ['wald', ()], + ['weibull_max', (2.8687961709100187,)], + ['weibull_min', (1.7866166930421596,)], + ['wrapcauchy', (0.031071279018614728,)]] + + +distdiscrete = [ + ['bernoulli',(0.3,)], + ['betabinom', (5, 2.3, 0.63)], + ['betanbinom', (5, 9.3, 1)], + ['binom', (5, 0.4)], + ['boltzmann',(1.4, 19)], + ['dlaplace', (0.8,)], # 0.5 + ['geom', (0.5,)], + ['hypergeom',(30, 12, 6)], + ['hypergeom',(21,3,12)], # numpy.random (3,18,12) numpy ticket:921 + ['hypergeom',(21,18,11)], # numpy.random (18,3,11) numpy ticket:921 + ['nchypergeom_fisher', (140, 80, 60, 0.5)], + ['nchypergeom_wallenius', (140, 80, 60, 0.5)], + ['logser', (0.6,)], # re-enabled, numpy ticket:921 + ['nbinom', (0.4, 0.4)], # from tickets: 583 + ['nbinom', (5, 0.5)], + ['planck', (0.51,)], # 4.1 + ['poisson', (0.6,)], + ['randint', (7, 31)], + ['skellam', (15, 8)], + ['zipf', (6.5,)], + ['zipfian', (0.75, 15)], + ['zipfian', (1.25, 10)], + ['yulesimon', (11.0,)], + ['nhypergeom', (20, 7, 1)] +] + + +invdistdiscrete = [ + # In each of the following, at least one shape parameter is invalid + ['hypergeom', (3, 3, 4)], + ['nhypergeom', (5, 2, 8)], + ['nchypergeom_fisher', (3, 3, 4, 1)], + ['nchypergeom_wallenius', (3, 3, 4, 1)], + ['bernoulli', (1.5, )], + ['binom', (10, 1.5)], + ['betabinom', (10, -0.4, -0.5)], + ['betanbinom', (10, -0.4, -0.5)], + ['boltzmann', (-1, 4)], + ['dlaplace', (-0.5, )], + ['geom', (1.5, )], + ['logser', (1.5, )], + ['nbinom', (10, 1.5)], + ['planck', (-0.5, )], + ['poisson', (-0.5, )], + ['randint', (5, 2)], + ['skellam', (-5, -2)], + ['zipf', (-2, )], + ['yulesimon', (-2, )], + ['zipfian', (-0.75, 15)] +] + + +invdistcont = [ + # In each of the following, at least one shape parameter is invalid + ['alpha', (-1, )], + ['anglit', ()], + ['arcsine', ()], + ['argus', (-1, )], + ['beta', (-2, 2)], + ['betaprime', (-2, 2)], + ['bradford', (-1, )], + ['burr', (-1, 1)], + ['burr12', (-1, 1)], + ['cauchy', ()], + ['chi', (-1, )], + ['chi2', (-1, )], + ['cosine', ()], + ['crystalball', (-1, 2)], + ['dgamma', (-1, )], + ['dweibull', (-1, )], + ['erlang', (-1, )], + ['expon', ()], + ['exponnorm', (-1, )], + ['exponweib', (1, -1)], + ['exponpow', (-1, )], + ['f', (10, -10)], + ['fatiguelife', (-1, )], + ['fisk', (-1, )], + ['foldcauchy', (-1, )], + ['foldnorm', (-1, )], 
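# Editorial sketch (not part of this module): each entry of `distcont` above
# pairs a distribution name with workable shape parameters, so the
# corresponding frozen distribution can be built with getattr.  Only the first
# entry is exercised here; importing the private module is for illustration.
import numpy as np
from scipy import stats
from scipy.stats._distr_params import distcont

name, shapes = distcont[0]             # ['alpha', (3.5704770516650459,)]
frozen = getattr(stats, name)(*shapes)
assert np.isfinite(frozen.pdf(1.0))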
+ ['genlogistic', (-1, )], + ['gennorm', (-1, )], + ['genpareto', (np.inf, )], + ['genexpon', (1, 2, -3)], + ['genextreme', (np.inf, )], + ['genhyperbolic', (0.5, -0.5, -1.5,)], + ['gausshyper', (1, 2, 3, -4)], + ['gamma', (-1, )], + ['gengamma', (-1, 0)], + ['genhalflogistic', (-1, )], + ['geninvgauss', (1, 0)], + ['gibrat', ()], + ['gompertz', (-1, )], + ['gumbel_r', ()], + ['gumbel_l', ()], + ['halfcauchy', ()], + ['halflogistic', ()], + ['halfnorm', ()], + ['halfgennorm', (-1, )], + ['hypsecant', ()], + ['invgamma', (-1, )], + ['invgauss', (-1, )], + ['invweibull', (-1, )], + ['jf_skew_t', (-1, 0)], + ['johnsonsb', (1, -2)], + ['johnsonsu', (1, -2)], + ['kappa4', (np.nan, 0)], + ['kappa3', (-1, )], + ['ksone', (-1, )], + ['kstwo', (-1, )], + ['kstwobign', ()], + ['laplace', ()], + ['laplace_asymmetric', (-1, )], + ['levy', ()], + ['levy_l', ()], + ['levy_stable', (-1, 1)], + ['logistic', ()], + ['loggamma', (-1, )], + ['loglaplace', (-1, )], + ['lognorm', (-1, )], + ['loguniform', (10, 5)], + ['lomax', (-1, )], + ['maxwell', ()], + ['mielke', (1, -2)], + ['moyal', ()], + ['nakagami', (-1, )], + ['ncx2', (-1, 2)], + ['ncf', (10, 20, -1)], + ['nct', (-1, 2)], + ['norm', ()], + ['norminvgauss', (5, -10)], + ['pareto', (-1, )], + ['pearson3', (np.nan, )], + ['powerlaw', (-1, )], + ['powerlognorm', (1, -2)], + ['powernorm', (-1, )], + ['rdist', (-1, )], + ['rayleigh', ()], + ['rice', (-1, )], + ['recipinvgauss', (-1, )], + ['semicircular', ()], + ['skewnorm', (np.inf, )], + ['studentized_range', (-1, 1)], + ['rel_breitwigner', (-2, )], + ['t', (-1, )], + ['trapezoid', (0, 2)], + ['triang', (2, )], + ['truncexpon', (-1, )], + ['truncnorm', (10, 5)], + ['truncpareto', (-1, 5)], + ['truncpareto', (1.8, .5)], + ['truncweibull_min', (-2.5, 0.25, 1.75)], + ['tukeylambda', (np.nan, )], + ['uniform', ()], + ['vonmises', (-1, )], + ['vonmises_line', (-1, )], + ['wald', ()], + ['weibull_min', (-1, )], + ['weibull_max', (-1, )], + ['wrapcauchy', (2, )], + ['reciprocal', (15, 10)], + ['skewcauchy', (2, )] +] diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_fit.py b/venv/lib/python3.10/site-packages/scipy/stats/_fit.py new file mode 100644 index 0000000000000000000000000000000000000000..b23e33d74a2c422c91f8eb275af286f004b61b5e --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_fit.py @@ -0,0 +1,1351 @@ +import warnings +from collections import namedtuple +import numpy as np +from scipy import optimize, stats +from scipy._lib._util import check_random_state + + +def _combine_bounds(name, user_bounds, shape_domain, integral): + """Intersection of user-defined bounds and distribution PDF/PMF domain""" + + user_bounds = np.atleast_1d(user_bounds) + + if user_bounds[0] > user_bounds[1]: + message = (f"There are no values for `{name}` on the interval " + f"{list(user_bounds)}.") + raise ValueError(message) + + bounds = (max(user_bounds[0], shape_domain[0]), + min(user_bounds[1], shape_domain[1])) + + if integral and (np.ceil(bounds[0]) > np.floor(bounds[1])): + message = (f"There are no integer values for `{name}` on the interval " + f"defined by the user-provided bounds and the domain " + "of the distribution.") + raise ValueError(message) + elif not integral and (bounds[0] > bounds[1]): + message = (f"There are no values for `{name}` on the interval " + f"defined by the user-provided bounds and the domain " + "of the distribution.") + raise ValueError(message) + + if not np.all(np.isfinite(bounds)): + message = (f"The intersection of user-provided bounds for `{name}` " + f"and the 
domain of the distribution is not finite. Please " + f"provide finite bounds for shape `{name}` in `bounds`.") + raise ValueError(message) + + return bounds + + +class FitResult: + r"""Result of fitting a discrete or continuous distribution to data + + Attributes + ---------- + params : namedtuple + A namedtuple containing the maximum likelihood estimates of the + shape parameters, location, and (if applicable) scale of the + distribution. + success : bool or None + Whether the optimizer considered the optimization to terminate + successfully or not. + message : str or None + Any status message provided by the optimizer. + + """ + + def __init__(self, dist, data, discrete, res): + self._dist = dist + self._data = data + self.discrete = discrete + self.pxf = getattr(dist, "pmf", None) or getattr(dist, "pdf", None) + + shape_names = [] if dist.shapes is None else dist.shapes.split(", ") + if not discrete: + FitParams = namedtuple('FitParams', shape_names + ['loc', 'scale']) + else: + FitParams = namedtuple('FitParams', shape_names + ['loc']) + + self.params = FitParams(*res.x) + + # Optimizer can report success even when nllf is infinite + if res.success and not np.isfinite(self.nllf()): + res.success = False + res.message = ("Optimization converged to parameter values that " + "are inconsistent with the data.") + self.success = getattr(res, "success", None) + self.message = getattr(res, "message", None) + + def __repr__(self): + keys = ["params", "success", "message"] + m = max(map(len, keys)) + 1 + return '\n'.join([key.rjust(m) + ': ' + repr(getattr(self, key)) + for key in keys if getattr(self, key) is not None]) + + def nllf(self, params=None, data=None): + """Negative log-likelihood function + + Evaluates the negative of the log-likelihood function of the provided + data at the provided parameters. + + Parameters + ---------- + params : tuple, optional + The shape parameters, location, and (if applicable) scale of the + distribution as a single tuple. Default is the maximum likelihood + estimates (``self.params``). + data : array_like, optional + The data for which the log-likelihood function is to be evaluated. + Default is the data to which the distribution was fit. + + Returns + ------- + nllf : float + The negative of the log-likelihood function. + + """ + params = params if params is not None else self.params + data = data if data is not None else self._data + return self._dist.nnlf(theta=params, x=data) + + def plot(self, ax=None, *, plot_type="hist"): + """Visually compare the data against the fitted distribution. + + Available only if `matplotlib` is installed. + + Parameters + ---------- + ax : `matplotlib.axes.Axes` + Axes object to draw the plot onto, otherwise uses the current Axes. + plot_type : {"hist", "qq", "pp", "cdf"} + Type of plot to draw. Options include: + + - "hist": Superposes the PDF/PMF of the fitted distribution + over a normalized histogram of the data. + - "qq": Scatter plot of theoretical quantiles against the + empirical quantiles. Specifically, the x-coordinates are the + values of the fitted distribution PPF evaluated at the + percentiles ``(np.arange(1, n) - 0.5)/n``, where ``n`` is the + number of data points, and the y-coordinates are the sorted + data points. + - "pp": Scatter plot of theoretical percentiles against the + observed percentiles. 
Specifically, the x-coordinates are the + percentiles ``(np.arange(1, n) - 0.5)/n``, where ``n`` is + the number of data points, and the y-coordinates are the values + of the fitted distribution CDF evaluated at the sorted + data points. + - "cdf": Superposes the CDF of the fitted distribution over the + empirical CDF. Specifically, the x-coordinates of the empirical + CDF are the sorted data points, and the y-coordinates are the + percentiles ``(np.arange(1, n) - 0.5)/n``, where ``n`` is + the number of data points. + + Returns + ------- + ax : `matplotlib.axes.Axes` + The matplotlib Axes object on which the plot was drawn. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> import matplotlib.pyplot as plt # matplotlib must be installed + >>> rng = np.random.default_rng() + >>> data = stats.nbinom(5, 0.5).rvs(size=1000, random_state=rng) + >>> bounds = [(0, 30), (0, 1)] + >>> res = stats.fit(stats.nbinom, data, bounds) + >>> ax = res.plot() # save matplotlib Axes object + + The `matplotlib.axes.Axes` object can be used to customize the plot. + See `matplotlib.axes.Axes` documentation for details. + + >>> ax.set_xlabel('number of trials') # customize axis label + >>> ax.get_children()[0].set_linewidth(5) # customize line widths + >>> ax.legend() + >>> plt.show() + """ + try: + import matplotlib # noqa: F401 + except ModuleNotFoundError as exc: + message = "matplotlib must be installed to use method `plot`." + raise ModuleNotFoundError(message) from exc + + plots = {'histogram': self._hist_plot, 'qq': self._qq_plot, + 'pp': self._pp_plot, 'cdf': self._cdf_plot, + 'hist': self._hist_plot} + if plot_type.lower() not in plots: + message = f"`plot_type` must be one of {set(plots.keys())}" + raise ValueError(message) + plot = plots[plot_type.lower()] + + if ax is None: + import matplotlib.pyplot as plt + ax = plt.gca() + + fit_params = np.atleast_1d(self.params) + + return plot(ax=ax, fit_params=fit_params) + + def _hist_plot(self, ax, fit_params): + from matplotlib.ticker import MaxNLocator + + support = self._dist.support(*fit_params) + lb = support[0] if np.isfinite(support[0]) else min(self._data) + ub = support[1] if np.isfinite(support[1]) else max(self._data) + pxf = "PMF" if self.discrete else "PDF" + + if self.discrete: + x = np.arange(lb, ub + 2) + y = self.pxf(x, *fit_params) + ax.vlines(x[:-1], 0, y[:-1], label='Fitted Distribution PMF', + color='C0') + options = dict(density=True, bins=x, align='left', color='C1') + ax.xaxis.set_major_locator(MaxNLocator(integer=True)) + ax.set_xlabel('k') + ax.set_ylabel('PMF') + else: + x = np.linspace(lb, ub, 200) + y = self.pxf(x, *fit_params) + ax.plot(x, y, '--', label='Fitted Distribution PDF', color='C0') + options = dict(density=True, bins=50, align='mid', color='C1') + ax.set_xlabel('x') + ax.set_ylabel('PDF') + + if len(self._data) > 50 or self.discrete: + ax.hist(self._data, label="Histogram of Data", **options) + else: + ax.plot(self._data, np.zeros_like(self._data), "*", + label='Data', color='C1') + + ax.set_title(rf"Fitted $\tt {self._dist.name}$ {pxf} and Histogram") + ax.legend(*ax.get_legend_handles_labels()) + return ax + + def _qp_plot(self, ax, fit_params, qq): + data = np.sort(self._data) + ps = self._plotting_positions(len(self._data)) + + if qq: + qp = "Quantiles" + plot_type = 'Q-Q' + x = self._dist.ppf(ps, *fit_params) + y = data + else: + qp = "Percentiles" + plot_type = 'P-P' + x = ps + y = self._dist.cdf(data, *fit_params) + + ax.plot(x, y, '.', label=f'Fitted Distribution {plot_type}', 
+ color='C0', zorder=1) + xlim = ax.get_xlim() + ylim = ax.get_ylim() + lim = [min(xlim[0], ylim[0]), max(xlim[1], ylim[1])] + if not qq: + lim = max(lim[0], 0), min(lim[1], 1) + + if self.discrete and qq: + q_min, q_max = int(lim[0]), int(lim[1]+1) + q_ideal = np.arange(q_min, q_max) + # q_ideal = np.unique(self._dist.ppf(ps, *fit_params)) + ax.plot(q_ideal, q_ideal, 'o', label='Reference', color='k', + alpha=0.25, markerfacecolor='none', clip_on=True) + elif self.discrete and not qq: + # The intent of this is to match the plot that would be produced + # if x were continuous on [0, 1] and y were cdf(ppf(x)). + # It can be approximated by letting x = np.linspace(0, 1, 1000), + # but this might not look great when zooming in. The vertical + # portions are included to indicate where the transition occurs + # where the data completely obscures the horizontal portions. + p_min, p_max = lim + a, b = self._dist.support(*fit_params) + p_min = max(p_min, 0 if np.isfinite(a) else 1e-3) + p_max = min(p_max, 1 if np.isfinite(b) else 1-1e-3) + q_min, q_max = self._dist.ppf([p_min, p_max], *fit_params) + qs = np.arange(q_min-1, q_max+1) + ps = self._dist.cdf(qs, *fit_params) + ax.step(ps, ps, '-', label='Reference', color='k', alpha=0.25, + clip_on=True) + else: + ax.plot(lim, lim, '-', label='Reference', color='k', alpha=0.25, + clip_on=True) + + ax.set_xlim(lim) + ax.set_ylim(lim) + ax.set_xlabel(rf"Fitted $\tt {self._dist.name}$ Theoretical {qp}") + ax.set_ylabel(f"Data {qp}") + ax.set_title(rf"Fitted $\tt {self._dist.name}$ {plot_type} Plot") + ax.legend(*ax.get_legend_handles_labels()) + ax.set_aspect('equal') + return ax + + def _qq_plot(self, **kwargs): + return self._qp_plot(qq=True, **kwargs) + + def _pp_plot(self, **kwargs): + return self._qp_plot(qq=False, **kwargs) + + def _plotting_positions(self, n, a=.5): + # See https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot#Plotting_positions + k = np.arange(1, n+1) + return (k-a) / (n + 1 - 2*a) + + def _cdf_plot(self, ax, fit_params): + data = np.sort(self._data) + ecdf = self._plotting_positions(len(self._data)) + ls = '--' if len(np.unique(data)) < 30 else '.' + xlabel = 'k' if self.discrete else 'x' + ax.step(data, ecdf, ls, label='Empirical CDF', color='C1', zorder=0) + + xlim = ax.get_xlim() + q = np.linspace(*xlim, 300) + tcdf = self._dist.cdf(q, *fit_params) + + ax.plot(q, tcdf, label='Fitted Distribution CDF', color='C0', zorder=1) + ax.set_xlim(xlim) + ax.set_ylim(0, 1) + ax.set_xlabel(xlabel) + ax.set_ylabel("CDF") + ax.set_title(rf"Fitted $\tt {self._dist.name}$ and Empirical CDF") + handles, labels = ax.get_legend_handles_labels() + ax.legend(handles[::-1], labels[::-1]) + return ax + + +def fit(dist, data, bounds=None, *, guess=None, method='mle', + optimizer=optimize.differential_evolution): + r"""Fit a discrete or continuous distribution to data + + Given a distribution, data, and bounds on the parameters of the + distribution, return maximum likelihood estimates of the parameters. + + Parameters + ---------- + dist : `scipy.stats.rv_continuous` or `scipy.stats.rv_discrete` + The object representing the distribution to be fit to the data. + data : 1D array_like + The data to which the distribution is to be fit. If the data contain + any of ``np.nan``, ``np.inf``, or -``np.inf``, the fit method will + raise a ``ValueError``. 
+ bounds : dict or sequence of tuples, optional + If a dictionary, each key is the name of a parameter of the + distribution, and the corresponding value is a tuple containing the + lower and upper bound on that parameter. If the distribution is + defined only for a finite range of values of that parameter, no entry + for that parameter is required; e.g., some distributions have + parameters which must be on the interval [0, 1]. Bounds for parameters + location (``loc``) and scale (``scale``) are optional; by default, + they are fixed to 0 and 1, respectively. + + If a sequence, element *i* is a tuple containing the lower and upper + bound on the *i*\ th parameter of the distribution. In this case, + bounds for *all* distribution shape parameters must be provided. + Optionally, bounds for location and scale may follow the + distribution shape parameters. + + If a shape is to be held fixed (e.g. if it is known), the + lower and upper bounds may be equal. If a user-provided lower or upper + bound is beyond a bound of the domain for which the distribution is + defined, the bound of the distribution's domain will replace the + user-provided value. Similarly, parameters which must be integral + will be constrained to integral values within the user-provided bounds. + guess : dict or array_like, optional + If a dictionary, each key is the name of a parameter of the + distribution, and the corresponding value is a guess for the value + of the parameter. + + If a sequence, element *i* is a guess for the *i*\ th parameter of the + distribution. In this case, guesses for *all* distribution shape + parameters must be provided. + + If `guess` is not provided, guesses for the decision variables will + not be passed to the optimizer. If `guess` is provided, guesses for + any missing parameters will be set at the mean of the lower and + upper bounds. Guesses for parameters which must be integral will be + rounded to integral values, and guesses that lie outside the + intersection of the user-provided bounds and the domain of the + distribution will be clipped. + method : {'mle', 'mse'} + With ``method="mle"`` (default), the fit is computed by minimizing + the negative log-likelihood function. A large, finite penalty + (rather than infinite negative log-likelihood) is applied for + observations beyond the support of the distribution. + With ``method="mse"``, the fit is computed by minimizing + the negative log-product spacing function. The same penalty is applied + for observations beyond the support. We follow the approach of [1]_, + which is generalized for samples with repeated observations. + optimizer : callable, optional + `optimizer` is a callable that accepts the following positional + argument. + + fun : callable + The objective function to be optimized. `fun` accepts one argument + ``x``, candidate shape parameters of the distribution, and returns + the objective function value given ``x``, `dist`, and the provided + `data`. + The job of `optimizer` is to find values of the decision variables + that minimizes `fun`. + + `optimizer` must also accept the following keyword argument. + + bounds : sequence of tuples + The bounds on values of the decision variables; each element will + be a tuple containing the lower and upper bound on a decision + variable. + + If `guess` is provided, `optimizer` must also accept the following + keyword argument. + + x0 : array_like + The guesses for each decision variable. 
+ + If the distribution has any shape parameters that must be integral or + if the distribution is discrete and the location parameter is not + fixed, `optimizer` must also accept the following keyword argument. + + integrality : array_like of bools + For each decision variable, True if the decision variable + must be constrained to integer values and False if the decision + variable is continuous. + + `optimizer` must return an object, such as an instance of + `scipy.optimize.OptimizeResult`, which holds the optimal values of + the decision variables in an attribute ``x``. If attributes + ``fun``, ``status``, or ``message`` are provided, they will be + included in the result object returned by `fit`. + + Returns + ------- + result : `~scipy.stats._result_classes.FitResult` + An object with the following fields. + + params : namedtuple + A namedtuple containing the maximum likelihood estimates of the + shape parameters, location, and (if applicable) scale of the + distribution. + success : bool or None + Whether the optimizer considered the optimization to terminate + successfully or not. + message : str or None + Any status message provided by the optimizer. + + The object has the following methods: + + nllf(params=None, data=None) + By default, the negative log-likelihood function at the fitted + `params` for the given `data`. Accepts a tuple containing + alternative shapes, location, and scale of the distribution and + an array of alternative data. + + plot(ax=None) + Superposes the PDF/PMF of the fitted distribution over a normalized + histogram of the data. + + See Also + -------- + rv_continuous, rv_discrete + + Notes + ----- + Optimization is more likely to converge to the maximum likelihood estimate + when the user provides tight bounds containing the maximum likelihood + estimate. For example, when fitting a binomial distribution to data, the + number of experiments underlying each sample may be known, in which case + the corresponding shape parameter ``n`` can be fixed. + + References + ---------- + .. [1] Shao, Yongzhao, and Marjorie G. Hahn. "Maximum product of spacings + method: a unified formulation with illustration of strong + consistency." Illinois Journal of Mathematics 43.3 (1999): 489-499. + + Examples + -------- + Suppose we wish to fit a distribution to the following data. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> dist = stats.nbinom + >>> shapes = (5, 0.5) + >>> data = dist.rvs(*shapes, size=1000, random_state=rng) + + Suppose we do not know how the data were generated, but we suspect that + it follows a negative binomial distribution with parameters *n* and *p*\. + (See `scipy.stats.nbinom`.) We believe that the parameter *n* was fewer + than 30, and we know that the parameter *p* must lie on the interval + [0, 1]. We record this information in a variable `bounds` and pass + this information to `fit`. + + >>> bounds = [(0, 30), (0, 1)] + >>> res = stats.fit(dist, data, bounds) + + `fit` searches within the user-specified `bounds` for the + values that best match the data (in the sense of maximum likelihood + estimation). In this case, it found shape values similar to those + from which the data were actually generated. + + >>> res.params + FitParams(n=5.0, p=0.5028157644634368, loc=0.0) # may vary + + We can visualize the results by superposing the probability mass function + of the distribution (with the shapes fit to the data) over a normalized + histogram of the data. 
+ + >>> import matplotlib.pyplot as plt # matplotlib must be installed to plot + >>> res.plot() + >>> plt.show() + + Note that the estimate for *n* was exactly integral; this is because + the domain of the `nbinom` PMF includes only integral *n*, and the `nbinom` + object "knows" that. `nbinom` also knows that the shape *p* must be a + value between 0 and 1. In such a case - when the domain of the distribution + with respect to a parameter is finite - we are not required to specify + bounds for the parameter. + + >>> bounds = {'n': (0, 30)} # omit parameter p using a `dict` + >>> res2 = stats.fit(dist, data, bounds) + >>> res2.params + FitParams(n=5.0, p=0.5016492009232932, loc=0.0) # may vary + + If we wish to force the distribution to be fit with *n* fixed at 6, we can + set both the lower and upper bounds on *n* to 6. Note, however, that the + value of the objective function being optimized is typically worse (higher) + in this case. + + >>> bounds = {'n': (6, 6)} # fix parameter `n` + >>> res3 = stats.fit(dist, data, bounds) + >>> res3.params + FitParams(n=6.0, p=0.5486556076755706, loc=0.0) # may vary + >>> res3.nllf() > res.nllf() + True # may vary + + Note that the numerical results of the previous examples are typical, but + they may vary because the default optimizer used by `fit`, + `scipy.optimize.differential_evolution`, is stochastic. However, we can + customize the settings used by the optimizer to ensure reproducibility - + or even use a different optimizer entirely - using the `optimizer` + parameter. + + >>> from scipy.optimize import differential_evolution + >>> rng = np.random.default_rng(767585560716548) + >>> def optimizer(fun, bounds, *, integrality): + ... return differential_evolution(fun, bounds, strategy='best2bin', + ... seed=rng, integrality=integrality) + >>> bounds = [(0, 30), (0, 1)] + >>> res4 = stats.fit(dist, data, bounds, optimizer=optimizer) + >>> res4.params + FitParams(n=5.0, p=0.5015183149259951, loc=0.0) + + """ + # --- Input Validation / Standardization --- # + user_bounds = bounds + user_guess = guess + + # distribution input validation and information collection + if hasattr(dist, "pdf"): # can't use isinstance for types + default_bounds = {'loc': (0, 0), 'scale': (1, 1)} + discrete = False + elif hasattr(dist, "pmf"): + default_bounds = {'loc': (0, 0)} + discrete = True + else: + message = ("`dist` must be an instance of `rv_continuous` " + "or `rv_discrete.`") + raise ValueError(message) + + try: + param_info = dist._param_info() + except AttributeError as e: + message = (f"Distribution `{dist.name}` is not yet supported by " + "`scipy.stats.fit` because shape information has " + "not been defined.") + raise ValueError(message) from e + + # data input validation + data = np.asarray(data) + if data.ndim != 1: + message = "`data` must be exactly one-dimensional." + raise ValueError(message) + if not (np.issubdtype(data.dtype, np.number) + and np.all(np.isfinite(data))): + message = "All elements of `data` must be finite numbers." 
+ raise ValueError(message) + + # bounds input validation and information collection + n_params = len(param_info) + n_shapes = n_params - (1 if discrete else 2) + param_list = [param.name for param in param_info] + param_names = ", ".join(param_list) + shape_names = ", ".join(param_list[:n_shapes]) + + if user_bounds is None: + user_bounds = {} + + if isinstance(user_bounds, dict): + default_bounds.update(user_bounds) + user_bounds = default_bounds + user_bounds_array = np.empty((n_params, 2)) + for i in range(n_params): + param_name = param_info[i].name + user_bound = user_bounds.pop(param_name, None) + if user_bound is None: + user_bound = param_info[i].domain + user_bounds_array[i] = user_bound + if user_bounds: + message = ("Bounds provided for the following unrecognized " + f"parameters will be ignored: {set(user_bounds)}") + warnings.warn(message, RuntimeWarning, stacklevel=2) + + else: + try: + user_bounds = np.asarray(user_bounds, dtype=float) + if user_bounds.size == 0: + user_bounds = np.empty((0, 2)) + except ValueError as e: + message = ("Each element of a `bounds` sequence must be a tuple " + "containing two elements: the lower and upper bound of " + "a distribution parameter.") + raise ValueError(message) from e + if (user_bounds.ndim != 2 or user_bounds.shape[1] != 2): + message = ("Each element of `bounds` must be a tuple specifying " + "the lower and upper bounds of a shape parameter") + raise ValueError(message) + if user_bounds.shape[0] < n_shapes: + message = (f"A `bounds` sequence must contain at least {n_shapes} " + "elements: tuples specifying the lower and upper " + f"bounds of all shape parameters {shape_names}.") + raise ValueError(message) + if user_bounds.shape[0] > n_params: + message = ("A `bounds` sequence may not contain more than " + f"{n_params} elements: tuples specifying the lower and " + "upper bounds of distribution parameters " + f"{param_names}.") + raise ValueError(message) + + user_bounds_array = np.empty((n_params, 2)) + user_bounds_array[n_shapes:] = list(default_bounds.values()) + user_bounds_array[:len(user_bounds)] = user_bounds + + user_bounds = user_bounds_array + validated_bounds = [] + for i in range(n_params): + name = param_info[i].name + user_bound = user_bounds_array[i] + param_domain = param_info[i].domain + integral = param_info[i].integrality + combined = _combine_bounds(name, user_bound, param_domain, integral) + validated_bounds.append(combined) + + bounds = np.asarray(validated_bounds) + integrality = [param.integrality for param in param_info] + + # guess input validation + + if user_guess is None: + guess_array = None + elif isinstance(user_guess, dict): + default_guess = {param.name: np.mean(bound) + for param, bound in zip(param_info, bounds)} + unrecognized = set(user_guess) - set(default_guess) + if unrecognized: + message = ("Guesses provided for the following unrecognized " + f"parameters will be ignored: {unrecognized}") + warnings.warn(message, RuntimeWarning, stacklevel=2) + default_guess.update(user_guess) + + message = ("Each element of `guess` must be a scalar " + "guess for a distribution parameter.") + try: + guess_array = np.asarray([default_guess[param.name] + for param in param_info], dtype=float) + except ValueError as e: + raise ValueError(message) from e + + else: + message = ("Each element of `guess` must be a scalar " + "guess for a distribution parameter.") + try: + user_guess = np.asarray(user_guess, dtype=float) + except ValueError as e: + raise ValueError(message) from e + if user_guess.ndim != 1: + 
raise ValueError(message) + if user_guess.shape[0] < n_shapes: + message = (f"A `guess` sequence must contain at least {n_shapes} " + "elements: scalar guesses for the distribution shape " + f"parameters {shape_names}.") + raise ValueError(message) + if user_guess.shape[0] > n_params: + message = ("A `guess` sequence may not contain more than " + f"{n_params} elements: scalar guesses for the " + f"distribution parameters {param_names}.") + raise ValueError(message) + + guess_array = np.mean(bounds, axis=1) + guess_array[:len(user_guess)] = user_guess + + if guess_array is not None: + guess_rounded = guess_array.copy() + + guess_rounded[integrality] = np.round(guess_rounded[integrality]) + rounded = np.where(guess_rounded != guess_array)[0] + for i in rounded: + message = (f"Guess for parameter `{param_info[i].name}` " + f"rounded from {guess_array[i]} to {guess_rounded[i]}.") + warnings.warn(message, RuntimeWarning, stacklevel=2) + + guess_clipped = np.clip(guess_rounded, bounds[:, 0], bounds[:, 1]) + clipped = np.where(guess_clipped != guess_rounded)[0] + for i in clipped: + message = (f"Guess for parameter `{param_info[i].name}` " + f"clipped from {guess_rounded[i]} to " + f"{guess_clipped[i]}.") + warnings.warn(message, RuntimeWarning, stacklevel=2) + + guess = guess_clipped + else: + guess = None + + # --- Fitting --- # + def nllf(free_params, data=data): # bind data NOW + with np.errstate(invalid='ignore', divide='ignore'): + return dist._penalized_nnlf(free_params, data) + + def nlpsf(free_params, data=data): # bind data NOW + with np.errstate(invalid='ignore', divide='ignore'): + return dist._penalized_nlpsf(free_params, data) + + methods = {'mle': nllf, 'mse': nlpsf} + objective = methods[method.lower()] + + with np.errstate(invalid='ignore', divide='ignore'): + kwds = {} + if bounds is not None: + kwds['bounds'] = bounds + if np.any(integrality): + kwds['integrality'] = integrality + if guess is not None: + kwds['x0'] = guess + res = optimizer(objective, **kwds) + + return FitResult(dist, data, discrete, res) + + +GoodnessOfFitResult = namedtuple('GoodnessOfFitResult', + ('fit_result', 'statistic', 'pvalue', + 'null_distribution')) + + +def goodness_of_fit(dist, data, *, known_params=None, fit_params=None, + guessed_params=None, statistic='ad', n_mc_samples=9999, + random_state=None): + r""" + Perform a goodness of fit test comparing data to a distribution family. + + Given a distribution family and data, perform a test of the null hypothesis + that the data were drawn from a distribution in that family. Any known + parameters of the distribution may be specified. Remaining parameters of + the distribution will be fit to the data, and the p-value of the test + is computed accordingly. Several statistics for comparing the distribution + to data are available. + + Parameters + ---------- + dist : `scipy.stats.rv_continuous` + The object representing the distribution family under the null + hypothesis. + data : 1D array_like + Finite, uncensored data to be tested. + known_params : dict, optional + A dictionary containing name-value pairs of known distribution + parameters. Monte Carlo samples are randomly drawn from the + null-hypothesized distribution with these values of the parameters. + Before the statistic is evaluated for each Monte Carlo sample, only + remaining unknown parameters of the null-hypothesized distribution + family are fit to the samples; the known parameters are held fixed. 
+ If all parameters of the distribution family are known, then the step + of fitting the distribution family to each sample is omitted. + fit_params : dict, optional + A dictionary containing name-value pairs of distribution parameters + that have already been fit to the data, e.g. using `scipy.stats.fit` + or the ``fit`` method of `dist`. Monte Carlo samples are drawn from the + null-hypothesized distribution with these specified values of the + parameter. On those Monte Carlo samples, however, these and all other + unknown parameters of the null-hypothesized distribution family are + fit before the statistic is evaluated. + guessed_params : dict, optional + A dictionary containing name-value pairs of distribution parameters + which have been guessed. These parameters are always considered as + free parameters and are fit both to the provided `data` as well as + to the Monte Carlo samples drawn from the null-hypothesized + distribution. The purpose of these `guessed_params` is to be used as + initial values for the numerical fitting procedure. + statistic : {"ad", "ks", "cvm", "filliben"} or callable, optional + The statistic used to compare data to a distribution after fitting + unknown parameters of the distribution family to the data. The + Anderson-Darling ("ad") [1]_, Kolmogorov-Smirnov ("ks") [1]_, + Cramer-von Mises ("cvm") [1]_, and Filliben ("filliben") [7]_ + statistics are available. Alternatively, a callable with signature + ``(dist, data, axis)`` may be supplied to compute the statistic. Here + ``dist`` is a frozen distribution object (potentially with array + parameters), ``data`` is an array of Monte Carlo samples (of + compatible shape), and ``axis`` is the axis of ``data`` along which + the statistic must be computed. + n_mc_samples : int, default: 9999 + The number of Monte Carlo samples drawn from the null hypothesized + distribution to form the null distribution of the statistic. The + sample size of each is the same as the given `data`. + random_state : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + + Pseudorandom number generator state used to generate the Monte Carlo + samples. + + If `random_state` is ``None`` (default), the + `numpy.random.RandomState` singleton is used. + If `random_state` is an int, a new ``RandomState`` instance is used, + seeded with `random_state`. + If `random_state` is already a ``Generator`` or ``RandomState`` + instance, then the provided instance is used. + + Returns + ------- + res : GoodnessOfFitResult + An object with the following attributes. + + fit_result : `~scipy.stats._result_classes.FitResult` + An object representing the fit of the provided `dist` to `data`. + This object includes the values of distribution family parameters + that fully define the null-hypothesized distribution, that is, + the distribution from which Monte Carlo samples are drawn. + statistic : float + The value of the statistic comparing provided `data` to the + null-hypothesized distribution. + pvalue : float + The proportion of elements in the null distribution with + statistic values at least as extreme as the statistic value of the + provided `data`. + null_distribution : ndarray + The value of the statistic for each Monte Carlo sample + drawn from the null-hypothesized distribution. + + Notes + ----- + This is a generalized Monte Carlo goodness-of-fit procedure, special cases + of which correspond with various Anderson-Darling tests, Lilliefors' test, + etc. 
The test is described in [2]_, [3]_, and [4]_ as a parametric + bootstrap test. This is a Monte Carlo test in which parameters that + specify the distribution from which samples are drawn have been estimated + from the data. We describe the test using "Monte Carlo" rather than + "parametric bootstrap" throughout to avoid confusion with the more familiar + nonparametric bootstrap, and describe how the test is performed below. + + *Traditional goodness of fit tests* + + Traditionally, critical values corresponding with a fixed set of + significance levels are pre-calculated using Monte Carlo methods. Users + perform the test by calculating the value of the test statistic only for + their observed `data` and comparing this value to tabulated critical + values. This practice is not very flexible, as tables are not available for + all distributions and combinations of known and unknown parameter values. + Also, results can be inaccurate when critical values are interpolated from + limited tabulated data to correspond with the user's sample size and + fitted parameter values. To overcome these shortcomings, this function + allows the user to perform the Monte Carlo trials adapted to their + particular data. + + *Algorithmic overview* + + In brief, this routine executes the following steps: + + 1. Fit unknown parameters to the given `data`, thereby forming the + "null-hypothesized" distribution, and compute the statistic of + this pair of data and distribution. + 2. Draw random samples from this null-hypothesized distribution. + 3. Fit the unknown parameters to each random sample. + 4. Calculate the statistic between each sample and the distribution that + has been fit to the sample. + 5. Compare the value of the statistic corresponding with `data` from (1) + against the values of the statistic corresponding with the random + samples from (4). The p-value is the proportion of samples with a + statistic value greater than or equal to the statistic of the observed + data. + + In more detail, the steps are as follows. + + First, any unknown parameters of the distribution family specified by + `dist` are fit to the provided `data` using maximum likelihood estimation. + (One exception is the normal distribution with unknown location and scale: + we use the bias-corrected standard deviation ``np.std(data, ddof=1)`` for + the scale as recommended in [1]_.) + These values of the parameters specify a particular member of the + distribution family referred to as the "null-hypothesized distribution", + that is, the distribution from which the data were sampled under the null + hypothesis. The `statistic`, which compares data to a distribution, is + computed between `data` and the null-hypothesized distribution. + + Next, many (specifically `n_mc_samples`) new samples, each containing the + same number of observations as `data`, are drawn from the + null-hypothesized distribution. All unknown parameters of the distribution + family `dist` are fit to *each resample*, and the `statistic` is computed + between each sample and its corresponding fitted distribution. These + values of the statistic form the Monte Carlo null distribution (not to be + confused with the "null-hypothesized distribution" above). + + The p-value of the test is the proportion of statistic values in the Monte + Carlo null distribution that are at least as extreme as the statistic value + of the provided `data`. More precisely, the p-value is given by + + .. 
math:: + + p = \frac{b + 1} + {m + 1} + + where :math:`b` is the number of statistic values in the Monte Carlo null + distribution that are greater than or equal to the statistic value + calculated for `data`, and :math:`m` is the number of elements in the + Monte Carlo null distribution (`n_mc_samples`). The addition of :math:`1` + to the numerator and denominator can be thought of as including the + value of the statistic corresponding with `data` in the null distribution, + but a more formal explanation is given in [5]_. + + *Limitations* + + The test can be very slow for some distribution families because unknown + parameters of the distribution family must be fit to each of the Monte + Carlo samples, and for most distributions in SciPy, distribution fitting is + performed via numerical optimization. + + *Anti-Pattern* + + For this reason, it may be tempting + to treat parameters of the distribution pre-fit to `data` (by the user) + as though they were `known_params`, as specification of all parameters of + the distribution precludes the need to fit the distribution to each Monte + Carlo sample. (This is essentially how the original Kolmogorov-Smirnov + test is performed.) Although such a test can provide evidence against the + null hypothesis, the test is conservative in the sense that small p-values + will tend to (greatly) *overestimate* the probability of making a type I + error (that is, rejecting the null hypothesis although it is true), and the + power of the test is low (that is, it is less likely to reject the null + hypothesis even when the null hypothesis is false). + This is because the Monte Carlo samples are less likely to agree with the + null-hypothesized distribution as well as `data`. This tends to increase + the values of the statistic recorded in the null distribution, so that a + larger number of them exceed the value of the statistic for `data`, thereby + inflating the p-value. + + References + ---------- + .. [1] M. A. Stephens (1974). "EDF Statistics for Goodness of Fit and + Some Comparisons." Journal of the American Statistical Association, + Vol. 69, pp. 730-737. + .. [2] W. Stute, W. G. Manteiga, and M. P. Quindimil (1993). + "Bootstrap based goodness-of-fit-tests." Metrika 40.1: 243-256. + .. [3] C. Genest and B. Rémillard (2008). "Validity of the parametric + bootstrap for goodness-of-fit testing in semiparametric models." + Annales de l'IHP Probabilités et statistiques. Vol. 44. No. 6. + .. [4] I. Kojadinovic and J. Yan (2012). "Goodness-of-fit testing based on + a weighted bootstrap: A fast large-sample alternative to the + parametric bootstrap." Canadian Journal of Statistics 40.3: 480-500. + .. [5] B. Phipson and G. K. Smyth (2010). "Permutation P-values Should + Never Be Zero: Calculating Exact P-values When Permutations Are + Randomly Drawn." Statistical Applications in Genetics and Molecular + Biology 9.1. + .. [6] H. W. Lilliefors (1967). "On the Kolmogorov-Smirnov test for + normality with mean and variance unknown." Journal of the American + Statistical Association 62.318: 399-402. + .. [7] Filliben, James J. "The probability plot correlation coefficient + test for normality." Technometrics 17.1 (1975): 111-117. + + Examples + -------- + A well-known test of the null hypothesis that data were drawn from a + given distribution is the Kolmogorov-Smirnov (KS) test, available in SciPy + as `scipy.stats.ks_1samp`. 
Suppose we wish to test whether the following + data: + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> x = stats.uniform.rvs(size=75, random_state=rng) + + were sampled from a normal distribution. To perform a KS test, the + empirical distribution function of the observed data will be compared + against the (theoretical) cumulative distribution function of a normal + distribution. Of course, to do this, the normal distribution under the null + hypothesis must be fully specified. This is commonly done by first fitting + the ``loc`` and ``scale`` parameters of the distribution to the observed + data, then performing the test. + + >>> loc, scale = np.mean(x), np.std(x, ddof=1) + >>> cdf = stats.norm(loc, scale).cdf + >>> stats.ks_1samp(x, cdf) + KstestResult(statistic=0.1119257570456813, pvalue=0.2827756409939257) + + An advantage of the KS test is that the p-value - the probability of + obtaining a value of the test statistic under the null hypothesis as + extreme as the value obtained from the observed data - can be calculated + exactly and efficiently. `goodness_of_fit` can only approximate these + results. + + >>> known_params = {'loc': loc, 'scale': scale} + >>> res = stats.goodness_of_fit(stats.norm, x, known_params=known_params, + ... statistic='ks', random_state=rng) + >>> res.statistic, res.pvalue + (0.1119257570456813, 0.2788) + + The statistic matches exactly, but the p-value is estimated by forming + a "Monte Carlo null distribution", that is, by explicitly drawing random + samples from `scipy.stats.norm` with the provided parameters and + calculating the statistic for each. The fraction of these statistic values + at least as extreme as ``res.statistic`` approximates the exact p-value + calculated by `scipy.stats.ks_1samp`. + + However, in many cases, we would prefer to test only that the data were + sampled from *any* member of the normal distribution family, not + specifically from the normal distribution with the location and scale + fitted to the observed sample. In this case, Lilliefors [6]_ argued that + the KS test is far too conservative (that is, the p-value overstates + the actual probability of rejecting a true null hypothesis) and thus lacks + power - the ability to reject the null hypothesis when the null hypothesis + is actually false. + Indeed, our p-value above is approximately 0.28, which is far too large + to reject the null hypothesis at any common significance level. + + Consider why this might be. Note that in the KS test above, the statistic + always compares data against the CDF of a normal distribution fitted to the + *observed data*. This tends to reduce the value of the statistic for the + observed data, but it is "unfair" when computing the statistic for other + samples, such as those we randomly draw to form the Monte Carlo null + distribution. It is easy to correct for this: whenever we compute the KS + statistic of a sample, we use the CDF of a normal distribution fitted + to *that sample*. The null distribution in this case has not been + calculated exactly and is typically approximated using Monte Carlo methods + as described above. This is where `goodness_of_fit` excels. + + >>> res = stats.goodness_of_fit(stats.norm, x, statistic='ks', + ... random_state=rng) + >>> res.statistic, res.pvalue + (0.1119257570456813, 0.0196) + + Indeed, this p-value is much smaller, and small enough to (correctly) + reject the null hypothesis at common significance levels, including 5% and + 2.5%. 
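The refitting step just described can also be written out by hand. The following sketch is only an illustration (it assumes nothing beyond `numpy` and `scipy.stats`; the seed and the 999 resamples are arbitrary choices): it builds the Lilliefors-style Monte Carlo null distribution for the normal family, refitting the parameters to every resample, and applies the (b + 1)/(m + 1) convention from the Notes above.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12345)               # illustrative seed
    x = stats.uniform.rvs(size=75, random_state=rng)

    def ks_refit(sample):
        # Refit loc/scale to *this* sample before computing its KS statistic.
        loc, scale = np.mean(sample), np.std(sample, ddof=1)
        return stats.ks_1samp(sample, stats.norm(loc, scale).cdf).statistic

    observed = ks_refit(x)
    loc0, scale0 = np.mean(x), np.std(x, ddof=1)     # null-hypothesized normal
    null = np.array([ks_refit(stats.norm.rvs(loc0, scale0, size=x.size,
                                             random_state=rng))
                     for _ in range(999)])
    pvalue = (np.sum(null >= observed) + 1) / (null.size + 1)

This mirrors the procedure the docstring attributes to `goodness_of_fit` for this case, only without the vectorized fitting helpers defined later in the module.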
+ + However, the KS statistic is not very sensitive to all deviations from + normality. The original advantage of the KS statistic was the ability + to compute the null distribution theoretically, but a more sensitive + statistic - resulting in a higher test power - can be used now that we can + approximate the null distribution + computationally. The Anderson-Darling statistic [1]_ tends to be more + sensitive, and critical values of this statistic have been tabulated + for various significance levels and sample sizes using Monte Carlo methods. + + >>> res = stats.anderson(x, 'norm') + >>> print(res.statistic) + 1.2139573337497467 + >>> print(res.critical_values) + [0.549 0.625 0.75 0.875 1.041] + >>> print(res.significance_level) + [15. 10. 5. 2.5 1. ] + + Here, the observed value of the statistic exceeds the critical value + corresponding with a 1% significance level. This tells us that the p-value + of the observed data is less than 1%, but what is it? We could interpolate + from these (already-interpolated) values, but `goodness_of_fit` can + estimate it directly. + + >>> res = stats.goodness_of_fit(stats.norm, x, statistic='ad', + ... random_state=rng) + >>> res.statistic, res.pvalue + (1.2139573337497467, 0.0034) + + A further advantage is that use of `goodness_of_fit` is not limited to + a particular set of distributions or conditions on which parameters + are known versus which must be estimated from data. Instead, + `goodness_of_fit` can estimate p-values relatively quickly for any + distribution with a sufficiently fast and reliable ``fit`` method. For + instance, here we perform a goodness of fit test using the Cramer-von Mises + statistic against the Rayleigh distribution with known location and unknown + scale. + + >>> rng = np.random.default_rng() + >>> x = stats.chi(df=2.2, loc=0, scale=2).rvs(size=1000, random_state=rng) + >>> res = stats.goodness_of_fit(stats.rayleigh, x, statistic='cvm', + ... known_params={'loc': 0}, random_state=rng) + + This executes fairly quickly, but to check the reliability of the ``fit`` + method, we should inspect the fit result. + + >>> res.fit_result # location is as specified, and scale is reasonable + params: FitParams(loc=0.0, scale=2.1026719844231243) + success: True + message: 'The fit was performed successfully.' + >>> import matplotlib.pyplot as plt # matplotlib must be installed to plot + >>> res.fit_result.plot() + >>> plt.show() + + If the distribution is not fit to the observed data as well as possible, + the test may not control the type I error rate, that is, the chance of + rejecting the null hypothesis even when it is true. + + We should also look for extreme outliers in the null distribution that + may be caused by unreliable fitting. These do not necessarily invalidate + the result, but they tend to reduce the test's power. + + >>> _, ax = plt.subplots() + >>> ax.hist(np.log10(res.null_distribution)) + >>> ax.set_xlabel("log10 of CVM statistic under the null hypothesis") + >>> ax.set_ylabel("Frequency") + >>> ax.set_title("Histogram of the Monte Carlo null distribution") + >>> plt.show() + + This plot seems reassuring. + + If the ``fit`` method is working reliably, and if the distribution of the test + statistic is not particularly sensitive to the values of the fitted + parameters, then the p-value provided by `goodness_of_fit` is expected to + be a good approximation. 
+ + >>> res.statistic, res.pvalue + (0.2231991510248692, 0.0525) + + """ + args = _gof_iv(dist, data, known_params, fit_params, guessed_params, + statistic, n_mc_samples, random_state) + (dist, data, fixed_nhd_params, fixed_rfd_params, guessed_nhd_params, + guessed_rfd_params, statistic, n_mc_samples_int, random_state) = args + + # Fit null hypothesis distribution to data + nhd_fit_fun = _get_fit_fun(dist, data, guessed_nhd_params, + fixed_nhd_params) + nhd_vals = nhd_fit_fun(data) + nhd_dist = dist(*nhd_vals) + + def rvs(size): + return nhd_dist.rvs(size=size, random_state=random_state) + + # Define statistic + fit_fun = _get_fit_fun(dist, data, guessed_rfd_params, fixed_rfd_params) + if callable(statistic): + compare_fun = statistic + else: + compare_fun = _compare_dict[statistic] + alternative = getattr(compare_fun, 'alternative', 'greater') + + def statistic_fun(data, axis): + # Make things simple by always working along the last axis. + data = np.moveaxis(data, axis, -1) + rfd_vals = fit_fun(data) + rfd_dist = dist(*rfd_vals) + return compare_fun(rfd_dist, data, axis=-1) + + res = stats.monte_carlo_test(data, rvs, statistic_fun, vectorized=True, + n_resamples=n_mc_samples, axis=-1, + alternative=alternative) + opt_res = optimize.OptimizeResult() + opt_res.success = True + opt_res.message = "The fit was performed successfully." + opt_res.x = nhd_vals + # Only continuous distributions for now, hence discrete=False + # There's no fundamental limitation; it's just that we're not using + # stats.fit, discrete distributions don't have `fit` method, and + # we haven't written any vectorized fit functions for a discrete + # distribution yet. + return GoodnessOfFitResult(FitResult(dist, data, False, opt_res), + res.statistic, res.pvalue, + res.null_distribution) + + +def _get_fit_fun(dist, data, guessed_params, fixed_params): + + shape_names = [] if dist.shapes is None else dist.shapes.split(", ") + param_names = shape_names + ['loc', 'scale'] + fparam_names = ['f'+name for name in param_names] + all_fixed = not set(fparam_names).difference(fixed_params) + guessed_shapes = [guessed_params.pop(x, None) + for x in shape_names if x in guessed_params] + + if all_fixed: + def fit_fun(data): + return [fixed_params[name] for name in fparam_names] + # Define statistic, including fitting distribution to data + elif dist in _fit_funs: + def fit_fun(data): + params = _fit_funs[dist](data, **fixed_params) + params = np.asarray(np.broadcast_arrays(*params)) + if params.ndim > 1: + params = params[..., np.newaxis] + return params + else: + def fit_fun_1d(data): + return dist.fit(data, *guessed_shapes, **guessed_params, + **fixed_params) + + def fit_fun(data): + params = np.apply_along_axis(fit_fun_1d, axis=-1, arr=data) + if params.ndim > 1: + params = params.T[..., np.newaxis] + return params + + return fit_fun + + +# Vectorized fitting functions. These are to accept ND `data` in which each +# row (slice along last axis) is a sample to fit and scalar fixed parameters. +# They return a tuple of shape parameter arrays, each of shape data.shape[:-1]. +def _fit_norm(data, floc=None, fscale=None): + loc = floc + scale = fscale + if loc is None and scale is None: + loc = np.mean(data, axis=-1) + scale = np.std(data, ddof=1, axis=-1) + elif loc is None: + loc = np.mean(data, axis=-1) + elif scale is None: + scale = np.sqrt(((data - loc)**2).mean(axis=-1)) + return loc, scale + + +_fit_funs = {stats.norm: _fit_norm} # type: ignore[attr-defined] + + +# Vectorized goodness of fit statistic functions. 
These accept a frozen +# distribution object and `data` in which each row (slice along last axis) is +# a sample. + + +def _anderson_darling(dist, data, axis): + x = np.sort(data, axis=-1) + n = data.shape[-1] + i = np.arange(1, n+1) + Si = (2*i - 1)/n * (dist.logcdf(x) + dist.logsf(x[..., ::-1])) + S = np.sum(Si, axis=-1) + return -n - S + + +def _compute_dplus(cdfvals): # adapted from _stats_py before gh-17062 + n = cdfvals.shape[-1] + return (np.arange(1.0, n + 1) / n - cdfvals).max(axis=-1) + + +def _compute_dminus(cdfvals): + n = cdfvals.shape[-1] + return (cdfvals - np.arange(0.0, n)/n).max(axis=-1) + + +def _kolmogorov_smirnov(dist, data, axis): + x = np.sort(data, axis=-1) + cdfvals = dist.cdf(x) + Dplus = _compute_dplus(cdfvals) # always works along last axis + Dminus = _compute_dminus(cdfvals) + return np.maximum(Dplus, Dminus) + + +def _corr(X, M): + # Correlation coefficient r, simplified and vectorized as we need it. + # See [7] Equation (2). Lemma 1/2 are only for distributions symmetric + # about 0. + Xm = X.mean(axis=-1, keepdims=True) + Mm = M.mean(axis=-1, keepdims=True) + num = np.sum((X - Xm) * (M - Mm), axis=-1) + den = np.sqrt(np.sum((X - Xm)**2, axis=-1) * np.sum((M - Mm)**2, axis=-1)) + return num/den + + +def _filliben(dist, data, axis): + # [7] Section 8 # 1 + X = np.sort(data, axis=-1) + + # [7] Section 8 # 2 + n = data.shape[-1] + k = np.arange(1, n+1) + # Filliben used an approximation for the uniform distribution order + # statistic medians. + # m = (k - .3175)/(n + 0.365) + # m[-1] = 0.5**(1/n) + # m[0] = 1 - m[-1] + # We can just as easily use the (theoretically) exact values. See e.g. + # https://en.wikipedia.org/wiki/Order_statistic + # "Order statistics sampled from a uniform distribution" + m = stats.beta(k, n + 1 - k).median() + + # [7] Section 8 # 3 + M = dist.ppf(m) + + # [7] Section 8 # 4 + return _corr(X, M) +_filliben.alternative = 'less' # type: ignore[attr-defined] + + +def _cramer_von_mises(dist, data, axis): + x = np.sort(data, axis=-1) + n = data.shape[-1] + cdfvals = dist.cdf(x) + u = (2*np.arange(1, n+1) - 1)/(2*n) + w = 1 / (12*n) + np.sum((u - cdfvals)**2, axis=-1) + return w + + +_compare_dict = {"ad": _anderson_darling, "ks": _kolmogorov_smirnov, + "cvm": _cramer_von_mises, "filliben": _filliben} + + +def _gof_iv(dist, data, known_params, fit_params, guessed_params, statistic, + n_mc_samples, random_state): + + if not isinstance(dist, stats.rv_continuous): + message = ("`dist` must be a (non-frozen) instance of " + "`stats.rv_continuous`.") + raise TypeError(message) + + data = np.asarray(data, dtype=float) + if not data.ndim == 1: + message = "`data` must be a one-dimensional array of numbers." 
+ raise ValueError(message) + + # Leave validation of these key/value pairs to the `fit` method, + # but collect these into dictionaries that will be used + known_params = known_params or dict() + fit_params = fit_params or dict() + guessed_params = guessed_params or dict() + + known_params_f = {("f"+key): val for key, val in known_params.items()} + fit_params_f = {("f"+key): val for key, val in fit_params.items()} + + # These are the values of parameters of the null distribution family + # with which resamples are drawn + fixed_nhd_params = known_params_f.copy() + fixed_nhd_params.update(fit_params_f) + + # These are fixed when fitting the distribution family to resamples + fixed_rfd_params = known_params_f.copy() + + # These are used as guesses when fitting the distribution family to + # the original data + guessed_nhd_params = guessed_params.copy() + + # These are used as guesses when fitting the distribution family to + # resamples + guessed_rfd_params = fit_params.copy() + guessed_rfd_params.update(guessed_params) + + if not callable(statistic): + statistic = statistic.lower() + statistics = {'ad', 'ks', 'cvm', 'filliben'} + if statistic not in statistics: + message = f"`statistic` must be one of {statistics}." + raise ValueError(message) + + n_mc_samples_int = int(n_mc_samples) + if n_mc_samples_int != n_mc_samples: + message = "`n_mc_samples` must be an integer." + raise TypeError(message) + + random_state = check_random_state(random_state) + + return (dist, data, fixed_nhd_params, fixed_rfd_params, guessed_nhd_params, + guessed_rfd_params, statistic, n_mc_samples_int, random_state) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_generate_pyx.py b/venv/lib/python3.10/site-packages/scipy/stats/_generate_pyx.py new file mode 100644 index 0000000000000000000000000000000000000000..a9647b53ca97018e20136b927d6bf71e3251bfd7 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_generate_pyx.py @@ -0,0 +1,27 @@ +import pathlib +import subprocess +import sys +import os +import argparse + + +def make_boost(outdir): + # Call code generator inside _boost directory + code_gen = pathlib.Path(__file__).parent / '_boost/include/code_gen.py' + subprocess.run([sys.executable, str(code_gen), '-o', outdir], + check=True) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument("-o", "--outdir", type=str, + help="Path to the output directory") + args = parser.parse_args() + + if not args.outdir: + raise ValueError("A path to the output directory is required") + else: + # Meson build + srcdir_abs = pathlib.Path(os.path.abspath(os.path.dirname(__file__))) + outdir_abs = pathlib.Path(os.getcwd()) / args.outdir + make_boost(outdir_abs) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_hypotests.py b/venv/lib/python3.10/site-packages/scipy/stats/_hypotests.py new file mode 100644 index 0000000000000000000000000000000000000000..ebf8cdb4f971b45e023200337b007d7d8fac0eb4 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_hypotests.py @@ -0,0 +1,2021 @@ +from collections import namedtuple +from dataclasses import dataclass +from math import comb +import numpy as np +import warnings +from itertools import combinations +import scipy.stats +from scipy.optimize import shgo +from . 
import distributions +from ._common import ConfidenceInterval +from ._continuous_distns import chi2, norm +from scipy.special import gamma, kv, gammaln +from scipy.fft import ifft +from ._stats_pythran import _a_ij_Aij_Dij2 +from ._stats_pythran import ( + _concordant_pairs as _P, _discordant_pairs as _Q +) +from ._axis_nan_policy import _axis_nan_policy_factory +from scipy.stats import _stats_py + +__all__ = ['epps_singleton_2samp', 'cramervonmises', 'somersd', + 'barnard_exact', 'boschloo_exact', 'cramervonmises_2samp', + 'tukey_hsd', 'poisson_means_test'] + +Epps_Singleton_2sampResult = namedtuple('Epps_Singleton_2sampResult', + ('statistic', 'pvalue')) + + +@_axis_nan_policy_factory(Epps_Singleton_2sampResult, n_samples=2, too_small=4) +def epps_singleton_2samp(x, y, t=(0.4, 0.8)): + """Compute the Epps-Singleton (ES) test statistic. + + Test the null hypothesis that two samples have the same underlying + probability distribution. + + Parameters + ---------- + x, y : array-like + The two samples of observations to be tested. Input must not have more + than one dimension. Samples can have different lengths. + t : array-like, optional + The points (t1, ..., tn) where the empirical characteristic function is + to be evaluated. These should be positive, distinct numbers. The default + value (0.4, 0.8) is proposed in [1]_. Input must not have more than + one dimension. + + Returns + ------- + statistic : float + The test statistic. + pvalue : float + The associated p-value based on the asymptotic chi2-distribution. + + See Also + -------- + ks_2samp, anderson_ksamp + + Notes + ----- + Testing whether two samples are generated by the same underlying + distribution is a classical question in statistics. A widely used test is + the Kolmogorov-Smirnov (KS) test, which relies on the empirical + distribution function. Epps and Singleton introduce a test based on the + empirical characteristic function in [1]_. + + One advantage of the ES test compared to the KS test is that it does + not assume a continuous distribution. In [1]_, the authors conclude + that the test also has a higher power than the KS test in many + examples. They recommend the use of the ES test for discrete samples as + well as continuous samples with at least 25 observations each, whereas + `anderson_ksamp` is recommended for smaller sample sizes in the + continuous case. + + The p-value is computed from the asymptotic distribution of the test + statistic which follows a `chi2` distribution. If the sample size of both + `x` and `y` is below 25, the small sample correction proposed in [1]_ is + applied to the test statistic. + + The default values of `t` are determined in [1]_ by considering + various distributions and finding good values that lead to a high power + of the test in general. Table III in [1]_ gives the optimal values for + the distributions tested in that study. The values of `t` are scaled by + the semi-interquartile range in the implementation, see [1]_. + + References + ---------- + .. [1] T. W. Epps and K. J. Singleton, "An omnibus test for the two-sample + problem using the empirical characteristic function", Journal of + Statistical Computation and Simulation 26, p. 177--203, 1986. + + .. [2] S. J. Goerg and J. Kaiser, "Nonparametric testing of distributions + - the Epps-Singleton two-sample test using the empirical characteristic + function", The Stata Journal 9(3), p. 454--465, 2009. 
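As a quick orientation to the test documented above, a minimal usage sketch follows; the sample sizes, distributions, and seed are illustrative assumptions, not anything taken from this module.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)                    # illustrative seed
    a = stats.norm.rvs(size=100, random_state=rng)
    b = stats.laplace.rvs(size=120, random_state=rng)

    # Null hypothesis: both samples were drawn from the same distribution.
    res = stats.epps_singleton_2samp(a, b)            # default t=(0.4, 0.8)
    print(res.statistic, res.pvalue)

A small p-value is evidence against the samples sharing a distribution; the default `t` is the value recommended in [1]_, as noted above.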
+ + """ + # x and y are converted to arrays by the decorator + t = np.asarray(t) + # check if x and y are valid inputs + nx, ny = len(x), len(y) + if (nx < 5) or (ny < 5): + raise ValueError('x and y should have at least 5 elements, but len(x) ' + f'= {nx} and len(y) = {ny}.') + if not np.isfinite(x).all(): + raise ValueError('x must not contain nonfinite values.') + if not np.isfinite(y).all(): + raise ValueError('y must not contain nonfinite values.') + n = nx + ny + + # check if t is valid + if t.ndim > 1: + raise ValueError(f't must be 1d, but t.ndim equals {t.ndim}.') + if np.less_equal(t, 0).any(): + raise ValueError('t must contain positive elements only.') + + # rescale t with semi-iqr as proposed in [1]; import iqr here to avoid + # circular import + from scipy.stats import iqr + sigma = iqr(np.hstack((x, y))) / 2 + ts = np.reshape(t, (-1, 1)) / sigma + + # covariance estimation of ES test + gx = np.vstack((np.cos(ts*x), np.sin(ts*x))).T # shape = (nx, 2*len(t)) + gy = np.vstack((np.cos(ts*y), np.sin(ts*y))).T + cov_x = np.cov(gx.T, bias=True) # the test uses biased cov-estimate + cov_y = np.cov(gy.T, bias=True) + est_cov = (n/nx)*cov_x + (n/ny)*cov_y + est_cov_inv = np.linalg.pinv(est_cov) + r = np.linalg.matrix_rank(est_cov_inv) + if r < 2*len(t): + warnings.warn('Estimated covariance matrix does not have full rank. ' + 'This indicates a bad choice of the input t and the ' + 'test might not be consistent.', # see p. 183 in [1]_ + stacklevel=2) + + # compute test statistic w distributed asympt. as chisquare with df=r + g_diff = np.mean(gx, axis=0) - np.mean(gy, axis=0) + w = n*np.dot(g_diff.T, np.dot(est_cov_inv, g_diff)) + + # apply small-sample correction + if (max(nx, ny) < 25): + corr = 1.0/(1.0 + n**(-0.45) + 10.1*(nx**(-1.7) + ny**(-1.7))) + w = corr * w + + p = chi2.sf(w, r) + + return Epps_Singleton_2sampResult(w, p) + + +def poisson_means_test(k1, n1, k2, n2, *, diff=0, alternative='two-sided'): + r""" + Performs the Poisson means test, AKA the "E-test". + + This is a test of the null hypothesis that the difference between means of + two Poisson distributions is `diff`. The samples are provided as the + number of events `k1` and `k2` observed within measurement intervals + (e.g. of time, space, number of observations) of sizes `n1` and `n2`. + + Parameters + ---------- + k1 : int + Number of events observed from distribution 1. + n1: float + Size of sample from distribution 1. + k2 : int + Number of events observed from distribution 2. + n2 : float + Size of sample from distribution 2. + diff : float, default=0 + The hypothesized difference in means between the distributions + underlying the samples. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the difference between distribution means is not + equal to `diff` + * 'less': the difference between distribution means is less than + `diff` + * 'greater': the difference between distribution means is greater + than `diff` + + Returns + ------- + statistic : float + The test statistic (see [1]_ equation 3.3). + pvalue : float + The probability of achieving such an extreme value of the test + statistic under the null hypothesis. + + Notes + ----- + + Let: + + .. math:: X_1 \sim \mbox{Poisson}(\mathtt{n1}\lambda_1) + + be a random variable independent of + + .. 
math:: X_2 \sim \mbox{Poisson}(\mathtt{n2}\lambda_2) + + and let ``k1`` and ``k2`` be the observed values of :math:`X_1` + and :math:`X_2`, respectively. Then `poisson_means_test` uses the number + of observed events ``k1`` and ``k2`` from samples of size ``n1`` and + ``n2``, respectively, to test the null hypothesis that + + .. math:: + H_0: \lambda_1 - \lambda_2 = \mathtt{diff} + + A benefit of the E-test is that it has good power for small sample sizes, + which can reduce sampling costs [1]_. It has been evaluated and determined + to be more powerful than the comparable C-test, sometimes referred to as + the Poisson exact test. + + References + ---------- + .. [1] Krishnamoorthy, K., & Thomson, J. (2004). A more powerful test for + comparing two Poisson means. Journal of Statistical Planning and + Inference, 119(1), 23-35. + + .. [2] Przyborowski, J., & Wilenski, H. (1940). Homogeneity of results in + testing samples from Poisson series: With an application to testing + clover seed for dodder. Biometrika, 31(3/4), 313-323. + + Examples + -------- + + Suppose that a gardener wishes to test the number of dodder (weed) seeds + in a sack of clover seeds that they buy from a seed company. It has + previously been established that the number of dodder seeds in clover + follows the Poisson distribution. + + A 100 gram sample is drawn from the sack before being shipped to the + gardener. The sample is analyzed, and it is found to contain no dodder + seeds; that is, `k1` is 0. However, upon arrival, the gardener draws + another 100 gram sample from the sack. This time, three dodder seeds are + found in the sample; that is, `k2` is 3. The gardener would like to + know if the difference is significant and not due to chance. The + null hypothesis is that the difference between the two samples is merely + due to chance, or that :math:`\lambda_1 - \lambda_2 = \mathtt{diff}` + where :math:`\mathtt{diff} = 0`. The alternative hypothesis is that the + difference is not due to chance, or :math:`\lambda_1 - \lambda_2 \ne 0`. + The gardener selects a significance level of 5% to reject the null + hypothesis in favor of the alternative [2]_. + + >>> import scipy.stats as stats + >>> res = stats.poisson_means_test(0, 100, 3, 100) + >>> res.statistic, res.pvalue + (-1.7320508075688772, 0.08837900929018157) + + The p-value is .088, indicating a near 9% chance of observing a value of + the test statistic under the null hypothesis. This exceeds 5%, so the + gardener does not reject the null hypothesis as the difference cannot be + regarded as significant at this level. + """ + + _poisson_means_test_iv(k1, n1, k2, n2, diff, alternative) + + # "for a given k_1 and k_2, an estimate of \lambda_2 is given by" [1] (3.4) + lmbd_hat2 = ((k1 + k2) / (n1 + n2) - diff * n1 / (n1 + n2)) + + # "\hat{\lambda_{2k}} may be less than or equal to zero ... and in this + # case the null hypothesis cannot be rejected ... [and] it is not necessary + # to compute the p-value". [1] page 26 below eq. (3.6). + if lmbd_hat2 <= 0: + return _stats_py.SignificanceResult(0, 1) + + # The unbiased variance estimate [1] (3.2) + var = k1 / (n1 ** 2) + k2 / (n2 ** 2) + + # The _observed_ pivot statistic from the input. It follows the + # unnumbered equation following equation (3.3) This is used later in + # comparison with the computed pivot statistics in an indicator function. + t_k1k2 = (k1 / n1 - k2 / n2 - diff) / np.sqrt(var) + + # Equation (3.5) of [1] is lengthy, so it is broken into several parts, + # beginning here. 
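For illustration only (not in the patch): the observed pivot statistic from the unnumbered equation following (3.3), reproduced by hand for the dodder-seed example in the docstring above. This assumes a SciPy recent enough to expose `scipy.stats.poisson_means_test`.

import numpy as np
from scipy import stats

k1, n1, k2, n2, diff = 0, 100, 3, 100, 0
var = k1 / n1**2 + k2 / n2**2                 # unbiased variance estimate, eq. (3.2)
t_obs = (k1 / n1 - k2 / n2 - diff) / np.sqrt(var)
print(t_obs)                                  # -1.732..., the docstring's statistic

res = stats.poisson_means_test(k1, n1, k2, n2)
print(np.isclose(res.statistic, t_obs))       # True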
Note that the probability mass function of poisson is + # exp^(-\mu)*\mu^k/k!, so and this is called with shape \mu, here noted + # here as nlmbd_hat*. The strategy for evaluating the double summation in + # (3.5) is to create two arrays of the values of the two products inside + # the summation and then broadcast them together into a matrix, and then + # sum across the entire matrix. + + # Compute constants (as seen in the first and second separated products in + # (3.5).). (This is the shape (\mu) parameter of the poisson distribution.) + nlmbd_hat1 = n1 * (lmbd_hat2 + diff) + nlmbd_hat2 = n2 * lmbd_hat2 + + # Determine summation bounds for tail ends of distribution rather than + # summing to infinity. `x1*` is for the outer sum and `x2*` is the inner + # sum. + x1_lb, x1_ub = distributions.poisson.ppf([1e-10, 1 - 1e-16], nlmbd_hat1) + x2_lb, x2_ub = distributions.poisson.ppf([1e-10, 1 - 1e-16], nlmbd_hat2) + + # Construct arrays to function as the x_1 and x_2 counters on the summation + # in (3.5). `x1` is in columns and `x2` is in rows to allow for + # broadcasting. + x1 = np.arange(x1_lb, x1_ub + 1) + x2 = np.arange(x2_lb, x2_ub + 1)[:, None] + + # These are the two products in equation (3.5) with `prob_x1` being the + # first (left side) and `prob_x2` being the second (right side). (To + # make as clear as possible: the 1st contains a "+ d" term, the 2nd does + # not.) + prob_x1 = distributions.poisson.pmf(x1, nlmbd_hat1) + prob_x2 = distributions.poisson.pmf(x2, nlmbd_hat2) + + # compute constants for use in the "pivot statistic" per the + # unnumbered equation following (3.3). + lmbd_x1 = x1 / n1 + lmbd_x2 = x2 / n2 + lmbds_diff = lmbd_x1 - lmbd_x2 - diff + var_x1x2 = lmbd_x1 / n1 + lmbd_x2 / n2 + + # This is the 'pivot statistic' for use in the indicator of the summation + # (left side of "I[.]"). + with np.errstate(invalid='ignore', divide='ignore'): + t_x1x2 = lmbds_diff / np.sqrt(var_x1x2) + + # `[indicator]` implements the "I[.] ... the indicator function" per + # the paragraph following equation (3.5). + if alternative == 'two-sided': + indicator = np.abs(t_x1x2) >= np.abs(t_k1k2) + elif alternative == 'less': + indicator = t_x1x2 <= t_k1k2 + else: + indicator = t_x1x2 >= t_k1k2 + + # Multiply all combinations of the products together, exclude terms + # based on the `indicator` and then sum. (3.5) + pvalue = np.sum((prob_x1 * prob_x2)[indicator]) + return _stats_py.SignificanceResult(t_k1k2, pvalue) + + +def _poisson_means_test_iv(k1, n1, k2, n2, diff, alternative): + # """check for valid types and values of input to `poisson_mean_test`.""" + if k1 != int(k1) or k2 != int(k2): + raise TypeError('`k1` and `k2` must be integers.') + + count_err = '`k1` and `k2` must be greater than or equal to 0.' + if k1 < 0 or k2 < 0: + raise ValueError(count_err) + + if n1 <= 0 or n2 <= 0: + raise ValueError('`n1` and `n2` must be greater than 0.') + + if diff < 0: + raise ValueError('diff must be greater than or equal to 0.') + + alternatives = {'two-sided', 'less', 'greater'} + if alternative.lower() not in alternatives: + raise ValueError(f"Alternative must be one of '{alternatives}'.") + + +class CramerVonMisesResult: + def __init__(self, statistic, pvalue): + self.statistic = statistic + self.pvalue = pvalue + + def __repr__(self): + return (f"{self.__class__.__name__}(statistic={self.statistic}, " + f"pvalue={self.pvalue})") + + +def _psi1_mod(x): + """ + psi1 is defined in equation 1.10 in Csörgő, S. and Faraway, J. (1996). 
+ This implements a modified version by excluding the term V(x) / 12 + (here: _cdf_cvm_inf(x) / 12) to avoid evaluating _cdf_cvm_inf(x) + twice in _cdf_cvm. + + Implementation based on MAPLE code of Julian Faraway and R code of the + function pCvM in the package goftest (v1.1.1), permission granted + by Adrian Baddeley. Main difference in the implementation: the code + here keeps adding terms of the series until the terms are small enough. + """ + + def _ed2(y): + z = y**2 / 4 + b = kv(1/4, z) + kv(3/4, z) + return np.exp(-z) * (y/2)**(3/2) * b / np.sqrt(np.pi) + + def _ed3(y): + z = y**2 / 4 + c = np.exp(-z) / np.sqrt(np.pi) + return c * (y/2)**(5/2) * (2*kv(1/4, z) + 3*kv(3/4, z) - kv(5/4, z)) + + def _Ak(k, x): + m = 2*k + 1 + sx = 2 * np.sqrt(x) + y1 = x**(3/4) + y2 = x**(5/4) + + e1 = m * gamma(k + 1/2) * _ed2((4 * k + 3)/sx) / (9 * y1) + e2 = gamma(k + 1/2) * _ed3((4 * k + 1) / sx) / (72 * y2) + e3 = 2 * (m + 2) * gamma(k + 3/2) * _ed3((4 * k + 5) / sx) / (12 * y2) + e4 = 7 * m * gamma(k + 1/2) * _ed2((4 * k + 1) / sx) / (144 * y1) + e5 = 7 * m * gamma(k + 1/2) * _ed2((4 * k + 5) / sx) / (144 * y1) + + return e1 + e2 + e3 + e4 + e5 + + x = np.asarray(x) + tot = np.zeros_like(x, dtype='float') + cond = np.ones_like(x, dtype='bool') + k = 0 + while np.any(cond): + z = -_Ak(k, x[cond]) / (np.pi * gamma(k + 1)) + tot[cond] = tot[cond] + z + cond[cond] = np.abs(z) >= 1e-7 + k += 1 + + return tot + + +def _cdf_cvm_inf(x): + """ + Calculate the cdf of the Cramér-von Mises statistic (infinite sample size). + + See equation 1.2 in Csörgő, S. and Faraway, J. (1996). + + Implementation based on MAPLE code of Julian Faraway and R code of the + function pCvM in the package goftest (v1.1.1), permission granted + by Adrian Baddeley. Main difference in the implementation: the code + here keeps adding terms of the series until the terms are small enough. + + The function is not expected to be accurate for large values of x, say + x > 4, when the cdf is very close to 1. + """ + x = np.asarray(x) + + def term(x, k): + # this expression can be found in [2], second line of (1.3) + u = np.exp(gammaln(k + 0.5) - gammaln(k+1)) / (np.pi**1.5 * np.sqrt(x)) + y = 4*k + 1 + q = y**2 / (16*x) + b = kv(0.25, q) + return u * np.sqrt(y) * np.exp(-q) * b + + tot = np.zeros_like(x, dtype='float') + cond = np.ones_like(x, dtype='bool') + k = 0 + while np.any(cond): + z = term(x[cond], k) + tot[cond] = tot[cond] + z + cond[cond] = np.abs(z) >= 1e-7 + k += 1 + + return tot + + +def _cdf_cvm(x, n=None): + """ + Calculate the cdf of the Cramér-von Mises statistic for a finite sample + size n. If N is None, use the asymptotic cdf (n=inf). + + See equation 1.8 in Csörgő, S. and Faraway, J. (1996) for finite samples, + 1.2 for the asymptotic cdf. + + The function is not expected to be accurate for large values of x, say + x > 2, when the cdf is very close to 1 and it might return values > 1 + in that case, e.g. _cdf_cvm(2.0, 12) = 1.0000027556716846. Moreover, it + is not accurate for small values of n, especially close to the bounds of + the distribution's domain, [1/(12*n), n/3], where the value jumps to 0 + and 1, respectively. These are limitations of the approximation by Csörgő + and Faraway (1996) implemented in this function. + """ + x = np.asarray(x) + if n is None: + y = _cdf_cvm_inf(x) + else: + # support of the test statistic is [12/n, n/3], see 1.1 in [2] + y = np.zeros_like(x, dtype='float') + sup = (1./(12*n) < x) & (x < n/3.) 
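A sketch, not part of the patch: a scalar re-derivation of the asymptotic series that `_cdf_cvm_inf` evaluates (eq. 1.2, second line of 1.3, in Csörgő and Faraway), using only public `scipy.special` routines. The comparison with the familiar 5% critical value (roughly 0.461) is an informal sanity check, not a test taken from the patch.

import numpy as np
from scipy.special import gammaln, kv

def cvm_cdf_inf_scalar(x, tol=1e-7):
    """Asymptotic cdf of the one-sample Cramer-von Mises statistic at scalar x."""
    total, k = 0.0, 0
    while True:
        y = 4*k + 1
        q = y**2 / (16*x)
        u = np.exp(gammaln(k + 0.5) - gammaln(k + 1)) / (np.pi**1.5 * np.sqrt(x))
        z = u * np.sqrt(y) * np.exp(-q) * kv(0.25, q)
        total += z
        if abs(z) < tol:
            return total
        k += 1

# The asymptotic 5% critical value of the statistic is roughly 0.461,
# so the cdf evaluated there should come out close to 0.95.
print(cvm_cdf_inf_scalar(0.461))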
+ # note: _psi1_mod does not include the term _cdf_cvm_inf(x) / 12 + # therefore, we need to add it here + y[sup] = _cdf_cvm_inf(x[sup]) * (1 + 1./(12*n)) + _psi1_mod(x[sup]) / n + y[x >= n/3] = 1 + + if y.ndim == 0: + return y[()] + return y + + +def _cvm_result_to_tuple(res): + return res.statistic, res.pvalue + + +@_axis_nan_policy_factory(CramerVonMisesResult, n_samples=1, too_small=1, + result_to_tuple=_cvm_result_to_tuple) +def cramervonmises(rvs, cdf, args=()): + """Perform the one-sample Cramér-von Mises test for goodness of fit. + + This performs a test of the goodness of fit of a cumulative distribution + function (cdf) :math:`F` compared to the empirical distribution function + :math:`F_n` of observed random variates :math:`X_1, ..., X_n` that are + assumed to be independent and identically distributed ([1]_). + The null hypothesis is that the :math:`X_i` have cumulative distribution + :math:`F`. + + Parameters + ---------- + rvs : array_like + A 1-D array of observed values of the random variables :math:`X_i`. + cdf : str or callable + The cumulative distribution function :math:`F` to test the + observations against. If a string, it should be the name of a + distribution in `scipy.stats`. If a callable, that callable is used + to calculate the cdf: ``cdf(x, *args) -> float``. + args : tuple, optional + Distribution parameters. These are assumed to be known; see Notes. + + Returns + ------- + res : object with attributes + statistic : float + Cramér-von Mises statistic. + pvalue : float + The p-value. + + See Also + -------- + kstest, cramervonmises_2samp + + Notes + ----- + .. versionadded:: 1.6.0 + + The p-value relies on the approximation given by equation 1.8 in [2]_. + It is important to keep in mind that the p-value is only accurate if + one tests a simple hypothesis, i.e. the parameters of the reference + distribution are known. If the parameters are estimated from the data + (composite hypothesis), the computed p-value is not reliable. + + References + ---------- + .. [1] Cramér-von Mises criterion, Wikipedia, + https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von_Mises_criterion + .. [2] Csörgő, S. and Faraway, J. (1996). The Exact and Asymptotic + Distribution of Cramér-von Mises Statistics. Journal of the + Royal Statistical Society, pp. 221-234. + + Examples + -------- + + Suppose we wish to test whether data generated by ``scipy.stats.norm.rvs`` + were, in fact, drawn from the standard normal distribution. We choose a + significance level of ``alpha=0.05``. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng(165417232101553420507139617764912913465) + >>> x = stats.norm.rvs(size=500, random_state=rng) + >>> res = stats.cramervonmises(x, 'norm') + >>> res.statistic, res.pvalue + (0.1072085112565724, 0.5508482238203407) + + The p-value exceeds our chosen significance level, so we do not + reject the null hypothesis that the observed sample is drawn from the + standard normal distribution. + + Now suppose we wish to check whether the same samples shifted by 2.1 is + consistent with being drawn from a normal distribution with a mean of 2. + + >>> y = x + 2.1 + >>> res = stats.cramervonmises(y, 'norm', args=(2,)) + >>> res.statistic, res.pvalue + (0.8364446265294695, 0.00596286797008283) + + Here we have used the `args` keyword to specify the mean (``loc``) + of the normal distribution to test the data against. 
This is equivalent + to the following, in which we create a frozen normal distribution with + mean 2.1, then pass its ``cdf`` method as an argument. + + >>> frozen_dist = stats.norm(loc=2) + >>> res = stats.cramervonmises(y, frozen_dist.cdf) + >>> res.statistic, res.pvalue + (0.8364446265294695, 0.00596286797008283) + + In either case, we would reject the null hypothesis that the observed + sample is drawn from a normal distribution with a mean of 2 (and default + variance of 1) because the p-value is less than our chosen + significance level. + + """ + if isinstance(cdf, str): + cdf = getattr(distributions, cdf).cdf + + vals = np.sort(np.asarray(rvs)) + + if vals.size <= 1: + raise ValueError('The sample must contain at least two observations.') + + n = len(vals) + cdfvals = cdf(vals, *args) + + u = (2*np.arange(1, n+1) - 1)/(2*n) + w = 1/(12*n) + np.sum((u - cdfvals)**2) + + # avoid small negative values that can occur due to the approximation + p = max(0, 1. - _cdf_cvm(w, n)) + + return CramerVonMisesResult(statistic=w, pvalue=p) + + +def _get_wilcoxon_distr(n): + """ + Distribution of probability of the Wilcoxon ranksum statistic r_plus (sum + of ranks of positive differences). + Returns an array with the probabilities of all the possible ranks + r = 0, ..., n*(n+1)/2 + """ + c = np.ones(1, dtype=np.float64) + for k in range(1, n + 1): + prev_c = c + c = np.zeros(k * (k + 1) // 2 + 1, dtype=np.float64) + m = len(prev_c) + c[:m] = prev_c * 0.5 + c[-m:] += prev_c * 0.5 + return c + + +def _get_wilcoxon_distr2(n): + """ + Distribution of probability of the Wilcoxon ranksum statistic r_plus (sum + of ranks of positive differences). + Returns an array with the probabilities of all the possible ranks + r = 0, ..., n*(n+1)/2 + This is a slower reference function + References + ---------- + .. [1] 1. Harris T, Hardin JW. Exact Wilcoxon Signed-Rank and Wilcoxon + Mann-Whitney Ranksum Tests. The Stata Journal. 2013;13(2):337-343. 
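A sketch, not part of the patch: the rank-by-rank convolution used in `_get_wilcoxon_distr`, written as a stand-alone helper and checked against the 2**3 equally likely sign patterns for n = 3, where r_plus (the sum of ranks of positive differences) takes the values 0 through 6 with counts 1, 1, 1, 2, 1, 1, 1.

import numpy as np

def wilcoxon_rplus_pmf(n):
    """Exact null pmf of r_plus for sample size n (no ties, no zeros)."""
    c = np.ones(1)
    for k in range(1, n + 1):
        prev = c
        c = np.zeros(k * (k + 1) // 2 + 1)
        c[:len(prev)] += prev * 0.5    # rank k gets a negative sign
        c[-len(prev):] += prev * 0.5   # rank k gets a positive sign, shifting r_plus by k
    return c

pmf = wilcoxon_rplus_pmf(3)
print(pmf * 2**3)    # [1. 1. 1. 2. 1. 1. 1.], the counts over the 8 sign patterns
print(pmf.sum())     # 1.0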
+ """ + ai = np.arange(1, n+1)[:, None] + t = n*(n+1)/2 + q = 2*t + j = np.arange(q) + theta = 2*np.pi/q*j + phi_sp = np.prod(np.cos(theta*ai), axis=0) + phi_s = np.exp(1j*theta*t) * phi_sp + p = np.real(ifft(phi_s)) + res = np.zeros(int(t)+1) + res[:-1:] = p[::2] + res[0] /= 2 + res[-1] = res[0] + return res + + +def _tau_b(A): + """Calculate Kendall's tau-b and p-value from contingency table.""" + # See [2] 2.2 and 4.2 + + # contingency table must be truly 2D + if A.shape[0] == 1 or A.shape[1] == 1: + return np.nan, np.nan + + NA = A.sum() + PA = _P(A) + QA = _Q(A) + Sri2 = (A.sum(axis=1)**2).sum() + Scj2 = (A.sum(axis=0)**2).sum() + denominator = (NA**2 - Sri2)*(NA**2 - Scj2) + + tau = (PA-QA)/(denominator)**0.5 + + numerator = 4*(_a_ij_Aij_Dij2(A) - (PA - QA)**2 / NA) + s02_tau_b = numerator/denominator + if s02_tau_b == 0: # Avoid divide by zero + return tau, 0 + Z = tau/s02_tau_b**0.5 + p = 2*norm.sf(abs(Z)) # 2-sided p-value + + return tau, p + + +def _somers_d(A, alternative='two-sided'): + """Calculate Somers' D and p-value from contingency table.""" + # See [3] page 1740 + + # contingency table must be truly 2D + if A.shape[0] <= 1 or A.shape[1] <= 1: + return np.nan, np.nan + + NA = A.sum() + NA2 = NA**2 + PA = _P(A) + QA = _Q(A) + Sri2 = (A.sum(axis=1)**2).sum() + + d = (PA - QA)/(NA2 - Sri2) + + S = _a_ij_Aij_Dij2(A) - (PA-QA)**2/NA + + with np.errstate(divide='ignore'): + Z = (PA - QA)/(4*(S))**0.5 + + p = scipy.stats._stats_py._get_pvalue(Z, distributions.norm, alternative) + + return d, p + + +@dataclass +class SomersDResult: + statistic: float + pvalue: float + table: np.ndarray + + +def somersd(x, y=None, alternative='two-sided'): + r"""Calculates Somers' D, an asymmetric measure of ordinal association. + + Like Kendall's :math:`\tau`, Somers' :math:`D` is a measure of the + correspondence between two rankings. Both statistics consider the + difference between the number of concordant and discordant pairs in two + rankings :math:`X` and :math:`Y`, and both are normalized such that values + close to 1 indicate strong agreement and values close to -1 indicate + strong disagreement. They differ in how they are normalized. To show the + relationship, Somers' :math:`D` can be defined in terms of Kendall's + :math:`\tau_a`: + + .. math:: + D(Y|X) = \frac{\tau_a(X, Y)}{\tau_a(X, X)} + + Suppose the first ranking :math:`X` has :math:`r` distinct ranks and the + second ranking :math:`Y` has :math:`s` distinct ranks. These two lists of + :math:`n` rankings can also be viewed as an :math:`r \times s` contingency + table in which element :math:`i, j` is the number of rank pairs with rank + :math:`i` in ranking :math:`X` and rank :math:`j` in ranking :math:`Y`. + Accordingly, `somersd` also allows the input data to be supplied as a + single, 2D contingency table instead of as two separate, 1D rankings. + + Note that the definition of Somers' :math:`D` is asymmetric: in general, + :math:`D(Y|X) \neq D(X|Y)`. ``somersd(x, y)`` calculates Somers' + :math:`D(Y|X)`: the "row" variable :math:`X` is treated as an independent + variable, and the "column" variable :math:`Y` is dependent. For Somers' + :math:`D(X|Y)`, swap the input lists or transpose the input table. + + Parameters + ---------- + x : array_like + 1D array of rankings, treated as the (row) independent variable. + Alternatively, a 2D contingency table. + y : array_like, optional + If `x` is a 1D array of rankings, `y` is a 1D array of rankings of the + same length, treated as the (column) dependent variable. 
+ If `x` is 2D, `y` is ignored. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + The following options are available: + * 'two-sided': the rank correlation is nonzero + * 'less': the rank correlation is negative (less than zero) + * 'greater': the rank correlation is positive (greater than zero) + + Returns + ------- + res : SomersDResult + A `SomersDResult` object with the following fields: + + statistic : float + The Somers' :math:`D` statistic. + pvalue : float + The p-value for a hypothesis test whose null + hypothesis is an absence of association, :math:`D=0`. + See notes for more information. + table : 2D array + The contingency table formed from rankings `x` and `y` (or the + provided contingency table, if `x` is a 2D array) + + See Also + -------- + kendalltau : Calculates Kendall's tau, another correlation measure. + weightedtau : Computes a weighted version of Kendall's tau. + spearmanr : Calculates a Spearman rank-order correlation coefficient. + pearsonr : Calculates a Pearson correlation coefficient. + + Notes + ----- + This function follows the contingency table approach of [2]_ and + [3]_. *p*-values are computed based on an asymptotic approximation of + the test statistic distribution under the null hypothesis :math:`D=0`. + + Theoretically, hypothesis tests based on Kendall's :math:`tau` and Somers' + :math:`D` should be identical. + However, the *p*-values returned by `kendalltau` are based + on the null hypothesis of *independence* between :math:`X` and :math:`Y` + (i.e. the population from which pairs in :math:`X` and :math:`Y` are + sampled contains equal numbers of all possible pairs), which is more + specific than the null hypothesis :math:`D=0` used here. If the null + hypothesis of independence is desired, it is acceptable to use the + *p*-value returned by `kendalltau` with the statistic returned by + `somersd` and vice versa. For more information, see [2]_. + + Contingency tables are formatted according to the convention used by + SAS and R: the first ranking supplied (``x``) is the "row" variable, and + the second ranking supplied (``y``) is the "column" variable. This is + opposite the convention of Somers' original paper [1]_. + + References + ---------- + .. [1] Robert H. Somers, "A New Asymmetric Measure of Association for + Ordinal Variables", *American Sociological Review*, Vol. 27, No. 6, + pp. 799--811, 1962. + + .. [2] Morton B. Brown and Jacqueline K. Benedetti, "Sampling Behavior of + Tests for Correlation in Two-Way Contingency Tables", *Journal of + the American Statistical Association* Vol. 72, No. 358, pp. + 309--315, 1977. + + .. [3] SAS Institute, Inc., "The FREQ Procedure (Book Excerpt)", + *SAS/STAT 9.2 User's Guide, Second Edition*, SAS Publishing, 2009. + + .. [4] Laerd Statistics, "Somers' d using SPSS Statistics", *SPSS + Statistics Tutorials and Statistical Guides*, + https://statistics.laerd.com/spss-tutorials/somers-d-using-spss-statistics.php, + Accessed July 31, 2020. + + Examples + -------- + We calculate Somers' D for the example given in [4]_, in which a hotel + chain owner seeks to determine the association between hotel room + cleanliness and customer satisfaction. The independent variable, hotel + room cleanliness, is ranked on an ordinal scale: "below average (1)", + "average (2)", or "above average (3)". 
The dependent variable, customer + satisfaction, is ranked on a second scale: "very dissatisfied (1)", + "moderately dissatisfied (2)", "neither dissatisfied nor satisfied (3)", + "moderately satisfied (4)", or "very satisfied (5)". 189 customers + respond to the survey, and the results are cast into a contingency table + with the hotel room cleanliness as the "row" variable and customer + satisfaction as the "column" variable. + + +-----+-----+-----+-----+-----+-----+ + | | (1) | (2) | (3) | (4) | (5) | + +=====+=====+=====+=====+=====+=====+ + | (1) | 27 | 25 | 14 | 7 | 0 | + +-----+-----+-----+-----+-----+-----+ + | (2) | 7 | 14 | 18 | 35 | 12 | + +-----+-----+-----+-----+-----+-----+ + | (3) | 1 | 3 | 2 | 7 | 17 | + +-----+-----+-----+-----+-----+-----+ + + For example, 27 customers assigned their room a cleanliness ranking of + "below average (1)" and a corresponding satisfaction of "very + dissatisfied (1)". We perform the analysis as follows. + + >>> from scipy.stats import somersd + >>> table = [[27, 25, 14, 7, 0], [7, 14, 18, 35, 12], [1, 3, 2, 7, 17]] + >>> res = somersd(table) + >>> res.statistic + 0.6032766111513396 + >>> res.pvalue + 1.0007091191074533e-27 + + The value of the Somers' D statistic is approximately 0.6, indicating + a positive correlation between room cleanliness and customer satisfaction + in the sample. + The *p*-value is very small, indicating a very small probability of + observing such an extreme value of the statistic under the null + hypothesis that the statistic of the entire population (from which + our sample of 189 customers is drawn) is zero. This supports the + alternative hypothesis that the true value of Somers' D for the population + is nonzero. + + """ + x, y = np.array(x), np.array(y) + if x.ndim == 1: + if x.size != y.size: + raise ValueError("Rankings must be of equal length.") + table = scipy.stats.contingency.crosstab(x, y)[1] + elif x.ndim == 2: + if np.any(x < 0): + raise ValueError("All elements of the contingency table must be " + "non-negative.") + if np.any(x != x.astype(int)): + raise ValueError("All elements of the contingency table must be " + "integer.") + if x.nonzero()[0].size < 2: + raise ValueError("At least two elements of the contingency table " + "must be nonzero.") + table = x + else: + raise ValueError("x must be either a 1D or 2D array") + # The table type is converted to a float to avoid an integer overflow + d, p = _somers_d(table.astype(float), alternative) + + # add alias for consistency with other correlation functions + res = SomersDResult(d, p, table) + res.correlation = d + return res + + +# This could be combined with `_all_partitions` in `_resampling.py` +def _all_partitions(nx, ny): + """ + Partition a set of indices into two fixed-length sets in all possible ways + + Partition a set of indices 0 ... nx + ny - 1 into two sets of length nx and + ny in all possible ways (ignoring order of elements). + """ + z = np.arange(nx+ny) + for c in combinations(z, nx): + x = np.array(c) + mask = np.ones(nx+ny, bool) + mask[x] = False + y = z[mask] + yield x, y + + +def _compute_log_combinations(n): + """Compute all log combination of C(n, k).""" + gammaln_arr = gammaln(np.arange(n + 1) + 1) + return gammaln(n + 1) - gammaln_arr - gammaln_arr[::-1] + + +@dataclass +class BarnardExactResult: + statistic: float + pvalue: float + + +def barnard_exact(table, alternative="two-sided", pooled=True, n=32): + r"""Perform a Barnard exact test on a 2x2 contingency table. 
+ + Parameters + ---------- + table : array_like of ints + A 2x2 contingency table. Elements should be non-negative integers. + + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the null and alternative hypotheses. Default is 'two-sided'. + Please see explanations in the Notes section below. + + pooled : bool, optional + Whether to compute score statistic with pooled variance (as in + Student's t-test, for example) or unpooled variance (as in Welch's + t-test). Default is ``True``. + + n : int, optional + Number of sampling points used in the construction of the sampling + method. Note that this argument will automatically be converted to + the next higher power of 2 since `scipy.stats.qmc.Sobol` is used to + select sample points. Default is 32. Must be positive. In most cases, + 32 points is enough to reach good precision. More points comes at + performance cost. + + Returns + ------- + ber : BarnardExactResult + A result object with the following attributes. + + statistic : float + The Wald statistic with pooled or unpooled variance, depending + on the user choice of `pooled`. + + pvalue : float + P-value, the probability of obtaining a distribution at least as + extreme as the one that was actually observed, assuming that the + null hypothesis is true. + + See Also + -------- + chi2_contingency : Chi-square test of independence of variables in a + contingency table. + fisher_exact : Fisher exact test on a 2x2 contingency table. + boschloo_exact : Boschloo's exact test on a 2x2 contingency table, + which is an uniformly more powerful alternative to Fisher's exact test. + + Notes + ----- + Barnard's test is an exact test used in the analysis of contingency + tables. It examines the association of two categorical variables, and + is a more powerful alternative than Fisher's exact test + for 2x2 contingency tables. + + Let's define :math:`X_0` a 2x2 matrix representing the observed sample, + where each column stores the binomial experiment, as in the example + below. Let's also define :math:`p_1, p_2` the theoretical binomial + probabilities for :math:`x_{11}` and :math:`x_{12}`. When using + Barnard exact test, we can assert three different null hypotheses : + + - :math:`H_0 : p_1 \geq p_2` versus :math:`H_1 : p_1 < p_2`, + with `alternative` = "less" + + - :math:`H_0 : p_1 \leq p_2` versus :math:`H_1 : p_1 > p_2`, + with `alternative` = "greater" + + - :math:`H_0 : p_1 = p_2` versus :math:`H_1 : p_1 \neq p_2`, + with `alternative` = "two-sided" (default one) + + In order to compute Barnard's exact test, we are using the Wald + statistic [3]_ with pooled or unpooled variance. + Under the default assumption that both variances are equal + (``pooled = True``), the statistic is computed as: + + .. math:: + + T(X) = \frac{ + \hat{p}_1 - \hat{p}_2 + }{ + \sqrt{ + \hat{p}(1 - \hat{p}) + (\frac{1}{c_1} + + \frac{1}{c_2}) + } + } + + with :math:`\hat{p}_1, \hat{p}_2` and :math:`\hat{p}` the estimator of + :math:`p_1, p_2` and :math:`p`, the latter being the combined probability, + given the assumption that :math:`p_1 = p_2`. + + If this assumption is invalid (``pooled = False``), the statistic is: + + .. math:: + + T(X) = \frac{ + \hat{p}_1 - \hat{p}_2 + }{ + \sqrt{ + \frac{\hat{p}_1 (1 - \hat{p}_1)}{c_1} + + \frac{\hat{p}_2 (1 - \hat{p}_2)}{c_2} + } + } + + The p-value is then computed as: + + .. 
math:: + + \sum + \binom{c_1}{x_{11}} + \binom{c_2}{x_{12}} + \pi^{x_{11} + x_{12}} + (1 - \pi)^{t - x_{11} - x_{12}} + + where the sum is over all 2x2 contingency tables :math:`X` such that: + * :math:`T(X) \leq T(X_0)` when `alternative` = "less", + * :math:`T(X) \geq T(X_0)` when `alternative` = "greater", or + * :math:`T(X) \geq |T(X_0)|` when `alternative` = "two-sided". + Above, :math:`c_1, c_2` are the sum of the columns 1 and 2, + and :math:`t` the total (sum of the 4 sample's element). + + The returned p-value is the maximum p-value taken over the nuisance + parameter :math:`\pi`, where :math:`0 \leq \pi \leq 1`. + + This function's complexity is :math:`O(n c_1 c_2)`, where `n` is the + number of sample points. + + References + ---------- + .. [1] Barnard, G. A. "Significance Tests for 2x2 Tables". *Biometrika*. + 34.1/2 (1947): 123-138. :doi:`dpgkg3` + + .. [2] Mehta, Cyrus R., and Pralay Senchaudhuri. "Conditional versus + unconditional exact tests for comparing two binomials." + *Cytel Software Corporation* 675 (2003): 1-5. + + .. [3] "Wald Test". *Wikipedia*. https://en.wikipedia.org/wiki/Wald_test + + Examples + -------- + An example use of Barnard's test is presented in [2]_. + + Consider the following example of a vaccine efficacy study + (Chan, 1998). In a randomized clinical trial of 30 subjects, 15 were + inoculated with a recombinant DNA influenza vaccine and the 15 were + inoculated with a placebo. Twelve of the 15 subjects in the placebo + group (80%) eventually became infected with influenza whereas for the + vaccine group, only 7 of the 15 subjects (47%) became infected. The + data are tabulated as a 2 x 2 table:: + + Vaccine Placebo + Yes 7 12 + No 8 3 + + When working with statistical hypothesis testing, we usually use a + threshold probability or significance level upon which we decide + to reject the null hypothesis :math:`H_0`. Suppose we choose the common + significance level of 5%. + + Our alternative hypothesis is that the vaccine will lower the chance of + becoming infected with the virus; that is, the probability :math:`p_1` of + catching the virus with the vaccine will be *less than* the probability + :math:`p_2` of catching the virus without the vaccine. Therefore, we call + `barnard_exact` with the ``alternative="less"`` option: + + >>> import scipy.stats as stats + >>> res = stats.barnard_exact([[7, 12], [8, 3]], alternative="less") + >>> res.statistic + -1.894... + >>> res.pvalue + 0.03407... + + Under the null hypothesis that the vaccine will not lower the chance of + becoming infected, the probability of obtaining test results at least as + extreme as the observed data is approximately 3.4%. Since this p-value is + less than our chosen significance level, we have evidence to reject + :math:`H_0` in favor of the alternative. + + Suppose we had used Fisher's exact test instead: + + >>> _, pvalue = stats.fisher_exact([[7, 12], [8, 3]], alternative="less") + >>> pvalue + 0.0640... + + With the same threshold significance of 5%, we would not have been able + to reject the null hypothesis in favor of the alternative. As stated in + [2]_, Barnard's test is uniformly more powerful than Fisher's exact test + because Barnard's test does not condition on any margin. Fisher's test + should only be used when both sets of marginals are fixed. 
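A sketch, not part of the patch: the pooled Wald statistic T(X_0) for the vaccine table in the example above, computed by hand and compared with the value reported by `scipy.stats.barnard_exact` (assumed available).

import numpy as np
from scipy import stats

table = np.array([[7, 12], [8, 3]])
c1, c2 = table.sum(axis=0)               # column totals (15, 15)
p1, p2 = table[0, 0] / c1, table[0, 1] / c2
p = table[0].sum() / (c1 + c2)           # pooled estimate under p1 = p2
t_obs = (p1 - p2) / np.sqrt(p * (1 - p) * (1 / c1 + 1 / c2))
print(t_obs)                             # about -1.894

res = stats.barnard_exact(table, alternative="less")
print(np.isclose(res.statistic, t_obs))  # True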
+ + """ + if n <= 0: + raise ValueError( + "Number of points `n` must be strictly positive, " + f"found {n!r}" + ) + + table = np.asarray(table, dtype=np.int64) + + if not table.shape == (2, 2): + raise ValueError("The input `table` must be of shape (2, 2).") + + if np.any(table < 0): + raise ValueError("All values in `table` must be nonnegative.") + + if 0 in table.sum(axis=0): + # If both values in column are zero, the p-value is 1 and + # the score's statistic is NaN. + return BarnardExactResult(np.nan, 1.0) + + total_col_1, total_col_2 = table.sum(axis=0) + + x1 = np.arange(total_col_1 + 1, dtype=np.int64).reshape(-1, 1) + x2 = np.arange(total_col_2 + 1, dtype=np.int64).reshape(1, -1) + + # We need to calculate the wald statistics for each combination of x1 and + # x2. + p1, p2 = x1 / total_col_1, x2 / total_col_2 + + if pooled: + p = (x1 + x2) / (total_col_1 + total_col_2) + variances = p * (1 - p) * (1 / total_col_1 + 1 / total_col_2) + else: + variances = p1 * (1 - p1) / total_col_1 + p2 * (1 - p2) / total_col_2 + + # To avoid warning when dividing by 0 + with np.errstate(divide="ignore", invalid="ignore"): + wald_statistic = np.divide((p1 - p2), np.sqrt(variances)) + + wald_statistic[p1 == p2] = 0 # Removing NaN values + + wald_stat_obs = wald_statistic[table[0, 0], table[0, 1]] + + if alternative == "two-sided": + index_arr = np.abs(wald_statistic) >= abs(wald_stat_obs) + elif alternative == "less": + index_arr = wald_statistic <= wald_stat_obs + elif alternative == "greater": + index_arr = wald_statistic >= wald_stat_obs + else: + msg = ( + "`alternative` should be one of {'two-sided', 'less', 'greater'}," + f" found {alternative!r}" + ) + raise ValueError(msg) + + x1_sum_x2 = x1 + x2 + + x1_log_comb = _compute_log_combinations(total_col_1) + x2_log_comb = _compute_log_combinations(total_col_2) + x1_sum_x2_log_comb = x1_log_comb[x1] + x2_log_comb[x2] + + result = shgo( + _get_binomial_log_p_value_with_nuisance_param, + args=(x1_sum_x2, x1_sum_x2_log_comb, index_arr), + bounds=((0, 1),), + n=n, + sampling_method="sobol", + ) + + # result.fun is the negative log pvalue and therefore needs to be + # changed before return + p_value = np.clip(np.exp(-result.fun), a_min=0, a_max=1) + return BarnardExactResult(wald_stat_obs, p_value) + + +@dataclass +class BoschlooExactResult: + statistic: float + pvalue: float + + +def boschloo_exact(table, alternative="two-sided", n=32): + r"""Perform Boschloo's exact test on a 2x2 contingency table. + + Parameters + ---------- + table : array_like of ints + A 2x2 contingency table. Elements should be non-negative integers. + + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the null and alternative hypotheses. Default is 'two-sided'. + Please see explanations in the Notes section below. + + n : int, optional + Number of sampling points used in the construction of the sampling + method. Note that this argument will automatically be converted to + the next higher power of 2 since `scipy.stats.qmc.Sobol` is used to + select sample points. Default is 32. Must be positive. In most cases, + 32 points is enough to reach good precision. More points comes at + performance cost. + + Returns + ------- + ber : BoschlooExactResult + A result object with the following attributes. + + statistic : float + The statistic used in Boschloo's test; that is, the p-value + from Fisher's exact test. 
+ + pvalue : float + P-value, the probability of obtaining a distribution at least as + extreme as the one that was actually observed, assuming that the + null hypothesis is true. + + See Also + -------- + chi2_contingency : Chi-square test of independence of variables in a + contingency table. + fisher_exact : Fisher exact test on a 2x2 contingency table. + barnard_exact : Barnard's exact test, which is a more powerful alternative + than Fisher's exact test for 2x2 contingency tables. + + Notes + ----- + Boschloo's test is an exact test used in the analysis of contingency + tables. It examines the association of two categorical variables, and + is a uniformly more powerful alternative to Fisher's exact test + for 2x2 contingency tables. + + Boschloo's exact test uses the p-value of Fisher's exact test as a + statistic, and Boschloo's p-value is the probability under the null + hypothesis of observing such an extreme value of this statistic. + + Let's define :math:`X_0` a 2x2 matrix representing the observed sample, + where each column stores the binomial experiment, as in the example + below. Let's also define :math:`p_1, p_2` the theoretical binomial + probabilities for :math:`x_{11}` and :math:`x_{12}`. When using + Boschloo exact test, we can assert three different alternative hypotheses: + + - :math:`H_0 : p_1=p_2` versus :math:`H_1 : p_1 < p_2`, + with `alternative` = "less" + + - :math:`H_0 : p_1=p_2` versus :math:`H_1 : p_1 > p_2`, + with `alternative` = "greater" + + - :math:`H_0 : p_1=p_2` versus :math:`H_1 : p_1 \neq p_2`, + with `alternative` = "two-sided" (default) + + There are multiple conventions for computing a two-sided p-value when the + null distribution is asymmetric. Here, we apply the convention that the + p-value of a two-sided test is twice the minimum of the p-values of the + one-sided tests (clipped to 1.0). Note that `fisher_exact` follows a + different convention, so for a given `table`, the statistic reported by + `boschloo_exact` may differ from the p-value reported by `fisher_exact` + when ``alternative='two-sided'``. + + .. versionadded:: 1.7.0 + + References + ---------- + .. [1] R.D. Boschloo. "Raised conditional level of significance for the + 2 x 2-table when testing the equality of two probabilities", + Statistica Neerlandica, 24(1), 1970 + + .. [2] "Boschloo's test", Wikipedia, + https://en.wikipedia.org/wiki/Boschloo%27s_test + + .. [3] Lise M. Saari et al. "Employee attitudes and job satisfaction", + Human Resource Management, 43(4), 395-407, 2004, + :doi:`10.1002/hrm.20032`. + + Examples + -------- + In the following example, we consider the article "Employee + attitudes and job satisfaction" [3]_ + which reports the results of a survey from 63 scientists and 117 college + professors. Of the 63 scientists, 31 said they were very satisfied with + their jobs, whereas 74 of the college professors were very satisfied + with their work. Is this significant evidence that college + professors are happier with their work than scientists? + The following table summarizes the data mentioned above:: + + college professors scientists + Very Satisfied 74 31 + Dissatisfied 43 32 + + When working with statistical hypothesis testing, we usually use a + threshold probability or significance level upon which we decide + to reject the null hypothesis :math:`H_0`. Suppose we choose the common + significance level of 5%. + + Our alternative hypothesis is that college professors are truly more + satisfied with their work than scientists. 
Therefore, we expect + :math:`p_1` the proportion of very satisfied college professors to be + greater than :math:`p_2`, the proportion of very satisfied scientists. + We thus call `boschloo_exact` with the ``alternative="greater"`` option: + + >>> import scipy.stats as stats + >>> res = stats.boschloo_exact([[74, 31], [43, 32]], alternative="greater") + >>> res.statistic + 0.0483... + >>> res.pvalue + 0.0355... + + Under the null hypothesis that scientists are happier in their work than + college professors, the probability of obtaining test + results at least as extreme as the observed data is approximately 3.55%. + Since this p-value is less than our chosen significance level, we have + evidence to reject :math:`H_0` in favor of the alternative hypothesis. + + """ + hypergeom = distributions.hypergeom + + if n <= 0: + raise ValueError( + "Number of points `n` must be strictly positive," + f" found {n!r}" + ) + + table = np.asarray(table, dtype=np.int64) + + if not table.shape == (2, 2): + raise ValueError("The input `table` must be of shape (2, 2).") + + if np.any(table < 0): + raise ValueError("All values in `table` must be nonnegative.") + + if 0 in table.sum(axis=0): + # If both values in column are zero, the p-value is 1 and + # the score's statistic is NaN. + return BoschlooExactResult(np.nan, np.nan) + + total_col_1, total_col_2 = table.sum(axis=0) + total = total_col_1 + total_col_2 + x1 = np.arange(total_col_1 + 1, dtype=np.int64).reshape(1, -1) + x2 = np.arange(total_col_2 + 1, dtype=np.int64).reshape(-1, 1) + x1_sum_x2 = x1 + x2 + + if alternative == 'less': + pvalues = hypergeom.cdf(x1, total, x1_sum_x2, total_col_1).T + elif alternative == 'greater': + # Same formula as the 'less' case, but with the second column. + pvalues = hypergeom.cdf(x2, total, x1_sum_x2, total_col_2).T + elif alternative == 'two-sided': + boschloo_less = boschloo_exact(table, alternative="less", n=n) + boschloo_greater = boschloo_exact(table, alternative="greater", n=n) + + res = ( + boschloo_less if boschloo_less.pvalue < boschloo_greater.pvalue + else boschloo_greater + ) + + # Two-sided p-value is defined as twice the minimum of the one-sided + # p-values + pvalue = np.clip(2 * res.pvalue, a_min=0, a_max=1) + return BoschlooExactResult(res.statistic, pvalue) + else: + msg = ( + f"`alternative` should be one of {'two-sided', 'less', 'greater'}," + f" found {alternative!r}" + ) + raise ValueError(msg) + + fisher_stat = pvalues[table[0, 0], table[0, 1]] + + # fisher_stat * (1+1e-13) guards us from small numerical error. 
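For illustration only (not in the patch): the two-sided convention described in the Notes, twice the smaller one-sided p-value clipped to 1, checked by calling the public function three times on the survey table.

import numpy as np
from scipy import stats

table = [[74, 31], [43, 32]]
less = stats.boschloo_exact(table, alternative="less")
greater = stats.boschloo_exact(table, alternative="greater")
two_sided = stats.boschloo_exact(table, alternative="two-sided")

expected = min(1.0, 2 * min(less.pvalue, greater.pvalue))
# The two values should agree up to the tolerance of the nuisance-parameter
# optimization that each call performs independently.
print(two_sided.pvalue, expected)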
It is + # equivalent to np.isclose with relative tol of 1e-13 and absolute tol of 0 + # For more throughout explanations, see gh-14178 + index_arr = pvalues <= fisher_stat * (1+1e-13) + + x1, x2, x1_sum_x2 = x1.T, x2.T, x1_sum_x2.T + x1_log_comb = _compute_log_combinations(total_col_1) + x2_log_comb = _compute_log_combinations(total_col_2) + x1_sum_x2_log_comb = x1_log_comb[x1] + x2_log_comb[x2] + + result = shgo( + _get_binomial_log_p_value_with_nuisance_param, + args=(x1_sum_x2, x1_sum_x2_log_comb, index_arr), + bounds=((0, 1),), + n=n, + sampling_method="sobol", + ) + + # result.fun is the negative log pvalue and therefore needs to be + # changed before return + p_value = np.clip(np.exp(-result.fun), a_min=0, a_max=1) + return BoschlooExactResult(fisher_stat, p_value) + + +def _get_binomial_log_p_value_with_nuisance_param( + nuisance_param, x1_sum_x2, x1_sum_x2_log_comb, index_arr +): + r""" + Compute the log pvalue in respect of a nuisance parameter considering + a 2x2 sample space. + + Parameters + ---------- + nuisance_param : float + nuisance parameter used in the computation of the maximisation of + the p-value. Must be between 0 and 1 + + x1_sum_x2 : ndarray + Sum of x1 and x2 inside barnard_exact + + x1_sum_x2_log_comb : ndarray + sum of the log combination of x1 and x2 + + index_arr : ndarray of boolean + + Returns + ------- + p_value : float + Return the maximum p-value considering every nuisance parameter + between 0 and 1 + + Notes + ----- + + Both Barnard's test and Boschloo's test iterate over a nuisance parameter + :math:`\pi \in [0, 1]` to find the maximum p-value. To search this + maxima, this function return the negative log pvalue with respect to the + nuisance parameter passed in params. This negative log p-value is then + used in `shgo` to find the minimum negative pvalue which is our maximum + pvalue. + + Also, to compute the different combination used in the + p-values' computation formula, this function uses `gammaln` which is + more tolerant for large value than `scipy.special.comb`. `gammaln` gives + a log combination. For the little precision loss, performances are + improved a lot. + """ + t1, t2 = x1_sum_x2.shape + n = t1 + t2 - 2 + with np.errstate(divide="ignore", invalid="ignore"): + log_nuisance = np.log( + nuisance_param, + out=np.zeros_like(nuisance_param), + where=nuisance_param >= 0, + ) + log_1_minus_nuisance = np.log( + 1 - nuisance_param, + out=np.zeros_like(nuisance_param), + where=1 - nuisance_param >= 0, + ) + + nuisance_power_x1_x2 = log_nuisance * x1_sum_x2 + nuisance_power_x1_x2[(x1_sum_x2 == 0)[:, :]] = 0 + + nuisance_power_n_minus_x1_x2 = log_1_minus_nuisance * (n - x1_sum_x2) + nuisance_power_n_minus_x1_x2[(x1_sum_x2 == n)[:, :]] = 0 + + tmp_log_values_arr = ( + x1_sum_x2_log_comb + + nuisance_power_x1_x2 + + nuisance_power_n_minus_x1_x2 + ) + + tmp_values_from_index = tmp_log_values_arr[index_arr] + + # To avoid dividing by zero in log function and getting inf value, + # values are centered according to the max + max_value = tmp_values_from_index.max() + + # To have better result's precision, the log pvalue is taken here. + # Indeed, pvalue is included inside [0, 1] interval. 
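A sketch, not part of the patch: the same supremum over the nuisance parameter, approximated with a plain grid search instead of `shgo`, for Barnard's test on the vaccine table with alternative="less". This only illustrates the quantity being maximized; it is not the patch's optimization strategy.

import numpy as np
from scipy import stats
from scipy.stats import binom

table = np.array([[7, 12], [8, 3]])
c1, c2 = table.sum(axis=0)

# Pooled Wald statistic over the whole (c1 + 1) x (c2 + 1) sample space.
x1 = np.arange(c1 + 1).reshape(-1, 1)
x2 = np.arange(c2 + 1).reshape(1, -1)
p1, p2 = x1 / c1, x2 / c2
pooled = (x1 + x2) / (c1 + c2)
with np.errstate(divide="ignore", invalid="ignore"):
    wald = (p1 - p2) / np.sqrt(pooled * (1 - pooled) * (1 / c1 + 1 / c2))
wald[p1 == p2] = 0
t_obs = wald[table[0, 0], table[0, 1]]
include = wald <= t_obs                  # tables at least as extreme ("less")

def pvalue_at(pi):
    # P_pi(T <= T_obs) when both columns are binomial with success probability pi.
    probs = binom.pmf(x1, c1, pi) * binom.pmf(x2, c2, pi)
    return probs[include].sum()

grid = np.linspace(1e-6, 1 - 1e-6, 2001)
p_approx = max(pvalue_at(pi) for pi in grid)
print(p_approx)                                               # about 0.034
print(stats.barnard_exact(table, alternative="less").pvalue)  # about 0.03407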
Passing the + # pvalue to log makes the interval a lot bigger ([-inf, 0]), and thus + # help us to achieve better precision + with np.errstate(divide="ignore", invalid="ignore"): + log_probs = np.exp(tmp_values_from_index - max_value).sum() + log_pvalue = max_value + np.log( + log_probs, + out=np.full_like(log_probs, -np.inf), + where=log_probs > 0, + ) + + # Since shgo find the minima, minus log pvalue is returned + return -log_pvalue + + +def _pval_cvm_2samp_exact(s, m, n): + """ + Compute the exact p-value of the Cramer-von Mises two-sample test + for a given value s of the test statistic. + m and n are the sizes of the samples. + + [1] Y. Xiao, A. Gordon, and A. Yakovlev, "A C++ Program for + the Cramér-Von Mises Two-Sample Test", J. Stat. Soft., + vol. 17, no. 8, pp. 1-15, Dec. 2006. + [2] T. W. Anderson "On the Distribution of the Two-Sample Cramer-von Mises + Criterion," The Annals of Mathematical Statistics, Ann. Math. Statist. + 33(3), 1148-1159, (September, 1962) + """ + + # [1, p. 3] + lcm = np.lcm(m, n) + # [1, p. 4], below eq. 3 + a = lcm // m + b = lcm // n + # Combine Eq. 9 in [2] with Eq. 2 in [1] and solve for $\zeta$ + # Hint: `s` is $U$ in [2], and $T_2$ in [1] is $T$ in [2] + mn = m * n + zeta = lcm ** 2 * (m + n) * (6 * s - mn * (4 * mn - 1)) // (6 * mn ** 2) + + # bound maximum value that may appear in `gs` (remember both rows!) + zeta_bound = lcm**2 * (m + n) # bound elements in row 1 + combinations = comb(m + n, m) # sum of row 2 + max_gs = max(zeta_bound, combinations) + dtype = np.min_scalar_type(max_gs) + + # the frequency table of $g_{u, v}^+$ defined in [1, p. 6] + gs = ([np.array([[0], [1]], dtype=dtype)] + + [np.empty((2, 0), dtype=dtype) for _ in range(m)]) + for u in range(n + 1): + next_gs = [] + tmp = np.empty((2, 0), dtype=dtype) + for v, g in enumerate(gs): + # Calculate g recursively with eq. 11 in [1]. Even though it + # doesn't look like it, this also does 12/13 (all of Algorithm 1). + vi, i0, i1 = np.intersect1d(tmp[0], g[0], return_indices=True) + tmp = np.concatenate([ + np.stack([vi, tmp[1, i0] + g[1, i1]]), + np.delete(tmp, i0, 1), + np.delete(g, i1, 1) + ], 1) + res = (a * v - b * u) ** 2 + tmp[0] += res.astype(dtype) + next_gs.append(tmp) + gs = next_gs + value, freq = gs[m] + return np.float64(np.sum(freq[value >= zeta]) / combinations) + + +@_axis_nan_policy_factory(CramerVonMisesResult, n_samples=2, too_small=1, + result_to_tuple=_cvm_result_to_tuple) +def cramervonmises_2samp(x, y, method='auto'): + """Perform the two-sample Cramér-von Mises test for goodness of fit. + + This is the two-sample version of the Cramér-von Mises test ([1]_): + for two independent samples :math:`X_1, ..., X_n` and + :math:`Y_1, ..., Y_m`, the null hypothesis is that the samples + come from the same (unspecified) continuous distribution. + + Parameters + ---------- + x : array_like + A 1-D array of observed values of the random variables :math:`X_i`. + y : array_like + A 1-D array of observed values of the random variables :math:`Y_i`. + method : {'auto', 'asymptotic', 'exact'}, optional + The method used to compute the p-value, see Notes for details. + The default is 'auto'. + + Returns + ------- + res : object with attributes + statistic : float + Cramér-von Mises statistic. + pvalue : float + The p-value. + + See Also + -------- + cramervonmises, anderson_ksamp, epps_singleton_2samp, ks_2samp + + Notes + ----- + .. versionadded:: 1.7.0 + + The statistic is computed according to equation 9 in [2]_. 
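For illustration only (not in the patch): the rank form of the two-sample statistic (eqs. 9 and 10 of Anderson, 1962) computed by hand on arbitrary data and compared with `scipy.stats.cramervonmises_2samp` (assumed available).

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(size=8)
y = rng.normal(size=11)
nx, ny = len(x), len(y)

# Midranks of the pooled, sorted samples.
r = stats.rankdata(np.concatenate([np.sort(x), np.sort(y)]), method='average')
rx, ry = r[:nx], r[nx:]

u = nx * np.sum((rx - np.arange(1, nx + 1))**2)
u += ny * np.sum((ry - np.arange(1, ny + 1))**2)
k, N = nx * ny, nx + ny
t = u / (k * N) - (4 * k - 1) / (6 * N)

res = stats.cramervonmises_2samp(x, y)
print(np.isclose(res.statistic, t))   # True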
The + calculation of the p-value depends on the keyword `method`: + + - ``asymptotic``: The p-value is approximated by using the limiting + distribution of the test statistic. + - ``exact``: The exact p-value is computed by enumerating all + possible combinations of the test statistic, see [2]_. + + If ``method='auto'``, the exact approach is used + if both samples contain equal to or less than 20 observations, + otherwise the asymptotic distribution is used. + + If the underlying distribution is not continuous, the p-value is likely to + be conservative (Section 6.2 in [3]_). When ranking the data to compute + the test statistic, midranks are used if there are ties. + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Cramer-von_Mises_criterion + .. [2] Anderson, T.W. (1962). On the distribution of the two-sample + Cramer-von-Mises criterion. The Annals of Mathematical + Statistics, pp. 1148-1159. + .. [3] Conover, W.J., Practical Nonparametric Statistics, 1971. + + Examples + -------- + + Suppose we wish to test whether two samples generated by + ``scipy.stats.norm.rvs`` have the same distribution. We choose a + significance level of alpha=0.05. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> x = stats.norm.rvs(size=100, random_state=rng) + >>> y = stats.norm.rvs(size=70, random_state=rng) + >>> res = stats.cramervonmises_2samp(x, y) + >>> res.statistic, res.pvalue + (0.29376470588235293, 0.1412873014573014) + + The p-value exceeds our chosen significance level, so we do not + reject the null hypothesis that the observed samples are drawn from the + same distribution. + + For small sample sizes, one can compute the exact p-values: + + >>> x = stats.norm.rvs(size=7, random_state=rng) + >>> y = stats.t.rvs(df=2, size=6, random_state=rng) + >>> res = stats.cramervonmises_2samp(x, y, method='exact') + >>> res.statistic, res.pvalue + (0.197802197802198, 0.31643356643356646) + + The p-value based on the asymptotic distribution is a good approximation + even though the sample size is small. + + >>> res = stats.cramervonmises_2samp(x, y, method='asymptotic') + >>> res.statistic, res.pvalue + (0.197802197802198, 0.2966041181527128) + + Independent of the method, one would not reject the null hypothesis at the + chosen significance level in this example. + + """ + xa = np.sort(np.asarray(x)) + ya = np.sort(np.asarray(y)) + + if xa.size <= 1 or ya.size <= 1: + raise ValueError('x and y must contain at least two observations.') + if method not in ['auto', 'exact', 'asymptotic']: + raise ValueError('method must be either auto, exact or asymptotic.') + + nx = len(xa) + ny = len(ya) + + if method == 'auto': + if max(nx, ny) > 20: + method = 'asymptotic' + else: + method = 'exact' + + # get ranks of x and y in the pooled sample + z = np.concatenate([xa, ya]) + # in case of ties, use midrank (see [1]) + r = scipy.stats.rankdata(z, method='average') + rx = r[:nx] + ry = r[nx:] + + # compute U (eq. 10 in [2]) + u = nx * np.sum((rx - np.arange(1, nx+1))**2) + u += ny * np.sum((ry - np.arange(1, ny+1))**2) + + # compute T (eq. 9 in [2]) + k, N = nx*ny, nx + ny + t = u / (k*N) - (4*k - 1)/(6*N) + + if method == 'exact': + p = _pval_cvm_2samp_exact(u, nx, ny) + else: + # compute expected value and variance of T (eq. 11 and 14 in [2]) + et = (1 + 1/N)/6 + vt = (N+1) * (4*k*N - 3*(nx**2 + ny**2) - 2*k) + vt = vt / (45 * N**2 * 4 * k) + + # computed the normalized statistic (eq. 
15 in [2]) + tn = 1/6 + (t - et) / np.sqrt(45 * vt) + + # approximate distribution of tn with limiting distribution + # of the one-sample test statistic + # if tn < 0.003, the _cdf_cvm_inf(tn) < 1.28*1e-18, return 1.0 directly + if tn < 0.003: + p = 1.0 + else: + p = max(0, 1. - _cdf_cvm_inf(tn)) + + return CramerVonMisesResult(statistic=t, pvalue=p) + + +class TukeyHSDResult: + """Result of `scipy.stats.tukey_hsd`. + + Attributes + ---------- + statistic : float ndarray + The computed statistic of the test for each comparison. The element + at index ``(i, j)`` is the statistic for the comparison between groups + ``i`` and ``j``. + pvalue : float ndarray + The associated p-value from the studentized range distribution. The + element at index ``(i, j)`` is the p-value for the comparison + between groups ``i`` and ``j``. + + Notes + ----- + The string representation of this object displays the most recently + calculated confidence interval, and if none have been previously + calculated, it will evaluate ``confidence_interval()``. + + References + ---------- + .. [1] NIST/SEMATECH e-Handbook of Statistical Methods, "7.4.7.1. Tukey's + Method." + https://www.itl.nist.gov/div898/handbook/prc/section4/prc471.htm, + 28 November 2020. + """ + + def __init__(self, statistic, pvalue, _nobs, _ntreatments, _stand_err): + self.statistic = statistic + self.pvalue = pvalue + self._ntreatments = _ntreatments + self._nobs = _nobs + self._stand_err = _stand_err + self._ci = None + self._ci_cl = None + + def __str__(self): + # Note: `__str__` prints the confidence intervals from the most + # recent call to `confidence_interval`. If it has not been called, + # it will be called with the default CL of .95. + if self._ci is None: + self.confidence_interval(confidence_level=.95) + s = ("Tukey's HSD Pairwise Group Comparisons" + f" ({self._ci_cl*100:.1f}% Confidence Interval)\n") + s += "Comparison Statistic p-value Lower CI Upper CI\n" + for i in range(self.pvalue.shape[0]): + for j in range(self.pvalue.shape[0]): + if i != j: + s += (f" ({i} - {j}) {self.statistic[i, j]:>10.3f}" + f"{self.pvalue[i, j]:>10.3f}" + f"{self._ci.low[i, j]:>10.3f}" + f"{self._ci.high[i, j]:>10.3f}\n") + return s + + def confidence_interval(self, confidence_level=.95): + """Compute the confidence interval for the specified confidence level. + + Parameters + ---------- + confidence_level : float, optional + Confidence level for the computed confidence interval + of the estimated proportion. Default is .95. + + Returns + ------- + ci : ``ConfidenceInterval`` object + The object has attributes ``low`` and ``high`` that hold the + lower and upper bounds of the confidence intervals for each + comparison. The high and low values are accessible for each + comparison at index ``(i, j)`` between groups ``i`` and ``j``. + + References + ---------- + .. [1] NIST/SEMATECH e-Handbook of Statistical Methods, "7.4.7.1. + Tukey's Method." + https://www.itl.nist.gov/div898/handbook/prc/section4/prc471.htm, + 28 November 2020. 
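A sketch, not part of the patch: reproducing the half-width of the 95% interval in the example below from the studentized range critical value, for the three equally sized groups used throughout this docstring. This assumes `scipy.stats.studentized_range` and `scipy.stats.tukey_hsd` are available.

import numpy as np
from scipy import stats

group0 = [24.5, 23.5, 26.4, 27.1, 29.9]
group1 = [28.4, 34.2, 29.5, 32.2, 30.1]
group2 = [26.1, 28.3, 24.3, 26.2, 27.8]
groups = [group0, group1, group2]

k = len(groups)                         # number of treatments
nobs = sum(len(g) for g in groups)      # 15 observations in total
mse = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups) / (nobs - k)

srd = stats.studentized_range.ppf(0.95, k, nobs - k)
half_width = srd * np.sqrt(mse / len(group0))    # equal group sizes
print(half_width)                                # about 3.649

res = stats.tukey_hsd(*groups)
ci = res.confidence_interval()
print(np.isclose(ci.high[0, 1] - res.statistic[0, 1], half_width))  # True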
+ + Examples + -------- + >>> from scipy.stats import tukey_hsd + >>> group0 = [24.5, 23.5, 26.4, 27.1, 29.9] + >>> group1 = [28.4, 34.2, 29.5, 32.2, 30.1] + >>> group2 = [26.1, 28.3, 24.3, 26.2, 27.8] + >>> result = tukey_hsd(group0, group1, group2) + >>> ci = result.confidence_interval() + >>> ci.low + array([[-3.649159, -8.249159, -3.909159], + [ 0.950841, -3.649159, 0.690841], + [-3.389159, -7.989159, -3.649159]]) + >>> ci.high + array([[ 3.649159, -0.950841, 3.389159], + [ 8.249159, 3.649159, 7.989159], + [ 3.909159, -0.690841, 3.649159]]) + """ + # check to see if the supplied confidence level matches that of the + # previously computed CI. + if (self._ci is not None and self._ci_cl is not None and + confidence_level == self._ci_cl): + return self._ci + + if not 0 < confidence_level < 1: + raise ValueError("Confidence level must be between 0 and 1.") + # determine the critical value of the studentized range using the + # appropriate confidence level, number of treatments, and degrees + # of freedom as determined by the number of data less the number of + # treatments. ("Confidence limits for Tukey's method")[1]. Note that + # in the cases of unequal sample sizes there will be a criterion for + # each group comparison. + params = (confidence_level, self._nobs, self._ntreatments - self._nobs) + srd = distributions.studentized_range.ppf(*params) + # also called maximum critical value, the Tukey criterion is the + # studentized range critical value * the square root of mean square + # error over the sample size. + tukey_criterion = srd * self._stand_err + # the confidence levels are determined by the + # `mean_differences` +- `tukey_criterion` + upper_conf = self.statistic + tukey_criterion + lower_conf = self.statistic - tukey_criterion + self._ci = ConfidenceInterval(low=lower_conf, high=upper_conf) + self._ci_cl = confidence_level + return self._ci + + +def _tukey_hsd_iv(args): + if (len(args)) < 2: + raise ValueError("There must be more than 1 treatment.") + args = [np.asarray(arg) for arg in args] + for arg in args: + if arg.ndim != 1: + raise ValueError("Input samples must be one-dimensional.") + if arg.size <= 1: + raise ValueError("Input sample size must be greater than one.") + if np.isinf(arg).any(): + raise ValueError("Input samples must be finite.") + return args + + +def tukey_hsd(*args): + """Perform Tukey's HSD test for equality of means over multiple treatments. + + Tukey's honestly significant difference (HSD) test performs pairwise + comparison of means for a set of samples. Whereas ANOVA (e.g. `f_oneway`) + assesses whether the true means underlying each sample are identical, + Tukey's HSD is a post hoc test used to compare the mean of each sample + to the mean of each other sample. + + The null hypothesis is that the distributions underlying the samples all + have the same mean. The test statistic, which is computed for every + possible pairing of samples, is simply the difference between the sample + means. For each pair, the p-value is the probability under the null + hypothesis (and other assumptions; see notes) of observing such an extreme + value of the statistic, considering that many pairwise comparisons are + being performed. Confidence intervals for the difference between each pair + of means are also available. + + Parameters + ---------- + sample1, sample2, ... : array_like + The sample measurements for each group. There must be at least + two arguments. 
+ + Returns + ------- + result : `~scipy.stats._result_classes.TukeyHSDResult` instance + The return value is an object with the following attributes: + + statistic : float ndarray + The computed statistic of the test for each comparison. The element + at index ``(i, j)`` is the statistic for the comparison between + groups ``i`` and ``j``. + pvalue : float ndarray + The computed p-value of the test for each comparison. The element + at index ``(i, j)`` is the p-value for the comparison between + groups ``i`` and ``j``. + + The object has the following methods: + + confidence_interval(confidence_level=0.95): + Compute the confidence interval for the specified confidence level. + + See Also + -------- + dunnett : performs comparison of means against a control group. + + Notes + ----- + The use of this test relies on several assumptions. + + 1. The observations are independent within and among groups. + 2. The observations within each group are normally distributed. + 3. The distributions from which the samples are drawn have the same finite + variance. + + The original formulation of the test was for samples of equal size [6]_. + In case of unequal sample sizes, the test uses the Tukey-Kramer method + [4]_. + + References + ---------- + .. [1] NIST/SEMATECH e-Handbook of Statistical Methods, "7.4.7.1. Tukey's + Method." + https://www.itl.nist.gov/div898/handbook/prc/section4/prc471.htm, + 28 November 2020. + .. [2] Abdi, Herve & Williams, Lynne. (2021). "Tukey's Honestly Significant + Difference (HSD) Test." + https://personal.utdallas.edu/~herve/abdi-HSD2010-pretty.pdf + .. [3] "One-Way ANOVA Using SAS PROC ANOVA & PROC GLM." SAS + Tutorials, 2007, www.stattutorials.com/SAS/TUTORIAL-PROC-GLM.htm. + .. [4] Kramer, Clyde Young. "Extension of Multiple Range Tests to Group + Means with Unequal Numbers of Replications." Biometrics, vol. 12, + no. 3, 1956, pp. 307-310. JSTOR, www.jstor.org/stable/3001469. + Accessed 25 May 2021. + .. [5] NIST/SEMATECH e-Handbook of Statistical Methods, "7.4.3.3. + The ANOVA table and tests of hypotheses about means" + https://www.itl.nist.gov/div898/handbook/prc/section4/prc433.htm, + 2 June 2021. + .. [6] Tukey, John W. "Comparing Individual Means in the Analysis of + Variance." Biometrics, vol. 5, no. 2, 1949, pp. 99-114. JSTOR, + www.jstor.org/stable/3001913. Accessed 14 June 2021. + + + Examples + -------- + Here are some data comparing the time to relief of three brands of + headache medicine, reported in minutes. Data adapted from [3]_. + + >>> import numpy as np + >>> from scipy.stats import tukey_hsd + >>> group0 = [24.5, 23.5, 26.4, 27.1, 29.9] + >>> group1 = [28.4, 34.2, 29.5, 32.2, 30.1] + >>> group2 = [26.1, 28.3, 24.3, 26.2, 27.8] + + We would like to see if the means between any of the groups are + significantly different. First, visually examine a box and whisker plot. + + >>> import matplotlib.pyplot as plt + >>> fig, ax = plt.subplots(1, 1) + >>> ax.boxplot([group0, group1, group2]) + >>> ax.set_xticklabels(["group0", "group1", "group2"]) # doctest: +SKIP + >>> ax.set_ylabel("mean") # doctest: +SKIP + >>> plt.show() + + From the box and whisker plot, we can see overlap in the interquartile + ranges group 1 to group 2 and group 3, but we can apply the ``tukey_hsd`` + test to determine if the difference between means is significant. We + set a significance level of .05 to reject the null hypothesis. 
+ + >>> res = tukey_hsd(group0, group1, group2) + >>> print(res) + Tukey's HSD Pairwise Group Comparisons (95.0% Confidence Interval) + Comparison Statistic p-value Lower CI Upper CI + (0 - 1) -4.600 0.014 -8.249 -0.951 + (0 - 2) -0.260 0.980 -3.909 3.389 + (1 - 0) 4.600 0.014 0.951 8.249 + (1 - 2) 4.340 0.020 0.691 7.989 + (2 - 0) 0.260 0.980 -3.389 3.909 + (2 - 1) -4.340 0.020 -7.989 -0.691 + + The null hypothesis is that each group has the same mean. The p-value for + comparisons between ``group0`` and ``group1`` as well as ``group1`` and + ``group2`` do not exceed .05, so we reject the null hypothesis that they + have the same means. The p-value of the comparison between ``group0`` + and ``group2`` exceeds .05, so we accept the null hypothesis that there + is not a significant difference between their means. + + We can also compute the confidence interval associated with our chosen + confidence level. + + >>> group0 = [24.5, 23.5, 26.4, 27.1, 29.9] + >>> group1 = [28.4, 34.2, 29.5, 32.2, 30.1] + >>> group2 = [26.1, 28.3, 24.3, 26.2, 27.8] + >>> result = tukey_hsd(group0, group1, group2) + >>> conf = res.confidence_interval(confidence_level=.99) + >>> for ((i, j), l) in np.ndenumerate(conf.low): + ... # filter out self comparisons + ... if i != j: + ... h = conf.high[i,j] + ... print(f"({i} - {j}) {l:>6.3f} {h:>6.3f}") + (0 - 1) -9.480 0.280 + (0 - 2) -5.140 4.620 + (1 - 0) -0.280 9.480 + (1 - 2) -0.540 9.220 + (2 - 0) -4.620 5.140 + (2 - 1) -9.220 0.540 + """ + args = _tukey_hsd_iv(args) + ntreatments = len(args) + means = np.asarray([np.mean(arg) for arg in args]) + nsamples_treatments = np.asarray([a.size for a in args]) + nobs = np.sum(nsamples_treatments) + + # determine mean square error [5]. Note that this is sometimes called + # mean square error within. + mse = (np.sum([np.var(arg, ddof=1) for arg in args] * + (nsamples_treatments - 1)) / (nobs - ntreatments)) + + # The calculation of the standard error differs when treatments differ in + # size. See ("Unequal sample sizes")[1]. + if np.unique(nsamples_treatments).size == 1: + # all input groups are the same length, so only one value needs to be + # calculated [1]. + normalize = 2 / nsamples_treatments[0] + else: + # to compare groups of differing sizes, we must compute a variance + # value for each individual comparison. Use broadcasting to get the + # resulting matrix. [3], verified against [4] (page 308). + normalize = 1 / nsamples_treatments + 1 / nsamples_treatments[None].T + + # the standard error is used in the computation of the tukey criterion and + # finding the p-values. + stand_err = np.sqrt(normalize * mse / 2) + + # the mean difference is the test statistic. + mean_differences = means[None].T - means + + # Calculate the t-statistic to use within the survival function of the + # studentized range to get the p-value. 
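    # Aside (illustration only): with equal group sizes n, `normalize` above is
    # 2/n, so `stand_err` reduces to sqrt(mse/n) and the p-value computed below
    # is studentized_range.sf(|mean_i - mean_j| / stand_err, k, N - k).
    # A minimal by-hand check using two of the docstring groups:
    #
    #     import numpy as np
    #     from scipy.stats import studentized_range, tukey_hsd
    #     g0 = [24.5, 23.5, 26.4, 27.1, 29.9]
    #     g1 = [28.4, 34.2, 29.5, 32.2, 30.1]
    #     mse = (np.var(g0, ddof=1) + np.var(g1, ddof=1)) / 2  # equal sizes
    #     se = np.sqrt(mse / len(g0))
    #     stat = abs(np.mean(g0) - np.mean(g1)) / se
    #     p_manual = studentized_range.sf(stat, k=2, df=8)
    #     p_result = tukey_hsd(g0, g1).pvalue[0, 1]  # expected: ~= p_manual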
+ t_stat = np.abs(mean_differences) / stand_err + + params = t_stat, ntreatments, nobs - ntreatments + pvalues = distributions.studentized_range.sf(*params) + + return TukeyHSDResult(mean_differences, pvalues, ntreatments, + nobs, stand_err) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_mstats_basic.py b/venv/lib/python3.10/site-packages/scipy/stats/_mstats_basic.py new file mode 100644 index 0000000000000000000000000000000000000000..6b2d46b271d03e3d9fca895d5726ea2a6c78471f --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_mstats_basic.py @@ -0,0 +1,3564 @@ +""" +An extension of scipy.stats._stats_py to support masked arrays + +""" +# Original author (2007): Pierre GF Gerard-Marchant + + +__all__ = ['argstoarray', + 'count_tied_groups', + 'describe', + 'f_oneway', 'find_repeats','friedmanchisquare', + 'kendalltau','kendalltau_seasonal','kruskal','kruskalwallis', + 'ks_twosamp', 'ks_2samp', 'kurtosis', 'kurtosistest', + 'ks_1samp', 'kstest', + 'linregress', + 'mannwhitneyu', 'meppf','mode','moment','mquantiles','msign', + 'normaltest', + 'obrientransform', + 'pearsonr','plotting_positions','pointbiserialr', + 'rankdata', + 'scoreatpercentile','sem', + 'sen_seasonal_slopes','skew','skewtest','spearmanr', + 'siegelslopes', 'theilslopes', + 'tmax','tmean','tmin','trim','trimboth', + 'trimtail','trima','trimr','trimmed_mean','trimmed_std', + 'trimmed_stde','trimmed_var','tsem','ttest_1samp','ttest_onesamp', + 'ttest_ind','ttest_rel','tvar', + 'variation', + 'winsorize', + 'brunnermunzel', + ] + +import numpy as np +from numpy import ndarray +import numpy.ma as ma +from numpy.ma import masked, nomask +import math + +import itertools +import warnings +from collections import namedtuple + +from . import distributions +from scipy._lib._util import _rename_parameter, _contains_nan +from scipy._lib._bunch import _make_tuple_bunch +import scipy.special as special +import scipy.stats._stats_py + +from ._stats_mstats_common import ( + _find_repeats, + linregress as stats_linregress, + LinregressResult as stats_LinregressResult, + theilslopes as stats_theilslopes, + siegelslopes as stats_siegelslopes + ) + + +def _chk_asarray(a, axis): + # Always returns a masked array, raveled for axis=None + a = ma.asanyarray(a) + if axis is None: + a = ma.ravel(a) + outaxis = 0 + else: + outaxis = axis + return a, outaxis + + +def _chk2_asarray(a, b, axis): + a = ma.asanyarray(a) + b = ma.asanyarray(b) + if axis is None: + a = ma.ravel(a) + b = ma.ravel(b) + outaxis = 0 + else: + outaxis = axis + return a, b, outaxis + + +def _chk_size(a, b): + a = ma.asanyarray(a) + b = ma.asanyarray(b) + (na, nb) = (a.size, b.size) + if na != nb: + raise ValueError("The size of the input array should match!" + f" ({na} <> {nb})") + return (a, b, na) + + +def _ttest_finish(df, t, alternative): + """Common code between all 3 t-test functions.""" + # We use ``stdtr`` directly here to preserve masked arrays + + if alternative == 'less': + pval = special.stdtr(df, t) + elif alternative == 'greater': + pval = special.stdtr(df, -t) + elif alternative == 'two-sided': + pval = special.stdtr(df, -np.abs(t))*2 + else: + raise ValueError("alternative must be " + "'less', 'greater' or 'two-sided'") + + if t.ndim == 0: + t = t[()] + if pval.ndim == 0: + pval = pval[()] + + return t, pval + + +def argstoarray(*args): + """ + Constructs a 2D array from a group of sequences. + + Sequences are filled with missing values to match the length of the longest + sequence. 
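    Stepping back to ``_ttest_finish`` above for a moment (a quick sketch,
    nothing here is specific to masked arrays): ``special.stdtr`` is the
    Student t CDF, so the two-sided p-value it produces matches the usual
    ``t`` distribution computation::

        import numpy as np
        from scipy import special, stats
        df, t = 7.0, 2.1
        p_helper = 2 * special.stdtr(df, -abs(t))
        p_dist = 2 * stats.t.cdf(-abs(t), df)
        np.isclose(p_helper, p_dist)   # expected: True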
+ + Parameters + ---------- + *args : sequences + Group of sequences. + + Returns + ------- + argstoarray : MaskedArray + A ( `m` x `n` ) masked array, where `m` is the number of arguments and + `n` the length of the longest argument. + + Notes + ----- + `numpy.ma.vstack` has identical behavior, but is called with a sequence + of sequences. + + Examples + -------- + A 2D masked array constructed from a group of sequences is returned. + + >>> from scipy.stats.mstats import argstoarray + >>> argstoarray([1, 2, 3], [4, 5, 6]) + masked_array( + data=[[1.0, 2.0, 3.0], + [4.0, 5.0, 6.0]], + mask=[[False, False, False], + [False, False, False]], + fill_value=1e+20) + + The returned masked array filled with missing values when the lengths of + sequences are different. + + >>> argstoarray([1, 3], [4, 5, 6]) + masked_array( + data=[[1.0, 3.0, --], + [4.0, 5.0, 6.0]], + mask=[[False, False, True], + [False, False, False]], + fill_value=1e+20) + + """ + if len(args) == 1 and not isinstance(args[0], ndarray): + output = ma.asarray(args[0]) + if output.ndim != 2: + raise ValueError("The input should be 2D") + else: + n = len(args) + m = max([len(k) for k in args]) + output = ma.array(np.empty((n,m), dtype=float), mask=True) + for (k,v) in enumerate(args): + output[k,:len(v)] = v + + output[np.logical_not(np.isfinite(output._data))] = masked + return output + + +def find_repeats(arr): + """Find repeats in arr and return a tuple (repeats, repeat_count). + + The input is cast to float64. Masked values are discarded. + + Parameters + ---------- + arr : sequence + Input array. The array is flattened if it is not 1D. + + Returns + ------- + repeats : ndarray + Array of repeated values. + counts : ndarray + Array of counts. + + Examples + -------- + >>> from scipy.stats import mstats + >>> mstats.find_repeats([2, 1, 2, 3, 2, 2, 5]) + (array([2.]), array([4])) + + In the above example, 2 repeats 4 times. + + >>> mstats.find_repeats([[10, 20, 1, 2], [5, 5, 4, 4]]) + (array([4., 5.]), array([2, 2])) + + In the above example, both 4 and 5 repeat 2 times. + + """ + # Make sure we get a copy. ma.compressed promises a "new array", but can + # actually return a reference. + compr = np.asarray(ma.compressed(arr), dtype=np.float64) + try: + need_copy = np.may_share_memory(compr, arr) + except AttributeError: + # numpy < 1.8.2 bug: np.may_share_memory([], []) raises, + # while in numpy 1.8.2 and above it just (correctly) returns False. + need_copy = False + if need_copy: + compr = compr.copy() + return _find_repeats(compr) + + +def count_tied_groups(x, use_missing=False): + """ + Counts the number of tied values. + + Parameters + ---------- + x : sequence + Sequence of data on which to counts the ties + use_missing : bool, optional + Whether to consider missing values as tied. + + Returns + ------- + count_tied_groups : dict + Returns a dictionary (nb of ties: nb of groups). + + Examples + -------- + >>> from scipy.stats import mstats + >>> import numpy as np + >>> z = [0, 0, 0, 2, 2, 2, 3, 3, 4, 5, 6] + >>> mstats.count_tied_groups(z) + {2: 1, 3: 2} + + In the above example, the ties were 0 (3x), 2 (3x) and 3 (2x). 
+ + >>> z = np.ma.array([0, 0, 1, 2, 2, 2, 3, 3, 4, 5, 6]) + >>> mstats.count_tied_groups(z) + {2: 2, 3: 1} + >>> z[[1,-1]] = np.ma.masked + >>> mstats.count_tied_groups(z, use_missing=True) + {2: 2, 3: 1} + + """ + nmasked = ma.getmask(x).sum() + # We need the copy as find_repeats will overwrite the initial data + data = ma.compressed(x).copy() + (ties, counts) = find_repeats(data) + nties = {} + if len(ties): + nties = dict(zip(np.unique(counts), itertools.repeat(1))) + nties.update(dict(zip(*find_repeats(counts)))) + + if nmasked and use_missing: + try: + nties[nmasked] += 1 + except KeyError: + nties[nmasked] = 1 + + return nties + + +def rankdata(data, axis=None, use_missing=False): + """Returns the rank (also known as order statistics) of each data point + along the given axis. + + If some values are tied, their rank is averaged. + If some values are masked, their rank is set to 0 if use_missing is False, + or set to the average rank of the unmasked values if use_missing is True. + + Parameters + ---------- + data : sequence + Input data. The data is transformed to a masked array + axis : {None,int}, optional + Axis along which to perform the ranking. + If None, the array is first flattened. An exception is raised if + the axis is specified for arrays with a dimension larger than 2 + use_missing : bool, optional + Whether the masked values have a rank of 0 (False) or equal to the + average rank of the unmasked values (True). + + """ + def _rank1d(data, use_missing=False): + n = data.count() + rk = np.empty(data.size, dtype=float) + idx = data.argsort() + rk[idx[:n]] = np.arange(1,n+1) + + if use_missing: + rk[idx[n:]] = (n+1)/2. + else: + rk[idx[n:]] = 0 + + repeats = find_repeats(data.copy()) + for r in repeats[0]: + condition = (data == r).filled(False) + rk[condition] = rk[condition].mean() + return rk + + data = ma.array(data, copy=False) + if axis is None: + if data.ndim > 1: + return _rank1d(data.ravel(), use_missing).reshape(data.shape) + else: + return _rank1d(data, use_missing) + else: + return ma.apply_along_axis(_rank1d,axis,data,use_missing).view(ndarray) + + +ModeResult = namedtuple('ModeResult', ('mode', 'count')) + + +def mode(a, axis=0): + """ + Returns an array of the modal (most common) value in the passed array. + + Parameters + ---------- + a : array_like + n-dimensional array of which to find mode(s). + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over + the whole array `a`. + + Returns + ------- + mode : ndarray + Array of modal values. + count : ndarray + Array of counts for each mode. + + Notes + ----- + For more details, see `scipy.stats.mode`. 
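    As an aside on ``rankdata`` defined above (illustrative values): masked
    entries receive rank 0 by default, or the average rank of the unmasked
    values when ``use_missing=True``::

        import numpy as np
        from scipy.stats import mstats
        x = np.ma.array([40, 10, 30, 20], mask=[False, False, True, False])
        mstats.rankdata(x)                    # ranks [3, 1, 0, 2]
        mstats.rankdata(x, use_missing=True)  # masked entry gets (n+1)/2 = 2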
+ + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> from scipy.stats import mstats + >>> m_arr = np.ma.array([1, 1, 0, 0, 0, 0], mask=[0, 0, 1, 1, 1, 0]) + >>> mstats.mode(m_arr) # note that most zeros are masked + ModeResult(mode=array([1.]), count=array([2.])) + + """ + return _mode(a, axis=axis, keepdims=True) + + +def _mode(a, axis=0, keepdims=True): + # Don't want to expose `keepdims` from the public `mstats.mode` + a, axis = _chk_asarray(a, axis) + + def _mode1D(a): + (rep,cnt) = find_repeats(a) + if not cnt.ndim: + return (0, 0) + elif cnt.size: + return (rep[cnt.argmax()], cnt.max()) + else: + return (a.min(), 1) + + if axis is None: + output = _mode1D(ma.ravel(a)) + output = (ma.array(output[0]), ma.array(output[1])) + else: + output = ma.apply_along_axis(_mode1D, axis, a) + if keepdims is None or keepdims: + newshape = list(a.shape) + newshape[axis] = 1 + slices = [slice(None)] * output.ndim + slices[axis] = 0 + modes = output[tuple(slices)].reshape(newshape) + slices[axis] = 1 + counts = output[tuple(slices)].reshape(newshape) + output = (modes, counts) + else: + output = np.moveaxis(output, axis, 0) + + return ModeResult(*output) + + +def _betai(a, b, x): + x = np.asanyarray(x) + x = ma.where(x < 1.0, x, 1.0) # if x > 1 then return 1.0 + return special.betainc(a, b, x) + + +def msign(x): + """Returns the sign of x, or 0 if x is masked.""" + return ma.filled(np.sign(x), 0) + + +def pearsonr(x, y): + r""" + Pearson correlation coefficient and p-value for testing non-correlation. + + The Pearson correlation coefficient [1]_ measures the linear relationship + between two datasets. The calculation of the p-value relies on the + assumption that each dataset is normally distributed. (See Kowalski [3]_ + for a discussion of the effects of non-normality of the input on the + distribution of the correlation coefficient.) Like other correlation + coefficients, this one varies between -1 and +1 with 0 implying no + correlation. Correlations of -1 or +1 imply an exact linear relationship. + + Parameters + ---------- + x : (N,) array_like + Input array. + y : (N,) array_like + Input array. + + Returns + ------- + r : float + Pearson's correlation coefficient. + p-value : float + Two-tailed p-value. + + Warns + ----- + `~scipy.stats.ConstantInputWarning` + Raised if an input is a constant array. The correlation coefficient + is not defined in this case, so ``np.nan`` is returned. + + `~scipy.stats.NearConstantInputWarning` + Raised if an input is "nearly" constant. The array ``x`` is considered + nearly constant if ``norm(x - mean(x)) < 1e-13 * abs(mean(x))``. + Numerical errors in the calculation ``x - mean(x)`` in this case might + result in an inaccurate calculation of r. + + See Also + -------- + spearmanr : Spearman rank-order correlation coefficient. + kendalltau : Kendall's tau, a correlation measure for ordinal data. + + Notes + ----- + The correlation coefficient is calculated as follows: + + .. math:: + + r = \frac{\sum (x - m_x) (y - m_y)} + {\sqrt{\sum (x - m_x)^2 \sum (y - m_y)^2}} + + where :math:`m_x` is the mean of the vector x and :math:`m_y` is + the mean of the vector y. + + Under the assumption that x and y are drawn from + independent normal distributions (so the population correlation coefficient + is 0), the probability density function of the sample correlation + coefficient r is ([1]_, [2]_): + + .. 
math:: + + f(r) = \frac{{(1-r^2)}^{n/2-2}}{\mathrm{B}(\frac{1}{2},\frac{n}{2}-1)} + + where n is the number of samples, and B is the beta function. This + is sometimes referred to as the exact distribution of r. This is + the distribution that is used in `pearsonr` to compute the p-value. + The distribution is a beta distribution on the interval [-1, 1], + with equal shape parameters a = b = n/2 - 1. In terms of SciPy's + implementation of the beta distribution, the distribution of r is:: + + dist = scipy.stats.beta(n/2 - 1, n/2 - 1, loc=-1, scale=2) + + The p-value returned by `pearsonr` is a two-sided p-value. The p-value + roughly indicates the probability of an uncorrelated system + producing datasets that have a Pearson correlation at least as extreme + as the one computed from these datasets. More precisely, for a + given sample with correlation coefficient r, the p-value is + the probability that abs(r') of a random sample x' and y' drawn from + the population with zero correlation would be greater than or equal + to abs(r). In terms of the object ``dist`` shown above, the p-value + for a given r and length n can be computed as:: + + p = 2*dist.cdf(-abs(r)) + + When n is 2, the above continuous distribution is not well-defined. + One can interpret the limit of the beta distribution as the shape + parameters a and b approach a = b = 0 as a discrete distribution with + equal probability masses at r = 1 and r = -1. More directly, one + can observe that, given the data x = [x1, x2] and y = [y1, y2], and + assuming x1 != x2 and y1 != y2, the only possible values for r are 1 + and -1. Because abs(r') for any sample x' and y' with length 2 will + be 1, the two-sided p-value for a sample of length 2 is always 1. + + References + ---------- + .. [1] "Pearson correlation coefficient", Wikipedia, + https://en.wikipedia.org/wiki/Pearson_correlation_coefficient + .. [2] Student, "Probable error of a correlation coefficient", + Biometrika, Volume 6, Issue 2-3, 1 September 1908, pp. 302-310. + .. [3] C. J. Kowalski, "On the Effects of Non-Normality on the Distribution + of the Sample Product-Moment Correlation Coefficient" + Journal of the Royal Statistical Society. Series C (Applied + Statistics), Vol. 21, No. 1 (1972), pp. 1-12. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> from scipy.stats import mstats + >>> mstats.pearsonr([1, 2, 3, 4, 5], [10, 9, 2.5, 6, 4]) + (-0.7426106572325057, 0.1505558088534455) + + There is a linear dependence between x and y if y = a + b*x + e, where + a,b are constants and e is a random error term, assumed to be independent + of x. For simplicity, assume that x is standard normal, a=0, b=1 and let + e follow a normal distribution with mean zero and standard deviation s>0. + + >>> s = 0.5 + >>> x = stats.norm.rvs(size=500) + >>> e = stats.norm.rvs(scale=s, size=500) + >>> y = x + e + >>> mstats.pearsonr(x, y) + (0.9029601878969703, 8.428978827629898e-185) # may vary + + This should be close to the exact value given by + + >>> 1/np.sqrt(1 + s**2) + 0.8944271909999159 + + For s=0.5, we observe a high level of correlation. In general, a large + variance of the noise reduces the correlation, while the correlation + approaches one as the variance of the error goes to zero. + + It is important to keep in mind that no correlation does not imply + independence unless (x, y) is jointly normal. Correlation can even be zero + when there is a very simple dependence structure: if X follows a + standard normal distribution, let y = abs(x). 
Note that the correlation + between x and y is zero. Indeed, since the expectation of x is zero, + cov(x, y) = E[x*y]. By definition, this equals E[x*abs(x)] which is zero + by symmetry. The following lines of code illustrate this observation: + + >>> y = np.abs(x) + >>> mstats.pearsonr(x, y) + (-0.016172891856853524, 0.7182823678751942) # may vary + + A non-zero correlation coefficient can be misleading. For example, if X has + a standard normal distribution, define y = x if x < 0 and y = 0 otherwise. + A simple calculation shows that corr(x, y) = sqrt(2/Pi) = 0.797..., + implying a high level of correlation: + + >>> y = np.where(x < 0, x, 0) + >>> mstats.pearsonr(x, y) + (0.8537091583771509, 3.183461621422181e-143) # may vary + + This is unintuitive since there is no dependence of x and y if x is larger + than zero which happens in about half of the cases if we sample x and y. + """ + (x, y, n) = _chk_size(x, y) + (x, y) = (x.ravel(), y.ravel()) + # Get the common mask and the total nb of unmasked elements + m = ma.mask_or(ma.getmask(x), ma.getmask(y)) + n -= m.sum() + df = n-2 + if df < 0: + return (masked, masked) + + return scipy.stats._stats_py.pearsonr( + ma.masked_array(x, mask=m).compressed(), + ma.masked_array(y, mask=m).compressed()) + + +def spearmanr(x, y=None, use_ties=True, axis=None, nan_policy='propagate', + alternative='two-sided'): + """ + Calculates a Spearman rank-order correlation coefficient and the p-value + to test for non-correlation. + + The Spearman correlation is a nonparametric measure of the linear + relationship between two datasets. Unlike the Pearson correlation, the + Spearman correlation does not assume that both datasets are normally + distributed. Like other correlation coefficients, this one varies + between -1 and +1 with 0 implying no correlation. Correlations of -1 or + +1 imply a monotonic relationship. Positive correlations imply that + as `x` increases, so does `y`. Negative correlations imply that as `x` + increases, `y` decreases. + + Missing values are discarded pair-wise: if a value is missing in `x`, the + corresponding value in `y` is masked. + + The p-value roughly indicates the probability of an uncorrelated system + producing datasets that have a Spearman correlation at least as extreme + as the one computed from these datasets. The p-values are not entirely + reliable but are probably reasonable for datasets larger than 500 or so. + + Parameters + ---------- + x, y : 1D or 2D array_like, y is optional + One or two 1-D or 2-D arrays containing multiple variables and + observations. When these are 1-D, each represents a vector of + observations of a single variable. For the behavior in the 2-D case, + see under ``axis``, below. + use_ties : bool, optional + DO NOT USE. Does not do anything, keyword is only left in place for + backwards compatibility reasons. + axis : int or None, optional + If axis=0 (default), then each column represents a variable, with + observations in the rows. If axis=1, the relationship is transposed: + each row represents a variable, while the columns contain observations. + If axis=None, then both arrays will be raveled. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. 'propagate' returns nan, + 'raise' throws an error, 'omit' performs the calculations ignoring nan + values. Default is 'propagate'. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. 
+ The following options are available: + + * 'two-sided': the correlation is nonzero + * 'less': the correlation is negative (less than zero) + * 'greater': the correlation is positive (greater than zero) + + .. versionadded:: 1.7.0 + + Returns + ------- + res : SignificanceResult + An object containing attributes: + + statistic : float or ndarray (2-D square) + Spearman correlation matrix or correlation coefficient (if only 2 + variables are given as parameters). Correlation matrix is square + with length equal to total number of variables (columns or rows) in + ``a`` and ``b`` combined. + pvalue : float + The p-value for a hypothesis test whose null hypothesis + is that two sets of data are linearly uncorrelated. See + `alternative` above for alternative hypotheses. `pvalue` has the + same shape as `statistic`. + + References + ---------- + [CRCProbStat2000] section 14.7 + + """ + if not use_ties: + raise ValueError("`use_ties=False` is not supported in SciPy >= 1.2.0") + + # Always returns a masked array, raveled if axis=None + x, axisout = _chk_asarray(x, axis) + if y is not None: + # Deal only with 2-D `x` case. + y, _ = _chk_asarray(y, axis) + if axisout == 0: + x = ma.column_stack((x, y)) + else: + x = ma.vstack((x, y)) + + if axisout == 1: + # To simplify the code that follow (always use `n_obs, n_vars` shape) + x = x.T + + if nan_policy == 'omit': + x = ma.masked_invalid(x) + + def _spearmanr_2cols(x): + # Mask the same observations for all variables, and then drop those + # observations (can't leave them masked, rankdata is weird). + x = ma.mask_rowcols(x, axis=0) + x = x[~x.mask.any(axis=1), :] + + # If either column is entirely NaN or Inf + if not np.any(x.data): + res = scipy.stats._stats_py.SignificanceResult(np.nan, np.nan) + res.correlation = np.nan + return res + + m = ma.getmask(x) + n_obs = x.shape[0] + dof = n_obs - 2 - int(m.sum(axis=0)[0]) + if dof < 0: + raise ValueError("The input must have at least 3 entries!") + + # Gets the ranks and rank differences + x_ranked = rankdata(x, axis=0) + rs = ma.corrcoef(x_ranked, rowvar=False).data + + # rs can have elements equal to 1, so avoid zero division warnings + with np.errstate(divide='ignore'): + # clip the small negative values possibly caused by rounding + # errors before taking the square root + t = rs * np.sqrt((dof / ((rs+1.0) * (1.0-rs))).clip(0)) + + t, prob = _ttest_finish(dof, t, alternative) + + # For backwards compatibility, return scalars when comparing 2 columns + if rs.shape == (2, 2): + res = scipy.stats._stats_py.SignificanceResult(rs[1, 0], + prob[1, 0]) + res.correlation = rs[1, 0] + return res + else: + res = scipy.stats._stats_py.SignificanceResult(rs, prob) + res.correlation = rs + return res + + # Need to do this per pair of variables, otherwise the dropped observations + # in a third column mess up the result for a pair. + n_vars = x.shape[1] + if n_vars == 2: + return _spearmanr_2cols(x) + else: + rs = np.ones((n_vars, n_vars), dtype=float) + prob = np.zeros((n_vars, n_vars), dtype=float) + for var1 in range(n_vars - 1): + for var2 in range(var1+1, n_vars): + result = _spearmanr_2cols(x[:, [var1, var2]]) + rs[var1, var2] = result.correlation + rs[var2, var1] = result.correlation + prob[var1, var2] = result.pvalue + prob[var2, var1] = result.pvalue + + res = scipy.stats._stats_py.SignificanceResult(rs, prob) + res.correlation = rs + return res + + +def _kendall_p_exact(n, c, alternative='two-sided'): + + # Use the fact that distribution is symmetric: always calculate a CDF in + # the left tail. 
+ # This will be the one-sided p-value if `c` is on the side of + # the null distribution predicted by the alternative hypothesis. + # The two-sided p-value will be twice this value. + # If `c` is on the other side of the null distribution, we'll need to + # take the complement and add back the probability mass at `c`. + in_right_tail = (c >= (n*(n-1))//2 - c) + alternative_greater = (alternative == 'greater') + c = int(min(c, (n*(n-1))//2 - c)) + + # Exact p-value, see Maurice G. Kendall, "Rank Correlation Methods" + # (4th Edition), Charles Griffin & Co., 1970. + if n <= 0: + raise ValueError(f'n ({n}) must be positive') + elif c < 0 or 4*c > n*(n-1): + raise ValueError(f'c ({c}) must satisfy 0 <= 4c <= n(n-1) = {n*(n-1)}.') + elif n == 1: + prob = 1.0 + p_mass_at_c = 1 + elif n == 2: + prob = 1.0 + p_mass_at_c = 0.5 + elif c == 0: + prob = 2.0/math.factorial(n) if n < 171 else 0.0 + p_mass_at_c = prob/2 + elif c == 1: + prob = 2.0/math.factorial(n-1) if n < 172 else 0.0 + p_mass_at_c = (n-1)/math.factorial(n) + elif 4*c == n*(n-1) and alternative == 'two-sided': + # I'm sure there's a simple formula for p_mass_at_c in this + # case, but I don't know it. Use generic formula for one-sided p-value. + prob = 1.0 + elif n < 171: + new = np.zeros(c+1) + new[0:2] = 1.0 + for j in range(3,n+1): + new = np.cumsum(new) + if j <= c: + new[j:] -= new[:c+1-j] + prob = 2.0*np.sum(new)/math.factorial(n) + p_mass_at_c = new[-1]/math.factorial(n) + else: + new = np.zeros(c+1) + new[0:2] = 1.0 + for j in range(3, n+1): + new = np.cumsum(new)/j + if j <= c: + new[j:] -= new[:c+1-j] + prob = np.sum(new) + p_mass_at_c = new[-1]/2 + + if alternative != 'two-sided': + # if the alternative hypothesis and alternative agree, + # one-sided p-value is half the two-sided p-value + if in_right_tail == alternative_greater: + prob /= 2 + else: + prob = 1 - prob/2 + p_mass_at_c + + prob = np.clip(prob, 0, 1) + + return prob + + +def kendalltau(x, y, use_ties=True, use_missing=False, method='auto', + alternative='two-sided'): + """ + Computes Kendall's rank correlation tau on two variables *x* and *y*. + + Parameters + ---------- + x : sequence + First data list (for example, time). + y : sequence + Second data list. + use_ties : {True, False}, optional + Whether ties correction should be performed. + use_missing : {False, True}, optional + Whether missing data should be allocated a rank of 0 (False) or the + average rank (True) + method : {'auto', 'asymptotic', 'exact'}, optional + Defines which method is used to calculate the p-value [1]_. + 'asymptotic' uses a normal approximation valid for large samples. + 'exact' computes the exact p-value, but can only be used if no ties + are present. As the sample size increases, the 'exact' computation + time may grow and the result may lose some precision. + 'auto' is the default and selects the appropriate + method based on a trade-off between speed and accuracy. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + The following options are available: + + * 'two-sided': the rank correlation is nonzero + * 'less': the rank correlation is negative (less than zero) + * 'greater': the rank correlation is positive (greater than zero) + + Returns + ------- + res : SignificanceResult + An object containing attributes: + + statistic : float + The tau statistic. + pvalue : float + The p-value for a hypothesis test whose null hypothesis is + an absence of association, tau = 0. + + References + ---------- + .. 
[1] Maurice G. Kendall, "Rank Correlation Methods" (4th Edition), + Charles Griffin & Co., 1970. + + """ + (x, y, n) = _chk_size(x, y) + (x, y) = (x.flatten(), y.flatten()) + m = ma.mask_or(ma.getmask(x), ma.getmask(y)) + if m is not nomask: + x = ma.array(x, mask=m, copy=True) + y = ma.array(y, mask=m, copy=True) + # need int() here, otherwise numpy defaults to 32 bit + # integer on all Windows architectures, causing overflow. + # int() will keep it infinite precision. + n -= int(m.sum()) + + if n < 2: + res = scipy.stats._stats_py.SignificanceResult(np.nan, np.nan) + res.correlation = np.nan + return res + + rx = ma.masked_equal(rankdata(x, use_missing=use_missing), 0) + ry = ma.masked_equal(rankdata(y, use_missing=use_missing), 0) + idx = rx.argsort() + (rx, ry) = (rx[idx], ry[idx]) + C = np.sum([((ry[i+1:] > ry[i]) * (rx[i+1:] > rx[i])).filled(0).sum() + for i in range(len(ry)-1)], dtype=float) + D = np.sum([((ry[i+1:] < ry[i])*(rx[i+1:] > rx[i])).filled(0).sum() + for i in range(len(ry)-1)], dtype=float) + xties = count_tied_groups(x) + yties = count_tied_groups(y) + if use_ties: + corr_x = np.sum([v*k*(k-1) for (k,v) in xties.items()], dtype=float) + corr_y = np.sum([v*k*(k-1) for (k,v) in yties.items()], dtype=float) + denom = ma.sqrt((n*(n-1)-corr_x)/2. * (n*(n-1)-corr_y)/2.) + else: + denom = n*(n-1)/2. + tau = (C-D) / denom + + if method == 'exact' and (xties or yties): + raise ValueError("Ties found, exact method cannot be used.") + + if method == 'auto': + if (not xties and not yties) and (n <= 33 or min(C, n*(n-1)/2.0-C) <= 1): + method = 'exact' + else: + method = 'asymptotic' + + if not xties and not yties and method == 'exact': + prob = _kendall_p_exact(n, C, alternative) + + elif method == 'asymptotic': + var_s = n*(n-1)*(2*n+5) + if use_ties: + var_s -= np.sum([v*k*(k-1)*(2*k+5)*1. for (k,v) in xties.items()]) + var_s -= np.sum([v*k*(k-1)*(2*k+5)*1. for (k,v) in yties.items()]) + v1 = (np.sum([v*k*(k-1) for (k, v) in xties.items()], dtype=float) * + np.sum([v*k*(k-1) for (k, v) in yties.items()], dtype=float)) + v1 /= 2.*n*(n-1) + if n > 2: + v2 = np.sum([v*k*(k-1)*(k-2) for (k,v) in xties.items()], + dtype=float) * \ + np.sum([v*k*(k-1)*(k-2) for (k,v) in yties.items()], + dtype=float) + v2 /= 9.*n*(n-1)*(n-2) + else: + v2 = 0 + else: + v1 = v2 = 0 + + var_s /= 18. + var_s += (v1 + v2) + z = (C-D)/np.sqrt(var_s) + prob = scipy.stats._stats_py._get_pvalue(z, distributions.norm, alternative) + else: + raise ValueError("Unknown method "+str(method)+" specified, please " + "use auto, exact or asymptotic.") + + res = scipy.stats._stats_py.SignificanceResult(tau[()], prob[()]) + res.correlation = tau + return res + + +def kendalltau_seasonal(x): + """ + Computes a multivariate Kendall's rank correlation tau, for seasonal data. + + Parameters + ---------- + x : 2-D ndarray + Array of seasonal data, with seasons in columns. + + """ + x = ma.array(x, subok=True, copy=False, ndmin=2) + (n,m) = x.shape + n_p = x.count(0) + + S_szn = sum(msign(x[i:]-x[i]).sum(0) for i in range(n)) + S_tot = S_szn.sum() + + n_tot = x.count() + ties = count_tied_groups(x.compressed()) + corr_ties = sum(v*k*(k-1) for (k,v) in ties.items()) + denom_tot = ma.sqrt(1.*n_tot*(n_tot-1)*(n_tot*(n_tot-1)-corr_ties))/2. 
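    # Aside (illustration only): for fully unmasked, tie-free input the
    # `kendalltau` defined above reduces to (C - D) / (n*(n-1)/2) and agrees
    # with the non-masked routine:
    #
    #     import numpy as np
    #     from scipy import stats
    #     x = [1, 2, 3, 4, 5]
    #     y = [2, 1, 4, 3, 5]
    #     t_masked = stats.mstats.kendalltau(x, y).statistic
    #     t_plain = stats.kendalltau(x, y).statistic
    #     np.isclose(t_masked, t_plain)   # expected: True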
+ + R = rankdata(x, axis=0, use_missing=True) + K = ma.empty((m,m), dtype=int) + covmat = ma.empty((m,m), dtype=float) + denom_szn = ma.empty(m, dtype=float) + for j in range(m): + ties_j = count_tied_groups(x[:,j].compressed()) + corr_j = sum(v*k*(k-1) for (k,v) in ties_j.items()) + cmb = n_p[j]*(n_p[j]-1) + for k in range(j,m,1): + K[j,k] = sum(msign((x[i:,j]-x[i,j])*(x[i:,k]-x[i,k])).sum() + for i in range(n)) + covmat[j,k] = (K[j,k] + 4*(R[:,j]*R[:,k]).sum() - + n*(n_p[j]+1)*(n_p[k]+1))/3. + K[k,j] = K[j,k] + covmat[k,j] = covmat[j,k] + + denom_szn[j] = ma.sqrt(cmb*(cmb-corr_j)) / 2. + + var_szn = covmat.diagonal() + + z_szn = msign(S_szn) * (abs(S_szn)-1) / ma.sqrt(var_szn) + z_tot_ind = msign(S_tot) * (abs(S_tot)-1) / ma.sqrt(var_szn.sum()) + z_tot_dep = msign(S_tot) * (abs(S_tot)-1) / ma.sqrt(covmat.sum()) + + prob_szn = special.erfc(abs(z_szn.data)/np.sqrt(2)) + prob_tot_ind = special.erfc(abs(z_tot_ind)/np.sqrt(2)) + prob_tot_dep = special.erfc(abs(z_tot_dep)/np.sqrt(2)) + + chi2_tot = (z_szn*z_szn).sum() + chi2_trd = m * z_szn.mean()**2 + output = {'seasonal tau': S_szn/denom_szn, + 'global tau': S_tot/denom_tot, + 'global tau (alt)': S_tot/denom_szn.sum(), + 'seasonal p-value': prob_szn, + 'global p-value (indep)': prob_tot_ind, + 'global p-value (dep)': prob_tot_dep, + 'chi2 total': chi2_tot, + 'chi2 trend': chi2_trd, + } + return output + + +PointbiserialrResult = namedtuple('PointbiserialrResult', ('correlation', + 'pvalue')) + + +def pointbiserialr(x, y): + """Calculates a point biserial correlation coefficient and its p-value. + + Parameters + ---------- + x : array_like of bools + Input array. + y : array_like + Input array. + + Returns + ------- + correlation : float + R value + pvalue : float + 2-tailed p-value + + Notes + ----- + Missing values are considered pair-wise: if a value is missing in x, + the corresponding value in y is masked. + + For more details on `pointbiserialr`, see `scipy.stats.pointbiserialr`. + + """ + x = ma.fix_invalid(x, copy=True).astype(bool) + y = ma.fix_invalid(y, copy=True).astype(float) + # Get rid of the missing data + m = ma.mask_or(ma.getmask(x), ma.getmask(y)) + if m is not nomask: + unmask = np.logical_not(m) + x = x[unmask] + y = y[unmask] + + n = len(x) + # phat is the fraction of x values that are True + phat = x.sum() / float(n) + y0 = y[~x] # y-values where x is False + y1 = y[x] # y-values where x is True + y0m = y0.mean() + y1m = y1.mean() + + rpb = (y1m - y0m)*np.sqrt(phat * (1-phat)) / y.std() + + df = n-2 + t = rpb*ma.sqrt(df/(1.0-rpb**2)) + prob = _betai(0.5*df, 0.5, df/(df+t*t)) + + return PointbiserialrResult(rpb, prob) + + +def linregress(x, y=None): + r""" + Linear regression calculation + + Note that the non-masked version is used, and that this docstring is + replaced by the non-masked docstring + some info on missing data. 
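    A brief usage sketch (values chosen for illustration): entries masked in
    either input are dropped pair-wise before the non-masked routine runs.

    >>> import numpy as np
    >>> from scipy.stats import mstats
    >>> x = np.ma.array([1., 2., 3., 4., 5.], mask=[0, 0, 1, 0, 0])
    >>> y = np.ma.array([2., 4., 99., 8., 10.])
    >>> res = mstats.linregress(x, y)
    >>> float(res.slope), float(res.intercept)
    (2.0, 0.0)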
+ + """ + if y is None: + x = ma.array(x) + if x.shape[0] == 2: + x, y = x + elif x.shape[1] == 2: + x, y = x.T + else: + raise ValueError("If only `x` is given as input, " + "it has to be of shape (2, N) or (N, 2), " + f"provided shape was {x.shape}") + else: + x = ma.array(x) + y = ma.array(y) + + x = x.flatten() + y = y.flatten() + + if np.amax(x) == np.amin(x) and len(x) > 1: + raise ValueError("Cannot calculate a linear regression " + "if all x values are identical") + + m = ma.mask_or(ma.getmask(x), ma.getmask(y), shrink=False) + if m is not nomask: + x = ma.array(x, mask=m) + y = ma.array(y, mask=m) + if np.any(~m): + result = stats_linregress(x.data[~m], y.data[~m]) + else: + # All data is masked + result = stats_LinregressResult(slope=None, intercept=None, + rvalue=None, pvalue=None, + stderr=None, + intercept_stderr=None) + else: + result = stats_linregress(x.data, y.data) + + return result + + +def theilslopes(y, x=None, alpha=0.95, method='separate'): + r""" + Computes the Theil-Sen estimator for a set of points (x, y). + + `theilslopes` implements a method for robust linear regression. It + computes the slope as the median of all slopes between paired values. + + Parameters + ---------- + y : array_like + Dependent variable. + x : array_like or None, optional + Independent variable. If None, use ``arange(len(y))`` instead. + alpha : float, optional + Confidence degree between 0 and 1. Default is 95% confidence. + Note that `alpha` is symmetric around 0.5, i.e. both 0.1 and 0.9 are + interpreted as "find the 90% confidence interval". + method : {'joint', 'separate'}, optional + Method to be used for computing estimate for intercept. + Following methods are supported, + + * 'joint': Uses np.median(y - slope * x) as intercept. + * 'separate': Uses np.median(y) - slope * np.median(x) + as intercept. + + The default is 'separate'. + + .. versionadded:: 1.8.0 + + Returns + ------- + result : ``TheilslopesResult`` instance + The return value is an object with the following attributes: + + slope : float + Theil slope. + intercept : float + Intercept of the Theil line. + low_slope : float + Lower bound of the confidence interval on `slope`. + high_slope : float + Upper bound of the confidence interval on `slope`. + + See Also + -------- + siegelslopes : a similar technique using repeated medians + + + Notes + ----- + For more details on `theilslopes`, see `scipy.stats.theilslopes`. + + """ + y = ma.asarray(y).flatten() + if x is None: + x = ma.arange(len(y), dtype=float) + else: + x = ma.asarray(x).flatten() + if len(x) != len(y): + raise ValueError(f"Incompatible lengths ! ({len(y)}<>{len(x)})") + + m = ma.mask_or(ma.getmask(x), ma.getmask(y)) + y._mask = x._mask = m + # Disregard any masked elements of x or y + y = y.compressed() + x = x.compressed().astype(float) + # We now have unmasked arrays so can use `scipy.stats.theilslopes` + return stats_theilslopes(y, x, alpha=alpha, method=method) + + +def siegelslopes(y, x=None, method="hierarchical"): + r""" + Computes the Siegel estimator for a set of points (x, y). + + `siegelslopes` implements a method for robust linear regression + using repeated medians to fit a line to the points (x, y). + The method is robust to outliers with an asymptotic breakdown point + of 50%. + + Parameters + ---------- + y : array_like + Dependent variable. + x : array_like or None, optional + Independent variable. If None, use ``arange(len(y))`` instead. 
+ method : {'hierarchical', 'separate'} + If 'hierarchical', estimate the intercept using the estimated + slope ``slope`` (default option). + If 'separate', estimate the intercept independent of the estimated + slope. See Notes for details. + + Returns + ------- + result : ``SiegelslopesResult`` instance + The return value is an object with the following attributes: + + slope : float + Estimate of the slope of the regression line. + intercept : float + Estimate of the intercept of the regression line. + + See Also + -------- + theilslopes : a similar technique without repeated medians + + Notes + ----- + For more details on `siegelslopes`, see `scipy.stats.siegelslopes`. + + """ + y = ma.asarray(y).ravel() + if x is None: + x = ma.arange(len(y), dtype=float) + else: + x = ma.asarray(x).ravel() + if len(x) != len(y): + raise ValueError(f"Incompatible lengths ! ({len(y)}<>{len(x)})") + + m = ma.mask_or(ma.getmask(x), ma.getmask(y)) + y._mask = x._mask = m + # Disregard any masked elements of x or y + y = y.compressed() + x = x.compressed().astype(float) + # We now have unmasked arrays so can use `scipy.stats.siegelslopes` + return stats_siegelslopes(y, x, method=method) + + +SenSeasonalSlopesResult = _make_tuple_bunch('SenSeasonalSlopesResult', + ['intra_slope', 'inter_slope']) + + +def sen_seasonal_slopes(x): + r""" + Computes seasonal Theil-Sen and Kendall slope estimators. + + The seasonal generalization of Sen's slope computes the slopes between all + pairs of values within a "season" (column) of a 2D array. It returns an + array containing the median of these "within-season" slopes for each + season (the Theil-Sen slope estimator of each season), and it returns the + median of the within-season slopes across all seasons (the seasonal Kendall + slope estimator). + + Parameters + ---------- + x : 2D array_like + Each column of `x` contains measurements of the dependent variable + within a season. The independent variable (usually time) of each season + is assumed to be ``np.arange(x.shape[0])``. + + Returns + ------- + result : ``SenSeasonalSlopesResult`` instance + The return value is an object with the following attributes: + + intra_slope : ndarray + For each season, the Theil-Sen slope estimator: the median of + within-season slopes. + inter_slope : float + The seasonal Kendall slope estimateor: the median of within-season + slopes *across all* seasons. + + See Also + -------- + theilslopes : the analogous function for non-seasonal data + scipy.stats.theilslopes : non-seasonal slopes for non-masked arrays + + Notes + ----- + The slopes :math:`d_{ijk}` within season :math:`i` are: + + .. math:: + + d_{ijk} = \frac{x_{ij} - x_{ik}} + {j - k} + + for pairs of distinct integer indices :math:`j, k` of :math:`x`. + + Element :math:`i` of the returned `intra_slope` array is the median of the + :math:`d_{ijk}` over all :math:`j < k`; this is the Theil-Sen slope + estimator of season :math:`i`. The returned `inter_slope` value, better + known as the seasonal Kendall slope estimator, is the median of the + :math:`d_{ijk}` over all :math:`i, j, k`. + + References + ---------- + .. [1] Hirsch, Robert M., James R. Slack, and Richard A. Smith. + "Techniques of trend analysis for monthly water quality data." + *Water Resources Research* 18.1 (1982): 107-121. 
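    As an aside on ``theilslopes`` and ``siegelslopes`` above (illustrative
    values): both wrappers simply drop the jointly masked points and delegate
    to the non-masked estimators::

        import numpy as np
        from scipy import stats
        x = np.ma.array([0., 1., 2., 3.], mask=[0, 0, 1, 0])
        y = np.ma.array([0., 1., 9., 3.])
        res_masked = stats.mstats.siegelslopes(y, x)
        res_plain = stats.siegelslopes([0., 1., 3.], [0., 1., 3.])
        np.isclose(res_masked.slope, res_plain.slope)   # expected: True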
+ + Examples + -------- + Suppose we have 100 observations of a dependent variable for each of four + seasons: + + >>> import numpy as np + >>> rng = np.random.default_rng() + >>> x = rng.random(size=(100, 4)) + + We compute the seasonal slopes as: + + >>> from scipy import stats + >>> intra_slope, inter_slope = stats.mstats.sen_seasonal_slopes(x) + + If we define a function to compute all slopes between observations within + a season: + + >>> def dijk(yi): + ... n = len(yi) + ... x = np.arange(n) + ... dy = yi - yi[:, np.newaxis] + ... dx = x - x[:, np.newaxis] + ... # we only want unique pairs of distinct indices + ... mask = np.triu(np.ones((n, n), dtype=bool), k=1) + ... return dy[mask]/dx[mask] + + then element ``i`` of ``intra_slope`` is the median of ``dijk[x[:, i]]``: + + >>> i = 2 + >>> np.allclose(np.median(dijk(x[:, i])), intra_slope[i]) + True + + and ``inter_slope`` is the median of the values returned by ``dijk`` for + all seasons: + + >>> all_slopes = np.concatenate([dijk(x[:, i]) for i in range(x.shape[1])]) + >>> np.allclose(np.median(all_slopes), inter_slope) + True + + Because the data are randomly generated, we would expect the median slopes + to be nearly zero both within and across all seasons, and indeed they are: + + >>> intra_slope.data + array([ 0.00124504, -0.00277761, -0.00221245, -0.00036338]) + >>> inter_slope + -0.0010511779872922058 + + """ + x = ma.array(x, subok=True, copy=False, ndmin=2) + (n,_) = x.shape + # Get list of slopes per season + szn_slopes = ma.vstack([(x[i+1:]-x[i])/np.arange(1,n-i)[:,None] + for i in range(n)]) + szn_medslopes = ma.median(szn_slopes, axis=0) + medslope = ma.median(szn_slopes, axis=None) + return SenSeasonalSlopesResult(szn_medslopes, medslope) + + +Ttest_1sampResult = namedtuple('Ttest_1sampResult', ('statistic', 'pvalue')) + + +def ttest_1samp(a, popmean, axis=0, alternative='two-sided'): + """ + Calculates the T-test for the mean of ONE group of scores. + + Parameters + ---------- + a : array_like + sample observation + popmean : float or array_like + expected value in null hypothesis, if array_like than it must have the + same shape as `a` excluding the axis dimension + axis : int or None, optional + Axis along which to compute test. If None, compute over the whole + array `a`. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the mean of the underlying distribution of the sample + is different than the given population mean (`popmean`) + * 'less': the mean of the underlying distribution of the sample is + less than the given population mean (`popmean`) + * 'greater': the mean of the underlying distribution of the sample is + greater than the given population mean (`popmean`) + + .. versionadded:: 1.7.0 + + Returns + ------- + statistic : float or array + t-statistic + pvalue : float or array + The p-value + + Notes + ----- + For more details on `ttest_1samp`, see `scipy.stats.ttest_1samp`. 
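    A brief usage sketch (illustrative data); masked observations do not
    count toward the sample size:

    >>> import numpy as np
    >>> from scipy.stats import mstats
    >>> a = np.ma.array([5.1, 4.9, 6.2, 5.7, 99.0], mask=[0, 0, 0, 0, 1])
    >>> res = mstats.ttest_1samp(a, popmean=5.0)
    >>> res.statistic, res.pvalue  # doctest: +SKIP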
+ + """ + a, axis = _chk_asarray(a, axis) + if a.size == 0: + return (np.nan, np.nan) + + x = a.mean(axis=axis) + v = a.var(axis=axis, ddof=1) + n = a.count(axis=axis) + # force df to be an array for masked division not to throw a warning + df = ma.asanyarray(n - 1.0) + svar = ((n - 1.0) * v) / df + with np.errstate(divide='ignore', invalid='ignore'): + t = (x - popmean) / ma.sqrt(svar / n) + + t, prob = _ttest_finish(df, t, alternative) + return Ttest_1sampResult(t, prob) + + +ttest_onesamp = ttest_1samp + + +Ttest_indResult = namedtuple('Ttest_indResult', ('statistic', 'pvalue')) + + +def ttest_ind(a, b, axis=0, equal_var=True, alternative='two-sided'): + """ + Calculates the T-test for the means of TWO INDEPENDENT samples of scores. + + Parameters + ---------- + a, b : array_like + The arrays must have the same shape, except in the dimension + corresponding to `axis` (the first, by default). + axis : int or None, optional + Axis along which to compute test. If None, compute over the whole + arrays, `a`, and `b`. + equal_var : bool, optional + If True, perform a standard independent 2 sample test that assumes equal + population variances. + If False, perform Welch's t-test, which does not assume equal population + variance. + + .. versionadded:: 0.17.0 + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the means of the distributions underlying the samples + are unequal. + * 'less': the mean of the distribution underlying the first sample + is less than the mean of the distribution underlying the second + sample. + * 'greater': the mean of the distribution underlying the first + sample is greater than the mean of the distribution underlying + the second sample. + + .. versionadded:: 1.7.0 + + Returns + ------- + statistic : float or array + The calculated t-statistic. + pvalue : float or array + The p-value. + + Notes + ----- + For more details on `ttest_ind`, see `scipy.stats.ttest_ind`. + + """ + a, b, axis = _chk2_asarray(a, b, axis) + + if a.size == 0 or b.size == 0: + return Ttest_indResult(np.nan, np.nan) + + (x1, x2) = (a.mean(axis), b.mean(axis)) + (v1, v2) = (a.var(axis=axis, ddof=1), b.var(axis=axis, ddof=1)) + (n1, n2) = (a.count(axis), b.count(axis)) + + if equal_var: + # force df to be an array for masked division not to throw a warning + df = ma.asanyarray(n1 + n2 - 2.0) + svar = ((n1-1)*v1+(n2-1)*v2) / df + denom = ma.sqrt(svar*(1.0/n1 + 1.0/n2)) # n-D computation here! + else: + vn1 = v1/n1 + vn2 = v2/n2 + with np.errstate(divide='ignore', invalid='ignore'): + df = (vn1 + vn2)**2 / (vn1**2 / (n1 - 1) + vn2**2 / (n2 - 1)) + + # If df is undefined, variances are zero. + # It doesn't matter what df is as long as it is not NaN. + df = np.where(np.isnan(df), 1, df) + denom = ma.sqrt(vn1 + vn2) + + with np.errstate(divide='ignore', invalid='ignore'): + t = (x1-x2) / denom + + t, prob = _ttest_finish(df, t, alternative) + return Ttest_indResult(t, prob) + + +Ttest_relResult = namedtuple('Ttest_relResult', ('statistic', 'pvalue')) + + +def ttest_rel(a, b, axis=0, alternative='two-sided'): + """ + Calculates the T-test on TWO RELATED samples of scores, a and b. + + Parameters + ---------- + a, b : array_like + The arrays must have the same shape. + axis : int or None, optional + Axis along which to compute test. If None, compute over the whole + arrays, `a`, and `b`. 
+ alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the means of the distributions underlying the samples + are unequal. + * 'less': the mean of the distribution underlying the first sample + is less than the mean of the distribution underlying the second + sample. + * 'greater': the mean of the distribution underlying the first + sample is greater than the mean of the distribution underlying + the second sample. + + .. versionadded:: 1.7.0 + + Returns + ------- + statistic : float or array + t-statistic + pvalue : float or array + two-tailed p-value + + Notes + ----- + For more details on `ttest_rel`, see `scipy.stats.ttest_rel`. + + """ + a, b, axis = _chk2_asarray(a, b, axis) + if len(a) != len(b): + raise ValueError('unequal length arrays') + + if a.size == 0 or b.size == 0: + return Ttest_relResult(np.nan, np.nan) + + n = a.count(axis) + df = ma.asanyarray(n-1.0) + d = (a-b).astype('d') + dm = d.mean(axis) + v = d.var(axis=axis, ddof=1) + denom = ma.sqrt(v / n) + with np.errstate(divide='ignore', invalid='ignore'): + t = dm / denom + + t, prob = _ttest_finish(df, t, alternative) + return Ttest_relResult(t, prob) + + +MannwhitneyuResult = namedtuple('MannwhitneyuResult', ('statistic', + 'pvalue')) + + +def mannwhitneyu(x,y, use_continuity=True): + """ + Computes the Mann-Whitney statistic + + Missing values in `x` and/or `y` are discarded. + + Parameters + ---------- + x : sequence + Input + y : sequence + Input + use_continuity : {True, False}, optional + Whether a continuity correction (1/2.) should be taken into account. + + Returns + ------- + statistic : float + The minimum of the Mann-Whitney statistics + pvalue : float + Approximate two-sided p-value assuming a normal distribution. + + """ + x = ma.asarray(x).compressed().view(ndarray) + y = ma.asarray(y).compressed().view(ndarray) + ranks = rankdata(np.concatenate([x,y])) + (nx, ny) = (len(x), len(y)) + nt = nx + ny + U = ranks[:nx].sum() - nx*(nx+1)/2. + U = max(U, nx*ny - U) + u = nx*ny - U + + mu = (nx*ny)/2. + sigsq = (nt**3 - nt)/12. + ties = count_tied_groups(ranks) + sigsq -= sum(v*(k**3-k) for (k,v) in ties.items())/12. + sigsq *= nx*ny/float(nt*(nt-1)) + + if use_continuity: + z = (U - 1/2. - mu) / ma.sqrt(sigsq) + else: + z = (U - mu) / ma.sqrt(sigsq) + + prob = special.erfc(abs(z)/np.sqrt(2)) + return MannwhitneyuResult(u, prob) + + +KruskalResult = namedtuple('KruskalResult', ('statistic', 'pvalue')) + + +def kruskal(*args): + """ + Compute the Kruskal-Wallis H-test for independent samples + + Parameters + ---------- + sample1, sample2, ... : array_like + Two or more arrays with the sample measurements can be given as + arguments. + + Returns + ------- + statistic : float + The Kruskal-Wallis H statistic, corrected for ties + pvalue : float + The p-value for the test using the assumption that H has a chi + square distribution + + Notes + ----- + For more details on `kruskal`, see `scipy.stats.kruskal`. + + Examples + -------- + >>> from scipy.stats.mstats import kruskal + + Random samples from three different brands of batteries were tested + to see how long the charge lasted. Results were as follows: + + >>> a = [6.3, 5.4, 5.7, 5.2, 5.0] + >>> b = [6.9, 7.0, 6.1, 7.9] + >>> c = [7.2, 6.9, 6.1, 6.5] + + Test the hypothesis that the distribution functions for all of the brands' + durations are identical. Use 5% level of significance. 
+ + >>> kruskal(a, b, c) + KruskalResult(statistic=7.113812154696133, pvalue=0.028526948491942164) + + The null hypothesis is rejected at the 5% level of significance + because the returned p-value is less than the critical value of 5%. + + """ + output = argstoarray(*args) + ranks = ma.masked_equal(rankdata(output, use_missing=False), 0) + sumrk = ranks.sum(-1) + ngrp = ranks.count(-1) + ntot = ranks.count() + H = 12./(ntot*(ntot+1)) * (sumrk**2/ngrp).sum() - 3*(ntot+1) + # Tie correction + ties = count_tied_groups(ranks) + T = 1. - sum(v*(k**3-k) for (k,v) in ties.items())/float(ntot**3-ntot) + if T == 0: + raise ValueError('All numbers are identical in kruskal') + + H /= T + df = len(output) - 1 + prob = distributions.chi2.sf(H, df) + return KruskalResult(H, prob) + + +kruskalwallis = kruskal + + +@_rename_parameter("mode", "method") +def ks_1samp(x, cdf, args=(), alternative="two-sided", method='auto'): + """ + Computes the Kolmogorov-Smirnov test on one sample of masked values. + + Missing values in `x` are discarded. + + Parameters + ---------- + x : array_like + a 1-D array of observations of random variables. + cdf : str or callable + If a string, it should be the name of a distribution in `scipy.stats`. + If a callable, that callable is used to calculate the cdf. + args : tuple, sequence, optional + Distribution parameters, used if `cdf` is a string. + alternative : {'two-sided', 'less', 'greater'}, optional + Indicates the alternative hypothesis. Default is 'two-sided'. + method : {'auto', 'exact', 'asymp'}, optional + Defines the method used for calculating the p-value. + The following options are available (default is 'auto'): + + * 'auto' : use 'exact' for small size arrays, 'asymp' for large + * 'exact' : use approximation to exact distribution of test statistic + * 'asymp' : use asymptotic distribution of test statistic + + Returns + ------- + d : float + Value of the Kolmogorov Smirnov test + p : float + Corresponding p-value. + + """ + alternative = {'t': 'two-sided', 'g': 'greater', 'l': 'less'}.get( + alternative.lower()[0], alternative) + return scipy.stats._stats_py.ks_1samp( + x, cdf, args=args, alternative=alternative, method=method) + + +@_rename_parameter("mode", "method") +def ks_2samp(data1, data2, alternative="two-sided", method='auto'): + """ + Computes the Kolmogorov-Smirnov test on two samples. + + Missing values in `x` and/or `y` are discarded. + + Parameters + ---------- + data1 : array_like + First data set + data2 : array_like + Second data set + alternative : {'two-sided', 'less', 'greater'}, optional + Indicates the alternative hypothesis. Default is 'two-sided'. + method : {'auto', 'exact', 'asymp'}, optional + Defines the method used for calculating the p-value. + The following options are available (default is 'auto'): + + * 'auto' : use 'exact' for small size arrays, 'asymp' for large + * 'exact' : use approximation to exact distribution of test statistic + * 'asymp' : use asymptotic distribution of test statistic + + Returns + ------- + d : float + Value of the Kolmogorov Smirnov test + p : float + Corresponding p-value. + + """ + # Ideally this would be accomplished by + # ks_2samp = scipy.stats._stats_py.ks_2samp + # but the circular dependencies between _mstats_basic and stats prevent that. 
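    # Aside (illustration only): the mapping below accepts single-letter
    # abbreviations of `alternative` and passes already spelled-out (or
    # unrecognised) values through unchanged, e.g.:
    #
    #     abbrev = {'t': 'two-sided', 'g': 'greater', 'l': 'less'}
    #     abbrev.get('g'.lower()[0], 'g')                    # -> 'greater'
    #     abbrev.get('two-sided'.lower()[0], 'two-sided')    # -> 'two-sided'
    #     abbrev.get('x'.lower()[0], 'x')                    # -> 'x' (unchanged)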
+ alternative = {'t': 'two-sided', 'g': 'greater', 'l': 'less'}.get( + alternative.lower()[0], alternative) + return scipy.stats._stats_py.ks_2samp(data1, data2, + alternative=alternative, + method=method) + + +ks_twosamp = ks_2samp + + +@_rename_parameter("mode", "method") +def kstest(data1, data2, args=(), alternative='two-sided', method='auto'): + """ + + Parameters + ---------- + data1 : array_like + data2 : str, callable or array_like + args : tuple, sequence, optional + Distribution parameters, used if `data1` or `data2` are strings. + alternative : str, as documented in stats.kstest + method : str, as documented in stats.kstest + + Returns + ------- + tuple of (K-S statistic, probability) + + """ + return scipy.stats._stats_py.kstest(data1, data2, args, + alternative=alternative, method=method) + + +def trima(a, limits=None, inclusive=(True,True)): + """ + Trims an array by masking the data outside some given limits. + + Returns a masked version of the input array. + + Parameters + ---------- + a : array_like + Input array. + limits : {None, tuple}, optional + Tuple of (lower limit, upper limit) in absolute values. + Values of the input array lower (greater) than the lower (upper) limit + will be masked. A limit is None indicates an open interval. + inclusive : (bool, bool) tuple, optional + Tuple of (lower flag, upper flag), indicating whether values exactly + equal to the lower (upper) limit are allowed. + + Examples + -------- + >>> from scipy.stats.mstats import trima + >>> import numpy as np + + >>> a = np.arange(10) + + The interval is left-closed and right-open, i.e., `[2, 8)`. + Trim the array by keeping only values in the interval. + + >>> trima(a, limits=(2, 8), inclusive=(True, False)) + masked_array(data=[--, --, 2, 3, 4, 5, 6, 7, --, --], + mask=[ True, True, False, False, False, False, False, False, + True, True], + fill_value=999999) + + """ + a = ma.asarray(a) + a.unshare_mask() + if (limits is None) or (limits == (None, None)): + return a + + (lower_lim, upper_lim) = limits + (lower_in, upper_in) = inclusive + condition = False + if lower_lim is not None: + if lower_in: + condition |= (a < lower_lim) + else: + condition |= (a <= lower_lim) + + if upper_lim is not None: + if upper_in: + condition |= (a > upper_lim) + else: + condition |= (a >= upper_lim) + + a[condition.filled(True)] = masked + return a + + +def trimr(a, limits=None, inclusive=(True, True), axis=None): + """ + Trims an array by masking some proportion of the data on each end. + Returns a masked version of the input array. + + Parameters + ---------- + a : sequence + Input array. + limits : {None, tuple}, optional + Tuple of the percentages to cut on each side of the array, with respect + to the number of unmasked data, as floats between 0. and 1. + Noting n the number of unmasked data before trimming, the + (n*limits[0])th smallest data and the (n*limits[1])th largest data are + masked, and the total number of unmasked data after trimming is + n*(1.-sum(limits)). The value of one limit can be set to None to + indicate an open interval. + inclusive : {(True,True) tuple}, optional + Tuple of flags indicating whether the number of data being masked on + the left (right) end should be truncated (True) or rounded (False) to + integers. + axis : {None,int}, optional + Axis along which to trim. If None, the whole array is trimmed, but its + shape is maintained. 
+ + """ + def _trimr1D(a, low_limit, up_limit, low_inclusive, up_inclusive): + n = a.count() + idx = a.argsort() + if low_limit: + if low_inclusive: + lowidx = int(low_limit*n) + else: + lowidx = int(np.round(low_limit*n)) + a[idx[:lowidx]] = masked + if up_limit is not None: + if up_inclusive: + upidx = n - int(n*up_limit) + else: + upidx = n - int(np.round(n*up_limit)) + a[idx[upidx:]] = masked + return a + + a = ma.asarray(a) + a.unshare_mask() + if limits is None: + return a + + # Check the limits + (lolim, uplim) = limits + errmsg = "The proportion to cut from the %s should be between 0. and 1." + if lolim is not None: + if lolim > 1. or lolim < 0: + raise ValueError(errmsg % 'beginning' + "(got %s)" % lolim) + if uplim is not None: + if uplim > 1. or uplim < 0: + raise ValueError(errmsg % 'end' + "(got %s)" % uplim) + + (loinc, upinc) = inclusive + + if axis is None: + shp = a.shape + return _trimr1D(a.ravel(),lolim,uplim,loinc,upinc).reshape(shp) + else: + return ma.apply_along_axis(_trimr1D, axis, a, lolim,uplim,loinc,upinc) + + +trimdoc = """ + Parameters + ---------- + a : sequence + Input array + limits : {None, tuple}, optional + If `relative` is False, tuple (lower limit, upper limit) in absolute values. + Values of the input array lower (greater) than the lower (upper) limit are + masked. + + If `relative` is True, tuple (lower percentage, upper percentage) to cut + on each side of the array, with respect to the number of unmasked data. + + Noting n the number of unmasked data before trimming, the (n*limits[0])th + smallest data and the (n*limits[1])th largest data are masked, and the + total number of unmasked data after trimming is n*(1.-sum(limits)) + In each case, the value of one limit can be set to None to indicate an + open interval. + + If limits is None, no trimming is performed + inclusive : {(bool, bool) tuple}, optional + If `relative` is False, tuple indicating whether values exactly equal + to the absolute limits are allowed. + If `relative` is True, tuple indicating whether the number of data + being masked on each side should be rounded (True) or truncated + (False). + relative : bool, optional + Whether to consider the limits as absolute values (False) or proportions + to cut (True). + axis : int, optional + Axis along which to trim. +""" + + +def trim(a, limits=None, inclusive=(True,True), relative=False, axis=None): + """ + Trims an array by masking the data outside some given limits. + + Returns a masked version of the input array. + + %s + + Examples + -------- + >>> from scipy.stats.mstats import trim + >>> z = [ 1, 2, 3, 4, 5, 6, 7, 8, 9,10] + >>> print(trim(z,(3,8))) + [-- -- 3 4 5 6 7 8 -- --] + >>> print(trim(z,(0.1,0.2),relative=True)) + [-- 2 3 4 5 6 7 8 -- --] + + """ + if relative: + return trimr(a, limits=limits, inclusive=inclusive, axis=axis) + else: + return trima(a, limits=limits, inclusive=inclusive) + + +if trim.__doc__: + trim.__doc__ = trim.__doc__ % trimdoc + + +def trimboth(data, proportiontocut=0.2, inclusive=(True,True), axis=None): + """ + Trims the smallest and largest data values. + + Trims the `data` by masking the ``int(proportiontocut * n)`` smallest and + ``int(proportiontocut * n)`` largest values of data along the given axis, + where n is the number of unmasked values before trimming. + + Parameters + ---------- + data : ndarray + Data to trim. + proportiontocut : float, optional + Percentage of trimming (as a float between 0 and 1). 
+ If n is the number of unmasked values before trimming, the number of + values after trimming is ``(1 - 2*proportiontocut) * n``. + Default is 0.2. + inclusive : {(bool, bool) tuple}, optional + Tuple indicating whether the number of data being masked on each side + should be rounded (True) or truncated (False). + axis : int, optional + Axis along which to perform the trimming. + If None, the input array is first flattened. + + """ + return trimr(data, limits=(proportiontocut,proportiontocut), + inclusive=inclusive, axis=axis) + + +def trimtail(data, proportiontocut=0.2, tail='left', inclusive=(True,True), + axis=None): + """ + Trims the data by masking values from one tail. + + Parameters + ---------- + data : array_like + Data to trim. + proportiontocut : float, optional + Percentage of trimming. If n is the number of unmasked values + before trimming, the number of values after trimming is + ``(1 - proportiontocut) * n``. Default is 0.2. + tail : {'left','right'}, optional + If 'left' the `proportiontocut` lowest values will be masked. + If 'right' the `proportiontocut` highest values will be masked. + Default is 'left'. + inclusive : {(bool, bool) tuple}, optional + Tuple indicating whether the number of data being masked on each side + should be rounded (True) or truncated (False). Default is + (True, True). + axis : int, optional + Axis along which to perform the trimming. + If None, the input array is first flattened. Default is None. + + Returns + ------- + trimtail : ndarray + Returned array of same shape as `data` with masked tail values. + + """ + tail = str(tail).lower()[0] + if tail == 'l': + limits = (proportiontocut,None) + elif tail == 'r': + limits = (None, proportiontocut) + else: + raise TypeError("The tail argument should be in ('left','right')") + + return trimr(data, limits=limits, axis=axis, inclusive=inclusive) + + +trim1 = trimtail + + +def trimmed_mean(a, limits=(0.1,0.1), inclusive=(1,1), relative=True, + axis=None): + """Returns the trimmed mean of the data along the given axis. + + %s + + """ + if (not isinstance(limits,tuple)) and isinstance(limits,float): + limits = (limits, limits) + if relative: + return trimr(a,limits=limits,inclusive=inclusive,axis=axis).mean(axis=axis) + else: + return trima(a,limits=limits,inclusive=inclusive).mean(axis=axis) + + +if trimmed_mean.__doc__: + trimmed_mean.__doc__ = trimmed_mean.__doc__ % trimdoc + + +def trimmed_var(a, limits=(0.1,0.1), inclusive=(1,1), relative=True, + axis=None, ddof=0): + """Returns the trimmed variance of the data along the given axis. + + %s + ddof : {0,integer}, optional + Means Delta Degrees of Freedom. The denominator used during computations + is (n-ddof). DDOF=0 corresponds to a biased estimate, DDOF=1 to an un- + biased estimate of the variance. + + """ + if (not isinstance(limits,tuple)) and isinstance(limits,float): + limits = (limits, limits) + if relative: + out = trimr(a,limits=limits, inclusive=inclusive,axis=axis) + else: + out = trima(a,limits=limits,inclusive=inclusive) + + return out.var(axis=axis, ddof=ddof) + + +if trimmed_var.__doc__: + trimmed_var.__doc__ = trimmed_var.__doc__ % trimdoc + + +def trimmed_std(a, limits=(0.1,0.1), inclusive=(1,1), relative=True, + axis=None, ddof=0): + """Returns the trimmed standard deviation of the data along the given axis. + + %s + ddof : {0,integer}, optional + Means Delta Degrees of Freedom. The denominator used during computations + is (n-ddof). DDOF=0 corresponds to a biased estimate, DDOF=1 to an un- + biased estimate of the variance. 
+ + """ + if (not isinstance(limits,tuple)) and isinstance(limits,float): + limits = (limits, limits) + if relative: + out = trimr(a,limits=limits,inclusive=inclusive,axis=axis) + else: + out = trima(a,limits=limits,inclusive=inclusive) + return out.std(axis=axis,ddof=ddof) + + +if trimmed_std.__doc__: + trimmed_std.__doc__ = trimmed_std.__doc__ % trimdoc + + +def trimmed_stde(a, limits=(0.1,0.1), inclusive=(1,1), axis=None): + """ + Returns the standard error of the trimmed mean along the given axis. + + Parameters + ---------- + a : sequence + Input array + limits : {(0.1,0.1), tuple of float}, optional + tuple (lower percentage, upper percentage) to cut on each side of the + array, with respect to the number of unmasked data. + + If n is the number of unmasked data before trimming, the values + smaller than ``n * limits[0]`` and the values larger than + ``n * `limits[1]`` are masked, and the total number of unmasked + data after trimming is ``n * (1.-sum(limits))``. In each case, + the value of one limit can be set to None to indicate an open interval. + If `limits` is None, no trimming is performed. + inclusive : {(bool, bool) tuple} optional + Tuple indicating whether the number of data being masked on each side + should be rounded (True) or truncated (False). + axis : int, optional + Axis along which to trim. + + Returns + ------- + trimmed_stde : scalar or ndarray + + """ + def _trimmed_stde_1D(a, low_limit, up_limit, low_inclusive, up_inclusive): + "Returns the standard error of the trimmed mean for a 1D input data." + n = a.count() + idx = a.argsort() + if low_limit: + if low_inclusive: + lowidx = int(low_limit*n) + else: + lowidx = np.round(low_limit*n) + a[idx[:lowidx]] = masked + if up_limit is not None: + if up_inclusive: + upidx = n - int(n*up_limit) + else: + upidx = n - np.round(n*up_limit) + a[idx[upidx:]] = masked + a[idx[:lowidx]] = a[idx[lowidx]] + a[idx[upidx:]] = a[idx[upidx-1]] + winstd = a.std(ddof=1) + return winstd / ((1-low_limit-up_limit)*np.sqrt(len(a))) + + a = ma.array(a, copy=True, subok=True) + a.unshare_mask() + if limits is None: + return a.std(axis=axis,ddof=1)/ma.sqrt(a.count(axis)) + if (not isinstance(limits,tuple)) and isinstance(limits,float): + limits = (limits, limits) + + # Check the limits + (lolim, uplim) = limits + errmsg = "The proportion to cut from the %s should be between 0. and 1." + if lolim is not None: + if lolim > 1. or lolim < 0: + raise ValueError(errmsg % 'beginning' + "(got %s)" % lolim) + if uplim is not None: + if uplim > 1. or uplim < 0: + raise ValueError(errmsg % 'end' + "(got %s)" % uplim) + + (loinc, upinc) = inclusive + if (axis is None): + return _trimmed_stde_1D(a.ravel(),lolim,uplim,loinc,upinc) + else: + if a.ndim > 2: + raise ValueError("Array 'a' must be at most two dimensional, " + "but got a.ndim = %d" % a.ndim) + return ma.apply_along_axis(_trimmed_stde_1D, axis, a, + lolim,uplim,loinc,upinc) + + +def _mask_to_limits(a, limits, inclusive): + """Mask an array for values outside of given limits. + + This is primarily a utility function. + + Parameters + ---------- + a : array + limits : (float or None, float or None) + A tuple consisting of the (lower limit, upper limit). Values in the + input array less than the lower limit or greater than the upper limit + will be masked out. None implies no limit. + inclusive : (bool, bool) + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to lower or upper are allowed. + + Returns + ------- + A MaskedArray. 
+ + Raises + ------ + A ValueError if there are no values within the given limits. + """ + lower_limit, upper_limit = limits + lower_include, upper_include = inclusive + am = ma.MaskedArray(a) + if lower_limit is not None: + if lower_include: + am = ma.masked_less(am, lower_limit) + else: + am = ma.masked_less_equal(am, lower_limit) + + if upper_limit is not None: + if upper_include: + am = ma.masked_greater(am, upper_limit) + else: + am = ma.masked_greater_equal(am, upper_limit) + + if am.count() == 0: + raise ValueError("No array values within given limits") + + return am + + +def tmean(a, limits=None, inclusive=(True, True), axis=None): + """ + Compute the trimmed mean. + + Parameters + ---------- + a : array_like + Array of values. + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None (default), then all + values are used. Either of the limit values in the tuple can also be + None representing a half-open interval. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + axis : int or None, optional + Axis along which to operate. If None, compute over the + whole array. Default is None. + + Returns + ------- + tmean : float + + Notes + ----- + For more details on `tmean`, see `scipy.stats.tmean`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import mstats + >>> a = np.array([[6, 8, 3, 0], + ... [3, 9, 1, 2], + ... [8, 7, 8, 2], + ... [5, 6, 0, 2], + ... [4, 5, 5, 2]]) + ... + ... + >>> mstats.tmean(a, (2,5)) + 3.3 + >>> mstats.tmean(a, (2,5), axis=0) + masked_array(data=[4.0, 5.0, 4.0, 2.0], + mask=[False, False, False, False], + fill_value=1e+20) + + """ + return trima(a, limits=limits, inclusive=inclusive).mean(axis=axis) + + +def tvar(a, limits=None, inclusive=(True, True), axis=0, ddof=1): + """ + Compute the trimmed variance + + This function computes the sample variance of an array of values, + while ignoring values which are outside of given `limits`. + + Parameters + ---------- + a : array_like + Array of values. + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None, then all values are + used. Either of the limit values in the tuple can also be None + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + axis : int or None, optional + Axis along which to operate. If None, compute over the + whole array. Default is zero. + ddof : int, optional + Delta degrees of freedom. Default is 1. + + Returns + ------- + tvar : float + Trimmed variance. + + Notes + ----- + For more details on `tvar`, see `scipy.stats.tvar`. + + """ + a = a.astype(float).ravel() + if limits is None: + n = (~a.mask).sum() # todo: better way to do that? + return np.ma.var(a) * n/(n-1.) 
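+ # Mask the values outside `limits`, then take the variance of the remaining
+ # data with the requested `ddof`.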
+ am = _mask_to_limits(a, limits=limits, inclusive=inclusive) + + return np.ma.var(am, axis=axis, ddof=ddof) + + +def tmin(a, lowerlimit=None, axis=0, inclusive=True): + """ + Compute the trimmed minimum + + Parameters + ---------- + a : array_like + array of values + lowerlimit : None or float, optional + Values in the input array less than the given limit will be ignored. + When lowerlimit is None, then all values are used. The default value + is None. + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over the + whole array `a`. + inclusive : {True, False}, optional + This flag determines whether values exactly equal to the lower limit + are included. The default value is True. + + Returns + ------- + tmin : float, int or ndarray + + Notes + ----- + For more details on `tmin`, see `scipy.stats.tmin`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import mstats + >>> a = np.array([[6, 8, 3, 0], + ... [3, 2, 1, 2], + ... [8, 1, 8, 2], + ... [5, 3, 0, 2], + ... [4, 7, 5, 2]]) + ... + >>> mstats.tmin(a, 5) + masked_array(data=[5, 7, 5, --], + mask=[False, False, False, True], + fill_value=999999) + + """ + a, axis = _chk_asarray(a, axis) + am = trima(a, (lowerlimit, None), (inclusive, False)) + return ma.minimum.reduce(am, axis) + + +def tmax(a, upperlimit=None, axis=0, inclusive=True): + """ + Compute the trimmed maximum + + This function computes the maximum value of an array along a given axis, + while ignoring values larger than a specified upper limit. + + Parameters + ---------- + a : array_like + array of values + upperlimit : None or float, optional + Values in the input array greater than the given limit will be ignored. + When upperlimit is None, then all values are used. The default value + is None. + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over the + whole array `a`. + inclusive : {True, False}, optional + This flag determines whether values exactly equal to the upper limit + are included. The default value is True. + + Returns + ------- + tmax : float, int or ndarray + + Notes + ----- + For more details on `tmax`, see `scipy.stats.tmax`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import mstats + >>> a = np.array([[6, 8, 3, 0], + ... [3, 9, 1, 2], + ... [8, 7, 8, 2], + ... [5, 6, 0, 2], + ... [4, 5, 5, 2]]) + ... + ... + >>> mstats.tmax(a, 4) + masked_array(data=[4, --, 3, 2], + mask=[False, True, False, False], + fill_value=999999) + + """ + a, axis = _chk_asarray(a, axis) + am = trima(a, (None, upperlimit), (False, inclusive)) + return ma.maximum.reduce(am, axis) + + +def tsem(a, limits=None, inclusive=(True, True), axis=0, ddof=1): + """ + Compute the trimmed standard error of the mean. + + This function finds the standard error of the mean for given + values, ignoring values outside the given `limits`. + + Parameters + ---------- + a : array_like + array of values + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None, then all values are + used. Either of the limit values in the tuple can also be None + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). 
+ axis : int or None, optional + Axis along which to operate. If None, compute over the + whole array. Default is zero. + ddof : int, optional + Delta degrees of freedom. Default is 1. + + Returns + ------- + tsem : float + + Notes + ----- + For more details on `tsem`, see `scipy.stats.tsem`. + + """ + a = ma.asarray(a).ravel() + if limits is None: + n = float(a.count()) + return a.std(axis=axis, ddof=ddof)/ma.sqrt(n) + + am = trima(a.ravel(), limits, inclusive) + sd = np.sqrt(am.var(axis=axis, ddof=ddof)) + return sd / np.sqrt(am.count()) + + +def winsorize(a, limits=None, inclusive=(True, True), inplace=False, + axis=None, nan_policy='propagate'): + """Returns a Winsorized version of the input array. + + The (limits[0])th lowest values are set to the (limits[0])th percentile, + and the (limits[1])th highest values are set to the (1 - limits[1])th + percentile. + Masked values are skipped. + + + Parameters + ---------- + a : sequence + Input array. + limits : {None, tuple of float}, optional + Tuple of the percentages to cut on each side of the array, with respect + to the number of unmasked data, as floats between 0. and 1. + Noting n the number of unmasked data before trimming, the + (n*limits[0])th smallest data and the (n*limits[1])th largest data are + masked, and the total number of unmasked data after trimming + is n*(1.-sum(limits)) The value of one limit can be set to None to + indicate an open interval. + inclusive : {(True, True) tuple}, optional + Tuple indicating whether the number of data being masked on each side + should be truncated (True) or rounded (False). + inplace : {False, True}, optional + Whether to winsorize in place (True) or to use a copy (False) + axis : {None, int}, optional + Axis along which to trim. If None, the whole array is trimmed, but its + shape is maintained. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': allows nan values and may overwrite or propagate them + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Notes + ----- + This function is applied to reduce the effect of possibly spurious outliers + by limiting the extreme values. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats.mstats import winsorize + + A shuffled array contains integers from 1 to 10. + + >>> a = np.array([10, 4, 9, 8, 5, 3, 7, 2, 1, 6]) + + The 10% of the lowest value (i.e., `1`) and the 20% of the highest + values (i.e., `9` and `10`) are replaced. 
+ + >>> winsorize(a, limits=[0.1, 0.2]) + masked_array(data=[8, 4, 8, 8, 5, 3, 7, 2, 2, 6], + mask=False, + fill_value=999999) + + """ + def _winsorize1D(a, low_limit, up_limit, low_include, up_include, + contains_nan, nan_policy): + n = a.count() + idx = a.argsort() + if contains_nan: + nan_count = np.count_nonzero(np.isnan(a)) + if low_limit: + if low_include: + lowidx = int(low_limit * n) + else: + lowidx = np.round(low_limit * n).astype(int) + if contains_nan and nan_policy == 'omit': + lowidx = min(lowidx, n-nan_count-1) + a[idx[:lowidx]] = a[idx[lowidx]] + if up_limit is not None: + if up_include: + upidx = n - int(n * up_limit) + else: + upidx = n - np.round(n * up_limit).astype(int) + if contains_nan and nan_policy == 'omit': + a[idx[upidx:-nan_count]] = a[idx[upidx - 1]] + else: + a[idx[upidx:]] = a[idx[upidx - 1]] + return a + + contains_nan, nan_policy = _contains_nan(a, nan_policy) + # We are going to modify a: better make a copy + a = ma.array(a, copy=np.logical_not(inplace)) + + if limits is None: + return a + if (not isinstance(limits, tuple)) and isinstance(limits, float): + limits = (limits, limits) + + # Check the limits + (lolim, uplim) = limits + errmsg = "The proportion to cut from the %s should be between 0. and 1." + if lolim is not None: + if lolim > 1. or lolim < 0: + raise ValueError(errmsg % 'beginning' + "(got %s)" % lolim) + if uplim is not None: + if uplim > 1. or uplim < 0: + raise ValueError(errmsg % 'end' + "(got %s)" % uplim) + + (loinc, upinc) = inclusive + + if axis is None: + shp = a.shape + return _winsorize1D(a.ravel(), lolim, uplim, loinc, upinc, + contains_nan, nan_policy).reshape(shp) + else: + return ma.apply_along_axis(_winsorize1D, axis, a, lolim, uplim, loinc, + upinc, contains_nan, nan_policy) + + +def moment(a, moment=1, axis=0): + """ + Calculates the nth moment about the mean for a sample. + + Parameters + ---------- + a : array_like + data + moment : int, optional + order of central moment that is returned + axis : int or None, optional + Axis along which the central moment is computed. Default is 0. + If None, compute over the whole array `a`. + + Returns + ------- + n-th central moment : ndarray or float + The appropriate moment along the given axis or over all values if axis + is None. The denominator for the moment calculation is the number of + observations, no degrees of freedom correction is done. + + Notes + ----- + For more details about `moment`, see `scipy.stats.moment`. + + """ + a, axis = _chk_asarray(a, axis) + if a.size == 0: + moment_shape = list(a.shape) + del moment_shape[axis] + dtype = a.dtype.type if a.dtype.kind in 'fc' else np.float64 + # empty array, return nan(s) with shape matching `moment` + out_shape = (moment_shape if np.isscalar(moment) + else [len(moment)] + moment_shape) + if len(out_shape) == 0: + return dtype(np.nan) + else: + return ma.array(np.full(out_shape, np.nan, dtype=dtype)) + + # for array_like moment input, return a value for each. + if not np.isscalar(moment): + mean = a.mean(axis, keepdims=True) + mmnt = [_moment(a, i, axis, mean=mean) for i in moment] + return ma.array(mmnt) + else: + return _moment(a, moment, axis) + + +# Moment with optional pre-computed mean, equal to a.mean(axis, keepdims=True) +def _moment(a, moment, axis, *, mean=None): + if np.abs(moment - np.round(moment)) > 0: + raise ValueError("All moment parameters must be integers") + + if moment == 0 or moment == 1: + # By definition the zeroth moment about the mean is 1, and the first + # moment is 0. 
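+ # Build the output shape with the reduced axis removed and return the
+ # constant result directly, without touching the data.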
+ shape = list(a.shape) + del shape[axis] + dtype = a.dtype.type if a.dtype.kind in 'fc' else np.float64 + + if len(shape) == 0: + return dtype(1.0 if moment == 0 else 0.0) + else: + return (ma.ones(shape, dtype=dtype) if moment == 0 + else ma.zeros(shape, dtype=dtype)) + else: + # Exponentiation by squares: form exponent sequence + n_list = [moment] + current_n = moment + while current_n > 2: + if current_n % 2: + current_n = (current_n-1)/2 + else: + current_n /= 2 + n_list.append(current_n) + + # Starting point for exponentiation by squares + mean = a.mean(axis, keepdims=True) if mean is None else mean + a_zero_mean = a - mean + if n_list[-1] == 1: + s = a_zero_mean.copy() + else: + s = a_zero_mean**2 + + # Perform multiplications + for n in n_list[-2::-1]: + s = s**2 + if n % 2: + s *= a_zero_mean + return s.mean(axis) + + +def variation(a, axis=0, ddof=0): + """ + Compute the coefficient of variation. + + The coefficient of variation is the standard deviation divided by the + mean. This function is equivalent to:: + + np.std(x, axis=axis, ddof=ddof) / np.mean(x) + + The default for ``ddof`` is 0, but many definitions of the coefficient + of variation use the square root of the unbiased sample variance + for the sample standard deviation, which corresponds to ``ddof=1``. + + Parameters + ---------- + a : array_like + Input array. + axis : int or None, optional + Axis along which to calculate the coefficient of variation. Default + is 0. If None, compute over the whole array `a`. + ddof : int, optional + Delta degrees of freedom. Default is 0. + + Returns + ------- + variation : ndarray + The calculated variation along the requested axis. + + Notes + ----- + For more details about `variation`, see `scipy.stats.variation`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats.mstats import variation + >>> a = np.array([2,8,4]) + >>> variation(a) + 0.5345224838248487 + >>> b = np.array([2,8,3,4]) + >>> c = np.ma.masked_array(b, mask=[0,0,1,0]) + >>> variation(c) + 0.5345224838248487 + + In the example above, it can be seen that this works the same as + `scipy.stats.variation` except 'stats.mstats.variation' ignores masked + array elements. + + """ + a, axis = _chk_asarray(a, axis) + return a.std(axis, ddof=ddof)/a.mean(axis) + + +def skew(a, axis=0, bias=True): + """ + Computes the skewness of a data set. + + Parameters + ---------- + a : ndarray + data + axis : int or None, optional + Axis along which skewness is calculated. Default is 0. + If None, compute over the whole array `a`. + bias : bool, optional + If False, then the calculations are corrected for statistical bias. + + Returns + ------- + skewness : ndarray + The skewness of values along an axis, returning 0 where all values are + equal. + + Notes + ----- + For more details about `skew`, see `scipy.stats.skew`. 
+ + """ + a, axis = _chk_asarray(a,axis) + mean = a.mean(axis, keepdims=True) + m2 = _moment(a, 2, axis, mean=mean) + m3 = _moment(a, 3, axis, mean=mean) + zero = (m2 <= (np.finfo(m2.dtype).resolution * mean.squeeze(axis))**2) + with np.errstate(all='ignore'): + vals = ma.where(zero, 0, m3 / m2**1.5) + + if not bias and zero is not ma.masked and m2 is not ma.masked: + n = a.count(axis) + can_correct = ~zero & (n > 2) + if can_correct.any(): + n = np.extract(can_correct, n) + m2 = np.extract(can_correct, m2) + m3 = np.extract(can_correct, m3) + nval = ma.sqrt((n-1.0)*n)/(n-2.0)*m3/m2**1.5 + np.place(vals, can_correct, nval) + return vals + + +def kurtosis(a, axis=0, fisher=True, bias=True): + """ + Computes the kurtosis (Fisher or Pearson) of a dataset. + + Kurtosis is the fourth central moment divided by the square of the + variance. If Fisher's definition is used, then 3.0 is subtracted from + the result to give 0.0 for a normal distribution. + + If bias is False then the kurtosis is calculated using k statistics to + eliminate bias coming from biased moment estimators + + Use `kurtosistest` to see if result is close enough to normal. + + Parameters + ---------- + a : array + data for which the kurtosis is calculated + axis : int or None, optional + Axis along which the kurtosis is calculated. Default is 0. + If None, compute over the whole array `a`. + fisher : bool, optional + If True, Fisher's definition is used (normal ==> 0.0). If False, + Pearson's definition is used (normal ==> 3.0). + bias : bool, optional + If False, then the calculations are corrected for statistical bias. + + Returns + ------- + kurtosis : array + The kurtosis of values along an axis. If all values are equal, + return -3 for Fisher's definition and 0 for Pearson's definition. + + Notes + ----- + For more details about `kurtosis`, see `scipy.stats.kurtosis`. + + """ + a, axis = _chk_asarray(a, axis) + mean = a.mean(axis, keepdims=True) + m2 = _moment(a, 2, axis, mean=mean) + m4 = _moment(a, 4, axis, mean=mean) + zero = (m2 <= (np.finfo(m2.dtype).resolution * mean.squeeze(axis))**2) + with np.errstate(all='ignore'): + vals = ma.where(zero, 0, m4 / m2**2.0) + + if not bias and zero is not ma.masked and m2 is not ma.masked: + n = a.count(axis) + can_correct = ~zero & (n > 3) + if can_correct.any(): + n = np.extract(can_correct, n) + m2 = np.extract(can_correct, m2) + m4 = np.extract(can_correct, m4) + nval = 1.0/(n-2)/(n-3)*((n*n-1.0)*m4/m2**2.0-3*(n-1)**2.0) + np.place(vals, can_correct, nval+3.0) + if fisher: + return vals - 3 + else: + return vals + + +DescribeResult = namedtuple('DescribeResult', ('nobs', 'minmax', 'mean', + 'variance', 'skewness', + 'kurtosis')) + + +def describe(a, axis=0, ddof=0, bias=True): + """ + Computes several descriptive statistics of the passed array. + + Parameters + ---------- + a : array_like + Data array + axis : int or None, optional + Axis along which to calculate statistics. Default 0. If None, + compute over the whole array `a`. + ddof : int, optional + degree of freedom (default 0); note that default ddof is different + from the same routine in stats.describe + bias : bool, optional + If False, then the skewness and kurtosis calculations are corrected for + statistical bias. 
+ + Returns + ------- + nobs : int + (size of the data (discarding missing values) + + minmax : (int, int) + min, max + + mean : float + arithmetic mean + + variance : float + unbiased variance + + skewness : float + biased skewness + + kurtosis : float + biased kurtosis + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats.mstats import describe + >>> ma = np.ma.array(range(6), mask=[0, 0, 0, 1, 1, 1]) + >>> describe(ma) + DescribeResult(nobs=3, minmax=(masked_array(data=0, + mask=False, + fill_value=999999), masked_array(data=2, + mask=False, + fill_value=999999)), mean=1.0, variance=0.6666666666666666, + skewness=masked_array(data=0., mask=False, fill_value=1e+20), + kurtosis=-1.5) + + """ + a, axis = _chk_asarray(a, axis) + n = a.count(axis) + mm = (ma.minimum.reduce(a, axis=axis), ma.maximum.reduce(a, axis=axis)) + m = a.mean(axis) + v = a.var(axis, ddof=ddof) + sk = skew(a, axis, bias=bias) + kurt = kurtosis(a, axis, bias=bias) + + return DescribeResult(n, mm, m, v, sk, kurt) + + +def stde_median(data, axis=None): + """Returns the McKean-Schrader estimate of the standard error of the sample + median along the given axis. masked values are discarded. + + Parameters + ---------- + data : ndarray + Data to trim. + axis : {None,int}, optional + Axis along which to perform the trimming. + If None, the input array is first flattened. + + """ + def _stdemed_1D(data): + data = np.sort(data.compressed()) + n = len(data) + z = 2.5758293035489004 + k = int(np.round((n+1)/2. - z * np.sqrt(n/4.),0)) + return ((data[n-k] - data[k-1])/(2.*z)) + + data = ma.array(data, copy=False, subok=True) + if (axis is None): + return _stdemed_1D(data) + else: + if data.ndim > 2: + raise ValueError("Array 'data' must be at most two dimensional, " + "but got data.ndim = %d" % data.ndim) + return ma.apply_along_axis(_stdemed_1D, axis, data) + + +SkewtestResult = namedtuple('SkewtestResult', ('statistic', 'pvalue')) + + +def skewtest(a, axis=0, alternative='two-sided'): + """ + Tests whether the skew is different from the normal distribution. + + Parameters + ---------- + a : array_like + The data to be tested + axis : int or None, optional + Axis along which statistics are calculated. Default is 0. + If None, compute over the whole array `a`. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + The following options are available: + + * 'two-sided': the skewness of the distribution underlying the sample + is different from that of the normal distribution (i.e. 0) + * 'less': the skewness of the distribution underlying the sample + is less than that of the normal distribution + * 'greater': the skewness of the distribution underlying the sample + is greater than that of the normal distribution + + .. versionadded:: 1.7.0 + + Returns + ------- + statistic : array_like + The computed z-score for this test. + pvalue : array_like + A p-value for the hypothesis test + + Notes + ----- + For more details about `skewtest`, see `scipy.stats.skewtest`. + + """ + a, axis = _chk_asarray(a, axis) + if axis is None: + a = a.ravel() + axis = 0 + b2 = skew(a,axis) + n = a.count(axis) + if np.min(n) < 8: + raise ValueError( + "skewtest is not valid with less than 8 samples; %i samples" + " were given." 
% np.min(n)) + + y = b2 * ma.sqrt(((n+1)*(n+3)) / (6.0*(n-2))) + beta2 = (3.0*(n*n+27*n-70)*(n+1)*(n+3)) / ((n-2.0)*(n+5)*(n+7)*(n+9)) + W2 = -1 + ma.sqrt(2*(beta2-1)) + delta = 1/ma.sqrt(0.5*ma.log(W2)) + alpha = ma.sqrt(2.0/(W2-1)) + y = ma.where(y == 0, 1, y) + Z = delta*ma.log(y/alpha + ma.sqrt((y/alpha)**2+1)) + pvalue = scipy.stats._stats_py._get_pvalue(Z, distributions.norm, alternative) + + return SkewtestResult(Z[()], pvalue[()]) + + +KurtosistestResult = namedtuple('KurtosistestResult', ('statistic', 'pvalue')) + + +def kurtosistest(a, axis=0, alternative='two-sided'): + """ + Tests whether a dataset has normal kurtosis + + Parameters + ---------- + a : array_like + array of the sample data + axis : int or None, optional + Axis along which to compute test. Default is 0. If None, + compute over the whole array `a`. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the kurtosis of the distribution underlying the sample + is different from that of the normal distribution + * 'less': the kurtosis of the distribution underlying the sample + is less than that of the normal distribution + * 'greater': the kurtosis of the distribution underlying the sample + is greater than that of the normal distribution + + .. versionadded:: 1.7.0 + + Returns + ------- + statistic : array_like + The computed z-score for this test. + pvalue : array_like + The p-value for the hypothesis test + + Notes + ----- + For more details about `kurtosistest`, see `scipy.stats.kurtosistest`. + + """ + a, axis = _chk_asarray(a, axis) + n = a.count(axis=axis) + if np.min(n) < 5: + raise ValueError( + "kurtosistest requires at least 5 observations; %i observations" + " were given." % np.min(n)) + if np.min(n) < 20: + warnings.warn( + "kurtosistest only valid for n>=20 ... continuing anyway, n=%i" % np.min(n), + stacklevel=2, + ) + + b2 = kurtosis(a, axis, fisher=False) + E = 3.0*(n-1) / (n+1) + varb2 = 24.0*n*(n-2.)*(n-3) / ((n+1)*(n+1.)*(n+3)*(n+5)) + x = (b2-E)/ma.sqrt(varb2) + sqrtbeta1 = 6.0*(n*n-5*n+2)/((n+7)*(n+9)) * np.sqrt((6.0*(n+3)*(n+5)) / + (n*(n-2)*(n-3))) + A = 6.0 + 8.0/sqrtbeta1 * (2.0/sqrtbeta1 + np.sqrt(1+4.0/(sqrtbeta1**2))) + term1 = 1 - 2./(9.0*A) + denom = 1 + x*ma.sqrt(2/(A-4.0)) + if np.ma.isMaskedArray(denom): + # For multi-dimensional array input + denom[denom == 0.0] = masked + elif denom == 0.0: + denom = masked + + term2 = np.ma.where(denom > 0, ma.power((1-2.0/A)/denom, 1/3.0), + -ma.power(-(1-2.0/A)/denom, 1/3.0)) + Z = (term1 - term2) / np.sqrt(2/(9.0*A)) + pvalue = scipy.stats._stats_py._get_pvalue(Z, distributions.norm, alternative) + + return KurtosistestResult(Z[()], pvalue[()]) + + +NormaltestResult = namedtuple('NormaltestResult', ('statistic', 'pvalue')) + + +def normaltest(a, axis=0): + """ + Tests whether a sample differs from a normal distribution. + + Parameters + ---------- + a : array_like + The array containing the data to be tested. + axis : int or None, optional + Axis along which to compute test. Default is 0. If None, + compute over the whole array `a`. + + Returns + ------- + statistic : float or array + ``s^2 + k^2``, where ``s`` is the z-score returned by `skewtest` and + ``k`` is the z-score returned by `kurtosistest`. + pvalue : float or array + A 2-sided chi squared probability for the hypothesis test. + + Notes + ----- + For more details about `normaltest`, see `scipy.stats.normaltest`. 
+ + """ + a, axis = _chk_asarray(a, axis) + s, _ = skewtest(a, axis) + k, _ = kurtosistest(a, axis) + k2 = s*s + k*k + + return NormaltestResult(k2, distributions.chi2.sf(k2, 2)) + + +def mquantiles(a, prob=list([.25,.5,.75]), alphap=.4, betap=.4, axis=None, + limit=()): + """ + Computes empirical quantiles for a data array. + + Samples quantile are defined by ``Q(p) = (1-gamma)*x[j] + gamma*x[j+1]``, + where ``x[j]`` is the j-th order statistic, and gamma is a function of + ``j = floor(n*p + m)``, ``m = alphap + p*(1 - alphap - betap)`` and + ``g = n*p + m - j``. + + Reinterpreting the above equations to compare to **R** lead to the + equation: ``p(k) = (k - alphap)/(n + 1 - alphap - betap)`` + + Typical values of (alphap,betap) are: + - (0,1) : ``p(k) = k/n`` : linear interpolation of cdf + (**R** type 4) + - (.5,.5) : ``p(k) = (k - 1/2.)/n`` : piecewise linear function + (**R** type 5) + - (0,0) : ``p(k) = k/(n+1)`` : + (**R** type 6) + - (1,1) : ``p(k) = (k-1)/(n-1)``: p(k) = mode[F(x[k])]. + (**R** type 7, **R** default) + - (1/3,1/3): ``p(k) = (k-1/3)/(n+1/3)``: Then p(k) ~ median[F(x[k])]. + The resulting quantile estimates are approximately median-unbiased + regardless of the distribution of x. + (**R** type 8) + - (3/8,3/8): ``p(k) = (k-3/8)/(n+1/4)``: Blom. + The resulting quantile estimates are approximately unbiased + if x is normally distributed + (**R** type 9) + - (.4,.4) : approximately quantile unbiased (Cunnane) + - (.35,.35): APL, used with PWM + + Parameters + ---------- + a : array_like + Input data, as a sequence or array of dimension at most 2. + prob : array_like, optional + List of quantiles to compute. + alphap : float, optional + Plotting positions parameter, default is 0.4. + betap : float, optional + Plotting positions parameter, default is 0.4. + axis : int, optional + Axis along which to perform the trimming. + If None (default), the input array is first flattened. + limit : tuple, optional + Tuple of (lower, upper) values. + Values of `a` outside this open interval are ignored. + + Returns + ------- + mquantiles : MaskedArray + An array containing the calculated quantiles. + + Notes + ----- + This formulation is very similar to **R** except the calculation of + ``m`` from ``alphap`` and ``betap``, where in **R** ``m`` is defined + with each type. + + References + ---------- + .. [1] *R* statistical software: https://www.r-project.org/ + .. [2] *R* ``quantile`` function: + http://stat.ethz.ch/R-manual/R-devel/library/stats/html/quantile.html + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats.mstats import mquantiles + >>> a = np.array([6., 47., 49., 15., 42., 41., 7., 39., 43., 40., 36.]) + >>> mquantiles(a) + array([ 19.2, 40. , 42.8]) + + Using a 2D array, specifying axis and limit. + + >>> data = np.array([[ 6., 7., 1.], + ... [ 47., 15., 2.], + ... [ 49., 36., 3.], + ... [ 15., 39., 4.], + ... [ 42., 40., -999.], + ... [ 41., 41., -999.], + ... [ 7., -999., -999.], + ... [ 39., -999., -999.], + ... [ 43., -999., -999.], + ... [ 40., -999., -999.], + ... [ 36., -999., -999.]]) + >>> print(mquantiles(data, axis=0, limit=(0, 50))) + [[19.2 14.6 1.45] + [40. 37.5 2.5 ] + [42.8 40.05 3.55]] + + >>> data[:, 2] = -999. 
+ >>> print(mquantiles(data, axis=0, limit=(0, 50))) + [[19.200000000000003 14.6 --] + [40.0 37.5 --] + [42.800000000000004 40.05 --]] + + """ + def _quantiles1D(data,m,p): + x = np.sort(data.compressed()) + n = len(x) + if n == 0: + return ma.array(np.empty(len(p), dtype=float), mask=True) + elif n == 1: + return ma.array(np.resize(x, p.shape), mask=nomask) + aleph = (n*p + m) + k = np.floor(aleph.clip(1, n-1)).astype(int) + gamma = (aleph-k).clip(0,1) + return (1.-gamma)*x[(k-1).tolist()] + gamma*x[k.tolist()] + + data = ma.array(a, copy=False) + if data.ndim > 2: + raise TypeError("Array should be 2D at most !") + + if limit: + condition = (limit[0] < data) & (data < limit[1]) + data[~condition.filled(True)] = masked + + p = np.atleast_1d(np.asarray(prob)) + m = alphap + p*(1.-alphap-betap) + # Computes quantiles along axis (or globally) + if (axis is None): + return _quantiles1D(data, m, p) + + return ma.apply_along_axis(_quantiles1D, axis, data, m, p) + + +def scoreatpercentile(data, per, limit=(), alphap=.4, betap=.4): + """Calculate the score at the given 'per' percentile of the + sequence a. For example, the score at per=50 is the median. + + This function is a shortcut to mquantile + + """ + if (per < 0) or (per > 100.): + raise ValueError("The percentile should be between 0. and 100. !" + " (got %s)" % per) + + return mquantiles(data, prob=[per/100.], alphap=alphap, betap=betap, + limit=limit, axis=0).squeeze() + + +def plotting_positions(data, alpha=0.4, beta=0.4): + """ + Returns plotting positions (or empirical percentile points) for the data. + + Plotting positions are defined as ``(i-alpha)/(n+1-alpha-beta)``, where: + - i is the rank order statistics + - n is the number of unmasked values along the given axis + - `alpha` and `beta` are two parameters. + + Typical values for `alpha` and `beta` are: + - (0,1) : ``p(k) = k/n``, linear interpolation of cdf (R, type 4) + - (.5,.5) : ``p(k) = (k-1/2.)/n``, piecewise linear function + (R, type 5) + - (0,0) : ``p(k) = k/(n+1)``, Weibull (R type 6) + - (1,1) : ``p(k) = (k-1)/(n-1)``, in this case, + ``p(k) = mode[F(x[k])]``. That's R default (R type 7) + - (1/3,1/3): ``p(k) = (k-1/3)/(n+1/3)``, then + ``p(k) ~ median[F(x[k])]``. + The resulting quantile estimates are approximately median-unbiased + regardless of the distribution of x. (R type 8) + - (3/8,3/8): ``p(k) = (k-3/8)/(n+1/4)``, Blom. + The resulting quantile estimates are approximately unbiased + if x is normally distributed (R type 9) + - (.4,.4) : approximately quantile unbiased (Cunnane) + - (.35,.35): APL, used with PWM + - (.3175, .3175): used in scipy.stats.probplot + + Parameters + ---------- + data : array_like + Input data, as a sequence or array of dimension at most 2. + alpha : float, optional + Plotting positions parameter. Default is 0.4. + beta : float, optional + Plotting positions parameter. Default is 0.4. + + Returns + ------- + positions : MaskedArray + The calculated plotting positions. + + """ + data = ma.array(data, copy=False).reshape(1,-1) + n = data.count() + plpos = np.empty(data.size, dtype=float) + plpos[n:] = 0 + plpos[data.argsort(axis=None)[:n]] = ((np.arange(1, n+1) - alpha) / + (n + 1.0 - alpha - beta)) + return ma.array(plpos, mask=data._mask) + + +meppf = plotting_positions + + +def obrientransform(*args): + """ + Computes a transform on input data (any number of columns). Used to + test for homogeneity of variance prior to running one-way stats. Each + array in ``*args`` is one level of a factor. 
If an `f_oneway()` run on + the transformed data and found significant, variances are unequal. From + Maxwell and Delaney, p.112. + + Returns: transformed data for use in an ANOVA + """ + data = argstoarray(*args).T + v = data.var(axis=0,ddof=1) + m = data.mean(0) + n = data.count(0).astype(float) + # result = ((N-1.5)*N*(a-m)**2 - 0.5*v*(n-1))/((n-1)*(n-2)) + data -= m + data **= 2 + data *= (n-1.5)*n + data -= 0.5*v*(n-1) + data /= (n-1.)*(n-2.) + if not ma.allclose(v,data.mean(0)): + raise ValueError("Lack of convergence in obrientransform.") + + return data + + +def sem(a, axis=0, ddof=1): + """ + Calculates the standard error of the mean of the input array. + + Also sometimes called standard error of measurement. + + Parameters + ---------- + a : array_like + An array containing the values for which the standard error is + returned. + axis : int or None, optional + If axis is None, ravel `a` first. If axis is an integer, this will be + the axis over which to operate. Defaults to 0. + ddof : int, optional + Delta degrees-of-freedom. How many degrees of freedom to adjust + for bias in limited samples relative to the population estimate + of variance. Defaults to 1. + + Returns + ------- + s : ndarray or float + The standard error of the mean in the sample(s), along the input axis. + + Notes + ----- + The default value for `ddof` changed in scipy 0.15.0 to be consistent with + `scipy.stats.sem` as well as with the most common definition used (like in + the R documentation). + + Examples + -------- + Find standard error along the first axis: + + >>> import numpy as np + >>> from scipy import stats + >>> a = np.arange(20).reshape(5,4) + >>> print(stats.mstats.sem(a)) + [2.8284271247461903 2.8284271247461903 2.8284271247461903 + 2.8284271247461903] + + Find standard error across the whole array, using n degrees of freedom: + + >>> print(stats.mstats.sem(a, axis=None, ddof=0)) + 1.2893796958227628 + + """ + a, axis = _chk_asarray(a, axis) + n = a.count(axis=axis) + s = a.std(axis=axis, ddof=ddof) / ma.sqrt(n) + return s + + +F_onewayResult = namedtuple('F_onewayResult', ('statistic', 'pvalue')) + + +def f_oneway(*args): + """ + Performs a 1-way ANOVA, returning an F-value and probability given + any number of groups. From Heiman, pp.394-7. + + Usage: ``f_oneway(*args)``, where ``*args`` is 2 or more arrays, + one per treatment group. + + Returns + ------- + statistic : float + The computed F-value of the test. + pvalue : float + The associated p-value from the F-distribution. + + """ + # Construct a single array of arguments: each row is a group + data = argstoarray(*args) + ngroups = len(data) + ntot = data.count() + sstot = (data**2).sum() - (data.sum())**2/float(ntot) + ssbg = (data.count(-1) * (data.mean(-1)-data.mean())**2).sum() + sswg = sstot-ssbg + dfbg = ngroups-1 + dfwg = ntot - ngroups + msb = ssbg/float(dfbg) + msw = sswg/float(dfwg) + f = msb/msw + prob = special.fdtrc(dfbg, dfwg, f) # equivalent to stats.f.sf + + return F_onewayResult(f, prob) + + +FriedmanchisquareResult = namedtuple('FriedmanchisquareResult', + ('statistic', 'pvalue')) + + +def friedmanchisquare(*args): + """Friedman Chi-Square is a non-parametric, one-way within-subjects ANOVA. + This function calculates the Friedman Chi-square test for repeated measures + and returns the result, along with the associated probability value. + + Each input is considered a given group. Ideally, the number of treatments + among each group should be equal. 
If this is not the case, only the first + n treatments are taken into account, where n is the number of treatments + of the smallest group. + If a group has some missing values, the corresponding treatments are masked + in the other groups. + The test statistic is corrected for ties. + + Masked values in one group are propagated to the other groups. + + Returns + ------- + statistic : float + the test statistic. + pvalue : float + the associated p-value. + + """ + data = argstoarray(*args).astype(float) + k = len(data) + if k < 3: + raise ValueError("Less than 3 groups (%i): " % k + + "the Friedman test is NOT appropriate.") + + ranked = ma.masked_values(rankdata(data, axis=0), 0) + if ranked._mask is not nomask: + ranked = ma.mask_cols(ranked) + ranked = ranked.compressed().reshape(k,-1).view(ndarray) + else: + ranked = ranked._data + (k,n) = ranked.shape + # Ties correction + repeats = [find_repeats(row) for row in ranked.T] + ties = np.array([y for x, y in repeats if x.size > 0]) + tie_correction = 1 - (ties**3-ties).sum()/float(n*(k**3-k)) + + ssbg = np.sum((ranked.sum(-1) - n*(k+1)/2.)**2) + chisq = ssbg * 12./(n*k*(k+1)) * 1./tie_correction + + return FriedmanchisquareResult(chisq, + distributions.chi2.sf(chisq, k-1)) + + +BrunnerMunzelResult = namedtuple('BrunnerMunzelResult', ('statistic', 'pvalue')) + + +def brunnermunzel(x, y, alternative="two-sided", distribution="t"): + """ + Compute the Brunner-Munzel test on samples x and y. + + Any missing values in `x` and/or `y` are discarded. + + The Brunner-Munzel test is a nonparametric test of the null hypothesis that + when values are taken one by one from each group, the probabilities of + getting large values in both groups are equal. + Unlike the Wilcoxon-Mann-Whitney's U test, this does not require the + assumption of equivariance of two groups. Note that this does not assume + the distributions are same. This test works on two independent samples, + which may have different sizes. + + Parameters + ---------- + x, y : array_like + Array of samples, should be one-dimensional. + alternative : 'less', 'two-sided', or 'greater', optional + Whether to get the p-value for the one-sided hypothesis ('less' + or 'greater') or for the two-sided hypothesis ('two-sided'). + Defaults value is 'two-sided' . + distribution : 't' or 'normal', optional + Whether to get the p-value by t-distribution or by standard normal + distribution. + Defaults value is 't' . + + Returns + ------- + statistic : float + The Brunner-Munzer W statistic. + pvalue : float + p-value assuming an t distribution. One-sided or + two-sided, depending on the choice of `alternative` and `distribution`. + + See Also + -------- + mannwhitneyu : Mann-Whitney rank test on two samples. + + Notes + ----- + For more details on `brunnermunzel`, see `scipy.stats.brunnermunzel`. 
+ + Examples + -------- + >>> from scipy.stats.mstats import brunnermunzel + >>> import numpy as np + >>> x1 = [1, 2, np.nan, np.nan, 1, 1, 1, 1, 1, 1, 2, 4, 1, 1] + >>> x2 = [3, 3, 4, 3, 1, 2, 3, 1, 1, 5, 4] + >>> brunnermunzel(x1, x2) + BrunnerMunzelResult(statistic=1.4723186918922935, pvalue=0.15479415300426624) # may vary + + """ # noqa: E501 + x = ma.asarray(x).compressed().view(ndarray) + y = ma.asarray(y).compressed().view(ndarray) + nx = len(x) + ny = len(y) + if nx == 0 or ny == 0: + return BrunnerMunzelResult(np.nan, np.nan) + rankc = rankdata(np.concatenate((x,y))) + rankcx = rankc[0:nx] + rankcy = rankc[nx:nx+ny] + rankcx_mean = np.mean(rankcx) + rankcy_mean = np.mean(rankcy) + rankx = rankdata(x) + ranky = rankdata(y) + rankx_mean = np.mean(rankx) + ranky_mean = np.mean(ranky) + + Sx = np.sum(np.power(rankcx - rankx - rankcx_mean + rankx_mean, 2.0)) + Sx /= nx - 1 + Sy = np.sum(np.power(rankcy - ranky - rankcy_mean + ranky_mean, 2.0)) + Sy /= ny - 1 + + wbfn = nx * ny * (rankcy_mean - rankcx_mean) + wbfn /= (nx + ny) * np.sqrt(nx * Sx + ny * Sy) + + if distribution == "t": + df_numer = np.power(nx * Sx + ny * Sy, 2.0) + df_denom = np.power(nx * Sx, 2.0) / (nx - 1) + df_denom += np.power(ny * Sy, 2.0) / (ny - 1) + df = df_numer / df_denom + p = distributions.t.cdf(wbfn, df) + elif distribution == "normal": + p = distributions.norm.cdf(wbfn) + else: + raise ValueError( + "distribution should be 't' or 'normal'") + + if alternative == "greater": + pass + elif alternative == "less": + p = 1 - p + elif alternative == "two-sided": + p = 2 * np.min([p, 1-p]) + else: + raise ValueError( + "alternative should be 'less', 'greater' or 'two-sided'") + + return BrunnerMunzelResult(wbfn, p) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_mstats_extras.py b/venv/lib/python3.10/site-packages/scipy/stats/_mstats_extras.py new file mode 100644 index 0000000000000000000000000000000000000000..f711e28dd36d0f190e49d6411deebb43a3884b27 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_mstats_extras.py @@ -0,0 +1,521 @@ +""" +Additional statistics functions with support for masked arrays. + +""" + +# Original author (2007): Pierre GF Gerard-Marchant + + +__all__ = ['compare_medians_ms', + 'hdquantiles', 'hdmedian', 'hdquantiles_sd', + 'idealfourths', + 'median_cihs','mjci','mquantiles_cimj', + 'rsh', + 'trimmed_mean_ci',] + + +import numpy as np +from numpy import float64, ndarray + +import numpy.ma as ma +from numpy.ma import MaskedArray + +from . import _mstats_basic as mstats + +from scipy.stats.distributions import norm, beta, t, binom + + +def hdquantiles(data, prob=list([.25,.5,.75]), axis=None, var=False,): + """ + Computes quantile estimates with the Harrell-Davis method. + + The quantile estimates are calculated as a weighted linear combination + of order statistics. + + Parameters + ---------- + data : array_like + Data array. + prob : sequence, optional + Sequence of probabilities at which to compute the quantiles. + axis : int or None, optional + Axis along which to compute the quantiles. If None, use a flattened + array. + var : bool, optional + Whether to return the variance of the estimate. + + Returns + ------- + hdquantiles : MaskedArray + A (p,) array of quantiles (if `var` is False), or a (2,p) array of + quantiles and variances (if `var` is True), where ``p`` is the + number of quantiles. 
+ + See Also + -------- + hdquantiles_sd + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats.mstats import hdquantiles + >>> + >>> # Sample data + >>> data = np.array([1.2, 2.5, 3.7, 4.0, 5.1, 6.3, 7.0, 8.2, 9.4]) + >>> + >>> # Probabilities at which to compute quantiles + >>> probabilities = [0.25, 0.5, 0.75] + >>> + >>> # Compute Harrell-Davis quantile estimates + >>> quantile_estimates = hdquantiles(data, prob=probabilities) + >>> + >>> # Display the quantile estimates + >>> for i, quantile in enumerate(probabilities): + ... print(f"{int(quantile * 100)}th percentile: {quantile_estimates[i]}") + 25th percentile: 3.1505820231763066 # may vary + 50th percentile: 5.194344084883956 + 75th percentile: 7.430626414674935 + + """ + def _hd_1D(data,prob,var): + "Computes the HD quantiles for a 1D array. Returns nan for invalid data." + xsorted = np.squeeze(np.sort(data.compressed().view(ndarray))) + # Don't use length here, in case we have a numpy scalar + n = xsorted.size + + hd = np.empty((2,len(prob)), float64) + if n < 2: + hd.flat = np.nan + if var: + return hd + return hd[0] + + v = np.arange(n+1) / float(n) + betacdf = beta.cdf + for (i,p) in enumerate(prob): + _w = betacdf(v, (n+1)*p, (n+1)*(1-p)) + w = _w[1:] - _w[:-1] + hd_mean = np.dot(w, xsorted) + hd[0,i] = hd_mean + # + hd[1,i] = np.dot(w, (xsorted-hd_mean)**2) + # + hd[0, prob == 0] = xsorted[0] + hd[0, prob == 1] = xsorted[-1] + if var: + hd[1, prob == 0] = hd[1, prob == 1] = np.nan + return hd + return hd[0] + # Initialization & checks + data = ma.array(data, copy=False, dtype=float64) + p = np.atleast_1d(np.asarray(prob)) + # Computes quantiles along axis (or globally) + if (axis is None) or (data.ndim == 1): + result = _hd_1D(data, p, var) + else: + if data.ndim > 2: + raise ValueError("Array 'data' must be at most two dimensional, " + "but got data.ndim = %d" % data.ndim) + result = ma.apply_along_axis(_hd_1D, axis, data, p, var) + + return ma.fix_invalid(result, copy=False) + + +def hdmedian(data, axis=-1, var=False): + """ + Returns the Harrell-Davis estimate of the median along the given axis. + + Parameters + ---------- + data : ndarray + Data array. + axis : int, optional + Axis along which to compute the quantiles. If None, use a flattened + array. + var : bool, optional + Whether to return the variance of the estimate. + + Returns + ------- + hdmedian : MaskedArray + The median values. If ``var=True``, the variance is returned inside + the masked array. E.g. for a 1-D array the shape change from (1,) to + (2,). + + """ + result = hdquantiles(data,[0.5], axis=axis, var=var) + return result.squeeze() + + +def hdquantiles_sd(data, prob=list([.25,.5,.75]), axis=None): + """ + The standard error of the Harrell-Davis quantile estimates by jackknife. + + Parameters + ---------- + data : array_like + Data array. + prob : sequence, optional + Sequence of quantiles to compute. + axis : int, optional + Axis along which to compute the quantiles. If None, use a flattened + array. + + Returns + ------- + hdquantiles_sd : MaskedArray + Standard error of the Harrell-Davis quantile estimates. + + See Also + -------- + hdquantiles + + """ + def _hdsd_1D(data, prob): + "Computes the std error for 1D arrays." 
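+ # Jackknife the Harrell-Davis estimator: the cumulative sums from both ends
+ # form the leave-one-out estimates, whose spread (scaled by n - 1) gives the
+ # standard error.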
+ xsorted = np.sort(data.compressed()) + n = len(xsorted) + + hdsd = np.empty(len(prob), float64) + if n < 2: + hdsd.flat = np.nan + + vv = np.arange(n) / float(n-1) + betacdf = beta.cdf + + for (i,p) in enumerate(prob): + _w = betacdf(vv, n*p, n*(1-p)) + w = _w[1:] - _w[:-1] + # cumulative sum of weights and data points if + # ith point is left out for jackknife + mx_ = np.zeros_like(xsorted) + mx_[1:] = np.cumsum(w * xsorted[:-1]) + # similar but from the right + mx_[:-1] += np.cumsum(w[::-1] * xsorted[:0:-1])[::-1] + hdsd[i] = np.sqrt(mx_.var() * (n - 1)) + return hdsd + + # Initialization & checks + data = ma.array(data, copy=False, dtype=float64) + p = np.atleast_1d(np.asarray(prob)) + # Computes quantiles along axis (or globally) + if (axis is None): + result = _hdsd_1D(data, p) + else: + if data.ndim > 2: + raise ValueError("Array 'data' must be at most two dimensional, " + "but got data.ndim = %d" % data.ndim) + result = ma.apply_along_axis(_hdsd_1D, axis, data, p) + + return ma.fix_invalid(result, copy=False).ravel() + + +def trimmed_mean_ci(data, limits=(0.2,0.2), inclusive=(True,True), + alpha=0.05, axis=None): + """ + Selected confidence interval of the trimmed mean along the given axis. + + Parameters + ---------- + data : array_like + Input data. + limits : {None, tuple}, optional + None or a two item tuple. + Tuple of the percentages to cut on each side of the array, with respect + to the number of unmasked data, as floats between 0. and 1. If ``n`` + is the number of unmasked data before trimming, then + (``n * limits[0]``)th smallest data and (``n * limits[1]``)th + largest data are masked. The total number of unmasked data after + trimming is ``n * (1. - sum(limits))``. + The value of one limit can be set to None to indicate an open interval. + + Defaults to (0.2, 0.2). + inclusive : (2,) tuple of boolean, optional + If relative==False, tuple indicating whether values exactly equal to + the absolute limits are allowed. + If relative==True, tuple indicating whether the number of data being + masked on each side should be rounded (True) or truncated (False). + + Defaults to (True, True). + alpha : float, optional + Confidence level of the intervals. + + Defaults to 0.05. + axis : int, optional + Axis along which to cut. If None, uses a flattened version of `data`. + + Defaults to None. + + Returns + ------- + trimmed_mean_ci : (2,) ndarray + The lower and upper confidence intervals of the trimmed data. + + """ + data = ma.array(data, copy=False) + trimmed = mstats.trimr(data, limits=limits, inclusive=inclusive, axis=axis) + tmean = trimmed.mean(axis) + tstde = mstats.trimmed_stde(data,limits=limits,inclusive=inclusive,axis=axis) + df = trimmed.count(axis) - 1 + tppf = t.ppf(1-alpha/2.,df) + return np.array((tmean - tppf*tstde, tmean+tppf*tstde)) + + +def mjci(data, prob=[0.25,0.5,0.75], axis=None): + """ + Returns the Maritz-Jarrett estimators of the standard error of selected + experimental quantiles of the data. + + Parameters + ---------- + data : ndarray + Data array. + prob : sequence, optional + Sequence of quantiles to compute. + axis : int or None, optional + Axis along which to compute the quantiles. If None, use a flattened + array. 
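+
+    Returns
+    -------
+    mjci : ndarray
+        The Maritz-Jarrett estimates of the standard error for each quantile
+        in `prob`, computed along `axis` (or over the flattened array when
+        `axis` is None).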
+ + """ + def _mjci_1D(data, p): + data = np.sort(data.compressed()) + n = data.size + prob = (np.array(p) * n + 0.5).astype(int) + betacdf = beta.cdf + + mj = np.empty(len(prob), float64) + x = np.arange(1,n+1, dtype=float64) / n + y = x - 1./n + for (i,m) in enumerate(prob): + W = betacdf(x,m-1,n-m) - betacdf(y,m-1,n-m) + C1 = np.dot(W,data) + C2 = np.dot(W,data**2) + mj[i] = np.sqrt(C2 - C1**2) + return mj + + data = ma.array(data, copy=False) + if data.ndim > 2: + raise ValueError("Array 'data' must be at most two dimensional, " + "but got data.ndim = %d" % data.ndim) + + p = np.atleast_1d(np.asarray(prob)) + # Computes quantiles along axis (or globally) + if (axis is None): + return _mjci_1D(data, p) + else: + return ma.apply_along_axis(_mjci_1D, axis, data, p) + + +def mquantiles_cimj(data, prob=[0.25,0.50,0.75], alpha=0.05, axis=None): + """ + Computes the alpha confidence interval for the selected quantiles of the + data, with Maritz-Jarrett estimators. + + Parameters + ---------- + data : ndarray + Data array. + prob : sequence, optional + Sequence of quantiles to compute. + alpha : float, optional + Confidence level of the intervals. + axis : int or None, optional + Axis along which to compute the quantiles. + If None, use a flattened array. + + Returns + ------- + ci_lower : ndarray + The lower boundaries of the confidence interval. Of the same length as + `prob`. + ci_upper : ndarray + The upper boundaries of the confidence interval. Of the same length as + `prob`. + + """ + alpha = min(alpha, 1 - alpha) + z = norm.ppf(1 - alpha/2.) + xq = mstats.mquantiles(data, prob, alphap=0, betap=0, axis=axis) + smj = mjci(data, prob, axis=axis) + return (xq - z * smj, xq + z * smj) + + +def median_cihs(data, alpha=0.05, axis=None): + """ + Computes the alpha-level confidence interval for the median of the data. + + Uses the Hettmasperger-Sheather method. + + Parameters + ---------- + data : array_like + Input data. Masked values are discarded. The input should be 1D only, + or `axis` should be set to None. + alpha : float, optional + Confidence level of the intervals. + axis : int or None, optional + Axis along which to compute the quantiles. If None, use a flattened + array. + + Returns + ------- + median_cihs + Alpha level confidence interval. + + """ + def _cihs_1D(data, alpha): + data = np.sort(data.compressed()) + n = len(data) + alpha = min(alpha, 1-alpha) + k = int(binom._ppf(alpha/2., n, 0.5)) + gk = binom.cdf(n-k,n,0.5) - binom.cdf(k-1,n,0.5) + if gk < 1-alpha: + k -= 1 + gk = binom.cdf(n-k,n,0.5) - binom.cdf(k-1,n,0.5) + gkk = binom.cdf(n-k-1,n,0.5) - binom.cdf(k,n,0.5) + I = (gk - 1 + alpha)/(gk - gkk) + lambd = (n-k) * I / float(k + (n-2*k)*I) + lims = (lambd*data[k] + (1-lambd)*data[k-1], + lambd*data[n-k-1] + (1-lambd)*data[n-k]) + return lims + data = ma.array(data, copy=False) + # Computes quantiles along axis (or globally) + if (axis is None): + result = _cihs_1D(data, alpha) + else: + if data.ndim > 2: + raise ValueError("Array 'data' must be at most two dimensional, " + "but got data.ndim = %d" % data.ndim) + result = ma.apply_along_axis(_cihs_1D, axis, data, alpha) + + return result + + +def compare_medians_ms(group_1, group_2, axis=None): + """ + Compares the medians from two independent groups along the given axis. + + The comparison is performed using the McKean-Schrader estimate of the + standard error of the medians. + + Parameters + ---------- + group_1 : array_like + First dataset. Has to be of size >=7. + group_2 : array_like + Second dataset. 
Has to be of size >=7. + axis : int, optional + Axis along which the medians are estimated. If None, the arrays are + flattened. If `axis` is not None, then `group_1` and `group_2` + should have the same shape. + + Returns + ------- + compare_medians_ms : {float, ndarray} + If `axis` is None, then returns a float, otherwise returns a 1-D + ndarray of floats with a length equal to the length of `group_1` + along `axis`. + + Examples + -------- + + >>> from scipy import stats + >>> a = [1, 2, 3, 4, 5, 6, 7] + >>> b = [8, 9, 10, 11, 12, 13, 14] + >>> stats.mstats.compare_medians_ms(a, b, axis=None) + 1.0693225866553746e-05 + + The function is vectorized to compute along a given axis. + + >>> import numpy as np + >>> rng = np.random.default_rng() + >>> x = rng.random(size=(3, 7)) + >>> y = rng.random(size=(3, 8)) + >>> stats.mstats.compare_medians_ms(x, y, axis=1) + array([0.36908985, 0.36092538, 0.2765313 ]) + + References + ---------- + .. [1] McKean, Joseph W., and Ronald M. Schrader. "A comparison of methods + for studentizing the sample median." Communications in + Statistics-Simulation and Computation 13.6 (1984): 751-773. + + """ + (med_1, med_2) = (ma.median(group_1,axis=axis), ma.median(group_2,axis=axis)) + (std_1, std_2) = (mstats.stde_median(group_1, axis=axis), + mstats.stde_median(group_2, axis=axis)) + W = np.abs(med_1 - med_2) / ma.sqrt(std_1**2 + std_2**2) + return 1 - norm.cdf(W) + + +def idealfourths(data, axis=None): + """ + Returns an estimate of the lower and upper quartiles. + + Uses the ideal fourths algorithm. + + Parameters + ---------- + data : array_like + Input array. + axis : int, optional + Axis along which the quartiles are estimated. If None, the arrays are + flattened. + + Returns + ------- + idealfourths : {list of floats, masked array} + Returns the two internal values that divide `data` into four parts + using the ideal fourths algorithm either along the flattened array + (if `axis` is None) or along `axis` of `data`. + + """ + def _idf(data): + x = data.compressed() + n = len(x) + if n < 3: + return [np.nan,np.nan] + (j,h) = divmod(n/4. + 5/12.,1) + j = int(j) + qlo = (1-h)*x[j-1] + h*x[j] + k = n - j + qup = (1-h)*x[k] + h*x[k-1] + return [qlo, qup] + data = ma.sort(data, axis=axis).view(MaskedArray) + if (axis is None): + return _idf(data) + else: + return ma.apply_along_axis(_idf, axis, data) + + +def rsh(data, points=None): + """ + Evaluates Rosenblatt's shifted histogram estimators for each data point. + + Rosenblatt's estimator is a centered finite-difference approximation to the + derivative of the empirical cumulative distribution function. + + Parameters + ---------- + data : sequence + Input data, should be 1-D. Masked values are ignored. + points : sequence or None, optional + Sequence of points where to evaluate Rosenblatt shifted histogram. + If None, use the data. 
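+
+    Returns
+    -------
+    rsh : ndarray
+        The shifted-histogram density estimate evaluated at each entry of
+        `points`.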
+ + """ + data = ma.array(data, copy=False) + if points is None: + points = data + else: + points = np.atleast_1d(np.asarray(points)) + + if data.ndim != 1: + raise AttributeError("The input array should be 1D only !") + + n = data.count() + r = idealfourths(data, axis=None) + h = 1.2 * (r[-1]-r[0]) / n**(1./5) + nhi = (data[:,None] <= points[None,:] + h).sum(0) + nlo = (data[:,None] < points[None,:] - h).sum(0) + return (nhi-nlo) / (2.*n*h) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_multivariate.py b/venv/lib/python3.10/site-packages/scipy/stats/_multivariate.py new file mode 100644 index 0000000000000000000000000000000000000000..ddb59ae8c68ae0f0018f8cccfd79c7fadee742ec --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_multivariate.py @@ -0,0 +1,6981 @@ +# +# Author: Joris Vankerschaver 2013 +# +import math +import numpy as np +import scipy.linalg +from scipy._lib import doccer +from scipy.special import (gammaln, psi, multigammaln, xlogy, entr, betaln, + ive, loggamma) +from scipy._lib._util import check_random_state, _lazywhere +from scipy.linalg.blas import drot, get_blas_funcs +from ._continuous_distns import norm +from ._discrete_distns import binom +from . import _mvn, _covariance, _rcont +from ._qmvnt import _qmvt +from ._morestats import directional_stats +from scipy.optimize import root_scalar + +__all__ = ['multivariate_normal', + 'matrix_normal', + 'dirichlet', + 'dirichlet_multinomial', + 'wishart', + 'invwishart', + 'multinomial', + 'special_ortho_group', + 'ortho_group', + 'random_correlation', + 'unitary_group', + 'multivariate_t', + 'multivariate_hypergeom', + 'random_table', + 'uniform_direction', + 'vonmises_fisher'] + +_LOG_2PI = np.log(2 * np.pi) +_LOG_2 = np.log(2) +_LOG_PI = np.log(np.pi) + + +_doc_random_state = """\ +seed : {None, int, np.random.RandomState, np.random.Generator}, optional + Used for drawing random variates. + If `seed` is `None`, the `~np.random.RandomState` singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, seeded + with seed. + If `seed` is already a ``RandomState`` or ``Generator`` instance, + then that object is used. + Default is `None`. +""" + + +def _squeeze_output(out): + """ + Remove single-dimensional entries from array and convert to scalar, + if necessary. + """ + out = out.squeeze() + if out.ndim == 0: + out = out[()] + return out + + +def _eigvalsh_to_eps(spectrum, cond=None, rcond=None): + """Determine which eigenvalues are "small" given the spectrum. + + This is for compatibility across various linear algebra functions + that should agree about whether or not a Hermitian matrix is numerically + singular and what is its numerical matrix rank. + This is designed to be compatible with scipy.linalg.pinvh. + + Parameters + ---------- + spectrum : 1d ndarray + Array of eigenvalues of a Hermitian matrix. + cond, rcond : float, optional + Cutoff for small eigenvalues. + Singular values smaller than rcond * largest_eigenvalue are + considered zero. + If None or -1, suitable machine precision is used. + + Returns + ------- + eps : float + Magnitude cutoff for numerical negligibility. + + """ + if rcond is not None: + cond = rcond + if cond in [None, -1]: + t = spectrum.dtype.char.lower() + factor = {'f': 1E3, 'd': 1E6} + cond = factor[t] * np.finfo(t).eps + eps = cond * np.max(abs(spectrum)) + return eps + + +def _pinv_1d(v, eps=1e-5): + """A helper function for computing the pseudoinverse. 
+ + Parameters + ---------- + v : iterable of numbers + This may be thought of as a vector of eigenvalues or singular values. + eps : float + Values with magnitude no greater than eps are considered negligible. + + Returns + ------- + v_pinv : 1d float ndarray + A vector of pseudo-inverted numbers. + + """ + return np.array([0 if abs(x) <= eps else 1/x for x in v], dtype=float) + + +class _PSD: + """ + Compute coordinated functions of a symmetric positive semidefinite matrix. + + This class addresses two issues. Firstly it allows the pseudoinverse, + the logarithm of the pseudo-determinant, and the rank of the matrix + to be computed using one call to eigh instead of three. + Secondly it allows these functions to be computed in a way + that gives mutually compatible results. + All of the functions are computed with a common understanding as to + which of the eigenvalues are to be considered negligibly small. + The functions are designed to coordinate with scipy.linalg.pinvh() + but not necessarily with np.linalg.det() or with np.linalg.matrix_rank(). + + Parameters + ---------- + M : array_like + Symmetric positive semidefinite matrix (2-D). + cond, rcond : float, optional + Cutoff for small eigenvalues. + Singular values smaller than rcond * largest_eigenvalue are + considered zero. + If None or -1, suitable machine precision is used. + lower : bool, optional + Whether the pertinent array data is taken from the lower + or upper triangle of M. (Default: lower) + check_finite : bool, optional + Whether to check that the input matrices contain only finite + numbers. Disabling may give a performance gain, but may result + in problems (crashes, non-termination) if the inputs do contain + infinities or NaNs. + allow_singular : bool, optional + Whether to allow a singular matrix. (Default: True) + + Notes + ----- + The arguments are similar to those of scipy.linalg.pinvh(). + + """ + + def __init__(self, M, cond=None, rcond=None, lower=True, + check_finite=True, allow_singular=True): + self._M = np.asarray(M) + + # Compute the symmetric eigendecomposition. + # Note that eigh takes care of array conversion, chkfinite, + # and assertion that the matrix is square. + s, u = scipy.linalg.eigh(M, lower=lower, check_finite=check_finite) + + eps = _eigvalsh_to_eps(s, cond, rcond) + if np.min(s) < -eps: + msg = "The input matrix must be symmetric positive semidefinite." + raise ValueError(msg) + d = s[s > eps] + if len(d) < len(s) and not allow_singular: + msg = ("When `allow_singular is False`, the input matrix must be " + "symmetric positive definite.") + raise np.linalg.LinAlgError(msg) + s_pinv = _pinv_1d(s, eps) + U = np.multiply(u, np.sqrt(s_pinv)) + + # Save the eigenvector basis, and tolerance for testing support + self.eps = 1e3*eps + self.V = u[:, s <= eps] + + # Initialize the eagerly precomputed attributes. + self.rank = len(d) + self.U = U + self.log_pdet = np.sum(np.log(d)) + + # Initialize attributes to be lazily computed. + self._pinv = None + + def _support_mask(self, x): + """ + Check whether x lies in the support of the distribution. + """ + residual = np.linalg.norm(x @ self.V, axis=-1) + in_support = residual < self.eps + return in_support + + @property + def pinv(self): + if self._pinv is None: + self._pinv = np.dot(self.U, self.U.T) + return self._pinv + + +class multi_rv_generic: + """ + Class which encapsulates common functionality between all multivariate + distributions. 
+ """ + def __init__(self, seed=None): + super().__init__() + self._random_state = check_random_state(seed) + + @property + def random_state(self): + """ Get or set the Generator object for generating random variates. + + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance then + that instance is used. + + """ + return self._random_state + + @random_state.setter + def random_state(self, seed): + self._random_state = check_random_state(seed) + + def _get_random_state(self, random_state): + if random_state is not None: + return check_random_state(random_state) + else: + return self._random_state + + +class multi_rv_frozen: + """ + Class which encapsulates common functionality between all frozen + multivariate distributions. + """ + @property + def random_state(self): + return self._dist._random_state + + @random_state.setter + def random_state(self, seed): + self._dist._random_state = check_random_state(seed) + + +_mvn_doc_default_callparams = """\ +mean : array_like, default: ``[0]`` + Mean of the distribution. +cov : array_like or `Covariance`, default: ``[1]`` + Symmetric positive (semi)definite covariance matrix of the distribution. +allow_singular : bool, default: ``False`` + Whether to allow a singular covariance matrix. This is ignored if `cov` is + a `Covariance` object. +""" + +_mvn_doc_callparams_note = """\ +Setting the parameter `mean` to `None` is equivalent to having `mean` +be the zero-vector. The parameter `cov` can be a scalar, in which case +the covariance matrix is the identity times that value, a vector of +diagonal entries for the covariance matrix, a two-dimensional array_like, +or a `Covariance` object. +""" + +_mvn_doc_frozen_callparams = "" + +_mvn_doc_frozen_callparams_note = """\ +See class definition for a detailed description of parameters.""" + +mvn_docdict_params = { + '_mvn_doc_default_callparams': _mvn_doc_default_callparams, + '_mvn_doc_callparams_note': _mvn_doc_callparams_note, + '_doc_random_state': _doc_random_state +} + +mvn_docdict_noparams = { + '_mvn_doc_default_callparams': _mvn_doc_frozen_callparams, + '_mvn_doc_callparams_note': _mvn_doc_frozen_callparams_note, + '_doc_random_state': _doc_random_state +} + + +class multivariate_normal_gen(multi_rv_generic): + r"""A multivariate normal random variable. + + The `mean` keyword specifies the mean. The `cov` keyword specifies the + covariance matrix. + + Methods + ------- + pdf(x, mean=None, cov=1, allow_singular=False) + Probability density function. + logpdf(x, mean=None, cov=1, allow_singular=False) + Log of the probability density function. + cdf(x, mean=None, cov=1, allow_singular=False, maxpts=1000000*dim, abseps=1e-5, releps=1e-5, lower_limit=None) + Cumulative distribution function. + logcdf(x, mean=None, cov=1, allow_singular=False, maxpts=1000000*dim, abseps=1e-5, releps=1e-5) + Log of the cumulative distribution function. + rvs(mean=None, cov=1, size=1, random_state=None) + Draw random samples from a multivariate normal distribution. + entropy(mean=None, cov=1) + Compute the differential entropy of the multivariate normal. + fit(x, fix_mean=None, fix_cov=None) + Fit a multivariate normal distribution to data. 
+ + Parameters + ---------- + %(_mvn_doc_default_callparams)s + %(_doc_random_state)s + + Notes + ----- + %(_mvn_doc_callparams_note)s + + The covariance matrix `cov` may be an instance of a subclass of + `Covariance`, e.g. `scipy.stats.CovViaPrecision`. If so, `allow_singular` + is ignored. + + Otherwise, `cov` must be a symmetric positive semidefinite + matrix when `allow_singular` is True; it must be (strictly) positive + definite when `allow_singular` is False. + Symmetry is not checked; only the lower triangular portion is used. + The determinant and inverse of `cov` are computed + as the pseudo-determinant and pseudo-inverse, respectively, so + that `cov` does not need to have full rank. + + The probability density function for `multivariate_normal` is + + .. math:: + + f(x) = \frac{1}{\sqrt{(2 \pi)^k \det \Sigma}} + \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right), + + where :math:`\mu` is the mean, :math:`\Sigma` the covariance matrix, + :math:`k` the rank of :math:`\Sigma`. In case of singular :math:`\Sigma`, + SciPy extends this definition according to [1]_. + + .. versionadded:: 0.14.0 + + References + ---------- + .. [1] Multivariate Normal Distribution - Degenerate Case, Wikipedia, + https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Degenerate_case + + Examples + -------- + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy.stats import multivariate_normal + + >>> x = np.linspace(0, 5, 10, endpoint=False) + >>> y = multivariate_normal.pdf(x, mean=2.5, cov=0.5); y + array([ 0.00108914, 0.01033349, 0.05946514, 0.20755375, 0.43939129, + 0.56418958, 0.43939129, 0.20755375, 0.05946514, 0.01033349]) + >>> fig1 = plt.figure() + >>> ax = fig1.add_subplot(111) + >>> ax.plot(x, y) + >>> plt.show() + + Alternatively, the object may be called (as a function) to fix the mean + and covariance parameters, returning a "frozen" multivariate normal + random variable: + + >>> rv = multivariate_normal(mean=None, cov=1, allow_singular=False) + >>> # Frozen object with the same methods but holding the given + >>> # mean and covariance fixed. + + The input quantiles can be any shape of array, as long as the last + axis labels the components. This allows us for instance to + display the frozen pdf for a non-isotropic random variable in 2D as + follows: + + >>> x, y = np.mgrid[-1:1:.01, -1:1:.01] + >>> pos = np.dstack((x, y)) + >>> rv = multivariate_normal([0.5, -0.2], [[2.0, 0.3], [0.3, 0.5]]) + >>> fig2 = plt.figure() + >>> ax2 = fig2.add_subplot(111) + >>> ax2.contourf(x, y, rv.pdf(pos)) + + """ # noqa: E501 + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__, mvn_docdict_params) + + def __call__(self, mean=None, cov=1, allow_singular=False, seed=None): + """Create a frozen multivariate normal distribution. + + See `multivariate_normal_frozen` for more information. + """ + return multivariate_normal_frozen(mean, cov, + allow_singular=allow_singular, + seed=seed) + + def _process_parameters(self, mean, cov, allow_singular=True): + """ + Infer dimensionality from mean or covariance matrix, ensure that + mean and covariance are full vector resp. matrix. + """ + if isinstance(cov, _covariance.Covariance): + return self._process_parameters_Covariance(mean, cov) + else: + # Before `Covariance` classes were introduced, + # `multivariate_normal` accepted plain arrays as `cov` and used the + # following input validation. 
To avoid disturbing the behavior of + # `multivariate_normal` when plain arrays are used, we use the + # original input validation here. + dim, mean, cov = self._process_parameters_psd(None, mean, cov) + # After input validation, some methods then processed the arrays + # with a `_PSD` object and used that to perform computation. + # To avoid branching statements in each method depending on whether + # `cov` is an array or `Covariance` object, we always process the + # array with `_PSD`, and then use wrapper that satisfies the + # `Covariance` interface, `CovViaPSD`. + psd = _PSD(cov, allow_singular=allow_singular) + cov_object = _covariance.CovViaPSD(psd) + return dim, mean, cov_object + + def _process_parameters_Covariance(self, mean, cov): + dim = cov.shape[-1] + mean = np.array([0.]) if mean is None else mean + message = (f"`cov` represents a covariance matrix in {dim} dimensions," + f"and so `mean` must be broadcastable to shape {(dim,)}") + try: + mean = np.broadcast_to(mean, dim) + except ValueError as e: + raise ValueError(message) from e + return dim, mean, cov + + def _process_parameters_psd(self, dim, mean, cov): + # Try to infer dimensionality + if dim is None: + if mean is None: + if cov is None: + dim = 1 + else: + cov = np.asarray(cov, dtype=float) + if cov.ndim < 2: + dim = 1 + else: + dim = cov.shape[0] + else: + mean = np.asarray(mean, dtype=float) + dim = mean.size + else: + if not np.isscalar(dim): + raise ValueError("Dimension of random variable must be " + "a scalar.") + + # Check input sizes and return full arrays for mean and cov if + # necessary + if mean is None: + mean = np.zeros(dim) + mean = np.asarray(mean, dtype=float) + + if cov is None: + cov = 1.0 + cov = np.asarray(cov, dtype=float) + + if dim == 1: + mean = mean.reshape(1) + cov = cov.reshape(1, 1) + + if mean.ndim != 1 or mean.shape[0] != dim: + raise ValueError("Array 'mean' must be a vector of length %d." % + dim) + if cov.ndim == 0: + cov = cov * np.eye(dim) + elif cov.ndim == 1: + cov = np.diag(cov) + elif cov.ndim == 2 and cov.shape != (dim, dim): + rows, cols = cov.shape + if rows != cols: + msg = ("Array 'cov' must be square if it is two dimensional," + " but cov.shape = %s." % str(cov.shape)) + else: + msg = ("Dimension mismatch: array 'cov' is of shape %s," + " but 'mean' is a vector of length %d.") + msg = msg % (str(cov.shape), len(mean)) + raise ValueError(msg) + elif cov.ndim > 2: + raise ValueError("Array 'cov' must be at most two-dimensional," + " but cov.ndim = %d" % cov.ndim) + + return dim, mean, cov + + def _process_quantiles(self, x, dim): + """ + Adjust quantiles array so that last axis labels the components of + each data point. + """ + x = np.asarray(x, dtype=float) + + if x.ndim == 0: + x = x[np.newaxis] + elif x.ndim == 1: + if dim == 1: + x = x[:, np.newaxis] + else: + x = x[np.newaxis, :] + + return x + + def _logpdf(self, x, mean, cov_object): + """Log of the multivariate normal probability density function. + + Parameters + ---------- + x : ndarray + Points at which to evaluate the log of the probability + density function + mean : ndarray + Mean of the distribution + cov_object : Covariance + An object representing the Covariance matrix + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'logpdf' instead. 
+ + """ + log_det_cov, rank = cov_object.log_pdet, cov_object.rank + dev = x - mean + if dev.ndim > 1: + log_det_cov = log_det_cov[..., np.newaxis] + rank = rank[..., np.newaxis] + maha = np.sum(np.square(cov_object.whiten(dev)), axis=-1) + return -0.5 * (rank * _LOG_2PI + log_det_cov + maha) + + def logpdf(self, x, mean=None, cov=1, allow_singular=False): + """Log of the multivariate normal probability density function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_mvn_doc_default_callparams)s + + Returns + ------- + pdf : ndarray or scalar + Log of the probability density function evaluated at `x` + + Notes + ----- + %(_mvn_doc_callparams_note)s + + """ + params = self._process_parameters(mean, cov, allow_singular) + dim, mean, cov_object = params + x = self._process_quantiles(x, dim) + out = self._logpdf(x, mean, cov_object) + if np.any(cov_object.rank < dim): + out_of_bounds = ~cov_object._support_mask(x-mean) + out[out_of_bounds] = -np.inf + return _squeeze_output(out) + + def pdf(self, x, mean=None, cov=1, allow_singular=False): + """Multivariate normal probability density function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_mvn_doc_default_callparams)s + + Returns + ------- + pdf : ndarray or scalar + Probability density function evaluated at `x` + + Notes + ----- + %(_mvn_doc_callparams_note)s + + """ + params = self._process_parameters(mean, cov, allow_singular) + dim, mean, cov_object = params + x = self._process_quantiles(x, dim) + out = np.exp(self._logpdf(x, mean, cov_object)) + if np.any(cov_object.rank < dim): + out_of_bounds = ~cov_object._support_mask(x-mean) + out[out_of_bounds] = 0.0 + return _squeeze_output(out) + + def _cdf(self, x, mean, cov, maxpts, abseps, releps, lower_limit): + """Multivariate normal cumulative distribution function. + + Parameters + ---------- + x : ndarray + Points at which to evaluate the cumulative distribution function. + mean : ndarray + Mean of the distribution + cov : array_like + Covariance matrix of the distribution + maxpts : integer + The maximum number of points to use for integration + abseps : float + Absolute error tolerance + releps : float + Relative error tolerance + lower_limit : array_like, optional + Lower limit of integration of the cumulative distribution function. + Default is negative infinity. Must be broadcastable with `x`. + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'cdf' instead. + + + .. versionadded:: 1.0.0 + + """ + lower = (np.full(mean.shape, -np.inf) + if lower_limit is None else lower_limit) + # In 2d, _mvn.mvnun accepts input in which `lower` bound elements + # are greater than `x`. Not so in other dimensions. Fix this by + # ensuring that lower bounds are indeed lower when passed, then + # set signs of resulting CDF manually. 
+ b, a = np.broadcast_arrays(x, lower) + i_swap = b < a + signs = (-1)**(i_swap.sum(axis=-1)) # odd # of swaps -> negative + a, b = a.copy(), b.copy() + a[i_swap], b[i_swap] = b[i_swap], a[i_swap] + n = x.shape[-1] + limits = np.concatenate((a, b), axis=-1) + + # mvnun expects 1-d arguments, so process points sequentially + def func1d(limits): + return _mvn.mvnun(limits[:n], limits[n:], mean, cov, + maxpts, abseps, releps)[0] + + out = np.apply_along_axis(func1d, -1, limits) * signs + return _squeeze_output(out) + + def logcdf(self, x, mean=None, cov=1, allow_singular=False, maxpts=None, + abseps=1e-5, releps=1e-5, *, lower_limit=None): + """Log of the multivariate normal cumulative distribution function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_mvn_doc_default_callparams)s + maxpts : integer, optional + The maximum number of points to use for integration + (default `1000000*dim`) + abseps : float, optional + Absolute error tolerance (default 1e-5) + releps : float, optional + Relative error tolerance (default 1e-5) + lower_limit : array_like, optional + Lower limit of integration of the cumulative distribution function. + Default is negative infinity. Must be broadcastable with `x`. + + Returns + ------- + cdf : ndarray or scalar + Log of the cumulative distribution function evaluated at `x` + + Notes + ----- + %(_mvn_doc_callparams_note)s + + .. versionadded:: 1.0.0 + + """ + params = self._process_parameters(mean, cov, allow_singular) + dim, mean, cov_object = params + cov = cov_object.covariance + x = self._process_quantiles(x, dim) + if not maxpts: + maxpts = 1000000 * dim + cdf = self._cdf(x, mean, cov, maxpts, abseps, releps, lower_limit) + # the log of a negative real is complex, and cdf can be negative + # if lower limit is greater than upper limit + cdf = cdf + 0j if np.any(cdf < 0) else cdf + out = np.log(cdf) + return out + + def cdf(self, x, mean=None, cov=1, allow_singular=False, maxpts=None, + abseps=1e-5, releps=1e-5, *, lower_limit=None): + """Multivariate normal cumulative distribution function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_mvn_doc_default_callparams)s + maxpts : integer, optional + The maximum number of points to use for integration + (default `1000000*dim`) + abseps : float, optional + Absolute error tolerance (default 1e-5) + releps : float, optional + Relative error tolerance (default 1e-5) + lower_limit : array_like, optional + Lower limit of integration of the cumulative distribution function. + Default is negative infinity. Must be broadcastable with `x`. + + Returns + ------- + cdf : ndarray or scalar + Cumulative distribution function evaluated at `x` + + Notes + ----- + %(_mvn_doc_callparams_note)s + + .. versionadded:: 1.0.0 + + """ + params = self._process_parameters(mean, cov, allow_singular) + dim, mean, cov_object = params + cov = cov_object.covariance + x = self._process_quantiles(x, dim) + if not maxpts: + maxpts = 1000000 * dim + out = self._cdf(x, mean, cov, maxpts, abseps, releps, lower_limit) + return out + + def rvs(self, mean=None, cov=1, size=1, random_state=None): + """Draw random samples from a multivariate normal distribution. + + Parameters + ---------- + %(_mvn_doc_default_callparams)s + size : integer, optional + Number of samples to draw (default 1). 
+ %(_doc_random_state)s + + Returns + ------- + rvs : ndarray or scalar + Random variates of size (`size`, `N`), where `N` is the + dimension of the random variable. + + Notes + ----- + %(_mvn_doc_callparams_note)s + + """ + dim, mean, cov_object = self._process_parameters(mean, cov) + random_state = self._get_random_state(random_state) + + if isinstance(cov_object, _covariance.CovViaPSD): + cov = cov_object.covariance + out = random_state.multivariate_normal(mean, cov, size) + out = _squeeze_output(out) + else: + size = size or tuple() + if not np.iterable(size): + size = (size,) + shape = tuple(size) + (cov_object.shape[-1],) + x = random_state.normal(size=shape) + out = mean + cov_object.colorize(x) + return out + + def entropy(self, mean=None, cov=1): + """Compute the differential entropy of the multivariate normal. + + Parameters + ---------- + %(_mvn_doc_default_callparams)s + + Returns + ------- + h : scalar + Entropy of the multivariate normal distribution + + Notes + ----- + %(_mvn_doc_callparams_note)s + + """ + dim, mean, cov_object = self._process_parameters(mean, cov) + return 0.5 * (cov_object.rank * (_LOG_2PI + 1) + cov_object.log_pdet) + + def fit(self, x, fix_mean=None, fix_cov=None): + """Fit a multivariate normal distribution to data. + + Parameters + ---------- + x : ndarray (m, n) + Data the distribution is fitted to. Must have two axes. + The first axis of length `m` represents the number of vectors + the distribution is fitted to. The second axis of length `n` + determines the dimensionality of the fitted distribution. + fix_mean : ndarray(n, ) + Fixed mean vector. Must have length `n`. + fix_cov: ndarray (n, n) + Fixed covariance matrix. Must have shape `(n, n)`. + + Returns + ------- + mean : ndarray (n, ) + Maximum likelihood estimate of the mean vector + cov : ndarray (n, n) + Maximum likelihood estimate of the covariance matrix + + """ + # input validation for data to be fitted + x = np.asarray(x) + if x.ndim != 2: + raise ValueError("`x` must be two-dimensional.") + + n_vectors, dim = x.shape + + # parameter estimation + # reference: https://home.ttic.edu/~shubhendu/Slides/Estimation.pdf + if fix_mean is not None: + # input validation for `fix_mean` + fix_mean = np.atleast_1d(fix_mean) + if fix_mean.shape != (dim, ): + msg = ("`fix_mean` must be a one-dimensional array the same " + "length as the dimensionality of the vectors `x`.") + raise ValueError(msg) + mean = fix_mean + else: + mean = x.mean(axis=0) + + if fix_cov is not None: + # input validation for `fix_cov` + fix_cov = np.atleast_2d(fix_cov) + # validate shape + if fix_cov.shape != (dim, dim): + msg = ("`fix_cov` must be a two-dimensional square array " + "of same side length as the dimensionality of the " + "vectors `x`.") + raise ValueError(msg) + # validate positive semidefiniteness + # a trimmed down copy from _PSD + s, u = scipy.linalg.eigh(fix_cov, lower=True, check_finite=True) + eps = _eigvalsh_to_eps(s) + if np.min(s) < -eps: + msg = "`fix_cov` must be symmetric positive semidefinite." + raise ValueError(msg) + cov = fix_cov + else: + centered_data = x - mean + cov = centered_data.T @ centered_data / n_vectors + return mean, cov + + +multivariate_normal = multivariate_normal_gen() + + +class multivariate_normal_frozen(multi_rv_frozen): + def __init__(self, mean=None, cov=1, allow_singular=False, seed=None, + maxpts=None, abseps=1e-5, releps=1e-5): + """Create a frozen multivariate normal distribution. 
+ + Parameters + ---------- + mean : array_like, default: ``[0]`` + Mean of the distribution. + cov : array_like, default: ``[1]`` + Symmetric positive (semi)definite covariance matrix of the + distribution. + allow_singular : bool, default: ``False`` + Whether to allow a singular covariance matrix. + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + maxpts : integer, optional + The maximum number of points to use for integration of the + cumulative distribution function (default `1000000*dim`) + abseps : float, optional + Absolute error tolerance for the cumulative distribution function + (default 1e-5) + releps : float, optional + Relative error tolerance for the cumulative distribution function + (default 1e-5) + + Examples + -------- + When called with the default parameters, this will create a 1D random + variable with mean 0 and covariance 1: + + >>> from scipy.stats import multivariate_normal + >>> r = multivariate_normal() + >>> r.mean + array([ 0.]) + >>> r.cov + array([[1.]]) + + """ # numpy/numpydoc#87 # noqa: E501 + self._dist = multivariate_normal_gen(seed) + self.dim, self.mean, self.cov_object = ( + self._dist._process_parameters(mean, cov, allow_singular)) + self.allow_singular = allow_singular or self.cov_object._allow_singular + if not maxpts: + maxpts = 1000000 * self.dim + self.maxpts = maxpts + self.abseps = abseps + self.releps = releps + + @property + def cov(self): + return self.cov_object.covariance + + def logpdf(self, x): + x = self._dist._process_quantiles(x, self.dim) + out = self._dist._logpdf(x, self.mean, self.cov_object) + if np.any(self.cov_object.rank < self.dim): + out_of_bounds = ~self.cov_object._support_mask(x-self.mean) + out[out_of_bounds] = -np.inf + return _squeeze_output(out) + + def pdf(self, x): + return np.exp(self.logpdf(x)) + + def logcdf(self, x, *, lower_limit=None): + cdf = self.cdf(x, lower_limit=lower_limit) + # the log of a negative real is complex, and cdf can be negative + # if lower limit is greater than upper limit + cdf = cdf + 0j if np.any(cdf < 0) else cdf + out = np.log(cdf) + return out + + def cdf(self, x, *, lower_limit=None): + x = self._dist._process_quantiles(x, self.dim) + out = self._dist._cdf(x, self.mean, self.cov_object.covariance, + self.maxpts, self.abseps, self.releps, + lower_limit) + return _squeeze_output(out) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(self.mean, self.cov_object, size, random_state) + + def entropy(self): + """Computes the differential entropy of the multivariate normal. 
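+
+        The entropy is computed as ``0.5 * (rank * (log(2*pi) + 1) +
+        log_pdet)``, where ``rank`` and ``log_pdet`` are the rank and the log
+        of the pseudo-determinant of the covariance matrix, so singular
+        covariance matrices are handled as well.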
+ + Returns + ------- + h : scalar + Entropy of the multivariate normal distribution + + """ + log_pdet = self.cov_object.log_pdet + rank = self.cov_object.rank + return 0.5 * (rank * (_LOG_2PI + 1) + log_pdet) + + +# Set frozen generator docstrings from corresponding docstrings in +# multivariate_normal_gen and fill in default strings in class docstrings +for name in ['logpdf', 'pdf', 'logcdf', 'cdf', 'rvs']: + method = multivariate_normal_gen.__dict__[name] + method_frozen = multivariate_normal_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat(method.__doc__, + mvn_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, mvn_docdict_params) + +_matnorm_doc_default_callparams = """\ +mean : array_like, optional + Mean of the distribution (default: `None`) +rowcov : array_like, optional + Among-row covariance matrix of the distribution (default: `1`) +colcov : array_like, optional + Among-column covariance matrix of the distribution (default: `1`) +""" + +_matnorm_doc_callparams_note = """\ +If `mean` is set to `None` then a matrix of zeros is used for the mean. +The dimensions of this matrix are inferred from the shape of `rowcov` and +`colcov`, if these are provided, or set to `1` if ambiguous. + +`rowcov` and `colcov` can be two-dimensional array_likes specifying the +covariance matrices directly. Alternatively, a one-dimensional array will +be be interpreted as the entries of a diagonal matrix, and a scalar or +zero-dimensional array will be interpreted as this value times the +identity matrix. +""" + +_matnorm_doc_frozen_callparams = "" + +_matnorm_doc_frozen_callparams_note = """\ +See class definition for a detailed description of parameters.""" + +matnorm_docdict_params = { + '_matnorm_doc_default_callparams': _matnorm_doc_default_callparams, + '_matnorm_doc_callparams_note': _matnorm_doc_callparams_note, + '_doc_random_state': _doc_random_state +} + +matnorm_docdict_noparams = { + '_matnorm_doc_default_callparams': _matnorm_doc_frozen_callparams, + '_matnorm_doc_callparams_note': _matnorm_doc_frozen_callparams_note, + '_doc_random_state': _doc_random_state +} + + +class matrix_normal_gen(multi_rv_generic): + r"""A matrix normal random variable. + + The `mean` keyword specifies the mean. The `rowcov` keyword specifies the + among-row covariance matrix. The 'colcov' keyword specifies the + among-column covariance matrix. + + Methods + ------- + pdf(X, mean=None, rowcov=1, colcov=1) + Probability density function. + logpdf(X, mean=None, rowcov=1, colcov=1) + Log of the probability density function. + rvs(mean=None, rowcov=1, colcov=1, size=1, random_state=None) + Draw random samples. + entropy(rowcol=1, colcov=1) + Differential entropy. + + Parameters + ---------- + %(_matnorm_doc_default_callparams)s + %(_doc_random_state)s + + Notes + ----- + %(_matnorm_doc_callparams_note)s + + The covariance matrices specified by `rowcov` and `colcov` must be + (symmetric) positive definite. If the samples in `X` are + :math:`m \times n`, then `rowcov` must be :math:`m \times m` and + `colcov` must be :math:`n \times n`. `mean` must be the same shape as `X`. + + The probability density function for `matrix_normal` is + + .. math:: + + f(X) = (2 \pi)^{-\frac{mn}{2}}|U|^{-\frac{n}{2}} |V|^{-\frac{m}{2}} + \exp\left( -\frac{1}{2} \mathrm{Tr}\left[ U^{-1} (X-M) V^{-1} + (X-M)^T \right] \right), + + where :math:`M` is the mean, :math:`U` the among-row covariance matrix, + :math:`V` the among-column covariance matrix. 
+ + The `allow_singular` behaviour of the `multivariate_normal` + distribution is not currently supported. Covariance matrices must be + full rank. + + The `matrix_normal` distribution is closely related to the + `multivariate_normal` distribution. Specifically, :math:`\mathrm{Vec}(X)` + (the vector formed by concatenating the columns of :math:`X`) has a + multivariate normal distribution with mean :math:`\mathrm{Vec}(M)` + and covariance :math:`V \otimes U` (where :math:`\otimes` is the Kronecker + product). Sampling and pdf evaluation are + :math:`\mathcal{O}(m^3 + n^3 + m^2 n + m n^2)` for the matrix normal, but + :math:`\mathcal{O}(m^3 n^3)` for the equivalent multivariate normal, + making this equivalent form algorithmically inefficient. + + .. versionadded:: 0.17.0 + + Examples + -------- + + >>> import numpy as np + >>> from scipy.stats import matrix_normal + + >>> M = np.arange(6).reshape(3,2); M + array([[0, 1], + [2, 3], + [4, 5]]) + >>> U = np.diag([1,2,3]); U + array([[1, 0, 0], + [0, 2, 0], + [0, 0, 3]]) + >>> V = 0.3*np.identity(2); V + array([[ 0.3, 0. ], + [ 0. , 0.3]]) + >>> X = M + 0.1; X + array([[ 0.1, 1.1], + [ 2.1, 3.1], + [ 4.1, 5.1]]) + >>> matrix_normal.pdf(X, mean=M, rowcov=U, colcov=V) + 0.023410202050005054 + + >>> # Equivalent multivariate normal + >>> from scipy.stats import multivariate_normal + >>> vectorised_X = X.T.flatten() + >>> equiv_mean = M.T.flatten() + >>> equiv_cov = np.kron(V,U) + >>> multivariate_normal.pdf(vectorised_X, mean=equiv_mean, cov=equiv_cov) + 0.023410202050005054 + + Alternatively, the object may be called (as a function) to fix the mean + and covariance parameters, returning a "frozen" matrix normal + random variable: + + >>> rv = matrix_normal(mean=None, rowcov=1, colcov=1) + >>> # Frozen object with the same methods but holding the given + >>> # mean and covariance fixed. + + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__, matnorm_docdict_params) + + def __call__(self, mean=None, rowcov=1, colcov=1, seed=None): + """Create a frozen matrix normal distribution. + + See `matrix_normal_frozen` for more information. + + """ + return matrix_normal_frozen(mean, rowcov, colcov, seed=seed) + + def _process_parameters(self, mean, rowcov, colcov): + """ + Infer dimensionality from mean or covariance matrices. Handle + defaults. Ensure compatible dimensions. 
+ """ + + # Process mean + if mean is not None: + mean = np.asarray(mean, dtype=float) + meanshape = mean.shape + if len(meanshape) != 2: + raise ValueError("Array `mean` must be two dimensional.") + if np.any(meanshape == 0): + raise ValueError("Array `mean` has invalid shape.") + + # Process among-row covariance + rowcov = np.asarray(rowcov, dtype=float) + if rowcov.ndim == 0: + if mean is not None: + rowcov = rowcov * np.identity(meanshape[0]) + else: + rowcov = rowcov * np.identity(1) + elif rowcov.ndim == 1: + rowcov = np.diag(rowcov) + rowshape = rowcov.shape + if len(rowshape) != 2: + raise ValueError("`rowcov` must be a scalar or a 2D array.") + if rowshape[0] != rowshape[1]: + raise ValueError("Array `rowcov` must be square.") + if rowshape[0] == 0: + raise ValueError("Array `rowcov` has invalid shape.") + numrows = rowshape[0] + + # Process among-column covariance + colcov = np.asarray(colcov, dtype=float) + if colcov.ndim == 0: + if mean is not None: + colcov = colcov * np.identity(meanshape[1]) + else: + colcov = colcov * np.identity(1) + elif colcov.ndim == 1: + colcov = np.diag(colcov) + colshape = colcov.shape + if len(colshape) != 2: + raise ValueError("`colcov` must be a scalar or a 2D array.") + if colshape[0] != colshape[1]: + raise ValueError("Array `colcov` must be square.") + if colshape[0] == 0: + raise ValueError("Array `colcov` has invalid shape.") + numcols = colshape[0] + + # Ensure mean and covariances compatible + if mean is not None: + if meanshape[0] != numrows: + raise ValueError("Arrays `mean` and `rowcov` must have the " + "same number of rows.") + if meanshape[1] != numcols: + raise ValueError("Arrays `mean` and `colcov` must have the " + "same number of columns.") + else: + mean = np.zeros((numrows, numcols)) + + dims = (numrows, numcols) + + return dims, mean, rowcov, colcov + + def _process_quantiles(self, X, dims): + """ + Adjust quantiles array so that last two axes labels the components of + each data point. + """ + X = np.asarray(X, dtype=float) + if X.ndim == 2: + X = X[np.newaxis, :] + if X.shape[-2:] != dims: + raise ValueError("The shape of array `X` is not compatible " + "with the distribution parameters.") + return X + + def _logpdf(self, dims, X, mean, row_prec_rt, log_det_rowcov, + col_prec_rt, log_det_colcov): + """Log of the matrix normal probability density function. + + Parameters + ---------- + dims : tuple + Dimensions of the matrix variates + X : ndarray + Points at which to evaluate the log of the probability + density function + mean : ndarray + Mean of the distribution + row_prec_rt : ndarray + A decomposition such that np.dot(row_prec_rt, row_prec_rt.T) + is the inverse of the among-row covariance matrix + log_det_rowcov : float + Logarithm of the determinant of the among-row covariance matrix + col_prec_rt : ndarray + A decomposition such that np.dot(col_prec_rt, col_prec_rt.T) + is the inverse of the among-column covariance matrix + log_det_colcov : float + Logarithm of the determinant of the among-column covariance matrix + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'logpdf' instead. 
+ + """ + numrows, numcols = dims + roll_dev = np.moveaxis(X-mean, -1, 0) + scale_dev = np.tensordot(col_prec_rt.T, + np.dot(roll_dev, row_prec_rt), 1) + maha = np.sum(np.sum(np.square(scale_dev), axis=-1), axis=0) + return -0.5 * (numrows*numcols*_LOG_2PI + numcols*log_det_rowcov + + numrows*log_det_colcov + maha) + + def logpdf(self, X, mean=None, rowcov=1, colcov=1): + """Log of the matrix normal probability density function. + + Parameters + ---------- + X : array_like + Quantiles, with the last two axes of `X` denoting the components. + %(_matnorm_doc_default_callparams)s + + Returns + ------- + logpdf : ndarray + Log of the probability density function evaluated at `X` + + Notes + ----- + %(_matnorm_doc_callparams_note)s + + """ + dims, mean, rowcov, colcov = self._process_parameters(mean, rowcov, + colcov) + X = self._process_quantiles(X, dims) + rowpsd = _PSD(rowcov, allow_singular=False) + colpsd = _PSD(colcov, allow_singular=False) + out = self._logpdf(dims, X, mean, rowpsd.U, rowpsd.log_pdet, colpsd.U, + colpsd.log_pdet) + return _squeeze_output(out) + + def pdf(self, X, mean=None, rowcov=1, colcov=1): + """Matrix normal probability density function. + + Parameters + ---------- + X : array_like + Quantiles, with the last two axes of `X` denoting the components. + %(_matnorm_doc_default_callparams)s + + Returns + ------- + pdf : ndarray + Probability density function evaluated at `X` + + Notes + ----- + %(_matnorm_doc_callparams_note)s + + """ + return np.exp(self.logpdf(X, mean, rowcov, colcov)) + + def rvs(self, mean=None, rowcov=1, colcov=1, size=1, random_state=None): + """Draw random samples from a matrix normal distribution. + + Parameters + ---------- + %(_matnorm_doc_default_callparams)s + size : integer, optional + Number of samples to draw (default 1). + %(_doc_random_state)s + + Returns + ------- + rvs : ndarray or scalar + Random variates of size (`size`, `dims`), where `dims` is the + dimension of the random matrices. + + Notes + ----- + %(_matnorm_doc_callparams_note)s + + """ + size = int(size) + dims, mean, rowcov, colcov = self._process_parameters(mean, rowcov, + colcov) + rowchol = scipy.linalg.cholesky(rowcov, lower=True) + colchol = scipy.linalg.cholesky(colcov, lower=True) + random_state = self._get_random_state(random_state) + # We aren't generating standard normal variates with size=(size, + # dims[0], dims[1]) directly to ensure random variates remain backwards + # compatible. See https://github.com/scipy/scipy/pull/12312 for more + # details. + std_norm = random_state.standard_normal( + size=(dims[1], size, dims[0]) + ).transpose(1, 2, 0) + out = mean + np.einsum('jp,ipq,kq->ijk', + rowchol, std_norm, colchol, + optimize=True) + if size == 1: + out = out.reshape(mean.shape) + return out + + def entropy(self, rowcov=1, colcov=1): + """Log of the matrix normal probability density function. 
+ + Parameters + ---------- + rowcov : array_like, optional + Among-row covariance matrix of the distribution (default: `1`) + colcov : array_like, optional + Among-column covariance matrix of the distribution (default: `1`) + + Returns + ------- + entropy : float + Entropy of the distribution + + Notes + ----- + %(_matnorm_doc_callparams_note)s + + """ + dummy_mean = np.zeros((rowcov.shape[0], colcov.shape[0])) + dims, _, rowcov, colcov = self._process_parameters(dummy_mean, + rowcov, + colcov) + rowpsd = _PSD(rowcov, allow_singular=False) + colpsd = _PSD(colcov, allow_singular=False) + + return self._entropy(dims, rowpsd.log_pdet, colpsd.log_pdet) + + def _entropy(self, dims, row_cov_logdet, col_cov_logdet): + n, p = dims + return (0.5 * n * p * (1 + _LOG_2PI) + 0.5 * p * row_cov_logdet + + 0.5 * n * col_cov_logdet) + + +matrix_normal = matrix_normal_gen() + + +class matrix_normal_frozen(multi_rv_frozen): + """ + Create a frozen matrix normal distribution. + + Parameters + ---------- + %(_matnorm_doc_default_callparams)s + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is `None` the `~np.random.RandomState` singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, seeded + with seed. + If `seed` is already a ``RandomState`` or ``Generator`` instance, + then that object is used. + Default is `None`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import matrix_normal + + >>> distn = matrix_normal(mean=np.zeros((3,3))) + >>> X = distn.rvs(); X + array([[-0.02976962, 0.93339138, -0.09663178], + [ 0.67405524, 0.28250467, -0.93308929], + [-0.31144782, 0.74535536, 1.30412916]]) + >>> distn.pdf(X) + 2.5160642368346784e-05 + >>> distn.logpdf(X) + -10.590229595124615 + """ + + def __init__(self, mean=None, rowcov=1, colcov=1, seed=None): + self._dist = matrix_normal_gen(seed) + self.dims, self.mean, self.rowcov, self.colcov = \ + self._dist._process_parameters(mean, rowcov, colcov) + self.rowpsd = _PSD(self.rowcov, allow_singular=False) + self.colpsd = _PSD(self.colcov, allow_singular=False) + + def logpdf(self, X): + X = self._dist._process_quantiles(X, self.dims) + out = self._dist._logpdf(self.dims, X, self.mean, self.rowpsd.U, + self.rowpsd.log_pdet, self.colpsd.U, + self.colpsd.log_pdet) + return _squeeze_output(out) + + def pdf(self, X): + return np.exp(self.logpdf(X)) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(self.mean, self.rowcov, self.colcov, size, + random_state) + + def entropy(self): + return self._dist._entropy(self.dims, self.rowpsd.log_pdet, + self.colpsd.log_pdet) + + +# Set frozen generator docstrings from corresponding docstrings in +# matrix_normal_gen and fill in default strings in class docstrings +for name in ['logpdf', 'pdf', 'rvs', 'entropy']: + method = matrix_normal_gen.__dict__[name] + method_frozen = matrix_normal_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat(method.__doc__, + matnorm_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, matnorm_docdict_params) + +_dirichlet_doc_default_callparams = """\ +alpha : array_like + The concentration parameters. The number of entries determines the + dimensionality of the distribution. 
+""" +_dirichlet_doc_frozen_callparams = "" + +_dirichlet_doc_frozen_callparams_note = """\ +See class definition for a detailed description of parameters.""" + +dirichlet_docdict_params = { + '_dirichlet_doc_default_callparams': _dirichlet_doc_default_callparams, + '_doc_random_state': _doc_random_state +} + +dirichlet_docdict_noparams = { + '_dirichlet_doc_default_callparams': _dirichlet_doc_frozen_callparams, + '_doc_random_state': _doc_random_state +} + + +def _dirichlet_check_parameters(alpha): + alpha = np.asarray(alpha) + if np.min(alpha) <= 0: + raise ValueError("All parameters must be greater than 0") + elif alpha.ndim != 1: + raise ValueError("Parameter vector 'a' must be one dimensional, " + f"but a.shape = {alpha.shape}.") + return alpha + + +def _dirichlet_check_input(alpha, x): + x = np.asarray(x) + + if x.shape[0] + 1 != alpha.shape[0] and x.shape[0] != alpha.shape[0]: + raise ValueError("Vector 'x' must have either the same number " + "of entries as, or one entry fewer than, " + f"parameter vector 'a', but alpha.shape = {alpha.shape} " + f"and x.shape = {x.shape}.") + + if x.shape[0] != alpha.shape[0]: + xk = np.array([1 - np.sum(x, 0)]) + if xk.ndim == 1: + x = np.append(x, xk) + elif xk.ndim == 2: + x = np.vstack((x, xk)) + else: + raise ValueError("The input must be one dimensional or a two " + "dimensional matrix containing the entries.") + + if np.min(x) < 0: + raise ValueError("Each entry in 'x' must be greater than or equal " + "to zero.") + + if np.max(x) > 1: + raise ValueError("Each entry in 'x' must be smaller or equal one.") + + # Check x_i > 0 or alpha_i > 1 + xeq0 = (x == 0) + alphalt1 = (alpha < 1) + if x.shape != alpha.shape: + alphalt1 = np.repeat(alphalt1, x.shape[-1], axis=-1).reshape(x.shape) + chk = np.logical_and(xeq0, alphalt1) + + if np.sum(chk): + raise ValueError("Each entry in 'x' must be greater than zero if its " + "alpha is less than one.") + + if (np.abs(np.sum(x, 0) - 1.0) > 10e-10).any(): + raise ValueError("The input vector 'x' must lie within the normal " + "simplex. but np.sum(x, 0) = %s." % np.sum(x, 0)) + + return x + + +def _lnB(alpha): + r"""Internal helper function to compute the log of the useful quotient. + + .. math:: + + B(\alpha) = \frac{\prod_{i=1}{K}\Gamma(\alpha_i)} + {\Gamma\left(\sum_{i=1}^{K} \alpha_i \right)} + + Parameters + ---------- + %(_dirichlet_doc_default_callparams)s + + Returns + ------- + B : scalar + Helper quotient, internal use only + + """ + return np.sum(gammaln(alpha)) - gammaln(np.sum(alpha)) + + +class dirichlet_gen(multi_rv_generic): + r"""A Dirichlet random variable. + + The ``alpha`` keyword specifies the concentration parameters of the + distribution. + + .. versionadded:: 0.15.0 + + Methods + ------- + pdf(x, alpha) + Probability density function. + logpdf(x, alpha) + Log of the probability density function. + rvs(alpha, size=1, random_state=None) + Draw random samples from a Dirichlet distribution. + mean(alpha) + The mean of the Dirichlet distribution + var(alpha) + The variance of the Dirichlet distribution + cov(alpha) + The covariance of the Dirichlet distribution + entropy(alpha) + Compute the differential entropy of the Dirichlet distribution. + + Parameters + ---------- + %(_dirichlet_doc_default_callparams)s + %(_doc_random_state)s + + Notes + ----- + Each :math:`\alpha` entry must be positive. The distribution has only + support on the simplex defined by + + .. math:: + \sum_{i=1}^{K} x_i = 1 + + where :math:`0 < x_i < 1`. 
+ + If the quantiles don't lie within the simplex, a ValueError is raised. + + The probability density function for `dirichlet` is + + .. math:: + + f(x) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1} + + where + + .. math:: + + \mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)} + {\Gamma\bigl(\sum_{i=1}^K \alpha_i\bigr)} + + and :math:`\boldsymbol\alpha=(\alpha_1,\ldots,\alpha_K)`, the + concentration parameters and :math:`K` is the dimension of the space + where :math:`x` takes values. + + Note that the `dirichlet` interface is somewhat inconsistent. + The array returned by the rvs function is transposed + with respect to the format expected by the pdf and logpdf. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import dirichlet + + Generate a dirichlet random variable + + >>> quantiles = np.array([0.2, 0.2, 0.6]) # specify quantiles + >>> alpha = np.array([0.4, 5, 15]) # specify concentration parameters + >>> dirichlet.pdf(quantiles, alpha) + 0.2843831684937255 + + The same PDF but following a log scale + + >>> dirichlet.logpdf(quantiles, alpha) + -1.2574327653159187 + + Once we specify the dirichlet distribution + we can then calculate quantities of interest + + >>> dirichlet.mean(alpha) # get the mean of the distribution + array([0.01960784, 0.24509804, 0.73529412]) + >>> dirichlet.var(alpha) # get variance + array([0.00089829, 0.00864603, 0.00909517]) + >>> dirichlet.entropy(alpha) # calculate the differential entropy + -4.3280162474082715 + + We can also return random samples from the distribution + + >>> dirichlet.rvs(alpha, size=1, random_state=1) + array([[0.00766178, 0.24670518, 0.74563305]]) + >>> dirichlet.rvs(alpha, size=2, random_state=2) + array([[0.01639427, 0.1292273 , 0.85437844], + [0.00156917, 0.19033695, 0.80809388]]) + + Alternatively, the object may be called (as a function) to fix + concentration parameters, returning a "frozen" Dirichlet + random variable: + + >>> rv = dirichlet(alpha) + >>> # Frozen object with the same methods but holding the given + >>> # concentration parameters fixed. + + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__, dirichlet_docdict_params) + + def __call__(self, alpha, seed=None): + return dirichlet_frozen(alpha, seed=seed) + + def _logpdf(self, x, alpha): + """Log of the Dirichlet probability density function. + + Parameters + ---------- + x : ndarray + Points at which to evaluate the log of the probability + density function + %(_dirichlet_doc_default_callparams)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'logpdf' instead. + + """ + lnB = _lnB(alpha) + return - lnB + np.sum((xlogy(alpha - 1, x.T)).T, 0) + + def logpdf(self, x, alpha): + """Log of the Dirichlet probability density function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_dirichlet_doc_default_callparams)s + + Returns + ------- + pdf : ndarray or scalar + Log of the probability density function evaluated at `x`. + + """ + alpha = _dirichlet_check_parameters(alpha) + x = _dirichlet_check_input(alpha, x) + + out = self._logpdf(x, alpha) + return _squeeze_output(out) + + def pdf(self, x, alpha): + """The Dirichlet probability density function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. 
+ %(_dirichlet_doc_default_callparams)s + + Returns + ------- + pdf : ndarray or scalar + The probability density function evaluated at `x`. + + """ + alpha = _dirichlet_check_parameters(alpha) + x = _dirichlet_check_input(alpha, x) + + out = np.exp(self._logpdf(x, alpha)) + return _squeeze_output(out) + + def mean(self, alpha): + """Mean of the Dirichlet distribution. + + Parameters + ---------- + %(_dirichlet_doc_default_callparams)s + + Returns + ------- + mu : ndarray or scalar + Mean of the Dirichlet distribution. + + """ + alpha = _dirichlet_check_parameters(alpha) + + out = alpha / (np.sum(alpha)) + return _squeeze_output(out) + + def var(self, alpha): + """Variance of the Dirichlet distribution. + + Parameters + ---------- + %(_dirichlet_doc_default_callparams)s + + Returns + ------- + v : ndarray or scalar + Variance of the Dirichlet distribution. + + """ + + alpha = _dirichlet_check_parameters(alpha) + + alpha0 = np.sum(alpha) + out = (alpha * (alpha0 - alpha)) / ((alpha0 * alpha0) * (alpha0 + 1)) + return _squeeze_output(out) + + def cov(self, alpha): + """Covariance matrix of the Dirichlet distribution. + + Parameters + ---------- + %(_dirichlet_doc_default_callparams)s + + Returns + ------- + cov : ndarray + The covariance matrix of the distribution. + """ + + alpha = _dirichlet_check_parameters(alpha) + alpha0 = np.sum(alpha) + a = alpha / alpha0 + + cov = (np.diag(a) - np.outer(a, a)) / (alpha0 + 1) + return _squeeze_output(cov) + + def entropy(self, alpha): + """ + Differential entropy of the Dirichlet distribution. + + Parameters + ---------- + %(_dirichlet_doc_default_callparams)s + + Returns + ------- + h : scalar + Entropy of the Dirichlet distribution + + """ + + alpha = _dirichlet_check_parameters(alpha) + + alpha0 = np.sum(alpha) + lnB = _lnB(alpha) + K = alpha.shape[0] + + out = lnB + (alpha0 - K) * scipy.special.psi(alpha0) - np.sum( + (alpha - 1) * scipy.special.psi(alpha)) + return _squeeze_output(out) + + def rvs(self, alpha, size=1, random_state=None): + """ + Draw random samples from a Dirichlet distribution. + + Parameters + ---------- + %(_dirichlet_doc_default_callparams)s + size : int, optional + Number of samples to draw (default 1). + %(_doc_random_state)s + + Returns + ------- + rvs : ndarray or scalar + Random variates of size (`size`, `N`), where `N` is the + dimension of the random variable. 
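As noted above, `rvs` returns one sample per row while `pdf`/`logpdf` expect the components along the first axis, so evaluating the density of drawn samples requires a transpose. A short sketch with arbitrary parameters and seed:

    import numpy as np
    from scipy.stats import dirichlet

    alpha = np.array([0.4, 5.0, 15.0])
    samples = dirichlet.rvs(alpha, size=1000, random_state=0)  # shape (1000, 3)

    densities = dirichlet.pdf(samples.T, alpha)                # note the transpose
    assert densities.shape == (1000,)

    # The sample mean approaches alpha / alpha.sum().
    assert np.allclose(samples.mean(axis=0), dirichlet.mean(alpha), atol=0.02)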
+ + """ + alpha = _dirichlet_check_parameters(alpha) + random_state = self._get_random_state(random_state) + return random_state.dirichlet(alpha, size=size) + + +dirichlet = dirichlet_gen() + + +class dirichlet_frozen(multi_rv_frozen): + def __init__(self, alpha, seed=None): + self.alpha = _dirichlet_check_parameters(alpha) + self._dist = dirichlet_gen(seed) + + def logpdf(self, x): + return self._dist.logpdf(x, self.alpha) + + def pdf(self, x): + return self._dist.pdf(x, self.alpha) + + def mean(self): + return self._dist.mean(self.alpha) + + def var(self): + return self._dist.var(self.alpha) + + def cov(self): + return self._dist.cov(self.alpha) + + def entropy(self): + return self._dist.entropy(self.alpha) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(self.alpha, size, random_state) + + +# Set frozen generator docstrings from corresponding docstrings in +# multivariate_normal_gen and fill in default strings in class docstrings +for name in ['logpdf', 'pdf', 'rvs', 'mean', 'var', 'cov', 'entropy']: + method = dirichlet_gen.__dict__[name] + method_frozen = dirichlet_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat( + method.__doc__, dirichlet_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, dirichlet_docdict_params) + + +_wishart_doc_default_callparams = """\ +df : int + Degrees of freedom, must be greater than or equal to dimension of the + scale matrix +scale : array_like + Symmetric positive definite scale matrix of the distribution +""" + +_wishart_doc_callparams_note = "" + +_wishart_doc_frozen_callparams = "" + +_wishart_doc_frozen_callparams_note = """\ +See class definition for a detailed description of parameters.""" + +wishart_docdict_params = { + '_doc_default_callparams': _wishart_doc_default_callparams, + '_doc_callparams_note': _wishart_doc_callparams_note, + '_doc_random_state': _doc_random_state +} + +wishart_docdict_noparams = { + '_doc_default_callparams': _wishart_doc_frozen_callparams, + '_doc_callparams_note': _wishart_doc_frozen_callparams_note, + '_doc_random_state': _doc_random_state +} + + +class wishart_gen(multi_rv_generic): + r"""A Wishart random variable. + + The `df` keyword specifies the degrees of freedom. The `scale` keyword + specifies the scale matrix, which must be symmetric and positive definite. + In this context, the scale matrix is often interpreted in terms of a + multivariate normal precision matrix (the inverse of the covariance + matrix). These arguments must satisfy the relationship + ``df > scale.ndim - 1``, but see notes on using the `rvs` method with + ``df < scale.ndim``. + + Methods + ------- + pdf(x, df, scale) + Probability density function. + logpdf(x, df, scale) + Log of the probability density function. + rvs(df, scale, size=1, random_state=None) + Draw random samples from a Wishart distribution. + entropy() + Compute the differential entropy of the Wishart distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + %(_doc_random_state)s + + Raises + ------ + scipy.linalg.LinAlgError + If the scale matrix `scale` is not positive definite. + + See Also + -------- + invwishart, chi2 + + Notes + ----- + %(_doc_callparams_note)s + + The scale matrix `scale` must be a symmetric positive definite + matrix. Singular matrices, including the symmetric positive semi-definite + case, are not supported. Symmetry is not checked; only the lower triangular + portion is used. + + The Wishart distribution is often denoted + + .. 
math:: + + W_p(\nu, \Sigma) + + where :math:`\nu` is the degrees of freedom and :math:`\Sigma` is the + :math:`p \times p` scale matrix. + + The probability density function for `wishart` has support over positive + definite matrices :math:`S`; if :math:`S \sim W_p(\nu, \Sigma)`, then + its PDF is given by: + + .. math:: + + f(S) = \frac{|S|^{\frac{\nu - p - 1}{2}}}{2^{ \frac{\nu p}{2} } + |\Sigma|^\frac{\nu}{2} \Gamma_p \left ( \frac{\nu}{2} \right )} + \exp\left( -tr(\Sigma^{-1} S) / 2 \right) + + If :math:`S \sim W_p(\nu, \Sigma)` (Wishart) then + :math:`S^{-1} \sim W_p^{-1}(\nu, \Sigma^{-1})` (inverse Wishart). + + If the scale matrix is 1-dimensional and equal to one, then the Wishart + distribution :math:`W_1(\nu, 1)` collapses to the :math:`\chi^2(\nu)` + distribution. + + The algorithm [2]_ implemented by the `rvs` method may + produce numerically singular matrices with :math:`p - 1 < \nu < p`; the + user may wish to check for this condition and generate replacement samples + as necessary. + + + .. versionadded:: 0.16.0 + + References + ---------- + .. [1] M.L. Eaton, "Multivariate Statistics: A Vector Space Approach", + Wiley, 1983. + .. [2] W.B. Smith and R.R. Hocking, "Algorithm AS 53: Wishart Variate + Generator", Applied Statistics, vol. 21, pp. 341-345, 1972. + + Examples + -------- + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy.stats import wishart, chi2 + >>> x = np.linspace(1e-5, 8, 100) + >>> w = wishart.pdf(x, df=3, scale=1); w[:5] + array([ 0.00126156, 0.10892176, 0.14793434, 0.17400548, 0.1929669 ]) + >>> c = chi2.pdf(x, 3); c[:5] + array([ 0.00126156, 0.10892176, 0.14793434, 0.17400548, 0.1929669 ]) + >>> plt.plot(x, w) + >>> plt.show() + + The input quantiles can be any shape of array, as long as the last + axis labels the components. + + Alternatively, the object may be called (as a function) to fix the degrees + of freedom and scale parameters, returning a "frozen" Wishart random + variable: + + >>> rv = wishart(df=1, scale=1) + >>> # Frozen object with the same methods but holding the given + >>> # degrees of freedom and scale fixed. + + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__, wishart_docdict_params) + + def __call__(self, df=None, scale=None, seed=None): + """Create a frozen Wishart distribution. + + See `wishart_frozen` for more information. + """ + return wishart_frozen(df, scale, seed) + + def _process_parameters(self, df, scale): + if scale is None: + scale = 1.0 + scale = np.asarray(scale, dtype=float) + + if scale.ndim == 0: + scale = scale[np.newaxis, np.newaxis] + elif scale.ndim == 1: + scale = np.diag(scale) + elif scale.ndim == 2 and not scale.shape[0] == scale.shape[1]: + raise ValueError("Array 'scale' must be square if it is two" + " dimensional, but scale.scale = %s." + % str(scale.shape)) + elif scale.ndim > 2: + raise ValueError("Array 'scale' must be at most two-dimensional," + " but scale.ndim = %d" % scale.ndim) + + dim = scale.shape[0] + + if df is None: + df = dim + elif not np.isscalar(df): + raise ValueError("Degrees of freedom must be a scalar.") + elif df <= dim - 1: + raise ValueError("Degrees of freedom must be greater than the " + "dimension of scale matrix minus 1.") + + return dim, df, scale + + def _process_quantiles(self, x, dim): + """ + Adjust quantiles array so that last axis labels the components of + each data point. 
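Per `_process_parameters` above, a scalar `scale` is promoted to a 1x1 matrix, a 1-D `scale` is taken as the diagonal of the scale matrix, and `df` must exceed the dimension minus one. A quick sketch with arbitrary values:

    import numpy as np
    from scipy.stats import wishart

    # A 1-D scale is treated as a diagonal scale matrix.
    S = wishart.rvs(df=4, scale=[1.0, 2.0, 3.0], random_state=0)
    assert S.shape == (3, 3)
    assert np.allclose(wishart.mean(df=4, scale=[1.0, 2.0, 3.0]),
                       4 * np.diag([1.0, 2.0, 3.0]))

    # df <= dim - 1 is rejected with a ValueError.
    try:
        wishart.mean(df=1, scale=np.eye(3))
    except ValueError as err:
        print(err)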
+ """ + x = np.asarray(x, dtype=float) + + if x.ndim == 0: + x = x * np.eye(dim)[:, :, np.newaxis] + if x.ndim == 1: + if dim == 1: + x = x[np.newaxis, np.newaxis, :] + else: + x = np.diag(x)[:, :, np.newaxis] + elif x.ndim == 2: + if not x.shape[0] == x.shape[1]: + raise ValueError("Quantiles must be square if they are two" + " dimensional, but x.shape = %s." + % str(x.shape)) + x = x[:, :, np.newaxis] + elif x.ndim == 3: + if not x.shape[0] == x.shape[1]: + raise ValueError("Quantiles must be square in the first two" + " dimensions if they are three dimensional" + ", but x.shape = %s." % str(x.shape)) + elif x.ndim > 3: + raise ValueError("Quantiles must be at most two-dimensional with" + " an additional dimension for multiple" + "components, but x.ndim = %d" % x.ndim) + + # Now we have 3-dim array; should have shape [dim, dim, *] + if not x.shape[0:2] == (dim, dim): + raise ValueError('Quantiles have incompatible dimensions: should' + f' be {(dim, dim)}, got {x.shape[0:2]}.') + + return x + + def _process_size(self, size): + size = np.asarray(size) + + if size.ndim == 0: + size = size[np.newaxis] + elif size.ndim > 1: + raise ValueError('Size must be an integer or tuple of integers;' + ' thus must have dimension <= 1.' + ' Got size.ndim = %s' % str(tuple(size))) + n = size.prod() + shape = tuple(size) + + return n, shape + + def _logpdf(self, x, dim, df, scale, log_det_scale, C): + """Log of the Wishart probability density function. + + Parameters + ---------- + x : ndarray + Points at which to evaluate the log of the probability + density function + dim : int + Dimension of the scale matrix + df : int + Degrees of freedom + scale : ndarray + Scale matrix + log_det_scale : float + Logarithm of the determinant of the scale matrix + C : ndarray + Cholesky factorization of the scale matrix, lower triagular. + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'logpdf' instead. + + """ + # log determinant of x + # Note: x has components along the last axis, so that x.T has + # components alone the 0-th axis. Then since det(A) = det(A'), this + # gives us a 1-dim vector of determinants + + # Retrieve tr(scale^{-1} x) + log_det_x = np.empty(x.shape[-1]) + scale_inv_x = np.empty(x.shape) + tr_scale_inv_x = np.empty(x.shape[-1]) + for i in range(x.shape[-1]): + _, log_det_x[i] = self._cholesky_logdet(x[:, :, i]) + scale_inv_x[:, :, i] = scipy.linalg.cho_solve((C, True), x[:, :, i]) + tr_scale_inv_x[i] = scale_inv_x[:, :, i].trace() + + # Log PDF + out = ((0.5 * (df - dim - 1) * log_det_x - 0.5 * tr_scale_inv_x) - + (0.5 * df * dim * _LOG_2 + 0.5 * df * log_det_scale + + multigammaln(0.5*df, dim))) + + return out + + def logpdf(self, x, df, scale): + """Log of the Wishart probability density function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + Each quantile must be a symmetric positive definite matrix. + %(_doc_default_callparams)s + + Returns + ------- + pdf : ndarray + Log of the probability density function evaluated at `x` + + Notes + ----- + %(_doc_callparams_note)s + + """ + dim, df, scale = self._process_parameters(df, scale) + x = self._process_quantiles(x, dim) + + # Cholesky decomposition of scale, get log(det(scale)) + C, log_det_scale = self._cholesky_logdet(scale) + + out = self._logpdf(x, dim, df, scale, log_det_scale, C) + return _squeeze_output(out) + + def pdf(self, x, df, scale): + """Wishart probability density function. 
+ + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + Each quantile must be a symmetric positive definite matrix. + %(_doc_default_callparams)s + + Returns + ------- + pdf : ndarray + Probability density function evaluated at `x` + + Notes + ----- + %(_doc_callparams_note)s + + """ + return np.exp(self.logpdf(x, df, scale)) + + def _mean(self, dim, df, scale): + """Mean of the Wishart distribution. + + Parameters + ---------- + dim : int + Dimension of the scale matrix + %(_doc_default_callparams)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'mean' instead. + + """ + return df * scale + + def mean(self, df, scale): + """Mean of the Wishart distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + mean : float + The mean of the distribution + """ + dim, df, scale = self._process_parameters(df, scale) + out = self._mean(dim, df, scale) + return _squeeze_output(out) + + def _mode(self, dim, df, scale): + """Mode of the Wishart distribution. + + Parameters + ---------- + dim : int + Dimension of the scale matrix + %(_doc_default_callparams)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'mode' instead. + + """ + if df >= dim + 1: + out = (df-dim-1) * scale + else: + out = None + return out + + def mode(self, df, scale): + """Mode of the Wishart distribution + + Only valid if the degrees of freedom are greater than the dimension of + the scale matrix. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + mode : float or None + The Mode of the distribution + """ + dim, df, scale = self._process_parameters(df, scale) + out = self._mode(dim, df, scale) + return _squeeze_output(out) if out is not None else out + + def _var(self, dim, df, scale): + """Variance of the Wishart distribution. + + Parameters + ---------- + dim : int + Dimension of the scale matrix + %(_doc_default_callparams)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'var' instead. + + """ + var = scale**2 + diag = scale.diagonal() # 1 x dim array + var += np.outer(diag, diag) + var *= df + return var + + def var(self, df, scale): + """Variance of the Wishart distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + var : float + The variance of the distribution + """ + dim, df, scale = self._process_parameters(df, scale) + out = self._var(dim, df, scale) + return _squeeze_output(out) + + def _standard_rvs(self, n, shape, dim, df, random_state): + """ + Parameters + ---------- + n : integer + Number of variates to generate + shape : iterable + Shape of the variates to generate + dim : int + Dimension of the scale matrix + df : int + Degrees of freedom + random_state : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'rvs' instead. 
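The helpers above give the low-order moments in closed form: the mean is df * scale, the mode is (df - dim - 1) * scale when df >= dim + 1, and the element-wise variance is df * (scale**2 + outer(diag, diag)). A quick check with arbitrary parameters:

    import numpy as np
    from scipy.stats import wishart

    df, scale = 7, np.array([[2.0, 0.3], [0.3, 1.0]])
    dim = scale.shape[0]

    assert np.allclose(wishart.mean(df, scale), df * scale)
    assert np.allclose(wishart.mode(df, scale), (df - dim - 1) * scale)

    d = scale.diagonal()
    assert np.allclose(wishart.var(df, scale), df * (scale**2 + np.outer(d, d)))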
+ + """ + # Random normal variates for off-diagonal elements + n_tril = dim * (dim-1) // 2 + covariances = random_state.normal( + size=n*n_tril).reshape(shape+(n_tril,)) + + # Random chi-square variates for diagonal elements + variances = (np.r_[[random_state.chisquare(df-(i+1)+1, size=n)**0.5 + for i in range(dim)]].reshape((dim,) + + shape[::-1]).T) + + # Create the A matri(ces) - lower triangular + A = np.zeros(shape + (dim, dim)) + + # Input the covariances + size_idx = tuple([slice(None, None, None)]*len(shape)) + tril_idx = np.tril_indices(dim, k=-1) + A[size_idx + tril_idx] = covariances + + # Input the variances + diag_idx = np.diag_indices(dim) + A[size_idx + diag_idx] = variances + + return A + + def _rvs(self, n, shape, dim, df, C, random_state): + """Draw random samples from a Wishart distribution. + + Parameters + ---------- + n : integer + Number of variates to generate + shape : iterable + Shape of the variates to generate + dim : int + Dimension of the scale matrix + df : int + Degrees of freedom + C : ndarray + Cholesky factorization of the scale matrix, lower triangular. + %(_doc_random_state)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'rvs' instead. + + """ + random_state = self._get_random_state(random_state) + # Calculate the matrices A, which are actually lower triangular + # Cholesky factorizations of a matrix B such that B ~ W(df, I) + A = self._standard_rvs(n, shape, dim, df, random_state) + + # Calculate SA = C A A' C', where SA ~ W(df, scale) + # Note: this is the product of a (lower) (lower) (lower)' (lower)' + # or, denoting B = AA', it is C B C' where C is the lower + # triangular Cholesky factorization of the scale matrix. + # this appears to conflict with the instructions in [1]_, which + # suggest that it should be D' B D where D is the lower + # triangular factorization of the scale matrix. However, it is + # meant to refer to the Bartlett (1933) representation of a + # Wishart random variate as L A A' L' where L is lower triangular + # so it appears that understanding D' to be upper triangular + # is either a typo in or misreading of [1]_. + for index in np.ndindex(shape): + CA = np.dot(C, A[index]) + A[index] = np.dot(CA, CA.T) + + return A + + def rvs(self, df, scale, size=1, random_state=None): + """Draw random samples from a Wishart distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + size : integer or iterable of integers, optional + Number of samples to draw (default 1). + %(_doc_random_state)s + + Returns + ------- + rvs : ndarray + Random variates of shape (`size`) + (``dim``, ``dim``), where + ``dim`` is the dimension of the scale matrix. + + Notes + ----- + %(_doc_callparams_note)s + + """ + n, shape = self._process_size(size) + dim, df, scale = self._process_parameters(df, scale) + + # Cholesky decomposition of scale + C = scipy.linalg.cholesky(scale, lower=True) + + out = self._rvs(n, shape, dim, df, C, random_state) + + return _squeeze_output(out) + + def _entropy(self, dim, df, log_det_scale): + """Compute the differential entropy of the Wishart. + + Parameters + ---------- + dim : int + Dimension of the scale matrix + df : int + Degrees of freedom + log_det_scale : float + Logarithm of the determinant of the scale matrix + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'entropy' instead. 
+ + """ + return ( + 0.5 * (dim+1) * log_det_scale + + 0.5 * dim * (dim+1) * _LOG_2 + + multigammaln(0.5*df, dim) - + 0.5 * (df - dim - 1) * np.sum( + [psi(0.5*(df + 1 - (i+1))) for i in range(dim)] + ) + + 0.5 * df * dim + ) + + def entropy(self, df, scale): + """Compute the differential entropy of the Wishart. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + h : scalar + Entropy of the Wishart distribution + + Notes + ----- + %(_doc_callparams_note)s + + """ + dim, df, scale = self._process_parameters(df, scale) + _, log_det_scale = self._cholesky_logdet(scale) + return self._entropy(dim, df, log_det_scale) + + def _cholesky_logdet(self, scale): + """Compute Cholesky decomposition and determine (log(det(scale)). + + Parameters + ---------- + scale : ndarray + Scale matrix. + + Returns + ------- + c_decomp : ndarray + The Cholesky decomposition of `scale`. + logdet : scalar + The log of the determinant of `scale`. + + Notes + ----- + This computation of ``logdet`` is equivalent to + ``np.linalg.slogdet(scale)``. It is ~2x faster though. + + """ + c_decomp = scipy.linalg.cholesky(scale, lower=True) + logdet = 2 * np.sum(np.log(c_decomp.diagonal())) + return c_decomp, logdet + + +wishart = wishart_gen() + + +class wishart_frozen(multi_rv_frozen): + """Create a frozen Wishart distribution. + + Parameters + ---------- + df : array_like + Degrees of freedom of the distribution + scale : array_like + Scale matrix of the distribution + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance then + that instance is used. + + """ + def __init__(self, df, scale, seed=None): + self._dist = wishart_gen(seed) + self.dim, self.df, self.scale = self._dist._process_parameters( + df, scale) + self.C, self.log_det_scale = self._dist._cholesky_logdet(self.scale) + + def logpdf(self, x): + x = self._dist._process_quantiles(x, self.dim) + + out = self._dist._logpdf(x, self.dim, self.df, self.scale, + self.log_det_scale, self.C) + return _squeeze_output(out) + + def pdf(self, x): + return np.exp(self.logpdf(x)) + + def mean(self): + out = self._dist._mean(self.dim, self.df, self.scale) + return _squeeze_output(out) + + def mode(self): + out = self._dist._mode(self.dim, self.df, self.scale) + return _squeeze_output(out) if out is not None else out + + def var(self): + out = self._dist._var(self.dim, self.df, self.scale) + return _squeeze_output(out) + + def rvs(self, size=1, random_state=None): + n, shape = self._dist._process_size(size) + out = self._dist._rvs(n, shape, self.dim, self.df, + self.C, random_state) + return _squeeze_output(out) + + def entropy(self): + return self._dist._entropy(self.dim, self.df, self.log_det_scale) + + +# Set frozen generator docstrings from corresponding docstrings in +# Wishart and fill in default strings in class docstrings +for name in ['logpdf', 'pdf', 'mean', 'mode', 'var', 'rvs', 'entropy']: + method = wishart_gen.__dict__[name] + method_frozen = wishart_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat( + method.__doc__, wishart_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, wishart_docdict_params) + + +class invwishart_gen(wishart_gen): + r"""An inverse Wishart random variable. + + The `df` keyword specifies the degrees of freedom. 
The `scale` keyword + specifies the scale matrix, which must be symmetric and positive definite. + In this context, the scale matrix is often interpreted in terms of a + multivariate normal covariance matrix. + + Methods + ------- + pdf(x, df, scale) + Probability density function. + logpdf(x, df, scale) + Log of the probability density function. + rvs(df, scale, size=1, random_state=None) + Draw random samples from an inverse Wishart distribution. + entropy(df, scale) + Differential entropy of the distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + %(_doc_random_state)s + + Raises + ------ + scipy.linalg.LinAlgError + If the scale matrix `scale` is not positive definite. + + See Also + -------- + wishart + + Notes + ----- + %(_doc_callparams_note)s + + The scale matrix `scale` must be a symmetric positive definite + matrix. Singular matrices, including the symmetric positive semi-definite + case, are not supported. Symmetry is not checked; only the lower triangular + portion is used. + + The inverse Wishart distribution is often denoted + + .. math:: + + W_p^{-1}(\nu, \Psi) + + where :math:`\nu` is the degrees of freedom and :math:`\Psi` is the + :math:`p \times p` scale matrix. + + The probability density function for `invwishart` has support over positive + definite matrices :math:`S`; if :math:`S \sim W^{-1}_p(\nu, \Sigma)`, + then its PDF is given by: + + .. math:: + + f(S) = \frac{|\Sigma|^\frac{\nu}{2}}{2^{ \frac{\nu p}{2} } + |S|^{\frac{\nu + p + 1}{2}} \Gamma_p \left(\frac{\nu}{2} \right)} + \exp\left( -tr(\Sigma S^{-1}) / 2 \right) + + If :math:`S \sim W_p^{-1}(\nu, \Psi)` (inverse Wishart) then + :math:`S^{-1} \sim W_p(\nu, \Psi^{-1})` (Wishart). + + If the scale matrix is 1-dimensional and equal to one, then the inverse + Wishart distribution :math:`W_1(\nu, 1)` collapses to the + inverse Gamma distribution with parameters shape = :math:`\frac{\nu}{2}` + and scale = :math:`\frac{1}{2}`. + + Instead of inverting a randomly generated Wishart matrix as described in [2], + here the algorithm in [4] is used to directly generate a random inverse-Wishart + matrix without inversion. + + .. versionadded:: 0.16.0 + + References + ---------- + .. [1] M.L. Eaton, "Multivariate Statistics: A Vector Space Approach", + Wiley, 1983. + .. [2] M.C. Jones, "Generating Inverse Wishart Matrices", Communications + in Statistics - Simulation and Computation, vol. 14.2, pp.511-514, + 1985. + .. [3] Gupta, M. and Srivastava, S. "Parametric Bayesian Estimation of + Differential Entropy and Relative Entropy". Entropy 12, 818 - 843. + 2010. + .. [4] S.D. Axen, "Efficiently generating inverse-Wishart matrices and + their Cholesky factors", :arXiv:`2310.15884v1`. 2023. + + Examples + -------- + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy.stats import invwishart, invgamma + >>> x = np.linspace(0.01, 1, 100) + >>> iw = invwishart.pdf(x, df=6, scale=1) + >>> iw[:3] + array([ 1.20546865e-15, 5.42497807e-06, 4.45813929e-03]) + >>> ig = invgamma.pdf(x, 6/2., scale=1./2) + >>> ig[:3] + array([ 1.20546865e-15, 5.42497807e-06, 4.45813929e-03]) + >>> plt.plot(x, iw) + >>> plt.show() + + The input quantiles can be any shape of array, as long as the last + axis labels the components. 
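The 1-D special case mentioned above can be checked numerically: with a scalar unit scale, `invwishart(df, scale=1)` agrees with `invgamma(df/2, scale=1/2)` (the df value below simply mirrors the example above):

    import numpy as np
    from scipy.stats import invwishart, invgamma

    x = np.linspace(0.01, 1, 5)
    assert np.allclose(invwishart.pdf(x, df=6, scale=1),
                       invgamma.pdf(x, 6 / 2.0, scale=1.0 / 2.0))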
+ + Alternatively, the object may be called (as a function) to fix the degrees + of freedom and scale parameters, returning a "frozen" inverse Wishart + random variable: + + >>> rv = invwishart(df=1, scale=1) + >>> # Frozen object with the same methods but holding the given + >>> # degrees of freedom and scale fixed. + + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__, wishart_docdict_params) + + def __call__(self, df=None, scale=None, seed=None): + """Create a frozen inverse Wishart distribution. + + See `invwishart_frozen` for more information. + + """ + return invwishart_frozen(df, scale, seed) + + def _logpdf(self, x, dim, df, log_det_scale, C): + """Log of the inverse Wishart probability density function. + + Parameters + ---------- + x : ndarray + Points at which to evaluate the log of the probability + density function. + dim : int + Dimension of the scale matrix + df : int + Degrees of freedom + log_det_scale : float + Logarithm of the determinant of the scale matrix + C : ndarray + Cholesky factorization of the scale matrix, lower triagular. + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'logpdf' instead. + + """ + # Retrieve tr(scale x^{-1}) + log_det_x = np.empty(x.shape[-1]) + tr_scale_x_inv = np.empty(x.shape[-1]) + trsm = get_blas_funcs(('trsm'), (x,)) + if dim > 1: + for i in range(x.shape[-1]): + Cx, log_det_x[i] = self._cholesky_logdet(x[:, :, i]) + A = trsm(1., Cx, C, side=0, lower=True) + tr_scale_x_inv[i] = np.linalg.norm(A)**2 + else: + log_det_x[:] = np.log(x[0, 0]) + tr_scale_x_inv[:] = C[0, 0]**2 / x[0, 0] + + # Log PDF + out = ((0.5 * df * log_det_scale - 0.5 * tr_scale_x_inv) - + (0.5 * df * dim * _LOG_2 + 0.5 * (df + dim + 1) * log_det_x) - + multigammaln(0.5*df, dim)) + + return out + + def logpdf(self, x, df, scale): + """Log of the inverse Wishart probability density function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + Each quantile must be a symmetric positive definite matrix. + %(_doc_default_callparams)s + + Returns + ------- + pdf : ndarray + Log of the probability density function evaluated at `x` + + Notes + ----- + %(_doc_callparams_note)s + + """ + dim, df, scale = self._process_parameters(df, scale) + x = self._process_quantiles(x, dim) + C, log_det_scale = self._cholesky_logdet(scale) + out = self._logpdf(x, dim, df, log_det_scale, C) + return _squeeze_output(out) + + def pdf(self, x, df, scale): + """Inverse Wishart probability density function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + Each quantile must be a symmetric positive definite matrix. + %(_doc_default_callparams)s + + Returns + ------- + pdf : ndarray + Probability density function evaluated at `x` + + Notes + ----- + %(_doc_callparams_note)s + + """ + return np.exp(self.logpdf(x, df, scale)) + + def _mean(self, dim, df, scale): + """Mean of the inverse Wishart distribution. + + Parameters + ---------- + dim : int + Dimension of the scale matrix + %(_doc_default_callparams)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'mean' instead. + + """ + if df > dim + 1: + out = scale / (df - dim - 1) + else: + out = None + return out + + def mean(self, df, scale): + """Mean of the inverse Wishart distribution. 
+ + Only valid if the degrees of freedom are greater than the dimension of + the scale matrix plus one. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + mean : float or None + The mean of the distribution + + """ + dim, df, scale = self._process_parameters(df, scale) + out = self._mean(dim, df, scale) + return _squeeze_output(out) if out is not None else out + + def _mode(self, dim, df, scale): + """Mode of the inverse Wishart distribution. + + Parameters + ---------- + dim : int + Dimension of the scale matrix + %(_doc_default_callparams)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'mode' instead. + + """ + return scale / (df + dim + 1) + + def mode(self, df, scale): + """Mode of the inverse Wishart distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + mode : float + The Mode of the distribution + + """ + dim, df, scale = self._process_parameters(df, scale) + out = self._mode(dim, df, scale) + return _squeeze_output(out) + + def _var(self, dim, df, scale): + """Variance of the inverse Wishart distribution. + + Parameters + ---------- + dim : int + Dimension of the scale matrix + %(_doc_default_callparams)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'var' instead. + + """ + if df > dim + 3: + var = (df - dim + 1) * scale**2 + diag = scale.diagonal() # 1 x dim array + var += (df - dim - 1) * np.outer(diag, diag) + var /= (df - dim) * (df - dim - 1)**2 * (df - dim - 3) + else: + var = None + return var + + def var(self, df, scale): + """Variance of the inverse Wishart distribution. + + Only valid if the degrees of freedom are greater than the dimension of + the scale matrix plus three. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + var : float + The variance of the distribution + """ + dim, df, scale = self._process_parameters(df, scale) + out = self._var(dim, df, scale) + return _squeeze_output(out) if out is not None else out + + def _inv_standard_rvs(self, n, shape, dim, df, random_state): + """ + Parameters + ---------- + n : integer + Number of variates to generate + shape : iterable + Shape of the variates to generate + dim : int + Dimension of the scale matrix + df : int + Degrees of freedom + random_state : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + + Returns + ------- + A : ndarray + Random variates of shape (`shape`) + (``dim``, ``dim``). + Each slice `A[..., :, :]` is lower-triangular, and its + inverse is the lower Cholesky factor of a draw from + `invwishart(df, np.eye(dim))`. + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'rvs' instead. 
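The moment helpers above have simple closed forms: mean Psi / (df - p - 1) for df > p + 1, mode Psi / (df + p + 1), and the element-wise variance coded in `_var` for df > p + 3. A quick check with arbitrary parameters:

    import numpy as np
    from scipy.stats import invwishart

    df, Psi = 10, np.array([[2.0, 0.3], [0.3, 1.0]])
    p = Psi.shape[0]

    assert np.allclose(invwishart.mean(df, Psi), Psi / (df - p - 1))
    assert np.allclose(invwishart.mode(df, Psi), Psi / (df + p + 1))

    d = Psi.diagonal()
    expected_var = ((df - p + 1) * Psi**2 + (df - p - 1) * np.outer(d, d)) \
        / ((df - p) * (df - p - 1)**2 * (df - p - 3))
    assert np.allclose(invwishart.var(df, Psi), expected_var)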
+ + """ + A = np.zeros(shape + (dim, dim)) + + # Random normal variates for off-diagonal elements + tri_rows, tri_cols = np.tril_indices(dim, k=-1) + n_tril = dim * (dim-1) // 2 + A[..., tri_rows, tri_cols] = random_state.normal( + size=(*shape, n_tril), + ) + + # Random chi variates for diagonal elements + rows = np.arange(dim) + chi_dfs = (df - dim + 1) + rows + A[..., rows, rows] = random_state.chisquare( + df=chi_dfs, size=(*shape, dim), + )**0.5 + + return A + + def _rvs(self, n, shape, dim, df, C, random_state): + """Draw random samples from an inverse Wishart distribution. + + Parameters + ---------- + n : integer + Number of variates to generate + shape : iterable + Shape of the variates to generate + dim : int + Dimension of the scale matrix + df : int + Degrees of freedom + C : ndarray + Cholesky factorization of the scale matrix, lower triagular. + %(_doc_random_state)s + + Notes + ----- + As this function does no argument checking, it should not be + called directly; use 'rvs' instead. + + """ + random_state = self._get_random_state(random_state) + # Get random draws A such that inv(A) ~ iW(df, I) + A = self._inv_standard_rvs(n, shape, dim, df, random_state) + + # Calculate SA = (CA)'^{-1} (CA)^{-1} ~ iW(df, scale) + trsm = get_blas_funcs(('trsm'), (A,)) + trmm = get_blas_funcs(('trmm'), (A,)) + + for index in np.ndindex(A.shape[:-2]): + if dim > 1: + # Calculate CA + # Get CA = C A^{-1} via triangular solver + CA = trsm(1., A[index], C, side=1, lower=True) + # get SA + A[index] = trmm(1., CA, CA, side=1, lower=True, trans_a=True) + else: + A[index][0, 0] = (C[0, 0] / A[index][0, 0])**2 + + return A + + def rvs(self, df, scale, size=1, random_state=None): + """Draw random samples from an inverse Wishart distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + size : integer or iterable of integers, optional + Number of samples to draw (default 1). + %(_doc_random_state)s + + Returns + ------- + rvs : ndarray + Random variates of shape (`size`) + (``dim``, ``dim``), where + ``dim`` is the dimension of the scale matrix. + + Notes + ----- + %(_doc_callparams_note)s + + """ + n, shape = self._process_size(size) + dim, df, scale = self._process_parameters(df, scale) + + # Cholesky decomposition of scale + C = scipy.linalg.cholesky(scale, lower=True) + + out = self._rvs(n, shape, dim, df, C, random_state) + + return _squeeze_output(out) + + def _entropy(self, dim, df, log_det_scale): + # reference: eq. (17) from ref. 3 + psi_eval_points = [0.5 * (df - dim + i) for i in range(1, dim + 1)] + psi_eval_points = np.asarray(psi_eval_points) + return multigammaln(0.5 * df, dim) + 0.5 * dim * df + \ + 0.5 * (dim + 1) * (log_det_scale - _LOG_2) - \ + 0.5 * (df + dim + 1) * \ + psi(psi_eval_points, out=psi_eval_points).sum() + + def entropy(self, df, scale): + dim, df, scale = self._process_parameters(df, scale) + _, log_det_scale = self._cholesky_logdet(scale) + return self._entropy(dim, df, log_det_scale) + + +invwishart = invwishart_gen() + + +class invwishart_frozen(multi_rv_frozen): + def __init__(self, df, scale, seed=None): + """Create a frozen inverse Wishart distribution. + + Parameters + ---------- + df : array_like + Degrees of freedom of the distribution + scale : array_like + Scale matrix of the distribution + seed : {None, int, `numpy.random.Generator`}, optional + If `seed` is None the `numpy.random.Generator` singleton is used. + If `seed` is an int, a new ``Generator`` instance is used, + seeded with `seed`. 
+ If `seed` is already a ``Generator`` instance then that instance is + used. + + """ + self._dist = invwishart_gen(seed) + self.dim, self.df, self.scale = self._dist._process_parameters( + df, scale + ) + + # Get the determinant via Cholesky factorization + self.C = scipy.linalg.cholesky(self.scale, lower=True) + self.log_det_scale = 2 * np.sum(np.log(self.C.diagonal())) + + def logpdf(self, x): + x = self._dist._process_quantiles(x, self.dim) + out = self._dist._logpdf(x, self.dim, self.df, + self.log_det_scale, self.C) + return _squeeze_output(out) + + def pdf(self, x): + return np.exp(self.logpdf(x)) + + def mean(self): + out = self._dist._mean(self.dim, self.df, self.scale) + return _squeeze_output(out) if out is not None else out + + def mode(self): + out = self._dist._mode(self.dim, self.df, self.scale) + return _squeeze_output(out) + + def var(self): + out = self._dist._var(self.dim, self.df, self.scale) + return _squeeze_output(out) if out is not None else out + + def rvs(self, size=1, random_state=None): + n, shape = self._dist._process_size(size) + + out = self._dist._rvs(n, shape, self.dim, self.df, + self.C, random_state) + + return _squeeze_output(out) + + def entropy(self): + return self._dist._entropy(self.dim, self.df, self.log_det_scale) + + +# Set frozen generator docstrings from corresponding docstrings in +# inverse Wishart and fill in default strings in class docstrings +for name in ['logpdf', 'pdf', 'mean', 'mode', 'var', 'rvs']: + method = invwishart_gen.__dict__[name] + method_frozen = wishart_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat( + method.__doc__, wishart_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, wishart_docdict_params) + +_multinomial_doc_default_callparams = """\ +n : int + Number of trials +p : array_like + Probability of a trial falling into each category; should sum to 1 +""" + +_multinomial_doc_callparams_note = """\ +`n` should be a nonnegative integer. Each element of `p` should be in the +interval :math:`[0,1]` and the elements should sum to 1. If they do not sum to +1, the last element of the `p` array is not used and is replaced with the +remaining probability left over from the earlier elements. +""" + +_multinomial_doc_frozen_callparams = "" + +_multinomial_doc_frozen_callparams_note = """\ +See class definition for a detailed description of parameters.""" + +multinomial_docdict_params = { + '_doc_default_callparams': _multinomial_doc_default_callparams, + '_doc_callparams_note': _multinomial_doc_callparams_note, + '_doc_random_state': _doc_random_state +} + +multinomial_docdict_noparams = { + '_doc_default_callparams': _multinomial_doc_frozen_callparams, + '_doc_callparams_note': _multinomial_doc_frozen_callparams_note, + '_doc_random_state': _doc_random_state +} + + +class multinomial_gen(multi_rv_generic): + r"""A multinomial random variable. + + Methods + ------- + pmf(x, n, p) + Probability mass function. + logpmf(x, n, p) + Log of the probability mass function. + rvs(n, p, size=1, random_state=None) + Draw random samples from a multinomial distribution. + entropy(n, p) + Compute the entropy of the multinomial distribution. + cov(n, p) + Compute the covariance matrix of the multinomial distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + %(_doc_random_state)s + + Notes + ----- + %(_doc_callparams_note)s + + The probability mass function for `multinomial` is + + .. math:: + + f(x) = \frac{n!}{x_1! 
\cdots x_k!} p_1^{x_1} \cdots p_k^{x_k}, + + supported on :math:`x=(x_1, \ldots, x_k)` where each :math:`x_i` is a + nonnegative integer and their sum is :math:`n`. + + .. versionadded:: 0.19.0 + + Examples + -------- + + >>> from scipy.stats import multinomial + >>> rv = multinomial(8, [0.3, 0.2, 0.5]) + >>> rv.pmf([1, 3, 4]) + 0.042000000000000072 + + The multinomial distribution for :math:`k=2` is identical to the + corresponding binomial distribution (tiny numerical differences + notwithstanding): + + >>> from scipy.stats import binom + >>> multinomial.pmf([3, 4], n=7, p=[0.4, 0.6]) + 0.29030399999999973 + >>> binom.pmf(3, 7, 0.4) + 0.29030400000000012 + + The functions ``pmf``, ``logpmf``, ``entropy``, and ``cov`` support + broadcasting, under the convention that the vector parameters (``x`` and + ``p``) are interpreted as if each row along the last axis is a single + object. For instance: + + >>> multinomial.pmf([[3, 4], [3, 5]], n=[7, 8], p=[.3, .7]) + array([0.2268945, 0.25412184]) + + Here, ``x.shape == (2, 2)``, ``n.shape == (2,)``, and ``p.shape == (2,)``, + but following the rules mentioned above they behave as if the rows + ``[3, 4]`` and ``[3, 5]`` in ``x`` and ``[.3, .7]`` in ``p`` were a single + object, and as if we had ``x.shape = (2,)``, ``n.shape = (2,)``, and + ``p.shape = ()``. To obtain the individual elements without broadcasting, + we would do this: + + >>> multinomial.pmf([3, 4], n=7, p=[.3, .7]) + 0.2268945 + >>> multinomial.pmf([3, 5], 8, p=[.3, .7]) + 0.25412184 + + This broadcasting also works for ``cov``, where the output objects are + square matrices of size ``p.shape[-1]``. For example: + + >>> multinomial.cov([4, 5], [[.3, .7], [.4, .6]]) + array([[[ 0.84, -0.84], + [-0.84, 0.84]], + [[ 1.2 , -1.2 ], + [-1.2 , 1.2 ]]]) + + In this example, ``n.shape == (2,)`` and ``p.shape == (2, 2)``, and + following the rules above, these broadcast as if ``p.shape == (2,)``. + Thus the result should also be of shape ``(2,)``, but since each output is + a :math:`2 \times 2` matrix, the result in fact has shape ``(2, 2, 2)``, + where ``result[0]`` is equal to ``multinomial.cov(n=4, p=[.3, .7])`` and + ``result[1]`` is equal to ``multinomial.cov(n=5, p=[.4, .6])``. + + Alternatively, the object may be called (as a function) to fix the `n` and + `p` parameters, returning a "frozen" multinomial random variable: + + >>> rv = multinomial(n=7, p=[.3, .7]) + >>> # Frozen object with the same methods but holding the given + >>> # degrees of freedom and scale fixed. + + See also + -------- + scipy.stats.binom : The binomial distribution. + numpy.random.Generator.multinomial : Sampling from the multinomial distribution. + scipy.stats.multivariate_hypergeom : + The multivariate hypergeometric distribution. + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = \ + doccer.docformat(self.__doc__, multinomial_docdict_params) + + def __call__(self, n, p, seed=None): + """Create a frozen multinomial distribution. + + See `multinomial_frozen` for more information. + """ + return multinomial_frozen(n, p, seed) + + def _process_parameters(self, n, p, eps=1e-15): + """Returns: n_, p_, npcond. + + n_ and p_ are arrays of the correct shape; npcond is a boolean array + flagging values out of the domain. + """ + p = np.array(p, dtype=np.float64, copy=True) + p_adjusted = 1. 
- p[..., :-1].sum(axis=-1) + i_adjusted = np.abs(p_adjusted) > eps + p[i_adjusted, -1] = p_adjusted[i_adjusted] + + # true for bad p + pcond = np.any(p < 0, axis=-1) + pcond |= np.any(p > 1, axis=-1) + + n = np.array(n, dtype=int, copy=True) + + # true for bad n + ncond = n < 0 + + return n, p, ncond | pcond + + def _process_quantiles(self, x, n, p): + """Returns: x_, xcond. + + x_ is an int array; xcond is a boolean array flagging values out of the + domain. + """ + xx = np.asarray(x, dtype=int) + + if xx.ndim == 0: + raise ValueError("x must be an array.") + + if xx.size != 0 and not xx.shape[-1] == p.shape[-1]: + raise ValueError("Size of each quantile should be size of p: " + "received %d, but expected %d." % + (xx.shape[-1], p.shape[-1])) + + # true for x out of the domain + cond = np.any(xx != x, axis=-1) + cond |= np.any(xx < 0, axis=-1) + cond = cond | (np.sum(xx, axis=-1) != n) + + return xx, cond + + def _checkresult(self, result, cond, bad_value): + result = np.asarray(result) + + if cond.ndim != 0: + result[cond] = bad_value + elif cond: + if result.ndim == 0: + return bad_value + result[...] = bad_value + return result + + def _logpmf(self, x, n, p): + return gammaln(n+1) + np.sum(xlogy(x, p) - gammaln(x+1), axis=-1) + + def logpmf(self, x, n, p): + """Log of the Multinomial probability mass function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_doc_default_callparams)s + + Returns + ------- + logpmf : ndarray or scalar + Log of the probability mass function evaluated at `x` + + Notes + ----- + %(_doc_callparams_note)s + """ + n, p, npcond = self._process_parameters(n, p) + x, xcond = self._process_quantiles(x, n, p) + + result = self._logpmf(x, n, p) + + # replace values for which x was out of the domain; broadcast + # xcond to the right shape + xcond_ = xcond | np.zeros(npcond.shape, dtype=np.bool_) + result = self._checkresult(result, xcond_, -np.inf) + + # replace values bad for n or p; broadcast npcond to the right shape + npcond_ = npcond | np.zeros(xcond.shape, dtype=np.bool_) + return self._checkresult(result, npcond_, np.nan) + + def pmf(self, x, n, p): + """Multinomial probability mass function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_doc_default_callparams)s + + Returns + ------- + pmf : ndarray or scalar + Probability density function evaluated at `x` + + Notes + ----- + %(_doc_callparams_note)s + """ + return np.exp(self.logpmf(x, n, p)) + + def mean(self, n, p): + """Mean of the Multinomial distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + mean : float + The mean of the distribution + """ + n, p, npcond = self._process_parameters(n, p) + result = n[..., np.newaxis]*p + return self._checkresult(result, npcond, np.nan) + + def cov(self, n, p): + """Covariance matrix of the multinomial distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + cov : ndarray + The covariance matrix of the distribution + """ + n, p, npcond = self._process_parameters(n, p) + + nn = n[..., np.newaxis, np.newaxis] + result = nn * np.einsum('...j,...k->...jk', -p, p) + + # change the diagonal + for i in range(p.shape[-1]): + result[..., i, i] += n*p[..., i] + + return self._checkresult(result, npcond, np.nan) + + def entropy(self, n, p): + r"""Compute the entropy of the multinomial distribution. + + The entropy is computed using this expression: + + .. 
math:: + + f(x) = - \log n! - n\sum_{i=1}^k p_i \log p_i + + \sum_{i=1}^k \sum_{x=0}^n \binom n x p_i^x(1-p_i)^{n-x} \log x! + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + h : scalar + Entropy of the Multinomial distribution + + Notes + ----- + %(_doc_callparams_note)s + """ + n, p, npcond = self._process_parameters(n, p) + + x = np.r_[1:np.max(n)+1] + + term1 = n*np.sum(entr(p), axis=-1) + term1 -= gammaln(n+1) + + n = n[..., np.newaxis] + new_axes_needed = max(p.ndim, n.ndim) - x.ndim + 1 + x.shape += (1,)*new_axes_needed + + term2 = np.sum(binom.pmf(x, n, p)*gammaln(x+1), + axis=(-1, -1-new_axes_needed)) + + return self._checkresult(term1 + term2, npcond, np.nan) + + def rvs(self, n, p, size=None, random_state=None): + """Draw random samples from a Multinomial distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + size : integer or iterable of integers, optional + Number of samples to draw (default 1). + %(_doc_random_state)s + + Returns + ------- + rvs : ndarray or scalar + Random variates of shape (`size`, `len(p)`) + + Notes + ----- + %(_doc_callparams_note)s + """ + n, p, npcond = self._process_parameters(n, p) + random_state = self._get_random_state(random_state) + return random_state.multinomial(n, p, size) + + +multinomial = multinomial_gen() + + +class multinomial_frozen(multi_rv_frozen): + r"""Create a frozen Multinomial distribution. + + Parameters + ---------- + n : int + number of trials + p: array_like + probability of a trial falling into each category; should sum to 1 + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance then + that instance is used. + """ + def __init__(self, n, p, seed=None): + self._dist = multinomial_gen(seed) + self.n, self.p, self.npcond = self._dist._process_parameters(n, p) + + # monkey patch self._dist + def _process_parameters(n, p): + return self.n, self.p, self.npcond + + self._dist._process_parameters = _process_parameters + + def logpmf(self, x): + return self._dist.logpmf(x, self.n, self.p) + + def pmf(self, x): + return self._dist.pmf(x, self.n, self.p) + + def mean(self): + return self._dist.mean(self.n, self.p) + + def cov(self): + return self._dist.cov(self.n, self.p) + + def entropy(self): + return self._dist.entropy(self.n, self.p) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(self.n, self.p, size, random_state) + + +# Set frozen generator docstrings from corresponding docstrings in +# multinomial and fill in default strings in class docstrings +for name in ['logpmf', 'pmf', 'mean', 'cov', 'rvs']: + method = multinomial_gen.__dict__[name] + method_frozen = multinomial_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat( + method.__doc__, multinomial_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, + multinomial_docdict_params) + + +class special_ortho_group_gen(multi_rv_generic): + r"""A Special Orthogonal matrix (SO(N)) random variable. + + Return a random rotation matrix, drawn from the Haar distribution + (the only uniform distribution on SO(N)) with a determinant of +1. + + The `dim` keyword specifies the dimension N. + + Methods + ------- + rvs(dim=None, size=1, random_state=None) + Draw random samples from SO(N). 
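A brief sketch of what `rvs` produces: each draw is orthogonal with determinant +1, and a frozen instance stacks multiple draws along a leading axis (dimension and seeds below are arbitrary):

    import numpy as np
    from scipy.stats import special_ortho_group

    R = special_ortho_group.rvs(4, random_state=0)
    assert np.allclose(R @ R.T, np.eye(4))
    assert np.isclose(np.linalg.det(R), 1.0)

    Rs = special_ortho_group(dim=4).rvs(size=3, random_state=1)
    assert Rs.shape == (3, 4, 4)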
+ + Parameters + ---------- + dim : scalar + Dimension of matrices + seed : {None, int, np.random.RandomState, np.random.Generator}, optional + Used for drawing random variates. + If `seed` is `None`, the `~np.random.RandomState` singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, seeded + with seed. + If `seed` is already a ``RandomState`` or ``Generator`` instance, + then that object is used. + Default is `None`. + + Notes + ----- + This class is wrapping the random_rot code from the MDP Toolkit, + https://github.com/mdp-toolkit/mdp-toolkit + + Return a random rotation matrix, drawn from the Haar distribution + (the only uniform distribution on SO(N)). + The algorithm is described in the paper + Stewart, G.W., "The efficient generation of random orthogonal + matrices with an application to condition estimators", SIAM Journal + on Numerical Analysis, 17(3), pp. 403-409, 1980. + For more information see + https://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization + + See also the similar `ortho_group`. For a random rotation in three + dimensions, see `scipy.spatial.transform.Rotation.random`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import special_ortho_group + >>> x = special_ortho_group.rvs(3) + + >>> np.dot(x, x.T) + array([[ 1.00000000e+00, 1.13231364e-17, -2.86852790e-16], + [ 1.13231364e-17, 1.00000000e+00, -1.46845020e-16], + [ -2.86852790e-16, -1.46845020e-16, 1.00000000e+00]]) + + >>> import scipy.linalg + >>> scipy.linalg.det(x) + 1.0 + + This generates one random matrix from SO(3). It is orthogonal and + has a determinant of 1. + + Alternatively, the object may be called (as a function) to fix the `dim` + parameter, returning a "frozen" special_ortho_group random variable: + + >>> rv = special_ortho_group(5) + >>> # Frozen object with the same methods but holding the + >>> # dimension parameter fixed. + + See Also + -------- + ortho_group, scipy.spatial.transform.Rotation.random + + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__) + + def __call__(self, dim=None, seed=None): + """Create a frozen SO(N) distribution. + + See `special_ortho_group_frozen` for more information. + """ + return special_ortho_group_frozen(dim, seed=seed) + + def _process_parameters(self, dim): + """Dimension N must be specified; it cannot be inferred.""" + if dim is None or not np.isscalar(dim) or dim <= 1 or dim != int(dim): + raise ValueError("""Dimension of rotation must be specified, + and must be a scalar greater than 1.""") + + return dim + + def rvs(self, dim, size=1, random_state=None): + """Draw random samples from SO(N). + + Parameters + ---------- + dim : integer + Dimension of rotation space (N). + size : integer, optional + Number of samples to draw (default 1). + + Returns + ------- + rvs : ndarray or scalar + Random size N-dimensional matrices, dimension (size, dim, dim) + + """ + random_state = self._get_random_state(random_state) + + size = int(size) + size = (size,) if size > 1 else () + + dim = self._process_parameters(dim) + + # H represents a (dim, dim) matrix, while D represents the diagonal of + # a (dim, dim) diagonal matrix. The algorithm that follows is + # broadcasted on the leading shape in `size` to vectorize along + # samples. 
+ H = np.empty(size + (dim, dim)) + H[..., :, :] = np.eye(dim) + D = np.empty(size + (dim,)) + + for n in range(dim-1): + + # x is a vector with length dim-n, xrow and xcol are views of it as + # a row vector and column vector respectively. It's important they + # are views and not copies because we are going to modify x + # in-place. + x = random_state.normal(size=size + (dim-n,)) + xrow = x[..., None, :] + xcol = x[..., :, None] + + # This is the squared norm of x, without vectorization it would be + # dot(x, x), to have proper broadcasting we use matmul and squeeze + # out (convert to scalar) the resulting 1x1 matrix + norm2 = np.matmul(xrow, xcol).squeeze((-2, -1)) + + x0 = x[..., 0].copy() + D[..., n] = np.where(x0 != 0, np.sign(x0), 1) + x[..., 0] += D[..., n]*np.sqrt(norm2) + + # In renormalizing x we have to append an additional axis with + # [..., None] to broadcast the scalar against the vector x + x /= np.sqrt((norm2 - x0**2 + x[..., 0]**2) / 2.)[..., None] + + # Householder transformation, without vectorization the RHS can be + # written as outer(H @ x, x) (apart from the slicing) + H[..., :, n:] -= np.matmul(H[..., :, n:], xcol) * xrow + + D[..., -1] = (-1)**(dim-1)*D[..., :-1].prod(axis=-1) + + # Without vectorization this could be written as H = diag(D) @ H, + # left-multiplication by a diagonal matrix amounts to multiplying each + # row of H by an element of the diagonal, so we add a dummy axis for + # the column index + H *= D[..., :, None] + return H + + +special_ortho_group = special_ortho_group_gen() + + +class special_ortho_group_frozen(multi_rv_frozen): + def __init__(self, dim=None, seed=None): + """Create a frozen SO(N) distribution. + + Parameters + ---------- + dim : scalar + Dimension of matrices + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + + Examples + -------- + >>> from scipy.stats import special_ortho_group + >>> g = special_ortho_group(5) + >>> x = g.rvs() + + """ # numpy/numpydoc#87 # noqa: E501 + self._dist = special_ortho_group_gen(seed) + self.dim = self._dist._process_parameters(dim) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(self.dim, size, random_state) + + +class ortho_group_gen(multi_rv_generic): + r"""An Orthogonal matrix (O(N)) random variable. + + Return a random orthogonal matrix, drawn from the O(N) Haar + distribution (the only uniform distribution on O(N)). + + The `dim` keyword specifies the dimension N. + + Methods + ------- + rvs(dim=None, size=1, random_state=None) + Draw random samples from O(N). + + Parameters + ---------- + dim : scalar + Dimension of matrices + seed : {None, int, np.random.RandomState, np.random.Generator}, optional + Used for drawing random variates. + If `seed` is `None`, the `~np.random.RandomState` singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, seeded + with seed. + If `seed` is already a ``RandomState`` or ``Generator`` instance, + then that object is used. + Default is `None`. + + Notes + ----- + This class is closely related to `special_ortho_group`. + + Some care is taken to avoid numerical error, as per the paper by Mezzadri. + + References + ---------- + .. [1] F. 
Mezzadri, "How to generate random matrices from the classical + compact groups", :arXiv:`math-ph/0609050v2`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import ortho_group + >>> x = ortho_group.rvs(3) + + >>> np.dot(x, x.T) + array([[ 1.00000000e+00, 1.13231364e-17, -2.86852790e-16], + [ 1.13231364e-17, 1.00000000e+00, -1.46845020e-16], + [ -2.86852790e-16, -1.46845020e-16, 1.00000000e+00]]) + + >>> import scipy.linalg + >>> np.fabs(scipy.linalg.det(x)) + 1.0 + + This generates one random matrix from O(3). It is orthogonal and + has a determinant of +1 or -1. + + Alternatively, the object may be called (as a function) to fix the `dim` + parameter, returning a "frozen" ortho_group random variable: + + >>> rv = ortho_group(5) + >>> # Frozen object with the same methods but holding the + >>> # dimension parameter fixed. + + See Also + -------- + special_ortho_group + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__) + + def __call__(self, dim=None, seed=None): + """Create a frozen O(N) distribution. + + See `ortho_group_frozen` for more information. + """ + return ortho_group_frozen(dim, seed=seed) + + def _process_parameters(self, dim): + """Dimension N must be specified; it cannot be inferred.""" + if dim is None or not np.isscalar(dim) or dim <= 1 or dim != int(dim): + raise ValueError("Dimension of rotation must be specified," + "and must be a scalar greater than 1.") + + return dim + + def rvs(self, dim, size=1, random_state=None): + """Draw random samples from O(N). + + Parameters + ---------- + dim : integer + Dimension of rotation space (N). + size : integer, optional + Number of samples to draw (default 1). + + Returns + ------- + rvs : ndarray or scalar + Random size N-dimensional matrices, dimension (size, dim, dim) + + """ + random_state = self._get_random_state(random_state) + + size = int(size) + + dim = self._process_parameters(dim) + + size = (size,) if size > 1 else () + z = random_state.normal(size=size + (dim, dim)) + q, r = np.linalg.qr(z) + # The last two dimensions are the rows and columns of R matrices. + # Extract the diagonals. Note that this eliminates a dimension. + d = r.diagonal(offset=0, axis1=-2, axis2=-1) + # Add back a dimension for proper broadcasting: we're dividing + # each row of each R matrix by the diagonal of the R matrix. + q *= (d/abs(d))[..., np.newaxis, :] # to broadcast properly + return q + + +ortho_group = ortho_group_gen() + + +class ortho_group_frozen(multi_rv_frozen): + def __init__(self, dim=None, seed=None): + """Create a frozen O(N) distribution. + + Parameters + ---------- + dim : scalar + Dimension of matrices + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + + Examples + -------- + >>> from scipy.stats import ortho_group + >>> g = ortho_group(5) + >>> x = g.rvs() + + """ # numpy/numpydoc#87 # noqa: E501 + self._dist = ortho_group_gen(seed) + self.dim = self._dist._process_parameters(dim) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(self.dim, size, random_state) + + +class random_correlation_gen(multi_rv_generic): + r"""A random correlation matrix. + + Return a random correlation matrix, given a vector of eigenvalues. 
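+
+    Internally the matrix is built as ``Q @ diag(eigs) @ Q.T`` for a
+    Haar-distributed orthogonal ``Q`` (drawn via `ortho_group`) and is then
+    rotated to unit diagonal with Givens rotations, as described in the
+    Notes section below.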
+ + The `eigs` keyword specifies the eigenvalues of the correlation matrix, + and implies the dimension. + + Methods + ------- + rvs(eigs=None, random_state=None) + Draw random correlation matrices, all with eigenvalues eigs. + + Parameters + ---------- + eigs : 1d ndarray + Eigenvalues of correlation matrix + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + tol : float, optional + Tolerance for input parameter checks + diag_tol : float, optional + Tolerance for deviation of the diagonal of the resulting + matrix. Default: 1e-7 + + Raises + ------ + RuntimeError + Floating point error prevented generating a valid correlation + matrix. + + Returns + ------- + rvs : ndarray or scalar + Random size N-dimensional matrices, dimension (size, dim, dim), + each having eigenvalues eigs. + + Notes + ----- + + Generates a random correlation matrix following a numerically stable + algorithm spelled out by Davies & Higham. This algorithm uses a single O(N) + similarity transformation to construct a symmetric positive semi-definite + matrix, and applies a series of Givens rotations to scale it to have ones + on the diagonal. + + References + ---------- + + .. [1] Davies, Philip I; Higham, Nicholas J; "Numerically stable generation + of correlation matrices and their factors", BIT 2000, Vol. 40, + No. 4, pp. 640 651 + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import random_correlation + >>> rng = np.random.default_rng() + >>> x = random_correlation.rvs((.5, .8, 1.2, 1.5), random_state=rng) + >>> x + array([[ 1. , -0.02423399, 0.03130519, 0.4946965 ], + [-0.02423399, 1. , 0.20334736, 0.04039817], + [ 0.03130519, 0.20334736, 1. , 0.02694275], + [ 0.4946965 , 0.04039817, 0.02694275, 1. ]]) + >>> import scipy.linalg + >>> e, v = scipy.linalg.eigh(x) + >>> e + array([ 0.5, 0.8, 1.2, 1.5]) + + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__) + + def __call__(self, eigs, seed=None, tol=1e-13, diag_tol=1e-7): + """Create a frozen random correlation matrix. + + See `random_correlation_frozen` for more information. + """ + return random_correlation_frozen(eigs, seed=seed, tol=tol, + diag_tol=diag_tol) + + def _process_parameters(self, eigs, tol): + eigs = np.asarray(eigs, dtype=float) + dim = eigs.size + + if eigs.ndim != 1 or eigs.shape[0] != dim or dim <= 1: + raise ValueError("Array 'eigs' must be a vector of length " + "greater than 1.") + + if np.fabs(np.sum(eigs) - dim) > tol: + raise ValueError("Sum of eigenvalues must equal dimensionality.") + + for x in eigs: + if x < -tol: + raise ValueError("All eigenvalues must be non-negative.") + + return dim, eigs + + def _givens_to_1(self, aii, ajj, aij): + """Computes a 2x2 Givens matrix to put 1's on the diagonal. + + The input matrix is a 2x2 symmetric matrix M = [ aii aij ; aij ajj ]. + + The output matrix g is a 2x2 anti-symmetric matrix of the form + [ c s ; -s c ]; the elements c and s are returned. + + Applying the output matrix to the input matrix (as b=g.T M g) + results in a matrix with bii=1, provided tr(M) - det(M) >= 1 + and floating point issues do not occur. Otherwise, some other + valid rotation is returned. When tr(M)==2, also bjj=1. 
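+
+        As a rough worked example: for the trace-2 input aii=1.5, ajj=0.5,
+        aij=0.5 the returned pair is approximately (c, s) = (0.3827, -0.9239),
+        and b = g.T M g then has bii = bjj = 1 up to rounding.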
+ + """ + aiid = aii - 1. + ajjd = ajj - 1. + + if ajjd == 0: + # ajj==1, so swap aii and ajj to avoid division by zero + return 0., 1. + + dd = math.sqrt(max(aij**2 - aiid*ajjd, 0)) + + # The choice of t should be chosen to avoid cancellation [1] + t = (aij + math.copysign(dd, aij)) / ajjd + c = 1. / math.sqrt(1. + t*t) + if c == 0: + # Underflow + s = 1.0 + else: + s = c*t + return c, s + + def _to_corr(self, m): + """ + Given a psd matrix m, rotate to put one's on the diagonal, turning it + into a correlation matrix. This also requires the trace equal the + dimensionality. Note: modifies input matrix + """ + # Check requirements for in-place Givens + if not (m.flags.c_contiguous and m.dtype == np.float64 and + m.shape[0] == m.shape[1]): + raise ValueError() + + d = m.shape[0] + for i in range(d-1): + if m[i, i] == 1: + continue + elif m[i, i] > 1: + for j in range(i+1, d): + if m[j, j] < 1: + break + else: + for j in range(i+1, d): + if m[j, j] > 1: + break + + c, s = self._givens_to_1(m[i, i], m[j, j], m[i, j]) + + # Use BLAS to apply Givens rotations in-place. Equivalent to: + # g = np.eye(d) + # g[i, i] = g[j,j] = c + # g[j, i] = -s; g[i, j] = s + # m = np.dot(g.T, np.dot(m, g)) + mv = m.ravel() + drot(mv, mv, c, -s, n=d, + offx=i*d, incx=1, offy=j*d, incy=1, + overwrite_x=True, overwrite_y=True) + drot(mv, mv, c, -s, n=d, + offx=i, incx=d, offy=j, incy=d, + overwrite_x=True, overwrite_y=True) + + return m + + def rvs(self, eigs, random_state=None, tol=1e-13, diag_tol=1e-7): + """Draw random correlation matrices. + + Parameters + ---------- + eigs : 1d ndarray + Eigenvalues of correlation matrix + tol : float, optional + Tolerance for input parameter checks + diag_tol : float, optional + Tolerance for deviation of the diagonal of the resulting + matrix. Default: 1e-7 + + Raises + ------ + RuntimeError + Floating point error prevented generating a valid correlation + matrix. + + Returns + ------- + rvs : ndarray or scalar + Random size N-dimensional matrices, dimension (size, dim, dim), + each having eigenvalues eigs. + + """ + dim, eigs = self._process_parameters(eigs, tol=tol) + + random_state = self._get_random_state(random_state) + + m = ortho_group.rvs(dim, random_state=random_state) + m = np.dot(np.dot(m, np.diag(eigs)), m.T) # Set the trace of m + m = self._to_corr(m) # Carefully rotate to unit diagonal + + # Check diagonal + if abs(m.diagonal() - 1).max() > diag_tol: + raise RuntimeError("Failed to generate a valid correlation matrix") + + return m + + +random_correlation = random_correlation_gen() + + +class random_correlation_frozen(multi_rv_frozen): + def __init__(self, eigs, seed=None, tol=1e-13, diag_tol=1e-7): + """Create a frozen random correlation matrix distribution. + + Parameters + ---------- + eigs : 1d ndarray + Eigenvalues of correlation matrix + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + tol : float, optional + Tolerance for input parameter checks + diag_tol : float, optional + Tolerance for deviation of the diagonal of the resulting + matrix. Default: 1e-7 + + Raises + ------ + RuntimeError + Floating point error prevented generating a valid correlation + matrix. 
+ + Returns + ------- + rvs : ndarray or scalar + Random size N-dimensional matrices, dimension (size, dim, dim), + each having eigenvalues eigs. + """ # numpy/numpydoc#87 # noqa: E501 + + self._dist = random_correlation_gen(seed) + self.tol = tol + self.diag_tol = diag_tol + _, self.eigs = self._dist._process_parameters(eigs, tol=self.tol) + + def rvs(self, random_state=None): + return self._dist.rvs(self.eigs, random_state=random_state, + tol=self.tol, diag_tol=self.diag_tol) + + +class unitary_group_gen(multi_rv_generic): + r"""A matrix-valued U(N) random variable. + + Return a random unitary matrix. + + The `dim` keyword specifies the dimension N. + + Methods + ------- + rvs(dim=None, size=1, random_state=None) + Draw random samples from U(N). + + Parameters + ---------- + dim : scalar + Dimension of matrices, must be greater than 1. + seed : {None, int, np.random.RandomState, np.random.Generator}, optional + Used for drawing random variates. + If `seed` is `None`, the `~np.random.RandomState` singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, seeded + with seed. + If `seed` is already a ``RandomState`` or ``Generator`` instance, + then that object is used. + Default is `None`. + + Notes + ----- + This class is similar to `ortho_group`. + + References + ---------- + .. [1] F. Mezzadri, "How to generate random matrices from the classical + compact groups", :arXiv:`math-ph/0609050v2`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import unitary_group + >>> x = unitary_group.rvs(3) + + >>> np.dot(x, x.conj().T) + array([[ 1.00000000e+00, 1.13231364e-17, -2.86852790e-16], + [ 1.13231364e-17, 1.00000000e+00, -1.46845020e-16], + [ -2.86852790e-16, -1.46845020e-16, 1.00000000e+00]]) + + This generates one random matrix from U(3). The dot product confirms that + it is unitary up to machine precision. + + Alternatively, the object may be called (as a function) to fix the `dim` + parameter, return a "frozen" unitary_group random variable: + + >>> rv = unitary_group(5) + + See Also + -------- + ortho_group + + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__) + + def __call__(self, dim=None, seed=None): + """Create a frozen (U(N)) n-dimensional unitary matrix distribution. + + See `unitary_group_frozen` for more information. + """ + return unitary_group_frozen(dim, seed=seed) + + def _process_parameters(self, dim): + """Dimension N must be specified; it cannot be inferred.""" + if dim is None or not np.isscalar(dim) or dim <= 1 or dim != int(dim): + raise ValueError("Dimension of rotation must be specified," + "and must be a scalar greater than 1.") + + return dim + + def rvs(self, dim, size=1, random_state=None): + """Draw random samples from U(N). + + Parameters + ---------- + dim : integer + Dimension of space (N). + size : integer, optional + Number of samples to draw (default 1). + + Returns + ------- + rvs : ndarray or scalar + Random size N-dimensional matrices, dimension (size, dim, dim) + + """ + random_state = self._get_random_state(random_state) + + size = int(size) + + dim = self._process_parameters(dim) + + size = (size,) if size > 1 else () + z = 1/math.sqrt(2)*(random_state.normal(size=size + (dim, dim)) + + 1j*random_state.normal(size=size + (dim, dim))) + q, r = np.linalg.qr(z) + # The last two dimensions are the rows and columns of R matrices. + # Extract the diagonals. Note that this eliminates a dimension. 
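+        # The rescaling just below multiplies each column of Q by the
+        # unit-modulus phase d/|d|, removing the phase ambiguity of the QR
+        # factorization; per the Mezzadri reference above, this is what makes
+        # Q Haar-distributed rather than merely unitary.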
+ d = r.diagonal(offset=0, axis1=-2, axis2=-1) + # Add back a dimension for proper broadcasting: we're dividing + # each row of each R matrix by the diagonal of the R matrix. + q *= (d/abs(d))[..., np.newaxis, :] # to broadcast properly + return q + + +unitary_group = unitary_group_gen() + + +class unitary_group_frozen(multi_rv_frozen): + def __init__(self, dim=None, seed=None): + """Create a frozen (U(N)) n-dimensional unitary matrix distribution. + + Parameters + ---------- + dim : scalar + Dimension of matrices + seed : {None, int, `numpy.random.Generator`, `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + + Examples + -------- + >>> from scipy.stats import unitary_group + >>> x = unitary_group(3) + >>> x.rvs() + + """ # numpy/numpydoc#87 # noqa: E501 + self._dist = unitary_group_gen(seed) + self.dim = self._dist._process_parameters(dim) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(self.dim, size, random_state) + + +_mvt_doc_default_callparams = """\ +loc : array_like, optional + Location of the distribution. (default ``0``) +shape : array_like, optional + Positive semidefinite matrix of the distribution. (default ``1``) +df : float, optional + Degrees of freedom of the distribution; must be greater than zero. + If ``np.inf`` then results are multivariate normal. The default is ``1``. +allow_singular : bool, optional + Whether to allow a singular matrix. (default ``False``) +""" + +_mvt_doc_callparams_note = """\ +Setting the parameter `loc` to ``None`` is equivalent to having `loc` +be the zero-vector. The parameter `shape` can be a scalar, in which case +the shape matrix is the identity times that value, a vector of +diagonal entries for the shape matrix, or a two-dimensional array_like. +""" + +_mvt_doc_frozen_callparams_note = """\ +See class definition for a detailed description of parameters.""" + +mvt_docdict_params = { + '_mvt_doc_default_callparams': _mvt_doc_default_callparams, + '_mvt_doc_callparams_note': _mvt_doc_callparams_note, + '_doc_random_state': _doc_random_state +} + +mvt_docdict_noparams = { + '_mvt_doc_default_callparams': "", + '_mvt_doc_callparams_note': _mvt_doc_frozen_callparams_note, + '_doc_random_state': _doc_random_state +} + + +class multivariate_t_gen(multi_rv_generic): + r"""A multivariate t-distributed random variable. + + The `loc` parameter specifies the location. The `shape` parameter specifies + the positive semidefinite shape matrix. The `df` parameter specifies the + degrees of freedom. + + In addition to calling the methods below, the object itself may be called + as a function to fix the location, shape matrix, and degrees of freedom + parameters, returning a "frozen" multivariate t-distribution random. + + Methods + ------- + pdf(x, loc=None, shape=1, df=1, allow_singular=False) + Probability density function. + logpdf(x, loc=None, shape=1, df=1, allow_singular=False) + Log of the probability density function. + cdf(x, loc=None, shape=1, df=1, allow_singular=False, *, + maxpts=None, lower_limit=None, random_state=None) + Cumulative distribution function. + rvs(loc=None, shape=1, df=1, size=1, random_state=None) + Draw random samples from a multivariate t-distribution. + entropy(loc=None, shape=1, df=1) + Differential entropy of a multivariate t-distribution. 
+ + Parameters + ---------- + %(_mvt_doc_default_callparams)s + %(_doc_random_state)s + + Notes + ----- + %(_mvt_doc_callparams_note)s + The matrix `shape` must be a (symmetric) positive semidefinite matrix. The + determinant and inverse of `shape` are computed as the pseudo-determinant + and pseudo-inverse, respectively, so that `shape` does not need to have + full rank. + + The probability density function for `multivariate_t` is + + .. math:: + + f(x) = \frac{\Gamma((\nu + p)/2)}{\Gamma(\nu/2)\nu^{p/2}\pi^{p/2}|\Sigma|^{1/2}} + \left[1 + \frac{1}{\nu} (\mathbf{x} - \boldsymbol{\mu})^{\top} + \boldsymbol{\Sigma}^{-1} + (\mathbf{x} - \boldsymbol{\mu}) \right]^{-(\nu + p)/2}, + + where :math:`p` is the dimension of :math:`\mathbf{x}`, + :math:`\boldsymbol{\mu}` is the :math:`p`-dimensional location, + :math:`\boldsymbol{\Sigma}` the :math:`p \times p`-dimensional shape + matrix, and :math:`\nu` is the degrees of freedom. + + .. versionadded:: 1.6.0 + + References + ---------- + .. [1] Arellano-Valle et al. "Shannon Entropy and Mutual Information for + Multivariate Skew-Elliptical Distributions". Scandinavian Journal + of Statistics. Vol. 40, issue 1. + + Examples + -------- + The object may be called (as a function) to fix the `loc`, `shape`, + `df`, and `allow_singular` parameters, returning a "frozen" + multivariate_t random variable: + + >>> import numpy as np + >>> from scipy.stats import multivariate_t + >>> rv = multivariate_t([1.0, -0.5], [[2.1, 0.3], [0.3, 1.5]], df=2) + >>> # Frozen object with the same methods but holding the given location, + >>> # scale, and degrees of freedom fixed. + + Create a contour plot of the PDF. + + >>> import matplotlib.pyplot as plt + >>> x, y = np.mgrid[-1:3:.01, -2:1.5:.01] + >>> pos = np.dstack((x, y)) + >>> fig, ax = plt.subplots(1, 1) + >>> ax.set_aspect('equal') + >>> plt.contourf(x, y, rv.pdf(pos)) + + """ + + def __init__(self, seed=None): + """Initialize a multivariate t-distributed random variable. + + Parameters + ---------- + seed : Random state. + + """ + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__, mvt_docdict_params) + self._random_state = check_random_state(seed) + + def __call__(self, loc=None, shape=1, df=1, allow_singular=False, + seed=None): + """Create a frozen multivariate t-distribution. + + See `multivariate_t_frozen` for parameters. + """ + if df == np.inf: + return multivariate_normal_frozen(mean=loc, cov=shape, + allow_singular=allow_singular, + seed=seed) + return multivariate_t_frozen(loc=loc, shape=shape, df=df, + allow_singular=allow_singular, seed=seed) + + def pdf(self, x, loc=None, shape=1, df=1, allow_singular=False): + """Multivariate t-distribution probability density function. + + Parameters + ---------- + x : array_like + Points at which to evaluate the probability density function. + %(_mvt_doc_default_callparams)s + + Returns + ------- + pdf : Probability density function evaluated at `x`. 
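+
+        Notes
+        -----
+        The value is obtained by exponentiating `logpdf`; when the densities
+        involved are extremely small it can be preferable to work with
+        `logpdf` directly.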
+ + Examples + -------- + >>> from scipy.stats import multivariate_t + >>> x = [0.4, 5] + >>> loc = [0, 1] + >>> shape = [[1, 0.1], [0.1, 1]] + >>> df = 7 + >>> multivariate_t.pdf(x, loc, shape, df) + 0.00075713 + + """ + dim, loc, shape, df = self._process_parameters(loc, shape, df) + x = self._process_quantiles(x, dim) + shape_info = _PSD(shape, allow_singular=allow_singular) + logpdf = self._logpdf(x, loc, shape_info.U, shape_info.log_pdet, df, + dim, shape_info.rank) + return np.exp(logpdf) + + def logpdf(self, x, loc=None, shape=1, df=1): + """Log of the multivariate t-distribution probability density function. + + Parameters + ---------- + x : array_like + Points at which to evaluate the log of the probability density + function. + %(_mvt_doc_default_callparams)s + + Returns + ------- + logpdf : Log of the probability density function evaluated at `x`. + + Examples + -------- + >>> from scipy.stats import multivariate_t + >>> x = [0.4, 5] + >>> loc = [0, 1] + >>> shape = [[1, 0.1], [0.1, 1]] + >>> df = 7 + >>> multivariate_t.logpdf(x, loc, shape, df) + -7.1859802 + + See Also + -------- + pdf : Probability density function. + + """ + dim, loc, shape, df = self._process_parameters(loc, shape, df) + x = self._process_quantiles(x, dim) + shape_info = _PSD(shape) + return self._logpdf(x, loc, shape_info.U, shape_info.log_pdet, df, dim, + shape_info.rank) + + def _logpdf(self, x, loc, prec_U, log_pdet, df, dim, rank): + """Utility method `pdf`, `logpdf` for parameters. + + Parameters + ---------- + x : ndarray + Points at which to evaluate the log of the probability density + function. + loc : ndarray + Location of the distribution. + prec_U : ndarray + A decomposition such that `np.dot(prec_U, prec_U.T)` is the inverse + of the shape matrix. + log_pdet : float + Logarithm of the determinant of the shape matrix. + df : float + Degrees of freedom of the distribution. + dim : int + Dimension of the quantiles x. + rank : int + Rank of the shape matrix. + + Notes + ----- + As this function does no argument checking, it should not be called + directly; use 'logpdf' instead. + + """ + if df == np.inf: + return multivariate_normal._logpdf(x, loc, prec_U, log_pdet, rank) + + dev = x - loc + maha = np.square(np.dot(dev, prec_U)).sum(axis=-1) + + t = 0.5 * (df + dim) + A = gammaln(t) + B = gammaln(0.5 * df) + C = dim/2. * np.log(df * np.pi) + D = 0.5 * log_pdet + E = -t * np.log(1 + (1./df) * maha) + + return _squeeze_output(A - B - C - D + E) + + def _cdf(self, x, loc, shape, df, dim, maxpts=None, lower_limit=None, + random_state=None): + + # All of this - random state validation, maxpts, apply_along_axis, + # etc. 
needs to go in this private method unless we want + # frozen distribution's `cdf` method to duplicate it or call `cdf`, + # which would require re-processing parameters + if random_state is not None: + rng = check_random_state(random_state) + else: + rng = self._random_state + + if not maxpts: + maxpts = 1000 * dim + + x = self._process_quantiles(x, dim) + lower_limit = (np.full(loc.shape, -np.inf) + if lower_limit is None else lower_limit) + + # remove the mean + x, lower_limit = x - loc, lower_limit - loc + + b, a = np.broadcast_arrays(x, lower_limit) + i_swap = b < a + signs = (-1)**(i_swap.sum(axis=-1)) # odd # of swaps -> negative + a, b = a.copy(), b.copy() + a[i_swap], b[i_swap] = b[i_swap], a[i_swap] + n = x.shape[-1] + limits = np.concatenate((a, b), axis=-1) + + def func1d(limits): + a, b = limits[:n], limits[n:] + return _qmvt(maxpts, df, shape, a, b, rng)[0] + + res = np.apply_along_axis(func1d, -1, limits) * signs + # Fixing the output shape for existing distributions is a separate + # issue. For now, let's keep this consistent with pdf. + return _squeeze_output(res) + + def cdf(self, x, loc=None, shape=1, df=1, allow_singular=False, *, + maxpts=None, lower_limit=None, random_state=None): + """Multivariate t-distribution cumulative distribution function. + + Parameters + ---------- + x : array_like + Points at which to evaluate the cumulative distribution function. + %(_mvt_doc_default_callparams)s + maxpts : int, optional + Maximum number of points to use for integration. The default is + 1000 times the number of dimensions. + lower_limit : array_like, optional + Lower limit of integration of the cumulative distribution function. + Default is negative infinity. Must be broadcastable with `x`. + %(_doc_random_state)s + + Returns + ------- + cdf : ndarray or scalar + Cumulative distribution function evaluated at `x`. + + Examples + -------- + >>> from scipy.stats import multivariate_t + >>> x = [0.4, 5] + >>> loc = [0, 1] + >>> shape = [[1, 0.1], [0.1, 1]] + >>> df = 7 + >>> multivariate_t.cdf(x, loc, shape, df) + 0.64798491 + + """ + dim, loc, shape, df = self._process_parameters(loc, shape, df) + shape = _PSD(shape, allow_singular=allow_singular)._M + + return self._cdf(x, loc, shape, df, dim, maxpts, + lower_limit, random_state) + + def _entropy(self, dim, df=1, shape=1): + if df == np.inf: + return multivariate_normal(None, cov=shape).entropy() + + shape_info = _PSD(shape) + shape_term = 0.5 * shape_info.log_pdet + + def regular(dim, df): + halfsum = 0.5 * (dim + df) + half_df = 0.5 * df + return ( + -gammaln(halfsum) + gammaln(half_df) + + 0.5 * dim * np.log(df * np.pi) + halfsum + * (psi(halfsum) - psi(half_df)) + + shape_term + ) + + def asymptotic(dim, df): + # Formula from Wolfram Alpha: + # "asymptotic expansion -gammaln((m+d)/2) + gammaln(d/2) + (m*log(d*pi))/2 + # + ((m+d)/2) * (digamma((m+d)/2) - digamma(d/2))" + return ( + dim * norm._entropy() + dim / df + - dim * (dim - 2) * df**-2.0 / 4 + + dim**2 * (dim - 2) * df**-3.0 / 6 + + dim * (-3 * dim**3 + 8 * dim**2 - 8) * df**-4.0 / 24 + + dim**2 * (3 * dim**3 - 10 * dim**2 + 16) * df**-5.0 / 30 + + shape_term + )[()] + + # preserves ~12 digits accuracy up to at least `dim=1e5`. See gh-18465. + threshold = dim * 100 * 4 / (np.log(dim) + 1) + return _lazywhere(df >= threshold, (dim, df), f=asymptotic, f2=regular) + + def entropy(self, loc=None, shape=1, df=1): + """Calculate the differential entropy of a multivariate + t-distribution. 
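+
+        When ``df`` is exactly ``np.inf`` the entropy of the corresponding
+        multivariate normal distribution is returned.  For large finite
+        ``df`` an asymptotic series in ``1/df`` is used internally, which
+        avoids the loss of precision the closed-form gamma/digamma
+        expression can suffer in that regime.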
+ + Parameters + ---------- + %(_mvt_doc_default_callparams)s + + Returns + ------- + h : float + Differential entropy + + """ + dim, loc, shape, df = self._process_parameters(None, shape, df) + return self._entropy(dim, df, shape) + + def rvs(self, loc=None, shape=1, df=1, size=1, random_state=None): + """Draw random samples from a multivariate t-distribution. + + Parameters + ---------- + %(_mvt_doc_default_callparams)s + size : integer, optional + Number of samples to draw (default 1). + %(_doc_random_state)s + + Returns + ------- + rvs : ndarray or scalar + Random variates of size (`size`, `P`), where `P` is the + dimension of the random variable. + + Examples + -------- + >>> from scipy.stats import multivariate_t + >>> x = [0.4, 5] + >>> loc = [0, 1] + >>> shape = [[1, 0.1], [0.1, 1]] + >>> df = 7 + >>> multivariate_t.rvs(loc, shape, df) + array([[0.93477495, 3.00408716]]) + + """ + # For implementation details, see equation (3): + # + # Hofert, "On Sampling from the Multivariatet Distribution", 2013 + # http://rjournal.github.io/archive/2013-2/hofert.pdf + # + dim, loc, shape, df = self._process_parameters(loc, shape, df) + if random_state is not None: + rng = check_random_state(random_state) + else: + rng = self._random_state + + if np.isinf(df): + x = np.ones(size) + else: + x = rng.chisquare(df, size=size) / df + + z = rng.multivariate_normal(np.zeros(dim), shape, size=size) + samples = loc + z / np.sqrt(x)[..., None] + return _squeeze_output(samples) + + def _process_quantiles(self, x, dim): + """ + Adjust quantiles array so that last axis labels the components of + each data point. + """ + x = np.asarray(x, dtype=float) + if x.ndim == 0: + x = x[np.newaxis] + elif x.ndim == 1: + if dim == 1: + x = x[:, np.newaxis] + else: + x = x[np.newaxis, :] + return x + + def _process_parameters(self, loc, shape, df): + """ + Infer dimensionality from location array and shape matrix, handle + defaults, and ensure compatible dimensions. + """ + if loc is None and shape is None: + loc = np.asarray(0, dtype=float) + shape = np.asarray(1, dtype=float) + dim = 1 + elif loc is None: + shape = np.asarray(shape, dtype=float) + if shape.ndim < 2: + dim = 1 + else: + dim = shape.shape[0] + loc = np.zeros(dim) + elif shape is None: + loc = np.asarray(loc, dtype=float) + dim = loc.size + shape = np.eye(dim) + else: + shape = np.asarray(shape, dtype=float) + loc = np.asarray(loc, dtype=float) + dim = loc.size + + if dim == 1: + loc = loc.reshape(1) + shape = shape.reshape(1, 1) + + if loc.ndim != 1 or loc.shape[0] != dim: + raise ValueError("Array 'loc' must be a vector of length %d." % + dim) + if shape.ndim == 0: + shape = shape * np.eye(dim) + elif shape.ndim == 1: + shape = np.diag(shape) + elif shape.ndim == 2 and shape.shape != (dim, dim): + rows, cols = shape.shape + if rows != cols: + msg = ("Array 'cov' must be square if it is two dimensional," + " but cov.shape = %s." % str(shape.shape)) + else: + msg = ("Dimension mismatch: array 'cov' is of shape %s," + " but 'loc' is a vector of length %d.") + msg = msg % (str(shape.shape), len(loc)) + raise ValueError(msg) + elif shape.ndim > 2: + raise ValueError("Array 'cov' must be at most two-dimensional," + " but cov.ndim = %d" % shape.ndim) + + # Process degrees of freedom. 
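+        # `df` defaults to 1 when None.  np.inf is accepted and treated as
+        # the multivariate normal limit elsewhere (see `_logpdf`, `_entropy`
+        # and `rvs`); nan and non-positive values are rejected below.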
+ if df is None: + df = 1 + elif df <= 0: + raise ValueError("'df' must be greater than zero.") + elif np.isnan(df): + raise ValueError("'df' is 'nan' but must be greater than zero or 'np.inf'.") + + return dim, loc, shape, df + + +class multivariate_t_frozen(multi_rv_frozen): + + def __init__(self, loc=None, shape=1, df=1, allow_singular=False, + seed=None): + """Create a frozen multivariate t distribution. + + Parameters + ---------- + %(_mvt_doc_default_callparams)s + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import multivariate_t + >>> loc = np.zeros(3) + >>> shape = np.eye(3) + >>> df = 10 + >>> dist = multivariate_t(loc, shape, df) + >>> dist.rvs() + array([[ 0.81412036, -1.53612361, 0.42199647]]) + >>> dist.pdf([1, 1, 1]) + array([0.01237803]) + + """ + self._dist = multivariate_t_gen(seed) + dim, loc, shape, df = self._dist._process_parameters(loc, shape, df) + self.dim, self.loc, self.shape, self.df = dim, loc, shape, df + self.shape_info = _PSD(shape, allow_singular=allow_singular) + + def logpdf(self, x): + x = self._dist._process_quantiles(x, self.dim) + U = self.shape_info.U + log_pdet = self.shape_info.log_pdet + return self._dist._logpdf(x, self.loc, U, log_pdet, self.df, self.dim, + self.shape_info.rank) + + def cdf(self, x, *, maxpts=None, lower_limit=None, random_state=None): + x = self._dist._process_quantiles(x, self.dim) + return self._dist._cdf(x, self.loc, self.shape, self.df, self.dim, + maxpts, lower_limit, random_state) + + def pdf(self, x): + return np.exp(self.logpdf(x)) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(loc=self.loc, + shape=self.shape, + df=self.df, + size=size, + random_state=random_state) + + def entropy(self): + return self._dist._entropy(self.dim, self.df, self.shape) + + +multivariate_t = multivariate_t_gen() + + +# Set frozen generator docstrings from corresponding docstrings in +# multivariate_t_gen and fill in default strings in class docstrings +for name in ['logpdf', 'pdf', 'rvs', 'cdf', 'entropy']: + method = multivariate_t_gen.__dict__[name] + method_frozen = multivariate_t_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat(method.__doc__, + mvt_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, mvt_docdict_params) + + +_mhg_doc_default_callparams = """\ +m : array_like + The number of each type of object in the population. + That is, :math:`m[i]` is the number of objects of + type :math:`i`. +n : array_like + The number of samples taken from the population. +""" + +_mhg_doc_callparams_note = """\ +`m` must be an array of positive integers. If the quantile +:math:`i` contains values out of the range :math:`[0, m_i]` +where :math:`m_i` is the number of objects of type :math:`i` +in the population or if the parameters are inconsistent with one +another (e.g. ``x.sum() != n``), methods return the appropriate +value (e.g. ``0`` for ``pmf``). If `m` or `n` contain negative +values, the result will contain ``nan`` there. 
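+
+As in the examples in the class docstring, `m` and `n` broadcast against
+each other, with the last axis of `m` (and of the quantiles `x`) indexing
+the object types.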
+""" + +_mhg_doc_frozen_callparams = "" + +_mhg_doc_frozen_callparams_note = """\ +See class definition for a detailed description of parameters.""" + +mhg_docdict_params = { + '_doc_default_callparams': _mhg_doc_default_callparams, + '_doc_callparams_note': _mhg_doc_callparams_note, + '_doc_random_state': _doc_random_state +} + +mhg_docdict_noparams = { + '_doc_default_callparams': _mhg_doc_frozen_callparams, + '_doc_callparams_note': _mhg_doc_frozen_callparams_note, + '_doc_random_state': _doc_random_state +} + + +class multivariate_hypergeom_gen(multi_rv_generic): + r"""A multivariate hypergeometric random variable. + + Methods + ------- + pmf(x, m, n) + Probability mass function. + logpmf(x, m, n) + Log of the probability mass function. + rvs(m, n, size=1, random_state=None) + Draw random samples from a multivariate hypergeometric + distribution. + mean(m, n) + Mean of the multivariate hypergeometric distribution. + var(m, n) + Variance of the multivariate hypergeometric distribution. + cov(m, n) + Compute the covariance matrix of the multivariate + hypergeometric distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + %(_doc_random_state)s + + Notes + ----- + %(_doc_callparams_note)s + + The probability mass function for `multivariate_hypergeom` is + + .. math:: + + P(X_1 = x_1, X_2 = x_2, \ldots, X_k = x_k) = \frac{\binom{m_1}{x_1} + \binom{m_2}{x_2} \cdots \binom{m_k}{x_k}}{\binom{M}{n}}, \\ \quad + (x_1, x_2, \ldots, x_k) \in \mathbb{N}^k \text{ with } + \sum_{i=1}^k x_i = n + + where :math:`m_i` are the number of objects of type :math:`i`, :math:`M` + is the total number of objects in the population (sum of all the + :math:`m_i`), and :math:`n` is the size of the sample to be taken + from the population. + + .. versionadded:: 1.6.0 + + Examples + -------- + To evaluate the probability mass function of the multivariate + hypergeometric distribution, with a dichotomous population of size + :math:`10` and :math:`20`, at a sample of size :math:`12` with + :math:`8` objects of the first type and :math:`4` objects of the + second type, use: + + >>> from scipy.stats import multivariate_hypergeom + >>> multivariate_hypergeom.pmf(x=[8, 4], m=[10, 20], n=12) + 0.0025207176631464523 + + The `multivariate_hypergeom` distribution is identical to the + corresponding `hypergeom` distribution (tiny numerical differences + notwithstanding) when only two types (good and bad) of objects + are present in the population as in the example above. Consider + another example for a comparison with the hypergeometric distribution: + + >>> from scipy.stats import hypergeom + >>> multivariate_hypergeom.pmf(x=[3, 1], m=[10, 5], n=4) + 0.4395604395604395 + >>> hypergeom.pmf(k=3, M=15, n=4, N=10) + 0.43956043956044005 + + The functions ``pmf``, ``logpmf``, ``mean``, ``var``, ``cov``, and ``rvs`` + support broadcasting, under the convention that the vector parameters + (``x``, ``m``, and ``n``) are interpreted as if each row along the last + axis is a single object. For instance, we can combine the previous two + calls to `multivariate_hypergeom` as + + >>> multivariate_hypergeom.pmf(x=[[8, 4], [3, 1]], m=[[10, 20], [10, 5]], + ... n=[12, 4]) + array([0.00252072, 0.43956044]) + + This broadcasting also works for ``cov``, where the output objects are + square matrices of size ``m.shape[-1]``. 
For example: + + >>> multivariate_hypergeom.cov(m=[[7, 9], [10, 15]], n=[8, 12]) + array([[[ 1.05, -1.05], + [-1.05, 1.05]], + [[ 1.56, -1.56], + [-1.56, 1.56]]]) + + That is, ``result[0]`` is equal to + ``multivariate_hypergeom.cov(m=[7, 9], n=8)`` and ``result[1]`` is equal + to ``multivariate_hypergeom.cov(m=[10, 15], n=12)``. + + Alternatively, the object may be called (as a function) to fix the `m` + and `n` parameters, returning a "frozen" multivariate hypergeometric + random variable. + + >>> rv = multivariate_hypergeom(m=[10, 20], n=12) + >>> rv.pmf(x=[8, 4]) + 0.0025207176631464523 + + See Also + -------- + scipy.stats.hypergeom : The hypergeometric distribution. + scipy.stats.multinomial : The multinomial distribution. + + References + ---------- + .. [1] The Multivariate Hypergeometric Distribution, + http://www.randomservices.org/random/urn/MultiHypergeometric.html + .. [2] Thomas J. Sargent and John Stachurski, 2020, + Multivariate Hypergeometric Distribution + https://python.quantecon.org/multi_hyper.html + """ + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__, mhg_docdict_params) + + def __call__(self, m, n, seed=None): + """Create a frozen multivariate_hypergeom distribution. + + See `multivariate_hypergeom_frozen` for more information. + """ + return multivariate_hypergeom_frozen(m, n, seed=seed) + + def _process_parameters(self, m, n): + m = np.asarray(m) + n = np.asarray(n) + if m.size == 0: + m = m.astype(int) + if n.size == 0: + n = n.astype(int) + if not np.issubdtype(m.dtype, np.integer): + raise TypeError("'m' must an array of integers.") + if not np.issubdtype(n.dtype, np.integer): + raise TypeError("'n' must an array of integers.") + if m.ndim == 0: + raise ValueError("'m' must be an array with" + " at least one dimension.") + + # check for empty arrays + if m.size != 0: + n = n[..., np.newaxis] + + m, n = np.broadcast_arrays(m, n) + + # check for empty arrays + if m.size != 0: + n = n[..., 0] + + mcond = m < 0 + + M = m.sum(axis=-1) + + ncond = (n < 0) | (n > M) + return M, m, n, mcond, ncond, np.any(mcond, axis=-1) | ncond + + def _process_quantiles(self, x, M, m, n): + x = np.asarray(x) + if not np.issubdtype(x.dtype, np.integer): + raise TypeError("'x' must an array of integers.") + if x.ndim == 0: + raise ValueError("'x' must be an array with" + " at least one dimension.") + if not x.shape[-1] == m.shape[-1]: + raise ValueError(f"Size of each quantile must be size of 'm': " + f"received {x.shape[-1]}, " + f"but expected {m.shape[-1]}.") + + # check for empty arrays + if m.size != 0: + n = n[..., np.newaxis] + M = M[..., np.newaxis] + + x, m, n, M = np.broadcast_arrays(x, m, n, M) + + # check for empty arrays + if m.size != 0: + n, M = n[..., 0], M[..., 0] + + xcond = (x < 0) | (x > m) + return (x, M, m, n, xcond, + np.any(xcond, axis=-1) | (x.sum(axis=-1) != n)) + + def _checkresult(self, result, cond, bad_value): + result = np.asarray(result) + if cond.ndim != 0: + result[cond] = bad_value + elif cond: + return bad_value + if result.ndim == 0: + return result[()] + return result + + def _logpmf(self, x, M, m, n, mxcond, ncond): + # This equation of the pmf comes from the relation, + # n combine r = beta(n+1, 1) / beta(r+1, n-r+1) + num = np.zeros_like(m, dtype=np.float64) + den = np.zeros_like(n, dtype=np.float64) + m, x = m[~mxcond], x[~mxcond] + M, n = M[~ncond], n[~ncond] + num[~mxcond] = (betaln(m+1, 1) - betaln(x+1, m-x+1)) + den[~ncond] = (betaln(M+1, 1) - betaln(n+1, M-n+1)) + num[mxcond] = np.nan 
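+        # out-of-domain numerator entries propagate nan (the denominator is
+        # handled the same way below); `_checkresult` substitutes the final
+        # fill values in the public `logpmf`/`pmf` wrappers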
+ den[ncond] = np.nan + num = num.sum(axis=-1) + return num - den + + def logpmf(self, x, m, n): + """Log of the multivariate hypergeometric probability mass function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_doc_default_callparams)s + + Returns + ------- + logpmf : ndarray or scalar + Log of the probability mass function evaluated at `x` + + Notes + ----- + %(_doc_callparams_note)s + """ + M, m, n, mcond, ncond, mncond = self._process_parameters(m, n) + (x, M, m, n, xcond, + xcond_reduced) = self._process_quantiles(x, M, m, n) + mxcond = mcond | xcond + ncond = ncond | np.zeros(n.shape, dtype=np.bool_) + + result = self._logpmf(x, M, m, n, mxcond, ncond) + + # replace values for which x was out of the domain; broadcast + # xcond to the right shape + xcond_ = xcond_reduced | np.zeros(mncond.shape, dtype=np.bool_) + result = self._checkresult(result, xcond_, -np.inf) + + # replace values bad for n or m; broadcast + # mncond to the right shape + mncond_ = mncond | np.zeros(xcond_reduced.shape, dtype=np.bool_) + return self._checkresult(result, mncond_, np.nan) + + def pmf(self, x, m, n): + """Multivariate hypergeometric probability mass function. + + Parameters + ---------- + x : array_like + Quantiles, with the last axis of `x` denoting the components. + %(_doc_default_callparams)s + + Returns + ------- + pmf : ndarray or scalar + Probability density function evaluated at `x` + + Notes + ----- + %(_doc_callparams_note)s + """ + out = np.exp(self.logpmf(x, m, n)) + return out + + def mean(self, m, n): + """Mean of the multivariate hypergeometric distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + mean : array_like or scalar + The mean of the distribution + """ + M, m, n, _, _, mncond = self._process_parameters(m, n) + # check for empty arrays + if m.size != 0: + M, n = M[..., np.newaxis], n[..., np.newaxis] + cond = (M == 0) + M = np.ma.masked_array(M, mask=cond) + mu = n*(m/M) + if m.size != 0: + mncond = (mncond[..., np.newaxis] | + np.zeros(mu.shape, dtype=np.bool_)) + return self._checkresult(mu, mncond, np.nan) + + def var(self, m, n): + """Variance of the multivariate hypergeometric distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + array_like + The variances of the components of the distribution. This is + the diagonal of the covariance matrix of the distribution + """ + M, m, n, _, _, mncond = self._process_parameters(m, n) + # check for empty arrays + if m.size != 0: + M, n = M[..., np.newaxis], n[..., np.newaxis] + cond = (M == 0) & (M-1 == 0) + M = np.ma.masked_array(M, mask=cond) + output = n * m/M * (M-m)/M * (M-n)/(M-1) + if m.size != 0: + mncond = (mncond[..., np.newaxis] | + np.zeros(output.shape, dtype=np.bool_)) + return self._checkresult(output, mncond, np.nan) + + def cov(self, m, n): + """Covariance matrix of the multivariate hypergeometric distribution. 
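+
+        The off-diagonal entries follow
+        ``cov(x_i, x_j) = -n * (M - n) / (M - 1) * m_i * m_j / M**2``,
+        where ``M = sum(m)``; the diagonal holds the usual hypergeometric
+        variances (see `var`).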
+ + Parameters + ---------- + %(_doc_default_callparams)s + + Returns + ------- + cov : array_like + The covariance matrix of the distribution + """ + # see [1]_ for the formula and [2]_ for implementation + # cov( x_i,x_j ) = -n * (M-n)/(M-1) * (K_i*K_j) / (M**2) + M, m, n, _, _, mncond = self._process_parameters(m, n) + # check for empty arrays + if m.size != 0: + M = M[..., np.newaxis, np.newaxis] + n = n[..., np.newaxis, np.newaxis] + cond = (M == 0) & (M-1 == 0) + M = np.ma.masked_array(M, mask=cond) + output = (-n * (M-n)/(M-1) * + np.einsum("...i,...j->...ij", m, m) / (M**2)) + # check for empty arrays + if m.size != 0: + M, n = M[..., 0, 0], n[..., 0, 0] + cond = cond[..., 0, 0] + dim = m.shape[-1] + # diagonal entries need to be computed differently + for i in range(dim): + output[..., i, i] = (n * (M-n) * m[..., i]*(M-m[..., i])) + output[..., i, i] = output[..., i, i] / (M-1) + output[..., i, i] = output[..., i, i] / (M**2) + if m.size != 0: + mncond = (mncond[..., np.newaxis, np.newaxis] | + np.zeros(output.shape, dtype=np.bool_)) + return self._checkresult(output, mncond, np.nan) + + def rvs(self, m, n, size=None, random_state=None): + """Draw random samples from a multivariate hypergeometric distribution. + + Parameters + ---------- + %(_doc_default_callparams)s + size : integer or iterable of integers, optional + Number of samples to draw. Default is ``None``, in which case a + single variate is returned as an array with shape ``m.shape``. + %(_doc_random_state)s + + Returns + ------- + rvs : array_like + Random variates of shape ``size`` or ``m.shape`` + (if ``size=None``). + + Notes + ----- + %(_doc_callparams_note)s + + Also note that NumPy's `multivariate_hypergeometric` sampler is not + used as it doesn't support broadcasting. + """ + M, m, n, _, _, _ = self._process_parameters(m, n) + + random_state = self._get_random_state(random_state) + + if size is not None and isinstance(size, int): + size = (size, ) + + if size is None: + rvs = np.empty(m.shape, dtype=m.dtype) + else: + rvs = np.empty(size + (m.shape[-1], ), dtype=m.dtype) + rem = M + + # This sampler has been taken from numpy gh-13794 + # https://github.com/numpy/numpy/pull/13794 + for c in range(m.shape[-1] - 1): + rem = rem - m[..., c] + n0mask = n == 0 + rvs[..., c] = (~n0mask * + random_state.hypergeometric(m[..., c], + rem + n0mask, + n + n0mask, + size=size)) + n = n - rvs[..., c] + rvs[..., m.shape[-1] - 1] = n + + return rvs + + +multivariate_hypergeom = multivariate_hypergeom_gen() + + +class multivariate_hypergeom_frozen(multi_rv_frozen): + def __init__(self, m, n, seed=None): + self._dist = multivariate_hypergeom_gen(seed) + (self.M, self.m, self.n, + self.mcond, self.ncond, + self.mncond) = self._dist._process_parameters(m, n) + + # monkey patch self._dist + def _process_parameters(m, n): + return (self.M, self.m, self.n, + self.mcond, self.ncond, + self.mncond) + self._dist._process_parameters = _process_parameters + + def logpmf(self, x): + return self._dist.logpmf(x, self.m, self.n) + + def pmf(self, x): + return self._dist.pmf(x, self.m, self.n) + + def mean(self): + return self._dist.mean(self.m, self.n) + + def var(self): + return self._dist.var(self.m, self.n) + + def cov(self): + return self._dist.cov(self.m, self.n) + + def rvs(self, size=1, random_state=None): + return self._dist.rvs(self.m, self.n, + size=size, + random_state=random_state) + + +# Set frozen generator docstrings from corresponding docstrings in +# multivariate_hypergeom and fill in default strings in class docstrings +for 
name in ['logpmf', 'pmf', 'mean', 'var', 'cov', 'rvs']: + method = multivariate_hypergeom_gen.__dict__[name] + method_frozen = multivariate_hypergeom_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat( + method.__doc__, mhg_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, + mhg_docdict_params) + + +class random_table_gen(multi_rv_generic): + r"""Contingency tables from independent samples with fixed marginal sums. + + This is the distribution of random tables with given row and column vector + sums. This distribution represents the set of random tables under the null + hypothesis that rows and columns are independent. It is used in hypothesis + tests of independence. + + Because of assumed independence, the expected frequency of each table + element can be computed from the row and column sums, so that the + distribution is completely determined by these two vectors. + + Methods + ------- + logpmf(x) + Log-probability of table `x` to occur in the distribution. + pmf(x) + Probability of table `x` to occur in the distribution. + mean(row, col) + Mean table. + rvs(row, col, size=None, method=None, random_state=None) + Draw random tables with given row and column vector sums. + + Parameters + ---------- + %(_doc_row_col)s + %(_doc_random_state)s + + Notes + ----- + %(_doc_row_col_note)s + + Random elements from the distribution are generated either with Boyett's + [1]_ or Patefield's algorithm [2]_. Boyett's algorithm has + O(N) time and space complexity, where N is the total sum of entries in the + table. Patefield's algorithm has O(K x log(N)) time complexity, where K is + the number of cells in the table and requires only a small constant work + space. By default, the `rvs` method selects the fastest algorithm based on + the input, but you can specify the algorithm with the keyword `method`. + Allowed values are "boyett" and "patefield". + + .. versionadded:: 1.10.0 + + Examples + -------- + >>> from scipy.stats import random_table + + >>> row = [1, 5] + >>> col = [2, 3, 1] + >>> random_table.mean(row, col) + array([[0.33333333, 0.5 , 0.16666667], + [1.66666667, 2.5 , 0.83333333]]) + + Alternatively, the object may be called (as a function) to fix the row + and column vector sums, returning a "frozen" distribution. + + >>> dist = random_table(row, col) + >>> dist.rvs(random_state=123) + array([[1., 0., 0.], + [1., 3., 1.]]) + + References + ---------- + .. [1] J. Boyett, AS 144 Appl. Statist. 28 (1979) 329-332 + .. [2] W.M. Patefield, AS 159 Appl. Statist. 30 (1981) 91-97 + """ + + def __init__(self, seed=None): + super().__init__(seed) + + def __call__(self, row, col, *, seed=None): + """Create a frozen distribution of tables with given marginals. + + See `random_table_frozen` for more information. + """ + return random_table_frozen(row, col, seed=seed) + + def logpmf(self, x, row, col): + """Log-probability of table to occur in the distribution. + + Parameters + ---------- + %(_doc_x)s + %(_doc_row_col)s + + Returns + ------- + logpmf : ndarray or scalar + Log of the probability mass function evaluated at `x`. + + Notes + ----- + %(_doc_row_col_note)s + + If row and column marginals of `x` do not match `row` and `col`, + negative infinity is returned. 
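+
+        For matching marginals the log-probability is evaluated in closed
+        form as ``sum(log(row!)) + sum(log(col!)) - log(ntot!) -
+        sum(log(x!))``, where ``ntot`` is the common sum of `row` and `col`
+        and the factorials are computed via ``gammaln`` for numerical
+        stability.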
+ + Examples + -------- + >>> from scipy.stats import random_table + >>> import numpy as np + + >>> x = [[1, 5, 1], [2, 3, 1]] + >>> row = np.sum(x, axis=1) + >>> col = np.sum(x, axis=0) + >>> random_table.logpmf(x, row, col) + -1.6306401200847027 + + Alternatively, the object may be called (as a function) to fix the row + and column vector sums, returning a "frozen" distribution. + + >>> d = random_table(row, col) + >>> d.logpmf(x) + -1.6306401200847027 + """ + r, c, n = self._process_parameters(row, col) + x = np.asarray(x) + + if x.ndim < 2: + raise ValueError("`x` must be at least two-dimensional") + + dtype_is_int = np.issubdtype(x.dtype, np.integer) + with np.errstate(invalid='ignore'): + if not dtype_is_int and not np.all(x.astype(int) == x): + raise ValueError("`x` must contain only integral values") + + # x does not contain NaN if we arrive here + if np.any(x < 0): + raise ValueError("`x` must contain only non-negative values") + + r2 = np.sum(x, axis=-1) + c2 = np.sum(x, axis=-2) + + if r2.shape[-1] != len(r): + raise ValueError("shape of `x` must agree with `row`") + + if c2.shape[-1] != len(c): + raise ValueError("shape of `x` must agree with `col`") + + res = np.empty(x.shape[:-2]) + + mask = np.all(r2 == r, axis=-1) & np.all(c2 == c, axis=-1) + + def lnfac(x): + return gammaln(x + 1) + + res[mask] = (np.sum(lnfac(r), axis=-1) + np.sum(lnfac(c), axis=-1) + - lnfac(n) - np.sum(lnfac(x[mask]), axis=(-1, -2))) + res[~mask] = -np.inf + + return res[()] + + def pmf(self, x, row, col): + """Probability of table to occur in the distribution. + + Parameters + ---------- + %(_doc_x)s + %(_doc_row_col)s + + Returns + ------- + pmf : ndarray or scalar + Probability mass function evaluated at `x`. + + Notes + ----- + %(_doc_row_col_note)s + + If row and column marginals of `x` do not match `row` and `col`, + zero is returned. + + Examples + -------- + >>> from scipy.stats import random_table + >>> import numpy as np + + >>> x = [[1, 5, 1], [2, 3, 1]] + >>> row = np.sum(x, axis=1) + >>> col = np.sum(x, axis=0) + >>> random_table.pmf(x, row, col) + 0.19580419580419592 + + Alternatively, the object may be called (as a function) to fix the row + and column vector sums, returning a "frozen" distribution. + + >>> d = random_table(row, col) + >>> d.pmf(x) + 0.19580419580419592 + """ + return np.exp(self.logpmf(x, row, col)) + + def mean(self, row, col): + """Mean of distribution of conditional tables. + %(_doc_mean_params)s + + Returns + ------- + mean: ndarray + Mean of the distribution. + + Notes + ----- + %(_doc_row_col_note)s + + Examples + -------- + >>> from scipy.stats import random_table + + >>> row = [1, 5] + >>> col = [2, 3, 1] + >>> random_table.mean(row, col) + array([[0.33333333, 0.5 , 0.16666667], + [1.66666667, 2.5 , 0.83333333]]) + + Alternatively, the object may be called (as a function) to fix the row + and column vector sums, returning a "frozen" distribution. + + >>> d = random_table(row, col) + >>> d.mean() + array([[0.33333333, 0.5 , 0.16666667], + [1.66666667, 2.5 , 0.83333333]]) + """ + r, c, n = self._process_parameters(row, col) + return np.outer(r, c) / n + + def rvs(self, row, col, *, size=None, method=None, random_state=None): + """Draw random tables with fixed column and row marginals. + + Parameters + ---------- + %(_doc_row_col)s + size : integer, optional + Number of samples to draw (default 1). + method : str, optional + Which method to use, "boyett" or "patefield". If None (default), + selects the fastest method for this input. 
+ %(_doc_random_state)s + + Returns + ------- + rvs : ndarray + Random 2D tables of shape (`size`, `len(row)`, `len(col)`). + + Notes + ----- + %(_doc_row_col_note)s + + Examples + -------- + >>> from scipy.stats import random_table + + >>> row = [1, 5] + >>> col = [2, 3, 1] + >>> random_table.rvs(row, col, random_state=123) + array([[1., 0., 0.], + [1., 3., 1.]]) + + Alternatively, the object may be called (as a function) to fix the row + and column vector sums, returning a "frozen" distribution. + + >>> d = random_table(row, col) + >>> d.rvs(random_state=123) + array([[1., 0., 0.], + [1., 3., 1.]]) + """ + r, c, n = self._process_parameters(row, col) + size, shape = self._process_size_shape(size, r, c) + + random_state = self._get_random_state(random_state) + meth = self._process_rvs_method(method, r, c, n) + + return meth(r, c, n, size, random_state).reshape(shape) + + @staticmethod + def _process_parameters(row, col): + """ + Check that row and column vectors are one-dimensional, that they do + not contain negative or non-integer entries, and that the sums over + both vectors are equal. + """ + r = np.array(row, dtype=np.int64, copy=True) + c = np.array(col, dtype=np.int64, copy=True) + + if np.ndim(r) != 1: + raise ValueError("`row` must be one-dimensional") + if np.ndim(c) != 1: + raise ValueError("`col` must be one-dimensional") + + if np.any(r < 0): + raise ValueError("each element of `row` must be non-negative") + if np.any(c < 0): + raise ValueError("each element of `col` must be non-negative") + + n = np.sum(r) + if n != np.sum(c): + raise ValueError("sums over `row` and `col` must be equal") + + if not np.all(r == np.asarray(row)): + raise ValueError("each element of `row` must be an integer") + if not np.all(c == np.asarray(col)): + raise ValueError("each element of `col` must be an integer") + + return r, c, n + + @staticmethod + def _process_size_shape(size, r, c): + """ + Compute the number of samples to be drawn and the shape of the output + """ + shape = (len(r), len(c)) + + if size is None: + return 1, shape + + size = np.atleast_1d(size) + if not np.issubdtype(size.dtype, np.integer) or np.any(size < 0): + raise ValueError("`size` must be a non-negative integer or `None`") + + return np.prod(size), tuple(size) + shape + + @classmethod + def _process_rvs_method(cls, method, r, c, n): + known_methods = { + None: cls._rvs_select(r, c, n), + "boyett": cls._rvs_boyett, + "patefield": cls._rvs_patefield, + } + try: + return known_methods[method] + except KeyError: + raise ValueError(f"'{method}' not recognized, " + f"must be one of {set(known_methods)}") + + @classmethod + def _rvs_select(cls, r, c, n): + fac = 1.0 # benchmarks show that this value is about 1 + k = len(r) * len(c) # number of cells + # n + 1 guards against failure if n == 0 + if n > fac * np.log(n + 1) * k: + return cls._rvs_patefield + return cls._rvs_boyett + + @staticmethod + def _rvs_boyett(row, col, ntot, size, random_state): + return _rcont.rvs_rcont1(row, col, ntot, size, random_state) + + @staticmethod + def _rvs_patefield(row, col, ntot, size, random_state): + return _rcont.rvs_rcont2(row, col, ntot, size, random_state) + + +random_table = random_table_gen() + + +class random_table_frozen(multi_rv_frozen): + def __init__(self, row, col, *, seed=None): + self._dist = random_table_gen(seed) + self._params = self._dist._process_parameters(row, col) + + # monkey patch self._dist + def _process_parameters(r, c): + return self._params + self._dist._process_parameters = _process_parameters + + def 
logpmf(self, x): + return self._dist.logpmf(x, None, None) + + def pmf(self, x): + return self._dist.pmf(x, None, None) + + def mean(self): + return self._dist.mean(None, None) + + def rvs(self, size=None, method=None, random_state=None): + # optimisations are possible here + return self._dist.rvs(None, None, size=size, method=method, + random_state=random_state) + + +_ctab_doc_row_col = """\ +row : array_like + Sum of table entries in each row. +col : array_like + Sum of table entries in each column.""" + +_ctab_doc_x = """\ +x : array-like + Two-dimensional table of non-negative integers, or a + multi-dimensional array with the last two dimensions + corresponding with the tables.""" + +_ctab_doc_row_col_note = """\ +The row and column vectors must be one-dimensional, not empty, +and each sum up to the same value. They cannot contain negative +or noninteger entries.""" + +_ctab_doc_mean_params = f""" +Parameters +---------- +{_ctab_doc_row_col}""" + +_ctab_doc_row_col_note_frozen = """\ +See class definition for a detailed description of parameters.""" + +_ctab_docdict = { + "_doc_random_state": _doc_random_state, + "_doc_row_col": _ctab_doc_row_col, + "_doc_x": _ctab_doc_x, + "_doc_mean_params": _ctab_doc_mean_params, + "_doc_row_col_note": _ctab_doc_row_col_note, +} + +_ctab_docdict_frozen = _ctab_docdict.copy() +_ctab_docdict_frozen.update({ + "_doc_row_col": "", + "_doc_mean_params": "", + "_doc_row_col_note": _ctab_doc_row_col_note_frozen, +}) + + +def _docfill(obj, docdict, template=None): + obj.__doc__ = doccer.docformat(template or obj.__doc__, docdict) + + +# Set frozen generator docstrings from corresponding docstrings in +# random_table and fill in default strings in class docstrings +_docfill(random_table_gen, _ctab_docdict) +for name in ['logpmf', 'pmf', 'mean', 'rvs']: + method = random_table_gen.__dict__[name] + method_frozen = random_table_frozen.__dict__[name] + _docfill(method_frozen, _ctab_docdict_frozen, method.__doc__) + _docfill(method, _ctab_docdict) + + +class uniform_direction_gen(multi_rv_generic): + r"""A vector-valued uniform direction. + + Return a random direction (unit vector). The `dim` keyword specifies + the dimensionality of the space. + + Methods + ------- + rvs(dim=None, size=1, random_state=None) + Draw random directions. + + Parameters + ---------- + dim : scalar + Dimension of directions. + seed : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + + Used for drawing random variates. + If `seed` is `None`, the `~np.random.RandomState` singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, seeded + with seed. + If `seed` is already a ``RandomState`` or ``Generator`` instance, + then that object is used. + Default is `None`. + + Notes + ----- + This distribution generates unit vectors uniformly distributed on + the surface of a hypersphere. These can be interpreted as random + directions. + For example, if `dim` is 3, 3D vectors from the surface of :math:`S^2` + will be sampled. + + References + ---------- + .. [1] Marsaglia, G. (1972). "Choosing a Point from the Surface of a + Sphere". Annals of Mathematical Statistics. 43 (2): 645-646. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import uniform_direction + >>> x = uniform_direction.rvs(3) + >>> np.linalg.norm(x) + 1. + + This generates one random direction, a vector on the surface of + :math:`S^2`. + + Alternatively, the object may be called (as a function) to return a frozen + distribution with fixed `dim` parameter. 
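The `%(_doc_...)s` placeholders used throughout these docstrings are filled by `doccer.docformat`, which is essentially dictionary-based %-interpolation of named docstring fragments plus indentation handling. A minimal sketch with a hypothetical fragment name (the real fragments are the `_ctab_doc_*` strings above):

docdict = {"_doc_x": "x : array_like\n    Two-dimensional table of counts."}
template = """Evaluate something at `x`.

Parameters
----------
%(_doc_x)s
"""
print(template % docdict)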
Here, + we create a `uniform_direction` with ``dim=3`` and draw 5 observations. + The samples are then arranged in an array of shape 5x3. + + >>> rng = np.random.default_rng() + >>> uniform_sphere_dist = uniform_direction(3) + >>> unit_vectors = uniform_sphere_dist.rvs(5, random_state=rng) + >>> unit_vectors + array([[ 0.56688642, -0.1332634 , -0.81294566], + [-0.427126 , -0.74779278, 0.50830044], + [ 0.3793989 , 0.92346629, 0.05715323], + [ 0.36428383, -0.92449076, -0.11231259], + [-0.27733285, 0.94410968, -0.17816678]]) + """ + + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__) + + def __call__(self, dim=None, seed=None): + """Create a frozen n-dimensional uniform direction distribution. + + See `uniform_direction` for more information. + """ + return uniform_direction_frozen(dim, seed=seed) + + def _process_parameters(self, dim): + """Dimension N must be specified; it cannot be inferred.""" + if dim is None or not np.isscalar(dim) or dim < 1 or dim != int(dim): + raise ValueError("Dimension of vector must be specified, " + "and must be an integer greater than 0.") + + return int(dim) + + def rvs(self, dim, size=None, random_state=None): + """Draw random samples from S(N-1). + + Parameters + ---------- + dim : integer + Dimension of space (N). + size : int or tuple of ints, optional + Given a shape of, for example, (m,n,k), m*n*k samples are + generated, and packed in an m-by-n-by-k arrangement. + Because each sample is N-dimensional, the output shape + is (m,n,k,N). If no shape is specified, a single (N-D) + sample is returned. + random_state : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + + Pseudorandom number generator state used to generate resamples. + + If `random_state` is ``None`` (or `np.random`), the + `numpy.random.RandomState` singleton is used. + If `random_state` is an int, a new ``RandomState`` instance is + used, seeded with `random_state`. + If `random_state` is already a ``Generator`` or ``RandomState`` + instance then that instance is used. + + Returns + ------- + rvs : ndarray + Random direction vectors + + """ + random_state = self._get_random_state(random_state) + if size is None: + size = np.array([], dtype=int) + size = np.atleast_1d(size) + + dim = self._process_parameters(dim) + + samples = _sample_uniform_direction(dim, size, random_state) + return samples + + +uniform_direction = uniform_direction_gen() + + +class uniform_direction_frozen(multi_rv_frozen): + def __init__(self, dim=None, seed=None): + """Create a frozen n-dimensional uniform direction distribution. + + Parameters + ---------- + dim : int + Dimension of matrices + seed : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + + Examples + -------- + >>> from scipy.stats import uniform_direction + >>> x = uniform_direction(3) + >>> x.rvs() + + """ + self._dist = uniform_direction_gen(seed) + self.dim = self._dist._process_parameters(dim) + + def rvs(self, size=None, random_state=None): + return self._dist.rvs(self.dim, size, random_state) + + +def _sample_uniform_direction(dim, size, random_state): + """ + Private method to generate uniform directions + Reference: Marsaglia, G. (1972). 
"Choosing a Point from the Surface of a + Sphere". Annals of Mathematical Statistics. 43 (2): 645-646. + """ + samples_shape = np.append(size, dim) + samples = random_state.standard_normal(samples_shape) + samples /= np.linalg.norm(samples, axis=-1, keepdims=True) + return samples + + +_dirichlet_mn_doc_default_callparams = """\ +alpha : array_like + The concentration parameters. The number of entries along the last axis + determines the dimensionality of the distribution. Each entry must be + strictly positive. +n : int or array_like + The number of trials. Each element must be a strictly positive integer. +""" + +_dirichlet_mn_doc_frozen_callparams = "" + +_dirichlet_mn_doc_frozen_callparams_note = """\ +See class definition for a detailed description of parameters.""" + +dirichlet_mn_docdict_params = { + '_dirichlet_mn_doc_default_callparams': _dirichlet_mn_doc_default_callparams, + '_doc_random_state': _doc_random_state +} + +dirichlet_mn_docdict_noparams = { + '_dirichlet_mn_doc_default_callparams': _dirichlet_mn_doc_frozen_callparams, + '_doc_random_state': _doc_random_state +} + + +def _dirichlet_multinomial_check_parameters(alpha, n, x=None): + + alpha = np.asarray(alpha) + n = np.asarray(n) + + if x is not None: + # Ensure that `x` and `alpha` are arrays. If the shapes are + # incompatible, NumPy will raise an appropriate error. + try: + x, alpha = np.broadcast_arrays(x, alpha) + except ValueError as e: + msg = "`x` and `alpha` must be broadcastable." + raise ValueError(msg) from e + + x_int = np.floor(x) + if np.any(x < 0) or np.any(x != x_int): + raise ValueError("`x` must contain only non-negative integers.") + x = x_int + + if np.any(alpha <= 0): + raise ValueError("`alpha` must contain only positive values.") + + n_int = np.floor(n) + if np.any(n <= 0) or np.any(n != n_int): + raise ValueError("`n` must be a positive integer.") + n = n_int + + sum_alpha = np.sum(alpha, axis=-1) + sum_alpha, n = np.broadcast_arrays(sum_alpha, n) + + return (alpha, sum_alpha, n) if x is None else (alpha, sum_alpha, n, x) + + +class dirichlet_multinomial_gen(multi_rv_generic): + r"""A Dirichlet multinomial random variable. + + The Dirichlet multinomial distribution is a compound probability + distribution: it is the multinomial distribution with number of trials + `n` and class probabilities ``p`` randomly sampled from a Dirichlet + distribution with concentration parameters ``alpha``. + + Methods + ------- + logpmf(x, alpha, n): + Log of the probability mass function. + pmf(x, alpha, n): + Probability mass function. + mean(alpha, n): + Mean of the Dirichlet multinomial distribution. + var(alpha, n): + Variance of the Dirichlet multinomial distribution. + cov(alpha, n): + The covariance of the Dirichlet multinomial distribution. + + Parameters + ---------- + %(_dirichlet_mn_doc_default_callparams)s + %(_doc_random_state)s + + See Also + -------- + scipy.stats.dirichlet : The dirichlet distribution. + scipy.stats.multinomial : The multinomial distribution. + + References + ---------- + .. [1] Dirichlet-multinomial distribution, Wikipedia, + https://www.wikipedia.org/wiki/Dirichlet-multinomial_distribution + + Examples + -------- + >>> from scipy.stats import dirichlet_multinomial + + Get the PMF + + >>> n = 6 # number of trials + >>> alpha = [3, 4, 5] # concentration parameters + >>> x = [1, 2, 3] # counts + >>> dirichlet_multinomial.pmf(x, alpha, n) + 0.08484162895927604 + + If the sum of category counts does not equal the number of trials, + the probability mass is zero. 
+ + >>> dirichlet_multinomial.pmf(x, alpha, n=7) + 0.0 + + Get the log of the PMF + + >>> dirichlet_multinomial.logpmf(x, alpha, n) + -2.4669689491013327 + + Get the mean + + >>> dirichlet_multinomial.mean(alpha, n) + array([1.5, 2. , 2.5]) + + Get the variance + + >>> dirichlet_multinomial.var(alpha, n) + array([1.55769231, 1.84615385, 2.01923077]) + + Get the covariance + + >>> dirichlet_multinomial.cov(alpha, n) + array([[ 1.55769231, -0.69230769, -0.86538462], + [-0.69230769, 1.84615385, -1.15384615], + [-0.86538462, -1.15384615, 2.01923077]]) + + Alternatively, the object may be called (as a function) to fix the + `alpha` and `n` parameters, returning a "frozen" Dirichlet multinomial + random variable. + + >>> dm = dirichlet_multinomial(alpha, n) + >>> dm.pmf(x) + 0.08484162895927579 + + All methods are fully vectorized. Each element of `x` and `alpha` is + a vector (along the last axis), each element of `n` is an + integer (scalar), and the result is computed element-wise. + + >>> x = [[1, 2, 3], [4, 5, 6]] + >>> alpha = [[1, 2, 3], [4, 5, 6]] + >>> n = [6, 15] + >>> dirichlet_multinomial.pmf(x, alpha, n) + array([0.06493506, 0.02626937]) + + >>> dirichlet_multinomial.cov(alpha, n).shape # both covariance matrices + (2, 3, 3) + + Broadcasting according to standard NumPy conventions is supported. Here, + we have four sets of concentration parameters (each a two element vector) + for each of three numbers of trials (each a scalar). + + >>> alpha = [[3, 4], [4, 5], [5, 6], [6, 7]] + >>> n = [[6], [7], [8]] + >>> dirichlet_multinomial.mean(alpha, n).shape + (3, 4, 2) + + """ + def __init__(self, seed=None): + super().__init__(seed) + self.__doc__ = doccer.docformat(self.__doc__, + dirichlet_mn_docdict_params) + + def __call__(self, alpha, n, seed=None): + return dirichlet_multinomial_frozen(alpha, n, seed=seed) + + def logpmf(self, x, alpha, n): + """The log of the probability mass function. + + Parameters + ---------- + x: ndarray + Category counts (non-negative integers). Must be broadcastable + with shape parameter ``alpha``. If multidimensional, the last axis + must correspond with the categories. + %(_dirichlet_mn_doc_default_callparams)s + + Returns + ------- + out: ndarray or scalar + Log of the probability mass function. + + """ + + a, Sa, n, x = _dirichlet_multinomial_check_parameters(alpha, n, x) + + out = np.asarray(loggamma(Sa) + loggamma(n + 1) - loggamma(n + Sa)) + out += (loggamma(x + a) - (loggamma(a) + loggamma(x + 1))).sum(axis=-1) + np.place(out, n != x.sum(axis=-1), -np.inf) + return out[()] + + def pmf(self, x, alpha, n): + """Probability mass function for a Dirichlet multinomial distribution. + + Parameters + ---------- + x: ndarray + Category counts (non-negative integers). Must be broadcastable + with shape parameter ``alpha``. If multidimensional, the last axis + must correspond with the categories. + %(_dirichlet_mn_doc_default_callparams)s + + Returns + ------- + out: ndarray or scalar + Probability mass function. + + """ + return np.exp(self.logpmf(x, alpha, n)) + + def mean(self, alpha, n): + """Mean of a Dirichlet multinomial distribution. + + Parameters + ---------- + %(_dirichlet_mn_doc_default_callparams)s + + Returns + ------- + out: ndarray + Mean of a Dirichlet multinomial distribution. + + """ + a, Sa, n = _dirichlet_multinomial_check_parameters(alpha, n) + n, Sa = n[..., np.newaxis], Sa[..., np.newaxis] + return n * a / Sa + + def var(self, alpha, n): + """The variance of the Dirichlet multinomial distribution. 
+ + Parameters + ---------- + %(_dirichlet_mn_doc_default_callparams)s + + Returns + ------- + out: array_like + The variances of the components of the distribution. This is + the diagonal of the covariance matrix of the distribution. + + """ + a, Sa, n = _dirichlet_multinomial_check_parameters(alpha, n) + n, Sa = n[..., np.newaxis], Sa[..., np.newaxis] + return n * a / Sa * (1 - a/Sa) * (n + Sa) / (1 + Sa) + + def cov(self, alpha, n): + """Covariance matrix of a Dirichlet multinomial distribution. + + Parameters + ---------- + %(_dirichlet_mn_doc_default_callparams)s + + Returns + ------- + out : array_like + The covariance matrix of the distribution. + + """ + a, Sa, n = _dirichlet_multinomial_check_parameters(alpha, n) + var = dirichlet_multinomial.var(a, n) + + n, Sa = n[..., np.newaxis, np.newaxis], Sa[..., np.newaxis, np.newaxis] + aiaj = a[..., :, np.newaxis] * a[..., np.newaxis, :] + cov = -n * aiaj / Sa ** 2 * (n + Sa) / (1 + Sa) + + ii = np.arange(cov.shape[-1]) + cov[..., ii, ii] = var + return cov + + +dirichlet_multinomial = dirichlet_multinomial_gen() + + +class dirichlet_multinomial_frozen(multi_rv_frozen): + def __init__(self, alpha, n, seed=None): + alpha, Sa, n = _dirichlet_multinomial_check_parameters(alpha, n) + self.alpha = alpha + self.n = n + self._dist = dirichlet_multinomial_gen(seed) + + def logpmf(self, x): + return self._dist.logpmf(x, self.alpha, self.n) + + def pmf(self, x): + return self._dist.pmf(x, self.alpha, self.n) + + def mean(self): + return self._dist.mean(self.alpha, self.n) + + def var(self): + return self._dist.var(self.alpha, self.n) + + def cov(self): + return self._dist.cov(self.alpha, self.n) + + +# Set frozen generator docstrings from corresponding docstrings in +# dirichlet_multinomial and fill in default strings in class docstrings. +for name in ['logpmf', 'pmf', 'mean', 'var', 'cov']: + method = dirichlet_multinomial_gen.__dict__[name] + method_frozen = dirichlet_multinomial_frozen.__dict__[name] + method_frozen.__doc__ = doccer.docformat( + method.__doc__, dirichlet_mn_docdict_noparams) + method.__doc__ = doccer.docformat(method.__doc__, + dirichlet_mn_docdict_params) + + +class vonmises_fisher_gen(multi_rv_generic): + r"""A von Mises-Fisher variable. + + The `mu` keyword specifies the mean direction vector. The `kappa` keyword + specifies the concentration parameter. + + Methods + ------- + pdf(x, mu=None, kappa=1) + Probability density function. + logpdf(x, mu=None, kappa=1) + Log of the probability density function. + rvs(mu=None, kappa=1, size=1, random_state=None) + Draw random samples from a von Mises-Fisher distribution. + entropy(mu=None, kappa=1) + Compute the differential entropy of the von Mises-Fisher distribution. + fit(data) + Fit a von Mises-Fisher distribution to data. + + Parameters + ---------- + mu : array_like + Mean direction of the distribution. Must be a one-dimensional unit + vector of norm 1. + kappa : float + Concentration parameter. Must be positive. + seed : {None, int, np.random.RandomState, np.random.Generator}, optional + Used for drawing random variates. + If `seed` is `None`, the `~np.random.RandomState` singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, seeded + with seed. + If `seed` is already a ``RandomState`` or ``Generator`` instance, + then that object is used. + Default is `None`. 
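Because `cov` above writes `var` onto the diagonal, the variances of `dirichlet_multinomial` are exactly the diagonal of its covariance matrix, and each row of the covariance sums to zero (the counts are constrained to total `n`). A quick check, assuming SciPy >= 1.11 where `dirichlet_multinomial` is available:

import numpy as np
from scipy.stats import dirichlet_multinomial

alpha, n = [3, 4, 5], 6
cov = dirichlet_multinomial.cov(alpha, n)
var = dirichlet_multinomial.var(alpha, n)
assert np.allclose(np.diagonal(cov), var)
assert np.allclose(cov.sum(axis=-1), 0)  # total count is fixed at n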
+ + See Also + -------- + scipy.stats.vonmises : Von-Mises Fisher distribution in 2D on a circle + uniform_direction : uniform distribution on the surface of a hypersphere + + Notes + ----- + The von Mises-Fisher distribution is a directional distribution on the + surface of the unit hypersphere. The probability density + function of a unit vector :math:`\mathbf{x}` is + + .. math:: + + f(\mathbf{x}) = \frac{\kappa^{d/2-1}}{(2\pi)^{d/2}I_{d/2-1}(\kappa)} + \exp\left(\kappa \mathbf{\mu}^T\mathbf{x}\right), + + where :math:`\mathbf{\mu}` is the mean direction, :math:`\kappa` the + concentration parameter, :math:`d` the dimension and :math:`I` the + modified Bessel function of the first kind. As :math:`\mu` represents + a direction, it must be a unit vector or in other words, a point + on the hypersphere: :math:`\mathbf{\mu}\in S^{d-1}`. :math:`\kappa` is a + concentration parameter, which means that it must be positive + (:math:`\kappa>0`) and that the distribution becomes more narrow with + increasing :math:`\kappa`. In that sense, the reciprocal value + :math:`1/\kappa` resembles the variance parameter of the normal + distribution. + + The von Mises-Fisher distribution often serves as an analogue of the + normal distribution on the sphere. Intuitively, for unit vectors, a + useful distance measure is given by the angle :math:`\alpha` between + them. This is exactly what the scalar product + :math:`\mathbf{\mu}^T\mathbf{x}=\cos(\alpha)` in the + von Mises-Fisher probability density function describes: the angle + between the mean direction :math:`\mathbf{\mu}` and the vector + :math:`\mathbf{x}`. The larger the angle between them, the smaller the + probability to observe :math:`\mathbf{x}` for this particular mean + direction :math:`\mathbf{\mu}`. + + In dimensions 2 and 3, specialized algorithms are used for fast sampling + [2]_, [3]_. For dimensions of 4 or higher the rejection sampling algorithm + described in [4]_ is utilized. This implementation is partially based on + the geomstats package [5]_, [6]_. + + .. versionadded:: 1.11 + + References + ---------- + .. [1] Von Mises-Fisher distribution, Wikipedia, + https://en.wikipedia.org/wiki/Von_Mises%E2%80%93Fisher_distribution + .. [2] Mardia, K., and Jupp, P. Directional statistics. Wiley, 2000. + .. [3] J. Wenzel. Numerically stable sampling of the von Mises Fisher + distribution on S2. + https://www.mitsuba-renderer.org/~wenzel/files/vmf.pdf + .. [4] Wood, A. Simulation of the von mises fisher distribution. + Communications in statistics-simulation and computation 23, + 1 (1994), 157-164. https://doi.org/10.1080/03610919408813161 + .. [5] geomstats, Github. MIT License. Accessed: 06.01.2023. + https://github.com/geomstats/geomstats + .. [6] Miolane, N. et al. Geomstats: A Python Package for Riemannian + Geometry in Machine Learning. Journal of Machine Learning Research + 21 (2020). http://jmlr.org/papers/v21/19-027.html + + Examples + -------- + **Visualization of the probability density** + + Plot the probability density in three dimensions for increasing + concentration parameter. The density is calculated by the ``pdf`` + method. + + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy.stats import vonmises_fisher + >>> from matplotlib.colors import Normalize + >>> n_grid = 100 + >>> u = np.linspace(0, np.pi, n_grid) + >>> v = np.linspace(0, 2 * np.pi, n_grid) + >>> u_grid, v_grid = np.meshgrid(u, v) + >>> vertices = np.stack([np.cos(v_grid) * np.sin(u_grid), + ... np.sin(v_grid) * np.sin(u_grid), + ... 
np.cos(u_grid)], + ... axis=2) + >>> x = np.outer(np.cos(v), np.sin(u)) + >>> y = np.outer(np.sin(v), np.sin(u)) + >>> z = np.outer(np.ones_like(u), np.cos(u)) + >>> def plot_vmf_density(ax, x, y, z, vertices, mu, kappa): + ... vmf = vonmises_fisher(mu, kappa) + ... pdf_values = vmf.pdf(vertices) + ... pdfnorm = Normalize(vmin=pdf_values.min(), vmax=pdf_values.max()) + ... ax.plot_surface(x, y, z, rstride=1, cstride=1, + ... facecolors=plt.cm.viridis(pdfnorm(pdf_values)), + ... linewidth=0) + ... ax.set_aspect('equal') + ... ax.view_init(azim=-130, elev=0) + ... ax.axis('off') + ... ax.set_title(rf"$\kappa={kappa}$") + >>> fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(9, 4), + ... subplot_kw={"projection": "3d"}) + >>> left, middle, right = axes + >>> mu = np.array([-np.sqrt(0.5), -np.sqrt(0.5), 0]) + >>> plot_vmf_density(left, x, y, z, vertices, mu, 5) + >>> plot_vmf_density(middle, x, y, z, vertices, mu, 20) + >>> plot_vmf_density(right, x, y, z, vertices, mu, 100) + >>> plt.subplots_adjust(top=1, bottom=0.0, left=0.0, right=1.0, wspace=0.) + >>> plt.show() + + As we increase the concentration parameter, the points are getting more + clustered together around the mean direction. + + **Sampling** + + Draw 5 samples from the distribution using the ``rvs`` method resulting + in a 5x3 array. + + >>> rng = np.random.default_rng() + >>> mu = np.array([0, 0, 1]) + >>> samples = vonmises_fisher(mu, 20).rvs(5, random_state=rng) + >>> samples + array([[ 0.3884594 , -0.32482588, 0.86231516], + [ 0.00611366, -0.09878289, 0.99509023], + [-0.04154772, -0.01637135, 0.99900239], + [-0.14613735, 0.12553507, 0.98126695], + [-0.04429884, -0.23474054, 0.97104814]]) + + These samples are unit vectors on the sphere :math:`S^2`. To verify, + let us calculate their euclidean norms: + + >>> np.linalg.norm(samples, axis=1) + array([1., 1., 1., 1., 1.]) + + Plot 20 observations drawn from the von Mises-Fisher distribution for + increasing concentration parameter :math:`\kappa`. The red dot highlights + the mean direction :math:`\mu`. + + >>> def plot_vmf_samples(ax, x, y, z, mu, kappa): + ... vmf = vonmises_fisher(mu, kappa) + ... samples = vmf.rvs(20) + ... ax.plot_surface(x, y, z, rstride=1, cstride=1, linewidth=0, + ... alpha=0.2) + ... ax.scatter(samples[:, 0], samples[:, 1], samples[:, 2], c='k', s=5) + ... ax.scatter(mu[0], mu[1], mu[2], c='r', s=30) + ... ax.set_aspect('equal') + ... ax.view_init(azim=-130, elev=0) + ... ax.axis('off') + ... ax.set_title(rf"$\kappa={kappa}$") + >>> mu = np.array([-np.sqrt(0.5), -np.sqrt(0.5), 0]) + >>> fig, axes = plt.subplots(nrows=1, ncols=3, + ... subplot_kw={"projection": "3d"}, + ... figsize=(9, 4)) + >>> left, middle, right = axes + >>> plot_vmf_samples(left, x, y, z, mu, 5) + >>> plot_vmf_samples(middle, x, y, z, mu, 20) + >>> plot_vmf_samples(right, x, y, z, mu, 100) + >>> plt.subplots_adjust(top=1, bottom=0.0, left=0.0, + ... right=1.0, wspace=0.) + >>> plt.show() + + The plots show that with increasing concentration :math:`\kappa` the + resulting samples are centered more closely around the mean direction. + + **Fitting the distribution parameters** + + The distribution can be fitted to data using the ``fit`` method returning + the estimated parameters. As a toy example let's fit the distribution to + samples drawn from a known von Mises-Fisher distribution. 
+ + >>> mu, kappa = np.array([0, 0, 1]), 20 + >>> samples = vonmises_fisher(mu, kappa).rvs(1000, random_state=rng) + >>> mu_fit, kappa_fit = vonmises_fisher.fit(samples) + >>> mu_fit, kappa_fit + (array([0.01126519, 0.01044501, 0.99988199]), 19.306398751730995) + + We see that the estimated parameters `mu_fit` and `kappa_fit` are + very close to the ground truth parameters. + + """ + def __init__(self, seed=None): + super().__init__(seed) + + def __call__(self, mu=None, kappa=1, seed=None): + """Create a frozen von Mises-Fisher distribution. + + See `vonmises_fisher_frozen` for more information. + """ + return vonmises_fisher_frozen(mu, kappa, seed=seed) + + def _process_parameters(self, mu, kappa): + """ + Infer dimensionality from mu and ensure that mu is a one-dimensional + unit vector and kappa positive. + """ + mu = np.asarray(mu) + if mu.ndim > 1: + raise ValueError("'mu' must have one-dimensional shape.") + if not np.allclose(np.linalg.norm(mu), 1.): + raise ValueError("'mu' must be a unit vector of norm 1.") + if not mu.size > 1: + raise ValueError("'mu' must have at least two entries.") + kappa_error_msg = "'kappa' must be a positive scalar." + if not np.isscalar(kappa) or kappa < 0: + raise ValueError(kappa_error_msg) + if float(kappa) == 0.: + raise ValueError("For 'kappa=0' the von Mises-Fisher distribution " + "becomes the uniform distribution on the sphere " + "surface. Consider using " + "'scipy.stats.uniform_direction' instead.") + dim = mu.size + + return dim, mu, kappa + + def _check_data_vs_dist(self, x, dim): + if x.shape[-1] != dim: + raise ValueError("The dimensionality of the last axis of 'x' must " + "match the dimensionality of the " + "von Mises Fisher distribution.") + if not np.allclose(np.linalg.norm(x, axis=-1), 1.): + msg = "'x' must be unit vectors of norm 1 along last dimension." + raise ValueError(msg) + + def _log_norm_factor(self, dim, kappa): + # normalization factor is given by + # c = kappa**(dim/2-1)/((2*pi)**(dim/2)*I[dim/2-1](kappa)) + # = kappa**(dim/2-1)*exp(-kappa) / + # ((2*pi)**(dim/2)*I[dim/2-1](kappa)*exp(-kappa) + # = kappa**(dim/2-1)*exp(-kappa) / + # ((2*pi)**(dim/2)*ive[dim/2-1](kappa) + # Then the log is given by + # log c = 1/2*(dim -1)*log(kappa) - kappa - -1/2*dim*ln(2*pi) - + # ive[dim/2-1](kappa) + halfdim = 0.5 * dim + return (0.5 * (dim - 2)*np.log(kappa) - halfdim * _LOG_2PI - + np.log(ive(halfdim - 1, kappa)) - kappa) + + def _logpdf(self, x, dim, mu, kappa): + """Log of the von Mises-Fisher probability density function. + + As this function does no argument checking, it should not be + called directly; use 'logpdf' instead. + + """ + x = np.asarray(x) + self._check_data_vs_dist(x, dim) + dotproducts = np.einsum('i,...i->...', mu, x) + return self._log_norm_factor(dim, kappa) + kappa * dotproducts + + def logpdf(self, x, mu=None, kappa=1): + """Log of the von Mises-Fisher probability density function. + + Parameters + ---------- + x : array_like + Points at which to evaluate the log of the probability + density function. The last axis of `x` must correspond + to unit vectors of the same dimensionality as the distribution. + mu : array_like, default: None + Mean direction of the distribution. Must be a one-dimensional unit + vector of norm 1. + kappa : float, default: 1 + Concentration parameter. Must be positive. + + Returns + ------- + logpdf : ndarray or scalar + Log of the probability density function evaluated at `x`. 
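The `_log_norm_factor` derivation above relies on the exponentially scaled Bessel function, `ive(v, kappa) = iv(v, kappa) * exp(-kappa)`, so subtracting `kappa` from the log keeps the computation finite even for large `kappa`. A small sketch of the identity at a moderate `kappa` where the naive form is still representable:

import numpy as np
from scipy.special import iv, ive

dim, kappa = 3, 50.0
half = dim / 2
naive = (half - 1)*np.log(kappa) - half*np.log(2*np.pi) - np.log(iv(half - 1, kappa))
stable = (half - 1)*np.log(kappa) - half*np.log(2*np.pi) \
         - np.log(ive(half - 1, kappa)) - kappa
assert np.isclose(naive, stable)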
+ + """ + dim, mu, kappa = self._process_parameters(mu, kappa) + return self._logpdf(x, dim, mu, kappa) + + def pdf(self, x, mu=None, kappa=1): + """Von Mises-Fisher probability density function. + + Parameters + ---------- + x : array_like + Points at which to evaluate the probability + density function. The last axis of `x` must correspond + to unit vectors of the same dimensionality as the distribution. + mu : array_like + Mean direction of the distribution. Must be a one-dimensional unit + vector of norm 1. + kappa : float + Concentration parameter. Must be positive. + + Returns + ------- + pdf : ndarray or scalar + Probability density function evaluated at `x`. + + """ + dim, mu, kappa = self._process_parameters(mu, kappa) + return np.exp(self._logpdf(x, dim, mu, kappa)) + + def _rvs_2d(self, mu, kappa, size, random_state): + """ + In 2D, the von Mises-Fisher distribution reduces to the + von Mises distribution which can be efficiently sampled by numpy. + This method is much faster than the general rejection + sampling based algorithm. + + """ + mean_angle = np.arctan2(mu[1], mu[0]) + angle_samples = random_state.vonmises(mean_angle, kappa, size=size) + samples = np.stack([np.cos(angle_samples), np.sin(angle_samples)], + axis=-1) + return samples + + def _rvs_3d(self, kappa, size, random_state): + """ + Generate samples from a von Mises-Fisher distribution + with mu = [1, 0, 0] and kappa. Samples then have to be + rotated towards the desired mean direction mu. + This method is much faster than the general rejection + sampling based algorithm. + Reference: https://www.mitsuba-renderer.org/~wenzel/files/vmf.pdf + + """ + if size is None: + sample_size = 1 + else: + sample_size = size + + # compute x coordinate acc. to equation from section 3.1 + x = random_state.random(sample_size) + x = 1. + np.log(x + (1. - x) * np.exp(-2 * kappa))/kappa + + # (y, z) are random 2D vectors that only have to be + # normalized accordingly. Then (x, y z) follow a VMF distribution + temp = np.sqrt(1. - np.square(x)) + uniformcircle = _sample_uniform_direction(2, sample_size, random_state) + samples = np.stack([x, temp * uniformcircle[..., 0], + temp * uniformcircle[..., 1]], + axis=-1) + if size is None: + samples = np.squeeze(samples) + return samples + + def _rejection_sampling(self, dim, kappa, size, random_state): + """ + Generate samples from a n-dimensional von Mises-Fisher distribution + with mu = [1, 0, ..., 0] and kappa via rejection sampling. + Samples then have to be rotated towards the desired mean direction mu. + Reference: https://doi.org/10.1080/03610919408813161 + """ + dim_minus_one = dim - 1 + # calculate number of requested samples + if size is not None: + if not np.iterable(size): + size = (size, ) + n_samples = math.prod(size) + else: + n_samples = 1 + # calculate envelope for rejection sampler (eq. 4) + sqrt = np.sqrt(4 * kappa ** 2. + dim_minus_one ** 2) + envelop_param = (-2 * kappa + sqrt) / dim_minus_one + if envelop_param == 0: + # the regular formula suffers from loss of precision for high + # kappa. This can only be detected by checking for 0 here. + # Workaround: expansion for sqrt variable + # https://www.wolframalpha.com/input?i=sqrt%284*x%5E2%2Bd%5E2%29 + # e = (-2 * k + sqrt(k**2 + d**2)) / d + # ~ (-2 * k + 2 * k + d**2/(4 * k) - d**4/(64 * k**3)) / d + # = d/(4 * k) - d**3/(64 * k**3) + envelop_param = (dim_minus_one/4 * kappa**-1. + - dim_minus_one**3/64 * kappa**-3.) + # reference step 0 + node = (1. - envelop_param) / (1. 
+ envelop_param) + # t = ln(1 - ((1-x)/(1+x))**2) + # = ln(4 * x / (1+x)**2) + # = ln(4) + ln(x) - 2*log1p(x) + correction = (kappa * node + dim_minus_one + * (np.log(4) + np.log(envelop_param) + - 2 * np.log1p(envelop_param))) + n_accepted = 0 + x = np.zeros((n_samples, )) + halfdim = 0.5 * dim_minus_one + # main loop + while n_accepted < n_samples: + # generate candidates acc. to reference step 1 + sym_beta = random_state.beta(halfdim, halfdim, + size=n_samples - n_accepted) + coord_x = (1 - (1 + envelop_param) * sym_beta) / ( + 1 - (1 - envelop_param) * sym_beta) + # accept or reject: reference step 2 + # reformulation for numerical stability: + # t = ln(1 - (1-x)/(1+x) * y) + # = ln((1 + x - y +x*y)/(1 +x)) + accept_tol = random_state.random(n_samples - n_accepted) + criterion = ( + kappa * coord_x + + dim_minus_one * (np.log((1 + envelop_param - coord_x + + coord_x * envelop_param) / (1 + envelop_param))) + - correction) > np.log(accept_tol) + accepted_iter = np.sum(criterion) + x[n_accepted:n_accepted + accepted_iter] = coord_x[criterion] + n_accepted += accepted_iter + # concatenate x and remaining coordinates: step 3 + coord_rest = _sample_uniform_direction(dim_minus_one, n_accepted, + random_state) + coord_rest = np.einsum( + '...,...i->...i', np.sqrt(1 - x ** 2), coord_rest) + samples = np.concatenate([x[..., None], coord_rest], axis=1) + # reshape output to (size, dim) + if size is not None: + samples = samples.reshape(size + (dim, )) + else: + samples = np.squeeze(samples) + return samples + + def _rotate_samples(self, samples, mu, dim): + """A QR decomposition is used to find the rotation that maps the + north pole (1, 0,...,0) to the vector mu. This rotation is then + applied to all samples. + + Parameters + ---------- + samples: array_like, shape = [..., n] + mu : array-like, shape=[n, ] + Point to parametrise the rotation. + + Returns + ------- + samples : rotated samples + + """ + base_point = np.zeros((dim, )) + base_point[0] = 1. + embedded = np.concatenate([mu[None, :], np.zeros((dim - 1, dim))]) + rotmatrix, _ = np.linalg.qr(np.transpose(embedded)) + if np.allclose(np.matmul(rotmatrix, base_point[:, None])[:, 0], mu): + rotsign = 1 + else: + rotsign = -1 + + # apply rotation + samples = np.einsum('ij,...j->...i', rotmatrix, samples) * rotsign + return samples + + def _rvs(self, dim, mu, kappa, size, random_state): + if dim == 2: + samples = self._rvs_2d(mu, kappa, size, random_state) + elif dim == 3: + samples = self._rvs_3d(kappa, size, random_state) + else: + samples = self._rejection_sampling(dim, kappa, size, + random_state) + + if dim != 2: + samples = self._rotate_samples(samples, mu, dim) + return samples + + def rvs(self, mu=None, kappa=1, size=1, random_state=None): + """Draw random samples from a von Mises-Fisher distribution. + + Parameters + ---------- + mu : array_like + Mean direction of the distribution. Must be a one-dimensional unit + vector of norm 1. + kappa : float + Concentration parameter. Must be positive. + size : int or tuple of ints, optional + Given a shape of, for example, (m,n,k), m*n*k samples are + generated, and packed in an m-by-n-by-k arrangement. + Because each sample is N-dimensional, the output shape + is (m,n,k,N). If no shape is specified, a single (N-D) + sample is returned. + random_state : {None, int, np.random.RandomState, np.random.Generator}, + optional + Used for drawing random variates. + If `seed` is `None`, the `~np.random.RandomState` singleton is used. 
+ If `seed` is an int, a new ``RandomState`` instance is used, seeded + with seed. + If `seed` is already a ``RandomState`` or ``Generator`` instance, + then that object is used. + Default is `None`. + + Returns + ------- + rvs : ndarray + Random variates of shape (`size`, `N`), where `N` is the + dimension of the distribution. + + """ + dim, mu, kappa = self._process_parameters(mu, kappa) + random_state = self._get_random_state(random_state) + samples = self._rvs(dim, mu, kappa, size, random_state) + return samples + + def _entropy(self, dim, kappa): + halfdim = 0.5 * dim + return (-self._log_norm_factor(dim, kappa) - kappa * + ive(halfdim, kappa) / ive(halfdim - 1, kappa)) + + def entropy(self, mu=None, kappa=1): + """Compute the differential entropy of the von Mises-Fisher + distribution. + + Parameters + ---------- + mu : array_like, default: None + Mean direction of the distribution. Must be a one-dimensional unit + vector of norm 1. + kappa : float, default: 1 + Concentration parameter. Must be positive. + + Returns + ------- + h : scalar + Entropy of the von Mises-Fisher distribution. + + """ + dim, _, kappa = self._process_parameters(mu, kappa) + return self._entropy(dim, kappa) + + def fit(self, x): + """Fit the von Mises-Fisher distribution to data. + + Parameters + ---------- + x : array-like + Data the distribution is fitted to. Must be two dimensional. + The second axis of `x` must be unit vectors of norm 1 and + determine the dimensionality of the fitted + von Mises-Fisher distribution. + + Returns + ------- + mu : ndarray + Estimated mean direction. + kappa : float + Estimated concentration parameter. + + """ + # validate input data + x = np.asarray(x) + if x.ndim != 2: + raise ValueError("'x' must be two dimensional.") + if not np.allclose(np.linalg.norm(x, axis=-1), 1.): + msg = "'x' must be unit vectors of norm 1 along last dimension." + raise ValueError(msg) + dim = x.shape[-1] + + # mu is simply the directional mean + dirstats = directional_stats(x) + mu = dirstats.mean_direction + r = dirstats.mean_resultant_length + + # kappa is the solution to the equation: + # r = I[dim/2](kappa) / I[dim/2 -1](kappa) + # = I[dim/2](kappa) * exp(-kappa) / I[dim/2 -1](kappa) * exp(-kappa) + # = ive(dim/2, kappa) / ive(dim/2 -1, kappa) + + halfdim = 0.5 * dim + + def solve_for_kappa(kappa): + bessel_vals = ive([halfdim, halfdim - 1], kappa) + return bessel_vals[0]/bessel_vals[1] - r + + root_res = root_scalar(solve_for_kappa, method="brentq", + bracket=(1e-8, 1e9)) + kappa = root_res.root + return mu, kappa + + +vonmises_fisher = vonmises_fisher_gen() + + +class vonmises_fisher_frozen(multi_rv_frozen): + def __init__(self, mu=None, kappa=1, seed=None): + """Create a frozen von Mises-Fisher distribution. + + Parameters + ---------- + mu : array_like, default: None + Mean direction of the distribution. + kappa : float, default: 1 + Concentration parameter. Must be positive. + seed : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. 
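The `fit` method above estimates `kappa` as the root of `ive(dim/2, kappa) / ive(dim/2 - 1, kappa) = r`, where `r` is the mean resultant length of the data. A standalone sketch of that root-finding step with a hypothetical `r`:

import numpy as np
from scipy.special import ive
from scipy.optimize import root_scalar

dim, r = 3, 0.9  # hypothetical dimension and mean resultant length

def resultant_gap(kappa):
    return ive(dim/2, kappa) / ive(dim/2 - 1, kappa) - r

kappa = root_scalar(resultant_gap, method="brentq", bracket=(1e-8, 1e9)).root
assert np.isclose(ive(dim/2, kappa) / ive(dim/2 - 1, kappa), r)
print(kappa)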
+ + """ + self._dist = vonmises_fisher_gen(seed) + self.dim, self.mu, self.kappa = ( + self._dist._process_parameters(mu, kappa) + ) + + def logpdf(self, x): + """ + Parameters + ---------- + x : array_like + Points at which to evaluate the log of the probability + density function. The last axis of `x` must correspond + to unit vectors of the same dimensionality as the distribution. + + Returns + ------- + logpdf : ndarray or scalar + Log of probability density function evaluated at `x`. + + """ + return self._dist._logpdf(x, self.dim, self.mu, self.kappa) + + def pdf(self, x): + """ + Parameters + ---------- + x : array_like + Points at which to evaluate the log of the probability + density function. The last axis of `x` must correspond + to unit vectors of the same dimensionality as the distribution. + + Returns + ------- + pdf : ndarray or scalar + Probability density function evaluated at `x`. + + """ + return np.exp(self.logpdf(x)) + + def rvs(self, size=1, random_state=None): + """Draw random variates from the Von Mises-Fisher distribution. + + Parameters + ---------- + size : int or tuple of ints, optional + Given a shape of, for example, (m,n,k), m*n*k samples are + generated, and packed in an m-by-n-by-k arrangement. + Because each sample is N-dimensional, the output shape + is (m,n,k,N). If no shape is specified, a single (N-D) + sample is returned. + random_state : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance + then that instance is used. + + Returns + ------- + rvs : ndarray or scalar + Random variates of size (`size`, `N`), where `N` is the + dimension of the distribution. + + """ + random_state = self._dist._get_random_state(random_state) + return self._dist._rvs(self.dim, self.mu, self.kappa, size, + random_state) + + def entropy(self): + """ + Calculate the differential entropy of the von Mises-Fisher + distribution. + + Returns + ------- + h: float + Entropy of the Von Mises-Fisher distribution. + + """ + return self._dist._entropy(self.dim, self.kappa) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_mvn.cpython-310-x86_64-linux-gnu.so b/venv/lib/python3.10/site-packages/scipy/stats/_mvn.cpython-310-x86_64-linux-gnu.so new file mode 100644 index 0000000000000000000000000000000000000000..6cc5254bd5efc5fa9c80185c5d4ea531b26808bc Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/stats/_mvn.cpython-310-x86_64-linux-gnu.so differ diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_page_trend_test.py b/venv/lib/python3.10/site-packages/scipy/stats/_page_trend_test.py new file mode 100644 index 0000000000000000000000000000000000000000..87a4d0d17c07ce609cc575fc7dc61af75d2b9c51 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_page_trend_test.py @@ -0,0 +1,479 @@ +from itertools import permutations +import numpy as np +import math +from ._continuous_distns import norm +import scipy.stats +from dataclasses import dataclass + + +@dataclass +class PageTrendTestResult: + statistic: float + pvalue: float + method: str + + +def page_trend_test(data, ranked=False, predicted_ranks=None, method='auto'): + r""" + Perform Page's Test, a measure of trend in observations between treatments. 
+ + Page's Test (also known as Page's :math:`L` test) is useful when: + + * there are :math:`n \geq 3` treatments, + * :math:`m \geq 2` subjects are observed for each treatment, and + * the observations are hypothesized to have a particular order. + + Specifically, the test considers the null hypothesis that + + .. math:: + + m_1 = m_2 = m_3 \cdots = m_n, + + where :math:`m_j` is the mean of the observed quantity under treatment + :math:`j`, against the alternative hypothesis that + + .. math:: + + m_1 \leq m_2 \leq m_3 \leq \cdots \leq m_n, + + where at least one inequality is strict. + + As noted by [4]_, Page's :math:`L` test has greater statistical power than + the Friedman test against the alternative that there is a difference in + trend, as Friedman's test only considers a difference in the means of the + observations without considering their order. Whereas Spearman :math:`\rho` + considers the correlation between the ranked observations of two variables + (e.g. the airspeed velocity of a swallow vs. the weight of the coconut it + carries), Page's :math:`L` is concerned with a trend in an observation + (e.g. the airspeed velocity of a swallow) across several distinct + treatments (e.g. carrying each of five coconuts of different weight) even + as the observation is repeated with multiple subjects (e.g. one European + swallow and one African swallow). + + Parameters + ---------- + data : array-like + A :math:`m \times n` array; the element in row :math:`i` and + column :math:`j` is the observation corresponding with subject + :math:`i` and treatment :math:`j`. By default, the columns are + assumed to be arranged in order of increasing predicted mean. + + ranked : boolean, optional + By default, `data` is assumed to be observations rather than ranks; + it will be ranked with `scipy.stats.rankdata` along ``axis=1``. If + `data` is provided in the form of ranks, pass argument ``True``. + + predicted_ranks : array-like, optional + The predicted ranks of the column means. If not specified, + the columns are assumed to be arranged in order of increasing + predicted mean, so the default `predicted_ranks` are + :math:`[1, 2, \dots, n-1, n]`. + + method : {'auto', 'asymptotic', 'exact'}, optional + Selects the method used to calculate the *p*-value. The following + options are available. + + * 'auto': selects between 'exact' and 'asymptotic' to + achieve reasonably accurate results in reasonable time (default) + * 'asymptotic': compares the standardized test statistic against + the normal distribution + * 'exact': computes the exact *p*-value by comparing the observed + :math:`L` statistic against those realized by all possible + permutations of ranks (under the null hypothesis that each + permutation is equally likely) + + Returns + ------- + res : PageTrendTestResult + An object containing attributes: + + statistic : float + Page's :math:`L` test statistic. + pvalue : float + The associated *p*-value + method : {'asymptotic', 'exact'} + The method used to compute the *p*-value + + See Also + -------- + rankdata, friedmanchisquare, spearmanr + + Notes + ----- + As noted in [1]_, "the :math:`n` 'treatments' could just as well represent + :math:`n` objects or events or performances or persons or trials ranked." + Similarly, the :math:`m` 'subjects' could equally stand for :math:`m` + "groupings by ability or some other control variable, or judges doing + the ranking, or random replications of some other sort." 
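The comparison with Friedman's test above can be illustrated directly: on data with a monotone trend across treatments, Page's test exploits the hypothesized ordering while Friedman's test ignores it. A small illustrative sketch (the exact p-values depend on the random draw):

import numpy as np
from scipy.stats import page_trend_test, friedmanchisquare

rng = np.random.default_rng(0)
# 10 subjects, 4 treatments whose means increase from left to right
data = rng.standard_normal((10, 4)) + 0.5 * np.arange(4)
print(page_trend_test(data).pvalue)       # uses the predicted ordering
print(friedmanchisquare(*data.T).pvalue)  # order-agnostic comparison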
+ + The procedure for calculating the :math:`L` statistic, adapted from + [1]_, is: + + 1. "Predetermine with careful logic the appropriate hypotheses + concerning the predicted ordering of the experimental results. + If no reasonable basis for ordering any treatments is known, the + :math:`L` test is not appropriate." + 2. "As in other experiments, determine at what level of confidence + you will reject the null hypothesis that there is no agreement of + experimental results with the monotonic hypothesis." + 3. "Cast the experimental material into a two-way table of :math:`n` + columns (treatments, objects ranked, conditions) and :math:`m` + rows (subjects, replication groups, levels of control variables)." + 4. "When experimental observations are recorded, rank them across each + row", e.g. ``ranks = scipy.stats.rankdata(data, axis=1)``. + 5. "Add the ranks in each column", e.g. + ``colsums = np.sum(ranks, axis=0)``. + 6. "Multiply each sum of ranks by the predicted rank for that same + column", e.g. ``products = predicted_ranks * colsums``. + 7. "Sum all such products", e.g. ``L = products.sum()``. + + [1]_ continues by suggesting use of the standardized statistic + + .. math:: + + \chi_L^2 = \frac{\left[12L-3mn(n+1)^2\right]^2}{mn^2(n^2-1)(n+1)} + + "which is distributed approximately as chi-square with 1 degree of + freedom. The ordinary use of :math:`\chi^2` tables would be + equivalent to a two-sided test of agreement. If a one-sided test + is desired, *as will almost always be the case*, the probability + discovered in the chi-square table should be *halved*." + + However, this standardized statistic does not distinguish between the + observed values being well correlated with the predicted ranks and being + _anti_-correlated with the predicted ranks. Instead, we follow [2]_ + and calculate the standardized statistic + + .. math:: + + \Lambda = \frac{L - E_0}{\sqrt{V_0}}, + + where :math:`E_0 = \frac{1}{4} mn(n+1)^2` and + :math:`V_0 = \frac{1}{144} mn^2(n+1)(n^2-1)`, "which is asymptotically + normal under the null hypothesis". + + The *p*-value for ``method='exact'`` is generated by comparing the observed + value of :math:`L` against the :math:`L` values generated for all + :math:`(n!)^m` possible permutations of ranks. The calculation is performed + using the recursive method of [5]. + + The *p*-values are not adjusted for the possibility of ties. When + ties are present, the reported ``'exact'`` *p*-values may be somewhat + larger (i.e. more conservative) than the true *p*-value [2]_. The + ``'asymptotic'``` *p*-values, however, tend to be smaller (i.e. less + conservative) than the ``'exact'`` *p*-values. + + References + ---------- + .. [1] Ellis Batten Page, "Ordered hypotheses for multiple treatments: + a significant test for linear ranks", *Journal of the American + Statistical Association* 58(301), p. 216--230, 1963. + + .. [2] Markus Neuhauser, *Nonparametric Statistical Test: A computational + approach*, CRC Press, p. 150--152, 2012. + + .. [3] Statext LLC, "Page's L Trend Test - Easy Statistics", *Statext - + Statistics Study*, https://www.statext.com/practice/PageTrendTest03.php, + Accessed July 12, 2020. + + .. [4] "Page's Trend Test", *Wikipedia*, WikimediaFoundation, + https://en.wikipedia.org/wiki/Page%27s_trend_test, + Accessed July 12, 2020. + + .. [5] Robert E. Odeh, "The exact distribution of Page's L-statistic in + the two-way layout", *Communications in Statistics - Simulation and + Computation*, 6(1), p. 49--61, 1977. 
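Note that the chi-square statistic quoted from [1]_ is exactly the square of the standardized statistic Lambda, which is why halving the two-sided chi-square probability recovers the one-sided normal p-value when the trend is in the predicted direction. A quick numerical check with arbitrary illustrative values satisfying L > E_0:

import numpy as np
from scipy.stats import chi2, norm

m, n, L = 8, 4, 210.0  # arbitrary values with L above its null mean
E0 = m*n*(n + 1)**2 / 4
V0 = m*n**2*(n + 1)*(n**2 - 1) / 144
Lam = (L - E0) / np.sqrt(V0)
chi_L = (12*L - 3*m*n*(n + 1)**2)**2 / (m*n**2*(n**2 - 1)*(n + 1))
assert np.isclose(chi_L, Lam**2)
assert np.isclose(chi2.sf(chi_L, df=1) / 2, norm.sf(Lam))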
+ + Examples + -------- + We use the example from [3]_: 10 students are asked to rate three + teaching methods - tutorial, lecture, and seminar - on a scale of 1-5, + with 1 being the lowest and 5 being the highest. We have decided that + a confidence level of 99% is required to reject the null hypothesis in + favor of our alternative: that the seminar will have the highest ratings + and the tutorial will have the lowest. Initially, the data have been + tabulated with each row representing an individual student's ratings of + the three methods in the following order: tutorial, lecture, seminar. + + >>> table = [[3, 4, 3], + ... [2, 2, 4], + ... [3, 3, 5], + ... [1, 3, 2], + ... [2, 3, 2], + ... [2, 4, 5], + ... [1, 2, 4], + ... [3, 4, 4], + ... [2, 4, 5], + ... [1, 3, 4]] + + Because the tutorial is hypothesized to have the lowest ratings, the + column corresponding with tutorial rankings should be first; the seminar + is hypothesized to have the highest ratings, so its column should be last. + Since the columns are already arranged in this order of increasing + predicted mean, we can pass the table directly into `page_trend_test`. + + >>> from scipy.stats import page_trend_test + >>> res = page_trend_test(table) + >>> res + PageTrendTestResult(statistic=133.5, pvalue=0.0018191161948127822, + method='exact') + + This *p*-value indicates that there is a 0.1819% chance that + the :math:`L` statistic would reach such an extreme value under the null + hypothesis. Because 0.1819% is less than 1%, we have evidence to reject + the null hypothesis in favor of our alternative at a 99% confidence level. + + The value of the :math:`L` statistic is 133.5. To check this manually, + we rank the data such that high scores correspond with high ranks, settling + ties with an average rank: + + >>> from scipy.stats import rankdata + >>> ranks = rankdata(table, axis=1) + >>> ranks + array([[1.5, 3. , 1.5], + [1.5, 1.5, 3. ], + [1.5, 1.5, 3. ], + [1. , 3. , 2. ], + [1.5, 3. , 1.5], + [1. , 2. , 3. ], + [1. , 2. , 3. ], + [1. , 2.5, 2.5], + [1. , 2. , 3. ], + [1. , 2. , 3. ]]) + + We add the ranks within each column, multiply the sums by the + predicted ranks, and sum the products. + + >>> import numpy as np + >>> m, n = ranks.shape + >>> predicted_ranks = np.arange(1, n+1) + >>> L = (predicted_ranks * np.sum(ranks, axis=0)).sum() + >>> res.statistic == L + True + + As presented in [3]_, the asymptotic approximation of the *p*-value is the + survival function of the normal distribution evaluated at the standardized + test statistic: + + >>> from scipy.stats import norm + >>> E0 = (m*n*(n+1)**2)/4 + >>> V0 = (m*n**2*(n+1)*(n**2-1))/144 + >>> Lambda = (L-E0)/np.sqrt(V0) + >>> p = norm.sf(Lambda) + >>> p + 0.0012693433690751756 + + This does not precisely match the *p*-value reported by `page_trend_test` + above. The asymptotic distribution is not very accurate, nor conservative, + for :math:`m \leq 12` and :math:`n \leq 8`, so `page_trend_test` chose to + use ``method='exact'`` based on the dimensions of the table and the + recommendations in Page's original paper [1]_. To override + `page_trend_test`'s choice, provide the `method` argument. + + >>> res = page_trend_test(table, method="asymptotic") + >>> res + PageTrendTestResult(statistic=133.5, pvalue=0.0012693433690751756, + method='asymptotic') + + If the data are already ranked, we can pass in the ``ranks`` instead of + the ``table`` to save computation time. + + >>> res = page_trend_test(ranks, # ranks of data + ... 
ranked=True, # data is already ranked + ... ) + >>> res + PageTrendTestResult(statistic=133.5, pvalue=0.0018191161948127822, + method='exact') + + Suppose the raw data had been tabulated in an order different from the + order of predicted means, say lecture, seminar, tutorial. + + >>> table = np.asarray(table)[:, [1, 2, 0]] + + Since the arrangement of this table is not consistent with the assumed + ordering, we can either rearrange the table or provide the + `predicted_ranks`. Remembering that the lecture is predicted + to have the middle rank, the seminar the highest, and tutorial the lowest, + we pass: + + >>> res = page_trend_test(table, # data as originally tabulated + ... predicted_ranks=[2, 3, 1], # our predicted order + ... ) + >>> res + PageTrendTestResult(statistic=133.5, pvalue=0.0018191161948127822, + method='exact') + + """ + + # Possible values of the method parameter and the corresponding function + # used to evaluate the p value + methods = {"asymptotic": _l_p_asymptotic, + "exact": _l_p_exact, + "auto": None} + if method not in methods: + raise ValueError(f"`method` must be in {set(methods)}") + + ranks = np.asarray(data) + if ranks.ndim != 2: # TODO: relax this to accept 3d arrays? + raise ValueError("`data` must be a 2d array.") + + m, n = ranks.shape + if m < 2 or n < 3: + raise ValueError("Page's L is only appropriate for data with two " + "or more rows and three or more columns.") + + if np.any(np.isnan(data)): + raise ValueError("`data` contains NaNs, which cannot be ranked " + "meaningfully") + + # ensure NumPy array and rank the data if it's not already ranked + if ranked: + # Only a basic check on whether data is ranked. Checking that the data + # is properly ranked could take as much time as ranking it. + if not (ranks.min() >= 1 and ranks.max() <= ranks.shape[1]): + raise ValueError("`data` is not properly ranked. 
Rank the data or " + "pass `ranked=False`.") + else: + ranks = scipy.stats.rankdata(data, axis=-1) + + # generate predicted ranks if not provided, ensure valid NumPy array + if predicted_ranks is None: + predicted_ranks = np.arange(1, n+1) + else: + predicted_ranks = np.asarray(predicted_ranks) + if (predicted_ranks.ndim < 1 or + (set(predicted_ranks) != set(range(1, n+1)) or + len(predicted_ranks) != n)): + raise ValueError(f"`predicted_ranks` must include each integer " + f"from 1 to {n} (the number of columns in " + f"`data`) exactly once.") + + if not isinstance(ranked, bool): + raise TypeError("`ranked` must be boolean.") + + # Calculate the L statistic + L = _l_vectorized(ranks, predicted_ranks) + + # Calculate the p-value + if method == "auto": + method = _choose_method(ranks) + p_fun = methods[method] # get the function corresponding with the method + p = p_fun(L, m, n) + + page_result = PageTrendTestResult(statistic=L, pvalue=p, method=method) + return page_result + + +def _choose_method(ranks): + '''Choose method for computing p-value automatically''' + m, n = ranks.shape + if n > 8 or (m > 12 and n > 3) or m > 20: # as in [1], [4] + method = "asymptotic" + else: + method = "exact" + return method + + +def _l_vectorized(ranks, predicted_ranks): + '''Calculate's Page's L statistic for each page of a 3d array''' + colsums = ranks.sum(axis=-2, keepdims=True) + products = predicted_ranks * colsums + Ls = products.sum(axis=-1) + Ls = Ls[0] if Ls.size == 1 else Ls.ravel() + return Ls + + +def _l_p_asymptotic(L, m, n): + '''Calculate the p-value of Page's L from the asymptotic distribution''' + # Using [1] as a reference, the asymptotic p-value would be calculated as: + # chi_L = (12*L - 3*m*n*(n+1)**2)**2/(m*n**2*(n**2-1)*(n+1)) + # p = chi2.sf(chi_L, df=1, loc=0, scale=1)/2 + # but this is insensitive to the direction of the hypothesized ranking + + # See [2] page 151 + E0 = (m*n*(n+1)**2)/4 + V0 = (m*n**2*(n+1)*(n**2-1))/144 + Lambda = (L-E0)/np.sqrt(V0) + # This is a one-sided "greater" test - calculate the probability that the + # L statistic under H0 would be greater than the observed L statistic + p = norm.sf(Lambda) + return p + + +def _l_p_exact(L, m, n): + '''Calculate the p-value of Page's L exactly''' + # [1] uses m, n; [5] uses n, k. + # Switch convention here because exact calculation code references [5]. 
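+    # _pagel_state builds the exact null distribution with the convolution
+    # recursion of [5], Eq. 1: p(l; k, n) = sum_t p(l - t; k, n - 1) * p(t; k, 1),
+    # where the single-row distribution p(.; k, 1) comes from enumerating all
+    # k! permutations of one row ([5], Eq. 6).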
+ L, n, k = int(L), int(m), int(n) + _pagel_state.set_k(k) + return _pagel_state.sf(L, n) + + +class _PageL: + '''Maintains state between `page_trend_test` executions''' + + def __init__(self): + '''Lightweight initialization''' + self.all_pmfs = {} + + def set_k(self, k): + '''Calculate lower and upper limits of L for single row''' + self.k = k + # See [5] top of page 52 + self.a, self.b = (k*(k+1)*(k+2))//6, (k*(k+1)*(2*k+1))//6 + + def sf(self, l, n): + '''Survival function of Page's L statistic''' + ps = [self.pmf(l, n) for l in range(l, n*self.b + 1)] + return np.sum(ps) + + def p_l_k_1(self): + '''Relative frequency of each L value over all possible single rows''' + + # See [5] Equation (6) + ranks = range(1, self.k+1) + # generate all possible rows of length k + rank_perms = np.array(list(permutations(ranks))) + # compute Page's L for all possible rows + Ls = (ranks*rank_perms).sum(axis=1) + # count occurrences of each L value + counts = np.histogram(Ls, np.arange(self.a-0.5, self.b+1.5))[0] + # factorial(k) is number of possible permutations + return counts/math.factorial(self.k) + + def pmf(self, l, n): + '''Recursive function to evaluate p(l, k, n); see [5] Equation 1''' + + if n not in self.all_pmfs: + self.all_pmfs[n] = {} + if self.k not in self.all_pmfs[n]: + self.all_pmfs[n][self.k] = {} + + # Cache results to avoid repeating calculation. Initially this was + # written with lru_cache, but this seems faster? Also, we could add + # an option to save this for future lookup. + if l in self.all_pmfs[n][self.k]: + return self.all_pmfs[n][self.k][l] + + if n == 1: + ps = self.p_l_k_1() # [5] Equation 6 + ls = range(self.a, self.b+1) + # not fast, but we'll only be here once + self.all_pmfs[n][self.k] = {l: p for l, p in zip(ls, ps)} + return self.all_pmfs[n][self.k][l] + + p = 0 + low = max(l-(n-1)*self.b, self.a) # [5] Equation 2 + high = min(l-(n-1)*self.a, self.b) + + # [5] Equation 1 + for t in range(low, high+1): + p1 = self.pmf(l-t, n-1) + p2 = self.pmf(t, 1) + p += p1*p2 + self.all_pmfs[n][self.k][l] = p + return p + + +# Maintain state for faster repeat calls to page_trend_test w/ method='exact' +_pagel_state = _PageL() diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_qmc_cy.pyi b/venv/lib/python3.10/site-packages/scipy/stats/_qmc_cy.pyi new file mode 100644 index 0000000000000000000000000000000000000000..1006385a43179478a9a4a32ae5f825aa5b8b35c4 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_qmc_cy.pyi @@ -0,0 +1,54 @@ +import numpy as np +from scipy._lib._util import DecimalNumber, IntNumber + + +def _cy_wrapper_centered_discrepancy( + sample: np.ndarray, + iterative: bool, + workers: IntNumber, +) -> float: ... + + +def _cy_wrapper_wrap_around_discrepancy( + sample: np.ndarray, + iterative: bool, + workers: IntNumber, +) -> float: ... + + +def _cy_wrapper_mixture_discrepancy( + sample: np.ndarray, + iterative: bool, + workers: IntNumber, +) -> float: ... + + +def _cy_wrapper_l2_star_discrepancy( + sample: np.ndarray, + iterative: bool, + workers: IntNumber, +) -> float: ... + + +def _cy_wrapper_update_discrepancy( + x_new_view: np.ndarray, + sample_view: np.ndarray, + initial_disc: DecimalNumber, +) -> float: ... + + +def _cy_van_der_corput( + n: IntNumber, + base: IntNumber, + start_index: IntNumber, + workers: IntNumber, +) -> np.ndarray: ... + + +def _cy_van_der_corput_scrambled( + n: IntNumber, + base: IntNumber, + start_index: IntNumber, + permutations: np.ndarray, + workers: IntNumber, +) -> np.ndarray: ... 
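The Cython wrappers declared in this stub back the public `scipy.stats.qmc.discrepancy` helper. A short usage sketch of the discrepancy measures they implement (method names as documented for `qmc.discrepancy`):

from scipy.stats import qmc

sample = qmc.Sobol(d=2, scramble=False, seed=0).random(128)  # points in [0, 1)^2
for method in ("CD", "WD", "MD", "L2-star"):
    print(method, qmc.discrepancy(sample, method=method))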
diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_result_classes.py b/venv/lib/python3.10/site-packages/scipy/stats/_result_classes.py new file mode 100644 index 0000000000000000000000000000000000000000..975af9310efb0c9a414439fd8d531fb95c988951 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_result_classes.py @@ -0,0 +1,40 @@ +# This module exists only to allow Sphinx to generate docs +# for the result objects returned by some functions in stats +# _without_ adding them to the main stats documentation page. + +""" +Result classes +-------------- + +.. currentmodule:: scipy.stats._result_classes + +.. autosummary:: + :toctree: generated/ + + RelativeRiskResult + BinomTestResult + TukeyHSDResult + DunnettResult + PearsonRResult + FitResult + OddsRatioResult + TtestResult + ECDFResult + EmpiricalDistributionFunction + +""" + +__all__ = ['BinomTestResult', 'RelativeRiskResult', 'TukeyHSDResult', + 'PearsonRResult', 'FitResult', 'OddsRatioResult', + 'TtestResult', 'DunnettResult', 'ECDFResult', + 'EmpiricalDistributionFunction'] + + +from ._binomtest import BinomTestResult +from ._odds_ratio import OddsRatioResult +from ._relative_risk import RelativeRiskResult +from ._hypotests import TukeyHSDResult +from ._multicomp import DunnettResult +from ._stats_py import PearsonRResult, TtestResult +from ._fit import FitResult +from ._survival import ECDFResult, EmpiricalDistributionFunction diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_rvs_sampling.py b/venv/lib/python3.10/site-packages/scipy/stats/_rvs_sampling.py new file mode 100644 index 0000000000000000000000000000000000000000..86adb251c3e5ced6896b498bc28c5b6b144db7af --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_rvs_sampling.py @@ -0,0 +1,56 @@ +import warnings +from scipy.stats.sampling import RatioUniforms + +def rvs_ratio_uniforms(pdf, umax, vmin, vmax, size=1, c=0, random_state=None): + """ + Generate random samples from a probability density function using the + ratio-of-uniforms method. + + .. deprecated:: 1.12.0 + `rvs_ratio_uniforms` is deprecated in favour of + `scipy.stats.sampling.RatioUniforms` from version 1.12.0 and will + be removed in SciPy 1.15.0 + + Parameters + ---------- + pdf : callable + A function with signature `pdf(x)` that is proportional to the + probability density function of the distribution. + umax : float + The upper bound of the bounding rectangle in the u-direction. + vmin : float + The lower bound of the bounding rectangle in the v-direction. + vmax : float + The upper bound of the bounding rectangle in the v-direction. + size : int or tuple of ints, optional + Defining number of random variates (default is 1). + c : float, optional. + Shift parameter of ratio-of-uniforms method, see Notes. Default is 0. + random_state : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance then + that instance is used. + + Returns + ------- + rvs : ndarray + The random variates distributed according to the probability + distribution defined by the pdf. + + Notes + ----- + Please refer to `scipy.stats.sampling.RatioUniforms` for the documentation. + """ + warnings.warn("Please use `RatioUniforms` from the " + "`scipy.stats.sampling` namespace. 
The " + "`scipy.stats.rvs_ratio_uniforms` namespace is deprecated " + "and will be removed in SciPy 1.15.0", + category=DeprecationWarning, stacklevel=2) + gen = RatioUniforms(pdf, umax=umax, vmin=vmin, vmax=vmax, + c=c, random_state=random_state) + return gen.rvs(size) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_sensitivity_analysis.py b/venv/lib/python3.10/site-packages/scipy/stats/_sensitivity_analysis.py new file mode 100644 index 0000000000000000000000000000000000000000..7baffcbdccb58aa453f5ddf6b83da4730c979e06 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_sensitivity_analysis.py @@ -0,0 +1,712 @@ +from __future__ import annotations + +import inspect +from dataclasses import dataclass +from typing import ( + Callable, Literal, Protocol, TYPE_CHECKING +) + +import numpy as np + +from scipy.stats._common import ConfidenceInterval +from scipy.stats._qmc import check_random_state +from scipy.stats._resampling import BootstrapResult +from scipy.stats import qmc, bootstrap + + +if TYPE_CHECKING: + import numpy.typing as npt + from scipy._lib._util import DecimalNumber, IntNumber, SeedType + + +__all__ = [ + 'sobol_indices' +] + + +def f_ishigami(x: npt.ArrayLike) -> np.ndarray: + r"""Ishigami function. + + .. math:: + + Y(\mathbf{x}) = \sin x_1 + 7 \sin^2 x_2 + 0.1 x_3^4 \sin x_1 + + with :math:`\mathbf{x} \in [-\pi, \pi]^3`. + + Parameters + ---------- + x : array_like ([x1, x2, x3], n) + + Returns + ------- + f : array_like (n,) + Function evaluation. + + References + ---------- + .. [1] Ishigami, T. and T. Homma. "An importance quantification technique + in uncertainty analysis for computer models." IEEE, + :doi:`10.1109/ISUMA.1990.151285`, 1990. + """ + x = np.atleast_2d(x) + f_eval = ( + np.sin(x[0]) + + 7 * np.sin(x[1])**2 + + 0.1 * (x[2]**4) * np.sin(x[0]) + ) + return f_eval + + +def sample_A_B( + n: IntNumber, + dists: list[PPFDist], + random_state: SeedType = None +) -> np.ndarray: + """Sample two matrices A and B. + + Uses a Sobol' sequence with 2`d` columns to have 2 uncorrelated matrices. + This is more efficient than using 2 random draw of Sobol'. + See sec. 5 from [1]_. + + Output shape is (d, n). + + References + ---------- + .. [1] Saltelli, A., P. Annoni, I. Azzini, F. Campolongo, M. Ratto, and + S. Tarantola. "Variance based sensitivity analysis of model + output. Design and estimator for the total sensitivity index." + Computer Physics Communications, 181(2):259-270, + :doi:`10.1016/j.cpc.2009.09.018`, 2010. + """ + d = len(dists) + A_B = qmc.Sobol(d=2*d, seed=random_state, bits=64).random(n).T + A_B = A_B.reshape(2, d, -1) + try: + for d_, dist in enumerate(dists): + A_B[:, d_] = dist.ppf(A_B[:, d_]) + except AttributeError as exc: + message = "Each distribution in `dists` must have method `ppf`." + raise ValueError(message) from exc + return A_B + + +def sample_AB(A: np.ndarray, B: np.ndarray) -> np.ndarray: + """AB matrix. + + AB: rows of B into A. Shape (d, d, n). + - Copy A into d "pages" + - In the first page, replace 1st rows of A with 1st row of B. + ... + - In the dth page, replace dth row of A with dth row of B. + - return the stack of pages + """ + d, n = A.shape + AB = np.tile(A, (d, 1, 1)) + i = np.arange(d) + AB[i, i] = B[i] + return AB + + +def saltelli_2010( + f_A: np.ndarray, f_B: np.ndarray, f_AB: np.ndarray +) -> tuple[np.ndarray, np.ndarray]: + r"""Saltelli2010 formulation. + + .. math:: + + S_i = \frac{1}{N} \sum_{j=1}^N + f(\mathbf{B})_j (f(\mathbf{AB}^{(i)})_j - f(\mathbf{A})_j) + + .. 
math:: + + S_{T_i} = \frac{1}{N} \sum_{j=1}^N + (f(\mathbf{A})_j - f(\mathbf{AB}^{(i)})_j)^2 + + Parameters + ---------- + f_A, f_B : array_like (s, n) + Function values at A and B, respectively + f_AB : array_like (d, s, n) + Function values at each of the AB pages + + Returns + ------- + s, st : array_like (s, d) + First order and total order Sobol' indices. + + References + ---------- + .. [1] Saltelli, A., P. Annoni, I. Azzini, F. Campolongo, M. Ratto, and + S. Tarantola. "Variance based sensitivity analysis of model + output. Design and estimator for the total sensitivity index." + Computer Physics Communications, 181(2):259-270, + :doi:`10.1016/j.cpc.2009.09.018`, 2010. + """ + # Empirical variance calculated using output from A and B which are + # independent. Output of AB is not independent and cannot be used + var = np.var([f_A, f_B], axis=(0, -1)) + + # We divide by the variance to have a ratio of variance + # this leads to eq. 2 + s = np.mean(f_B * (f_AB - f_A), axis=-1) / var # Table 2 (b) + st = 0.5 * np.mean((f_A - f_AB) ** 2, axis=-1) / var # Table 2 (f) + + return s.T, st.T + + +@dataclass +class BootstrapSobolResult: + first_order: BootstrapResult + total_order: BootstrapResult + + +@dataclass +class SobolResult: + first_order: np.ndarray + total_order: np.ndarray + _indices_method: Callable + _f_A: np.ndarray + _f_B: np.ndarray + _f_AB: np.ndarray + _A: np.ndarray | None = None + _B: np.ndarray | None = None + _AB: np.ndarray | None = None + _bootstrap_result: BootstrapResult | None = None + + def bootstrap( + self, + confidence_level: DecimalNumber = 0.95, + n_resamples: IntNumber = 999 + ) -> BootstrapSobolResult: + """Bootstrap Sobol' indices to provide confidence intervals. + + Parameters + ---------- + confidence_level : float, default: ``0.95`` + The confidence level of the confidence intervals. + n_resamples : int, default: ``999`` + The number of resamples performed to form the bootstrap + distribution of the indices. + + Returns + ------- + res : BootstrapSobolResult + Bootstrap result containing the confidence intervals and the + bootstrap distribution of the indices. + + An object with attributes: + + first_order : BootstrapResult + Bootstrap result of the first order indices. + total_order : BootstrapResult + Bootstrap result of the total order indices. + See `BootstrapResult` for more details. + + """ + def statistic(idx): + f_A_ = self._f_A[:, idx] + f_B_ = self._f_B[:, idx] + f_AB_ = self._f_AB[..., idx] + return self._indices_method(f_A_, f_B_, f_AB_) + + n = self._f_A.shape[1] + + res = bootstrap( + [np.arange(n)], statistic=statistic, method="BCa", + n_resamples=n_resamples, + confidence_level=confidence_level, + bootstrap_result=self._bootstrap_result + ) + self._bootstrap_result = res + + first_order = BootstrapResult( + confidence_interval=ConfidenceInterval( + res.confidence_interval.low[0], res.confidence_interval.high[0] + ), + bootstrap_distribution=res.bootstrap_distribution[0], + standard_error=res.standard_error[0], + ) + total_order = BootstrapResult( + confidence_interval=ConfidenceInterval( + res.confidence_interval.low[1], res.confidence_interval.high[1] + ), + bootstrap_distribution=res.bootstrap_distribution[1], + standard_error=res.standard_error[1], + ) + + return BootstrapSobolResult( + first_order=first_order, total_order=total_order + ) + + +class PPFDist(Protocol): + @property + def ppf(self) -> Callable[..., float]: + ... 
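To make the estimator above concrete, a rough self-contained sketch (not part of this module) of the `saltelli_2010` formulas applied to a toy additive model ``Y = 2*x1 + x2``, whose first-order and total indices are analytically 0.8 and 0.2. Plain pseudo-random draws stand in for the Sobol' samples here, so the estimates are only approximate.

import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 2**14
A = rng.random((d, n))                  # stand-in for the first Sobol' block
B = rng.random((d, n))                  # stand-in for the second block

AB = np.tile(A, (d, 1, 1))              # same construction as `sample_AB`
i = np.arange(d)
AB[i, i] = B[i]

def f(x):                               # toy additive model
    return 2 * x[0] + x[1]

f_A = f(A)[np.newaxis]                  # shape (s=1, n)
f_B = f(B)[np.newaxis]
f_AB = np.stack([f(AB[j]) for j in range(d)])[:, np.newaxis]  # (d, 1, n)

var = np.var([f_A, f_B], axis=(0, -1))
s = np.mean(f_B * (f_AB - f_A), axis=-1) / var          # Table 2 (b)
st = 0.5 * np.mean((f_A - f_AB) ** 2, axis=-1) / var    # Table 2 (f)
print(s.T, st.T)                        # both roughly [[0.8, 0.2]]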
+ + +def sobol_indices( + *, + func: Callable[[np.ndarray], npt.ArrayLike] | + dict[Literal['f_A', 'f_B', 'f_AB'], np.ndarray], + n: IntNumber, + dists: list[PPFDist] | None = None, + method: Callable | Literal['saltelli_2010'] = 'saltelli_2010', + random_state: SeedType = None +) -> SobolResult: + r"""Global sensitivity indices of Sobol'. + + Parameters + ---------- + func : callable or dict(str, array_like) + If `func` is a callable, function to compute the Sobol' indices from. + Its signature must be:: + + func(x: ArrayLike) -> ArrayLike + + with ``x`` of shape ``(d, n)`` and output of shape ``(s, n)`` where: + + - ``d`` is the input dimensionality of `func` + (number of input variables), + - ``s`` is the output dimensionality of `func` + (number of output variables), and + - ``n`` is the number of samples (see `n` below). + + Function evaluation values must be finite. + + If `func` is a dictionary, contains the function evaluations from three + different arrays. Keys must be: ``f_A``, ``f_B`` and ``f_AB``. + ``f_A`` and ``f_B`` should have a shape ``(s, n)`` and ``f_AB`` + should have a shape ``(d, s, n)``. + This is an advanced feature and misuse can lead to wrong analysis. + n : int + Number of samples used to generate the matrices ``A`` and ``B``. + Must be a power of 2. The total number of points at which `func` is + evaluated will be ``n*(d+2)``. + dists : list(distributions), optional + List of each parameter's distribution. The distribution of parameters + depends on the application and should be carefully chosen. + Parameters are assumed to be independently distributed, meaning there + is no constraint nor relationship between their values. + + Distributions must be an instance of a class with a ``ppf`` + method. + + Must be specified if `func` is a callable, and ignored otherwise. + method : Callable or str, default: 'saltelli_2010' + Method used to compute the first and total Sobol' indices. + + If a callable, its signature must be:: + + func(f_A: np.ndarray, f_B: np.ndarray, f_AB: np.ndarray) + -> Tuple[np.ndarray, np.ndarray] + + with ``f_A, f_B`` of shape ``(s, n)`` and ``f_AB`` of shape + ``(d, s, n)``. + These arrays contain the function evaluations from three different sets + of samples. + The output is a tuple of the first and total indices with + shape ``(s, d)``. + This is an advanced feature and misuse can lead to wrong analysis. + random_state : {None, int, `numpy.random.Generator`}, optional + If `random_state` is an int or None, a new `numpy.random.Generator` is + created using ``np.random.default_rng(random_state)``. + If `random_state` is already a ``Generator`` instance, then the + provided instance is used. + + Returns + ------- + res : SobolResult + An object with attributes: + + first_order : ndarray of shape (s, d) + First order Sobol' indices. + total_order : ndarray of shape (s, d) + Total order Sobol' indices. + + And method: + + bootstrap(confidence_level: float, n_resamples: int) + -> BootstrapSobolResult + + A method providing confidence intervals on the indices. + See `scipy.stats.bootstrap` for more details. + + The bootstrapping is done on both first and total order indices, + and they are available in `BootstrapSobolResult` as attributes + ``first_order`` and ``total_order``. + + Notes + ----- + The Sobol' method [1]_, [2]_ is a variance-based Sensitivity Analysis which + obtains the contribution of each parameter to the variance of the + quantities of interest (QoIs; i.e., the outputs of `func`). 
+ Respective contributions can be used to rank the parameters and + also gauge the complexity of the model by computing the + model's effective (or mean) dimension. + + .. note:: + + Parameters are assumed to be independently distributed. Each + parameter can still follow any distribution. In fact, the distribution + is very important and should match the real distribution of the + parameters. + + It uses a functional decomposition of the variance of the function to + explore + + .. math:: + + \mathbb{V}(Y) = \sum_{i}^{d} \mathbb{V}_i (Y) + \sum_{i= 2**12``. The more complex the model is, + the more samples will be needed. + + Even for a purely addiditive model, the indices may not sum to 1 due + to numerical noise. + + References + ---------- + .. [1] Sobol, I. M.. "Sensitivity analysis for nonlinear mathematical + models." Mathematical Modeling and Computational Experiment, 1:407-414, + 1993. + .. [2] Sobol, I. M. (2001). "Global sensitivity indices for nonlinear + mathematical models and their Monte Carlo estimates." Mathematics + and Computers in Simulation, 55(1-3):271-280, + :doi:`10.1016/S0378-4754(00)00270-6`, 2001. + .. [3] Saltelli, A. "Making best use of model evaluations to + compute sensitivity indices." Computer Physics Communications, + 145(2):280-297, :doi:`10.1016/S0010-4655(02)00280-1`, 2002. + .. [4] Saltelli, A., M. Ratto, T. Andres, F. Campolongo, J. Cariboni, + D. Gatelli, M. Saisana, and S. Tarantola. "Global Sensitivity Analysis. + The Primer." 2007. + .. [5] Saltelli, A., P. Annoni, I. Azzini, F. Campolongo, M. Ratto, and + S. Tarantola. "Variance based sensitivity analysis of model + output. Design and estimator for the total sensitivity index." + Computer Physics Communications, 181(2):259-270, + :doi:`10.1016/j.cpc.2009.09.018`, 2010. + .. [6] Ishigami, T. and T. Homma. "An importance quantification technique + in uncertainty analysis for computer models." IEEE, + :doi:`10.1109/ISUMA.1990.151285`, 1990. + + Examples + -------- + The following is an example with the Ishigami function [6]_ + + .. math:: + + Y(\mathbf{x}) = \sin x_1 + 7 \sin^2 x_2 + 0.1 x_3^4 \sin x_1, + + with :math:`\mathbf{x} \in [-\pi, \pi]^3`. This function exhibits strong + non-linearity and non-monotonicity. + + Remember, Sobol' indices assumes that samples are independently + distributed. In this case we use a uniform distribution on each marginals. + + >>> import numpy as np + >>> from scipy.stats import sobol_indices, uniform + >>> rng = np.random.default_rng() + >>> def f_ishigami(x): + ... f_eval = ( + ... np.sin(x[0]) + ... + 7 * np.sin(x[1])**2 + ... + 0.1 * (x[2]**4) * np.sin(x[0]) + ... ) + ... return f_eval + >>> indices = sobol_indices( + ... func=f_ishigami, n=1024, + ... dists=[ + ... uniform(loc=-np.pi, scale=2*np.pi), + ... uniform(loc=-np.pi, scale=2*np.pi), + ... uniform(loc=-np.pi, scale=2*np.pi) + ... ], + ... random_state=rng + ... ) + >>> indices.first_order + array([0.31637954, 0.43781162, 0.00318825]) + >>> indices.total_order + array([0.56122127, 0.44287857, 0.24229595]) + + Confidence interval can be obtained using bootstrapping. + + >>> boot = indices.bootstrap() + + Then, this information can be easily visualized. + + >>> import matplotlib.pyplot as plt + >>> fig, axs = plt.subplots(1, 2, figsize=(9, 4)) + >>> _ = axs[0].errorbar( + ... [1, 2, 3], indices.first_order, fmt='o', + ... yerr=[ + ... indices.first_order - boot.first_order.confidence_interval.low, + ... boot.first_order.confidence_interval.high - indices.first_order + ... ], + ... 
) + >>> axs[0].set_ylabel("First order Sobol' indices") + >>> axs[0].set_xlabel('Input parameters') + >>> axs[0].set_xticks([1, 2, 3]) + >>> _ = axs[1].errorbar( + ... [1, 2, 3], indices.total_order, fmt='o', + ... yerr=[ + ... indices.total_order - boot.total_order.confidence_interval.low, + ... boot.total_order.confidence_interval.high - indices.total_order + ... ], + ... ) + >>> axs[1].set_ylabel("Total order Sobol' indices") + >>> axs[1].set_xlabel('Input parameters') + >>> axs[1].set_xticks([1, 2, 3]) + >>> plt.tight_layout() + >>> plt.show() + + .. note:: + + By default, `scipy.stats.uniform` has support ``[0, 1]``. + Using the parameters ``loc`` and ``scale``, one obtains the uniform + distribution on ``[loc, loc + scale]``. + + This result is particularly interesting because the first order index + :math:`S_{x_3} = 0` whereas its total order is :math:`S_{T_{x_3}} = 0.244`. + This means that higher order interactions with :math:`x_3` are responsible + for the difference. Almost 25% of the observed variance + on the QoI is due to the correlations between :math:`x_3` and :math:`x_1`, + although :math:`x_3` by itself has no impact on the QoI. + + The following gives a visual explanation of Sobol' indices on this + function. Let's generate 1024 samples in :math:`[-\pi, \pi]^3` and + calculate the value of the output. + + >>> from scipy.stats import qmc + >>> n_dim = 3 + >>> p_labels = ['$x_1$', '$x_2$', '$x_3$'] + >>> sample = qmc.Sobol(d=n_dim, seed=rng).random(1024) + >>> sample = qmc.scale( + ... sample=sample, + ... l_bounds=[-np.pi, -np.pi, -np.pi], + ... u_bounds=[np.pi, np.pi, np.pi] + ... ) + >>> output = f_ishigami(sample.T) + + Now we can do scatter plots of the output with respect to each parameter. + This gives a visual way to understand how each parameter impacts the + output of the function. + + >>> fig, ax = plt.subplots(1, n_dim, figsize=(12, 4)) + >>> for i in range(n_dim): + ... xi = sample[:, i] + ... ax[i].scatter(xi, output, marker='+') + ... ax[i].set_xlabel(p_labels[i]) + >>> ax[0].set_ylabel('Y') + >>> plt.tight_layout() + >>> plt.show() + + Now Sobol' goes a step further: + by conditioning the output value by given values of the parameter + (black lines), the conditional output mean is computed. It corresponds to + the term :math:`\mathbb{E}(Y|x_i)`. Taking the variance of this term gives + the numerator of the Sobol' indices. + + >>> mini = np.min(output) + >>> maxi = np.max(output) + >>> n_bins = 10 + >>> bins = np.linspace(-np.pi, np.pi, num=n_bins, endpoint=False) + >>> dx = bins[1] - bins[0] + >>> fig, ax = plt.subplots(1, n_dim, figsize=(12, 4)) + >>> for i in range(n_dim): + ... xi = sample[:, i] + ... ax[i].scatter(xi, output, marker='+') + ... ax[i].set_xlabel(p_labels[i]) + ... for bin_ in bins: + ... idx = np.where((bin_ <= xi) & (xi <= bin_ + dx)) + ... xi_ = xi[idx] + ... y_ = output[idx] + ... ave_y_ = np.mean(y_) + ... ax[i].plot([bin_ + dx/2] * 2, [mini, maxi], c='k') + ... ax[i].scatter(bin_ + dx/2, ave_y_, c='r') + >>> ax[0].set_ylabel('Y') + >>> plt.tight_layout() + >>> plt.show() + + Looking at :math:`x_3`, the variance + of the mean is zero leading to :math:`S_{x_3} = 0`. But we can further + observe that the variance of the output is not constant along the parameter + values of :math:`x_3`. This heteroscedasticity is explained by higher order + interactions. Moreover, an heteroscedasticity is also noticeable on + :math:`x_1` leading to an interaction between :math:`x_3` and :math:`x_1`. 
+ On :math:`x_2`, the variance seems to be constant and thus null interaction + with this parameter can be supposed. + + This case is fairly simple to analyse visually---although it is only a + qualitative analysis. Nevertheless, when the number of input parameters + increases such analysis becomes unrealistic as it would be difficult to + conclude on high-order terms. Hence the benefit of using Sobol' indices. + + """ + random_state = check_random_state(random_state) + + n_ = int(n) + if not (n_ & (n_ - 1) == 0) or n != n_: + raise ValueError( + "The balance properties of Sobol' points require 'n' " + "to be a power of 2." + ) + n = n_ + + if not callable(method): + indices_methods: dict[str, Callable] = { + "saltelli_2010": saltelli_2010, + } + try: + method = method.lower() # type: ignore[assignment] + indices_method_ = indices_methods[method] + except KeyError as exc: + message = ( + f"{method!r} is not a valid 'method'. It must be one of" + f" {set(indices_methods)!r} or a callable." + ) + raise ValueError(message) from exc + else: + indices_method_ = method + sig = inspect.signature(indices_method_) + + if set(sig.parameters) != {'f_A', 'f_B', 'f_AB'}: + message = ( + "If 'method' is a callable, it must have the following" + f" signature: {inspect.signature(saltelli_2010)}" + ) + raise ValueError(message) + + def indices_method(f_A, f_B, f_AB): + """Wrap indices method to ensure proper output dimension. + + 1D when single output, 2D otherwise. + """ + return np.squeeze(indices_method_(f_A=f_A, f_B=f_B, f_AB=f_AB)) + + if callable(func): + if dists is None: + raise ValueError( + "'dists' must be defined when 'func' is a callable." + ) + + def wrapped_func(x): + return np.atleast_2d(func(x)) + + A, B = sample_A_B(n=n, dists=dists, random_state=random_state) + AB = sample_AB(A=A, B=B) + + f_A = wrapped_func(A) + + if f_A.shape[1] != n: + raise ValueError( + "'func' output should have a shape ``(s, -1)`` with ``s`` " + "the number of output." + ) + + def funcAB(AB): + d, d, n = AB.shape + AB = np.moveaxis(AB, 0, -1).reshape(d, n*d) + f_AB = wrapped_func(AB) + return np.moveaxis(f_AB.reshape((-1, n, d)), -1, 0) + + f_B = wrapped_func(B) + f_AB = funcAB(AB) + else: + message = ( + "When 'func' is a dictionary, it must contain the following " + "keys: 'f_A', 'f_B' and 'f_AB'." + "'f_A' and 'f_B' should have a shape ``(s, n)`` and 'f_AB' " + "should have a shape ``(d, s, n)``." + ) + try: + f_A, f_B, f_AB = np.atleast_2d( + func['f_A'], func['f_B'], func['f_AB'] + ) + except KeyError as exc: + raise ValueError(message) from exc + + if f_A.shape[1] != n or f_A.shape != f_B.shape or \ + f_AB.shape == f_A.shape or f_AB.shape[-1] % n != 0: + raise ValueError(message) + + # Normalization by mean + # Sobol', I. and Levitan, Y. L. (1999). On the use of variance reducing + # multipliers in monte carlo computations of a global sensitivity index. + # Computer Physics Communications, 117(1) :52-61. 
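+ # Subtracting a constant from every output leaves the target indices
+ # unchanged (the total-order estimator exactly, the first-order one in
+ # expectation); per the reference above, centering on the sample mean
+ # reduces estimator error when |E[Y]| is large relative to V[Y].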
+ mean = np.mean([f_A, f_B], axis=(0, -1)).reshape(-1, 1) + f_A -= mean + f_B -= mean + f_AB -= mean + + # Compute indices + # Filter warnings for constant output as var = 0 + with np.errstate(divide='ignore', invalid='ignore'): + first_order, total_order = indices_method(f_A=f_A, f_B=f_B, f_AB=f_AB) + + # null variance means null indices + first_order[~np.isfinite(first_order)] = 0 + total_order[~np.isfinite(total_order)] = 0 + + res = dict( + first_order=first_order, + total_order=total_order, + _indices_method=indices_method, + _f_A=f_A, + _f_B=f_B, + _f_AB=f_AB + ) + + if callable(func): + res.update( + dict( + _A=A, + _B=B, + _AB=AB, + ) + ) + + return SobolResult(**res) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_stats_mstats_common.py b/venv/lib/python3.10/site-packages/scipy/stats/_stats_mstats_common.py new file mode 100644 index 0000000000000000000000000000000000000000..35e426ec2e31611793a45e817964cfe79d75173d --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_stats_mstats_common.py @@ -0,0 +1,499 @@ +import warnings +import numpy as np +from . import distributions +from .._lib._bunch import _make_tuple_bunch +from ._stats_pythran import siegelslopes as siegelslopes_pythran +from . import _mstats_basic + +__all__ = ['_find_repeats', 'linregress', 'theilslopes', 'siegelslopes'] + +# This is not a namedtuple for backwards compatibility. See PR #12983 +LinregressResult = _make_tuple_bunch('LinregressResult', + ['slope', 'intercept', 'rvalue', + 'pvalue', 'stderr'], + extra_field_names=['intercept_stderr']) +TheilslopesResult = _make_tuple_bunch('TheilslopesResult', + ['slope', 'intercept', + 'low_slope', 'high_slope']) +SiegelslopesResult = _make_tuple_bunch('SiegelslopesResult', + ['slope', 'intercept']) + + +def linregress(x, y=None, alternative='two-sided'): + """ + Calculate a linear least-squares regression for two sets of measurements. + + Parameters + ---------- + x, y : array_like + Two sets of measurements. Both arrays should have the same length. If + only `x` is given (and ``y=None``), then it must be a two-dimensional + array where one dimension has length 2. The two sets of measurements + are then found by splitting the array along the length-2 dimension. In + the case where ``y=None`` and `x` is a 2x2 array, ``linregress(x)`` is + equivalent to ``linregress(x[0], x[1])``. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + The following options are available: + + * 'two-sided': the slope of the regression line is nonzero + * 'less': the slope of the regression line is less than zero + * 'greater': the slope of the regression line is greater than zero + + .. versionadded:: 1.7.0 + + Returns + ------- + result : ``LinregressResult`` instance + The return value is an object with the following attributes: + + slope : float + Slope of the regression line. + intercept : float + Intercept of the regression line. + rvalue : float + The Pearson correlation coefficient. The square of ``rvalue`` + is equal to the coefficient of determination. + pvalue : float + The p-value for a hypothesis test whose null hypothesis is + that the slope is zero, using Wald Test with t-distribution of + the test statistic. See `alternative` above for alternative + hypotheses. + stderr : float + Standard error of the estimated slope (gradient), under the + assumption of residual normality. 
+ intercept_stderr : float + Standard error of the estimated intercept, under the assumption + of residual normality. + + See Also + -------- + scipy.optimize.curve_fit : + Use non-linear least squares to fit a function to data. + scipy.optimize.leastsq : + Minimize the sum of squares of a set of equations. + + Notes + ----- + Missing values are considered pair-wise: if a value is missing in `x`, + the corresponding value in `y` is masked. + + For compatibility with older versions of SciPy, the return value acts + like a ``namedtuple`` of length 5, with fields ``slope``, ``intercept``, + ``rvalue``, ``pvalue`` and ``stderr``, so one can continue to write:: + + slope, intercept, r, p, se = linregress(x, y) + + With that style, however, the standard error of the intercept is not + available. To have access to all the computed values, including the + standard error of the intercept, use the return value as an object + with attributes, e.g.:: + + result = linregress(x, y) + print(result.intercept, result.intercept_stderr) + + Examples + -------- + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy import stats + >>> rng = np.random.default_rng() + + Generate some data: + + >>> x = rng.random(10) + >>> y = 1.6*x + rng.random(10) + + Perform the linear regression: + + >>> res = stats.linregress(x, y) + + Coefficient of determination (R-squared): + + >>> print(f"R-squared: {res.rvalue**2:.6f}") + R-squared: 0.717533 + + Plot the data along with the fitted line: + + >>> plt.plot(x, y, 'o', label='original data') + >>> plt.plot(x, res.intercept + res.slope*x, 'r', label='fitted line') + >>> plt.legend() + >>> plt.show() + + Calculate 95% confidence interval on slope and intercept: + + >>> # Two-sided inverse Students t-distribution + >>> # p - probability, df - degrees of freedom + >>> from scipy.stats import t + >>> tinv = lambda p, df: abs(t.ppf(p/2, df)) + + >>> ts = tinv(0.05, len(x)-2) + >>> print(f"slope (95%): {res.slope:.6f} +/- {ts*res.stderr:.6f}") + slope (95%): 1.453392 +/- 0.743465 + >>> print(f"intercept (95%): {res.intercept:.6f}" + ... 
f" +/- {ts*res.intercept_stderr:.6f}") + intercept (95%): 0.616950 +/- 0.544475 + + """ + TINY = 1.0e-20 + if y is None: # x is a (2, N) or (N, 2) shaped array_like + x = np.asarray(x) + if x.shape[0] == 2: + x, y = x + elif x.shape[1] == 2: + x, y = x.T + else: + raise ValueError("If only `x` is given as input, it has to " + "be of shape (2, N) or (N, 2); provided shape " + f"was {x.shape}.") + else: + x = np.asarray(x) + y = np.asarray(y) + + if x.size == 0 or y.size == 0: + raise ValueError("Inputs must not be empty.") + + if np.amax(x) == np.amin(x) and len(x) > 1: + raise ValueError("Cannot calculate a linear regression " + "if all x values are identical") + + n = len(x) + xmean = np.mean(x, None) + ymean = np.mean(y, None) + + # Average sums of square differences from the mean + # ssxm = mean( (x-mean(x))^2 ) + # ssxym = mean( (x-mean(x)) * (y-mean(y)) ) + ssxm, ssxym, _, ssym = np.cov(x, y, bias=1).flat + + # R-value + # r = ssxym / sqrt( ssxm * ssym ) + if ssxm == 0.0 or ssym == 0.0: + # If the denominator was going to be 0 + r = 0.0 + else: + r = ssxym / np.sqrt(ssxm * ssym) + # Test for numerical error propagation (make sure -1 < r < 1) + if r > 1.0: + r = 1.0 + elif r < -1.0: + r = -1.0 + + slope = ssxym / ssxm + intercept = ymean - slope*xmean + if n == 2: + # handle case when only two points are passed in + if y[0] == y[1]: + prob = 1.0 + else: + prob = 0.0 + slope_stderr = 0.0 + intercept_stderr = 0.0 + else: + df = n - 2 # Number of degrees of freedom + # n-2 degrees of freedom because 2 has been used up + # to estimate the mean and standard deviation + t = r * np.sqrt(df / ((1.0 - r + TINY)*(1.0 + r + TINY))) + t, prob = _mstats_basic._ttest_finish(df, t, alternative) + + slope_stderr = np.sqrt((1 - r**2) * ssym / ssxm / df) + + # Also calculate the standard error of the intercept + # The following relationship is used: + # ssxm = mean( (x-mean(x))^2 ) + # = ssx - sx*sx + # = mean( x^2 ) - mean(x)^2 + intercept_stderr = slope_stderr * np.sqrt(ssxm + xmean**2) + + return LinregressResult(slope=slope, intercept=intercept, rvalue=r, + pvalue=prob, stderr=slope_stderr, + intercept_stderr=intercept_stderr) + + +def theilslopes(y, x=None, alpha=0.95, method='separate'): + r""" + Computes the Theil-Sen estimator for a set of points (x, y). + + `theilslopes` implements a method for robust linear regression. It + computes the slope as the median of all slopes between paired values. + + Parameters + ---------- + y : array_like + Dependent variable. + x : array_like or None, optional + Independent variable. If None, use ``arange(len(y))`` instead. + alpha : float, optional + Confidence degree between 0 and 1. Default is 95% confidence. + Note that `alpha` is symmetric around 0.5, i.e. both 0.1 and 0.9 are + interpreted as "find the 90% confidence interval". + method : {'joint', 'separate'}, optional + Method to be used for computing estimate for intercept. + Following methods are supported, + + * 'joint': Uses np.median(y - slope * x) as intercept. + * 'separate': Uses np.median(y) - slope * np.median(x) + as intercept. + + The default is 'separate'. + + .. versionadded:: 1.8.0 + + Returns + ------- + result : ``TheilslopesResult`` instance + The return value is an object with the following attributes: + + slope : float + Theil slope. + intercept : float + Intercept of the Theil line. + low_slope : float + Lower bound of the confidence interval on `slope`. + high_slope : float + Upper bound of the confidence interval on `slope`. 
+ + See Also + -------- + siegelslopes : a similar technique using repeated medians + + Notes + ----- + The implementation of `theilslopes` follows [1]_. The intercept is + not defined in [1]_, and here it is defined as ``median(y) - + slope*median(x)``, which is given in [3]_. Other definitions of + the intercept exist in the literature such as ``median(y - slope*x)`` + in [4]_. The approach to compute the intercept can be determined by the + parameter ``method``. A confidence interval for the intercept is not + given as this question is not addressed in [1]_. + + For compatibility with older versions of SciPy, the return value acts + like a ``namedtuple`` of length 4, with fields ``slope``, ``intercept``, + ``low_slope``, and ``high_slope``, so one can continue to write:: + + slope, intercept, low_slope, high_slope = theilslopes(y, x) + + References + ---------- + .. [1] P.K. Sen, "Estimates of the regression coefficient based on + Kendall's tau", J. Am. Stat. Assoc., Vol. 63, pp. 1379-1389, 1968. + .. [2] H. Theil, "A rank-invariant method of linear and polynomial + regression analysis I, II and III", Nederl. Akad. Wetensch., Proc. + 53:, pp. 386-392, pp. 521-525, pp. 1397-1412, 1950. + .. [3] W.L. Conover, "Practical nonparametric statistics", 2nd ed., + John Wiley and Sons, New York, pp. 493. + .. [4] https://en.wikipedia.org/wiki/Theil%E2%80%93Sen_estimator + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> import matplotlib.pyplot as plt + + >>> x = np.linspace(-5, 5, num=150) + >>> y = x + np.random.normal(size=x.size) + >>> y[11:15] += 10 # add outliers + >>> y[-5:] -= 7 + + Compute the slope, intercept and 90% confidence interval. For comparison, + also compute the least-squares fit with `linregress`: + + >>> res = stats.theilslopes(y, x, 0.90, method='separate') + >>> lsq_res = stats.linregress(x, y) + + Plot the results. The Theil-Sen regression line is shown in red, with the + dashed red lines illustrating the confidence interval of the slope (note + that the dashed red lines are not the confidence interval of the regression + as the confidence interval of the intercept is not included). The green + line shows the least-squares fit for comparison. + + >>> fig = plt.figure() + >>> ax = fig.add_subplot(111) + >>> ax.plot(x, y, 'b.') + >>> ax.plot(x, res[1] + res[0] * x, 'r-') + >>> ax.plot(x, res[1] + res[2] * x, 'r--') + >>> ax.plot(x, res[1] + res[3] * x, 'r--') + >>> ax.plot(x, lsq_res[1] + lsq_res[0] * x, 'g-') + >>> plt.show() + + """ + if method not in ['joint', 'separate']: + raise ValueError("method must be either 'joint' or 'separate'." + f"'{method}' is invalid.") + # We copy both x and y so we can use _find_repeats. + y = np.array(y, dtype=float, copy=True).ravel() + if x is None: + x = np.arange(len(y), dtype=float) + else: + x = np.array(x, dtype=float, copy=True).ravel() + if len(x) != len(y): + raise ValueError(f"Incompatible lengths ! ({len(y)}<>{len(x)})") + + # Compute sorted slopes only when deltax > 0 + deltax = x[:, np.newaxis] - x + deltay = y[:, np.newaxis] - y + slopes = deltay[deltax > 0] / deltax[deltax > 0] + if not slopes.size: + msg = "All `x` coordinates are identical." + warnings.warn(msg, RuntimeWarning, stacklevel=2) + slopes.sort() + medslope = np.median(slopes) + if method == 'joint': + medinter = np.median(y - medslope * x) + else: + medinter = np.median(y) - medslope * np.median(x) + # Now compute confidence intervals + if alpha > 0.5: + alpha = 1. - alpha + + z = distributions.norm.ppf(alpha / 2.) 
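+ # `alpha` was folded onto (0, 0.5] above, so `z` here is negative; below it
+ # selects the lower (`Rl`) and upper (`Ru`) order statistics of the sorted
+ # slopes that bound the confidence interval.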
+ # This implements (2.6) from Sen (1968) + _, nxreps = _find_repeats(x) + _, nyreps = _find_repeats(y) + nt = len(slopes) # N in Sen (1968) + ny = len(y) # n in Sen (1968) + # Equation 2.6 in Sen (1968): + sigsq = 1/18. * (ny * (ny-1) * (2*ny+5) - + sum(k * (k-1) * (2*k + 5) for k in nxreps) - + sum(k * (k-1) * (2*k + 5) for k in nyreps)) + # Find the confidence interval indices in `slopes` + try: + sigma = np.sqrt(sigsq) + Ru = min(int(np.round((nt - z*sigma)/2.)), len(slopes)-1) + Rl = max(int(np.round((nt + z*sigma)/2.)) - 1, 0) + delta = slopes[[Rl, Ru]] + except (ValueError, IndexError): + delta = (np.nan, np.nan) + + return TheilslopesResult(slope=medslope, intercept=medinter, + low_slope=delta[0], high_slope=delta[1]) + + +def _find_repeats(arr): + # This function assumes it may clobber its input. + if len(arr) == 0: + return np.array(0, np.float64), np.array(0, np.intp) + + # XXX This cast was previously needed for the Fortran implementation, + # should we ditch it? + arr = np.asarray(arr, np.float64).ravel() + arr.sort() + + # Taken from NumPy 1.9's np.unique. + change = np.concatenate(([True], arr[1:] != arr[:-1])) + unique = arr[change] + change_idx = np.concatenate(np.nonzero(change) + ([arr.size],)) + freq = np.diff(change_idx) + atleast2 = freq > 1 + return unique[atleast2], freq[atleast2] + + +def siegelslopes(y, x=None, method="hierarchical"): + r""" + Computes the Siegel estimator for a set of points (x, y). + + `siegelslopes` implements a method for robust linear regression + using repeated medians (see [1]_) to fit a line to the points (x, y). + The method is robust to outliers with an asymptotic breakdown point + of 50%. + + Parameters + ---------- + y : array_like + Dependent variable. + x : array_like or None, optional + Independent variable. If None, use ``arange(len(y))`` instead. + method : {'hierarchical', 'separate'} + If 'hierarchical', estimate the intercept using the estimated + slope ``slope`` (default option). + If 'separate', estimate the intercept independent of the estimated + slope. See Notes for details. + + Returns + ------- + result : ``SiegelslopesResult`` instance + The return value is an object with the following attributes: + + slope : float + Estimate of the slope of the regression line. + intercept : float + Estimate of the intercept of the regression line. + + See Also + -------- + theilslopes : a similar technique without repeated medians + + Notes + ----- + With ``n = len(y)``, compute ``m_j`` as the median of + the slopes from the point ``(x[j], y[j])`` to all other `n-1` points. + ``slope`` is then the median of all slopes ``m_j``. + Two ways are given to estimate the intercept in [1]_ which can be chosen + via the parameter ``method``. + The hierarchical approach uses the estimated slope ``slope`` + and computes ``intercept`` as the median of ``y - slope*x``. + The other approach estimates the intercept separately as follows: for + each point ``(x[j], y[j])``, compute the intercepts of all the `n-1` + lines through the remaining points and take the median ``i_j``. + ``intercept`` is the median of the ``i_j``. + + The implementation computes `n` times the median of a vector of size `n` + which can be slow for large vectors. There are more efficient algorithms + (see [2]_) which are not implemented here. 
+ + For compatibility with older versions of SciPy, the return value acts + like a ``namedtuple`` of length 2, with fields ``slope`` and + ``intercept``, so one can continue to write:: + + slope, intercept = siegelslopes(y, x) + + References + ---------- + .. [1] A. Siegel, "Robust Regression Using Repeated Medians", + Biometrika, Vol. 69, pp. 242-244, 1982. + + .. [2] A. Stein and M. Werman, "Finding the repeated median regression + line", Proceedings of the Third Annual ACM-SIAM Symposium on + Discrete Algorithms, pp. 409-413, 1992. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> import matplotlib.pyplot as plt + + >>> x = np.linspace(-5, 5, num=150) + >>> y = x + np.random.normal(size=x.size) + >>> y[11:15] += 10 # add outliers + >>> y[-5:] -= 7 + + Compute the slope and intercept. For comparison, also compute the + least-squares fit with `linregress`: + + >>> res = stats.siegelslopes(y, x) + >>> lsq_res = stats.linregress(x, y) + + Plot the results. The Siegel regression line is shown in red. The green + line shows the least-squares fit for comparison. + + >>> fig = plt.figure() + >>> ax = fig.add_subplot(111) + >>> ax.plot(x, y, 'b.') + >>> ax.plot(x, res[1] + res[0] * x, 'r-') + >>> ax.plot(x, lsq_res[1] + lsq_res[0] * x, 'g-') + >>> plt.show() + + """ + if method not in ['hierarchical', 'separate']: + raise ValueError("method can only be 'hierarchical' or 'separate'") + y = np.asarray(y).ravel() + if x is None: + x = np.arange(len(y), dtype=float) + else: + x = np.asarray(x, dtype=float).ravel() + if len(x) != len(y): + raise ValueError(f"Incompatible lengths ! ({len(y)}<>{len(x)})") + dtype = np.result_type(x, y, np.float32) # use at least float32 + y, x = y.astype(dtype), x.astype(dtype) + medslope, medinter = siegelslopes_pythran(y, x, method) + return SiegelslopesResult(slope=medslope, intercept=medinter) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_stats_py.py b/venv/lib/python3.10/site-packages/scipy/stats/_stats_py.py new file mode 100644 index 0000000000000000000000000000000000000000..a0bcb13c74b2b1b9bb75dd1ecad59dbab08ac249 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_stats_py.py @@ -0,0 +1,11053 @@ +# Copyright 2002 Gary Strangman. All rights reserved +# Copyright 2002-2016 The SciPy Developers +# +# The original code from Gary Strangman was heavily adapted for +# use in SciPy by Travis Oliphant. The original code came with the +# following disclaimer: +# +# This software is provided "as-is". There are no expressed or implied +# warranties of any kind, including, but not limited to, the warranties +# of merchantability and fitness for a given application. In no event +# shall Gary Strangman be liable for any direct, indirect, incidental, +# special, exemplary or consequential damages (including, but not limited +# to, loss of use, data or profits, or business interruption) however +# caused and on any theory of liability, whether in contract, strict +# liability or tort (including negligence or otherwise) arising in any way +# out of the use of this software, even if advised of the possibility of +# such damage. + +""" +A collection of basic statistical functions for Python. + +References +---------- +.. [CRCProbStat2000] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probability and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. 
+ +""" +import warnings +import math +from math import gcd +from collections import namedtuple + +import numpy as np +from numpy import array, asarray, ma + +from scipy import sparse +from scipy.spatial.distance import cdist +from scipy.spatial import distance_matrix + +from scipy.ndimage import _measurements +from scipy.optimize import milp, LinearConstraint +from scipy._lib._util import (check_random_state, MapWrapper, _get_nan, + rng_integers, _rename_parameter, _contains_nan, + AxisError) + +import scipy.special as special +from scipy import linalg +from . import distributions +from . import _mstats_basic as mstats_basic +from ._stats_mstats_common import (_find_repeats, linregress, theilslopes, + siegelslopes) +from ._stats import (_kendall_dis, _toint64, _weightedrankedtau, + _local_correlations) +from dataclasses import dataclass, field +from ._hypotests import _all_partitions +from ._stats_pythran import _compute_outer_prob_inside_method +from ._resampling import (MonteCarloMethod, PermutationMethod, BootstrapMethod, + monte_carlo_test, permutation_test, bootstrap, + _batch_generator) +from ._axis_nan_policy import (_axis_nan_policy_factory, + _broadcast_concatenate) +from ._binomtest import _binary_search_for_binom_tst as _binary_search +from scipy._lib._bunch import _make_tuple_bunch +from scipy import stats +from scipy.optimize import root_scalar +from scipy._lib.deprecation import _NoValue, _deprecate_positional_args +from scipy._lib._util import normalize_axis_index + +# In __all__ but deprecated for removal in SciPy 1.13.0 +from scipy._lib._util import float_factorial # noqa: F401 +from scipy.stats._mstats_basic import ( # noqa: F401 + PointbiserialrResult, Ttest_1sampResult, Ttest_relResult +) + + +# Functions/classes in other files should be added in `__init__.py`, not here +__all__ = ['find_repeats', 'gmean', 'hmean', 'pmean', 'mode', 'tmean', 'tvar', + 'tmin', 'tmax', 'tstd', 'tsem', 'moment', + 'skew', 'kurtosis', 'describe', 'skewtest', 'kurtosistest', + 'normaltest', 'jarque_bera', + 'scoreatpercentile', 'percentileofscore', + 'cumfreq', 'relfreq', 'obrientransform', + 'sem', 'zmap', 'zscore', 'gzscore', 'iqr', 'gstd', + 'median_abs_deviation', + 'sigmaclip', 'trimboth', 'trim1', 'trim_mean', + 'f_oneway', 'pearsonr', 'fisher_exact', + 'spearmanr', 'pointbiserialr', + 'kendalltau', 'weightedtau', 'multiscale_graphcorr', + 'linregress', 'siegelslopes', 'theilslopes', 'ttest_1samp', + 'ttest_ind', 'ttest_ind_from_stats', 'ttest_rel', + 'kstest', 'ks_1samp', 'ks_2samp', + 'chisquare', 'power_divergence', + 'tiecorrect', 'ranksums', 'kruskal', 'friedmanchisquare', + 'rankdata', 'combine_pvalues', 'quantile_test', + 'wasserstein_distance', 'wasserstein_distance_nd', 'energy_distance', + 'brunnermunzel', 'alexandergovern', + 'expectile'] + + +def _chk_asarray(a, axis): + if axis is None: + a = np.ravel(a) + outaxis = 0 + else: + a = np.asarray(a) + outaxis = axis + + if a.ndim == 0: + a = np.atleast_1d(a) + + return a, outaxis + + +def _chk2_asarray(a, b, axis): + if axis is None: + a = np.ravel(a) + b = np.ravel(b) + outaxis = 0 + else: + a = np.asarray(a) + b = np.asarray(b) + outaxis = axis + + if a.ndim == 0: + a = np.atleast_1d(a) + if b.ndim == 0: + b = np.atleast_1d(b) + + return a, b, outaxis + + +SignificanceResult = _make_tuple_bunch('SignificanceResult', + ['statistic', 'pvalue'], []) + + +# note that `weights` are paired with `x` +@_axis_nan_policy_factory( + lambda x: x, n_samples=1, n_outputs=1, too_small=0, paired=True, + result_to_tuple=lambda x: (x,), 
kwd_samples=['weights']) +def gmean(a, axis=0, dtype=None, weights=None): + r"""Compute the weighted geometric mean along the specified axis. + + The weighted geometric mean of the array :math:`a_i` associated to weights + :math:`w_i` is: + + .. math:: + + \exp \left( \frac{ \sum_{i=1}^n w_i \ln a_i }{ \sum_{i=1}^n w_i } + \right) \, , + + and, with equal weights, it gives: + + .. math:: + + \sqrt[n]{ \prod_{i=1}^n a_i } \, . + + Parameters + ---------- + a : array_like + Input array or object that can be converted to an array. + axis : int or None, optional + Axis along which the geometric mean is computed. Default is 0. + If None, compute over the whole array `a`. + dtype : dtype, optional + Type to which the input arrays are cast before the calculation is + performed. + weights : array_like, optional + The `weights` array must be broadcastable to the same shape as `a`. + Default is None, which gives each value a weight of 1.0. + + Returns + ------- + gmean : ndarray + See `dtype` parameter above. + + See Also + -------- + numpy.mean : Arithmetic average + numpy.average : Weighted average + hmean : Harmonic mean + + References + ---------- + .. [1] "Weighted Geometric Mean", *Wikipedia*, + https://en.wikipedia.org/wiki/Weighted_geometric_mean. + .. [2] Grossman, J., Grossman, M., Katz, R., "Averages: A New Approach", + Archimedes Foundation, 1983 + + Examples + -------- + >>> from scipy.stats import gmean + >>> gmean([1, 4]) + 2.0 + >>> gmean([1, 2, 3, 4, 5, 6, 7]) + 3.3800151591412964 + >>> gmean([1, 4, 7], weights=[3, 1, 3]) + 2.80668351922014 + + """ + + a = np.asarray(a, dtype=dtype) + + if weights is not None: + weights = np.asarray(weights, dtype=dtype) + + with np.errstate(divide='ignore'): + log_a = np.log(a) + + return np.exp(np.average(log_a, axis=axis, weights=weights)) + + +@_axis_nan_policy_factory( + lambda x: x, n_samples=1, n_outputs=1, too_small=0, paired=True, + result_to_tuple=lambda x: (x,), kwd_samples=['weights']) +def hmean(a, axis=0, dtype=None, *, weights=None): + r"""Calculate the weighted harmonic mean along the specified axis. + + The weighted harmonic mean of the array :math:`a_i` associated to weights + :math:`w_i` is: + + .. math:: + + \frac{ \sum_{i=1}^n w_i }{ \sum_{i=1}^n \frac{w_i}{a_i} } \, , + + and, with equal weights, it gives: + + .. math:: + + \frac{ n }{ \sum_{i=1}^n \frac{1}{a_i} } \, . + + Parameters + ---------- + a : array_like + Input array, masked array or object that can be converted to an array. + axis : int or None, optional + Axis along which the harmonic mean is computed. Default is 0. + If None, compute over the whole array `a`. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults to the + dtype of `a`, unless `a` has an integer `dtype` with a precision less + than that of the default platform integer. In that case, the default + platform integer is used. + weights : array_like, optional + The weights array can either be 1-D (in which case its length must be + the size of `a` along the given `axis`) or of the same shape as `a`. + Default is None, which gives each value a weight of 1.0. + + .. versionadded:: 1.9 + + Returns + ------- + hmean : ndarray + See `dtype` parameter above. 
+ + See Also + -------- + numpy.mean : Arithmetic average + numpy.average : Weighted average + gmean : Geometric mean + + Notes + ----- + The harmonic mean is computed over a single dimension of the input + array, axis=0 by default, or all values in the array if axis=None. + float64 intermediate and return values are used for integer inputs. + + References + ---------- + .. [1] "Weighted Harmonic Mean", *Wikipedia*, + https://en.wikipedia.org/wiki/Harmonic_mean#Weighted_harmonic_mean + .. [2] Ferger, F., "The nature and use of the harmonic mean", Journal of + the American Statistical Association, vol. 26, pp. 36-40, 1931 + + Examples + -------- + >>> from scipy.stats import hmean + >>> hmean([1, 4]) + 1.6000000000000001 + >>> hmean([1, 2, 3, 4, 5, 6, 7]) + 2.6997245179063363 + >>> hmean([1, 4, 7], weights=[3, 1, 3]) + 1.9029126213592233 + + """ + if not isinstance(a, np.ndarray): + a = np.array(a, dtype=dtype) + elif dtype: + # Must change the default dtype allowing array type + if isinstance(a, np.ma.MaskedArray): + a = np.ma.asarray(a, dtype=dtype) + else: + a = np.asarray(a, dtype=dtype) + + if np.all(a >= 0): + # Harmonic mean only defined if greater than or equal to zero. + if weights is not None: + weights = np.asanyarray(weights, dtype=dtype) + + with np.errstate(divide='ignore'): + return 1.0 / np.average(1.0 / a, axis=axis, weights=weights) + else: + raise ValueError("Harmonic mean only defined if all elements greater " + "than or equal to zero") + + +@_axis_nan_policy_factory( + lambda x: x, n_samples=1, n_outputs=1, too_small=0, paired=True, + result_to_tuple=lambda x: (x,), kwd_samples=['weights']) +def pmean(a, p, *, axis=0, dtype=None, weights=None): + r"""Calculate the weighted power mean along the specified axis. + + The weighted power mean of the array :math:`a_i` associated to weights + :math:`w_i` is: + + .. math:: + + \left( \frac{ \sum_{i=1}^n w_i a_i^p }{ \sum_{i=1}^n w_i } + \right)^{ 1 / p } \, , + + and, with equal weights, it gives: + + .. math:: + + \left( \frac{ 1 }{ n } \sum_{i=1}^n a_i^p \right)^{ 1 / p } \, . + + When ``p=0``, it returns the geometric mean. + + This mean is also called generalized mean or Hölder mean, and must not be + confused with the Kolmogorov generalized mean, also called + quasi-arithmetic mean or generalized f-mean [3]_. + + Parameters + ---------- + a : array_like + Input array, masked array or object that can be converted to an array. + p : int or float + Exponent. + axis : int or None, optional + Axis along which the power mean is computed. Default is 0. + If None, compute over the whole array `a`. + dtype : dtype, optional + Type of the returned array and of the accumulator in which the + elements are summed. If `dtype` is not specified, it defaults to the + dtype of `a`, unless `a` has an integer `dtype` with a precision less + than that of the default platform integer. In that case, the default + platform integer is used. + weights : array_like, optional + The weights array can either be 1-D (in which case its length must be + the size of `a` along the given `axis`) or of the same shape as `a`. + Default is None, which gives each value a weight of 1.0. + + Returns + ------- + pmean : ndarray, see `dtype` parameter above. + Output array containing the power mean values. 
+ + See Also + -------- + numpy.average : Weighted average + gmean : Geometric mean + hmean : Harmonic mean + + Notes + ----- + The power mean is computed over a single dimension of the input + array, ``axis=0`` by default, or all values in the array if ``axis=None``. + float64 intermediate and return values are used for integer inputs. + + .. versionadded:: 1.9 + + References + ---------- + .. [1] "Generalized Mean", *Wikipedia*, + https://en.wikipedia.org/wiki/Generalized_mean + .. [2] Norris, N., "Convexity properties of generalized mean value + functions", The Annals of Mathematical Statistics, vol. 8, + pp. 118-120, 1937 + .. [3] Bullen, P.S., Handbook of Means and Their Inequalities, 2003 + + Examples + -------- + >>> from scipy.stats import pmean, hmean, gmean + >>> pmean([1, 4], 1.3) + 2.639372938300652 + >>> pmean([1, 2, 3, 4, 5, 6, 7], 1.3) + 4.157111214492084 + >>> pmean([1, 4, 7], -2, weights=[3, 1, 3]) + 1.4969684896631954 + + For p=-1, power mean is equal to harmonic mean: + + >>> pmean([1, 4, 7], -1, weights=[3, 1, 3]) + 1.9029126213592233 + >>> hmean([1, 4, 7], weights=[3, 1, 3]) + 1.9029126213592233 + + For p=0, power mean is defined as the geometric mean: + + >>> pmean([1, 4, 7], 0, weights=[3, 1, 3]) + 2.80668351922014 + >>> gmean([1, 4, 7], weights=[3, 1, 3]) + 2.80668351922014 + + """ + if not isinstance(p, (int, float)): + raise ValueError("Power mean only defined for exponent of type int or " + "float.") + if p == 0: + return gmean(a, axis=axis, dtype=dtype, weights=weights) + + if not isinstance(a, np.ndarray): + a = np.array(a, dtype=dtype) + elif dtype: + # Must change the default dtype allowing array type + if isinstance(a, np.ma.MaskedArray): + a = np.ma.asarray(a, dtype=dtype) + else: + a = np.asarray(a, dtype=dtype) + + if np.all(a >= 0): + # Power mean only defined if greater than or equal to zero + if weights is not None: + weights = np.asanyarray(weights, dtype=dtype) + + with np.errstate(divide='ignore'): + return np.float_power( + np.average(np.float_power(a, p), axis=axis, weights=weights), + 1/p) + else: + raise ValueError("Power mean only defined if all elements greater " + "than or equal to zero") + + +ModeResult = namedtuple('ModeResult', ('mode', 'count')) + + +def _mode_result(mode, count): + # When a slice is empty, `_axis_nan_policy` automatically produces + # NaN for `mode` and `count`. This is a reasonable convention for `mode`, + # but `count` should not be NaN; it should be zero. + i = np.isnan(count) + if i.shape == (): + count = count.dtype(0) if i else count + else: + count[i] = 0 + return ModeResult(mode, count) + + +@_axis_nan_policy_factory(_mode_result, override={'vectorization': True, + 'nan_propagation': False}) +def mode(a, axis=0, nan_policy='propagate', keepdims=False): + r"""Return an array of the modal (most common) value in the passed array. + + If there is more than one such value, only one is returned. + The bin-count for the modal bins is also returned. + + Parameters + ---------- + a : array_like + Numeric, n-dimensional array of which to find mode(s). + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over + the whole array `a`. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. 
+ The following options are available (default is 'propagate'): + + * 'propagate': treats nan as it would treat any other value + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + keepdims : bool, optional + If set to ``False``, the `axis` over which the statistic is taken + is consumed (eliminated from the output array). If set to ``True``, + the `axis` is retained with size one, and the result will broadcast + correctly against the input array. + + Returns + ------- + mode : ndarray + Array of modal values. + count : ndarray + Array of counts for each mode. + + Notes + ----- + The mode is calculated using `numpy.unique`. + In NumPy versions 1.21 and after, all NaNs - even those with different + binary representations - are treated as equivalent and counted as separate + instances of the same value. + + By convention, the mode of an empty array is NaN, and the associated count + is zero. + + Examples + -------- + >>> import numpy as np + >>> a = np.array([[3, 0, 3, 7], + ... [3, 2, 6, 2], + ... [1, 7, 2, 8], + ... [3, 0, 6, 1], + ... [3, 2, 5, 5]]) + >>> from scipy import stats + >>> stats.mode(a, keepdims=True) + ModeResult(mode=array([[3, 0, 6, 1]]), count=array([[4, 2, 2, 1]])) + + To get mode of whole array, specify ``axis=None``: + + >>> stats.mode(a, axis=None, keepdims=True) + ModeResult(mode=[[3]], count=[[5]]) + >>> stats.mode(a, axis=None, keepdims=False) + ModeResult(mode=3, count=5) + + """ + # `axis`, `nan_policy`, and `keepdims` are handled by `_axis_nan_policy` + if not np.issubdtype(a.dtype, np.number): + message = ("Argument `a` is not recognized as numeric. " + "Support for input that cannot be coerced to a numeric " + "array was deprecated in SciPy 1.9.0 and removed in SciPy " + "1.11.0. Please consider `np.unique`.") + raise TypeError(message) + + if a.size == 0: + NaN = _get_nan(a) + return ModeResult(*np.array([NaN, 0], dtype=NaN.dtype)) + + vals, cnts = np.unique(a, return_counts=True) + modes, counts = vals[cnts.argmax()], cnts.max() + return ModeResult(modes[()], counts[()]) + + +def _put_nan_to_limits(a, limits, inclusive): + """Put NaNs in an array for values outside of given limits. + + This is primarily a utility function. + + Parameters + ---------- + a : array + limits : (float or None, float or None) + A tuple consisting of the (lower limit, upper limit). Values in the + input array less than the lower limit or greater than the upper limit + will be replaced with `np.nan`. None implies no limit. + inclusive : (bool, bool) + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to lower or upper are allowed. + + """ + if limits is None: + return a + mask = np.full_like(a, False, dtype=np.bool_) + lower_limit, upper_limit = limits + lower_include, upper_include = inclusive + if lower_limit is not None: + mask |= (a < lower_limit) if lower_include else a <= lower_limit + if upper_limit is not None: + mask |= (a > upper_limit) if upper_include else a >= upper_limit + if np.all(mask): + raise ValueError("No array values within given limits") + if np.any(mask): + a = a.copy() if np.issubdtype(a.dtype, np.inexact) else a.astype(np.float64) + a[mask] = np.nan + return a + + +@_axis_nan_policy_factory( + lambda x: x, n_outputs=1, default_axis=None, + result_to_tuple=lambda x: (x,) +) +def tmean(a, limits=None, inclusive=(True, True), axis=None): + """Compute the trimmed mean. 
+ + This function finds the arithmetic mean of given values, ignoring values + outside the given `limits`. + + Parameters + ---------- + a : array_like + Array of values. + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None (default), then all + values are used. Either of the limit values in the tuple can also be + None representing a half-open interval. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + axis : int or None, optional + Axis along which to compute test. Default is None. + + Returns + ------- + tmean : ndarray + Trimmed mean. + + See Also + -------- + trim_mean : Returns mean after trimming a proportion from both tails. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x = np.arange(20) + >>> stats.tmean(x) + 9.5 + >>> stats.tmean(x, (3,17)) + 10.0 + + """ + a = _put_nan_to_limits(a, limits, inclusive) + return np.nanmean(a, axis=axis) + + +@_axis_nan_policy_factory( + lambda x: x, n_outputs=1, result_to_tuple=lambda x: (x,) +) +def tvar(a, limits=None, inclusive=(True, True), axis=0, ddof=1): + """Compute the trimmed variance. + + This function computes the sample variance of an array of values, + while ignoring values which are outside of given `limits`. + + Parameters + ---------- + a : array_like + Array of values. + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None, then all values are + used. Either of the limit values in the tuple can also be None + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over the + whole array `a`. + ddof : int, optional + Delta degrees of freedom. Default is 1. + + Returns + ------- + tvar : float + Trimmed variance. + + Notes + ----- + `tvar` computes the unbiased sample variance, i.e. it uses a correction + factor ``n / (n - 1)``. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x = np.arange(20) + >>> stats.tvar(x) + 35.0 + >>> stats.tvar(x, (3,17)) + 20.0 + + """ + a = _put_nan_to_limits(a, limits, inclusive) + return np.nanvar(a, ddof=ddof, axis=axis) + + +@_axis_nan_policy_factory( + lambda x: x, n_outputs=1, result_to_tuple=lambda x: (x,) +) +def tmin(a, lowerlimit=None, axis=0, inclusive=True, nan_policy='propagate'): + """Compute the trimmed minimum. + + This function finds the minimum value of an array `a` along the + specified axis, but only considering values greater than a specified + lower limit. + + Parameters + ---------- + a : array_like + Array of values. + lowerlimit : None or float, optional + Values in the input array less than the given limit will be ignored. + When lowerlimit is None, then all values are used. The default value + is None. + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over the + whole array `a`. 
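# Illustrative sketch: `tmean` and `tvar` above are equivalent to replacing
# values outside `limits` with NaN (as `_put_nan_to_limits` does) and applying
# the nan-aware reductions. Expected values match the docstring examples.
import numpy as np
from scipy import stats

x = np.arange(20, dtype=float)
masked = np.where((x < 3) | (x > 17), np.nan, x)
assert np.isclose(stats.tmean(x, (3, 17)), np.nanmean(masked))        # 10.0
assert np.isclose(stats.tvar(x, (3, 17)), np.nanvar(masked, ddof=1))  # 20.0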
+ inclusive : {True, False}, optional + This flag determines whether values exactly equal to the lower limit + are included. The default value is True. + + Returns + ------- + tmin : float, int or ndarray + Trimmed minimum. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x = np.arange(20) + >>> stats.tmin(x) + 0 + + >>> stats.tmin(x, 13) + 13 + + >>> stats.tmin(x, 13, inclusive=False) + 14 + + """ + dtype = a.dtype + a = _put_nan_to_limits(a, (lowerlimit, None), (inclusive, None)) + res = np.nanmin(a, axis=axis) + if not np.any(np.isnan(res)): + # needed if input is of integer dtype + return res.astype(dtype, copy=False) + return res + + +@_axis_nan_policy_factory( + lambda x: x, n_outputs=1, result_to_tuple=lambda x: (x,) +) +def tmax(a, upperlimit=None, axis=0, inclusive=True, nan_policy='propagate'): + """Compute the trimmed maximum. + + This function computes the maximum value of an array along a given axis, + while ignoring values larger than a specified upper limit. + + Parameters + ---------- + a : array_like + Array of values. + upperlimit : None or float, optional + Values in the input array greater than the given limit will be ignored. + When upperlimit is None, then all values are used. The default value + is None. + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over the + whole array `a`. + inclusive : {True, False}, optional + This flag determines whether values exactly equal to the upper limit + are included. The default value is True. + + Returns + ------- + tmax : float, int or ndarray + Trimmed maximum. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x = np.arange(20) + >>> stats.tmax(x) + 19 + + >>> stats.tmax(x, 13) + 13 + + >>> stats.tmax(x, 13, inclusive=False) + 12 + + """ + dtype = a.dtype + a = _put_nan_to_limits(a, (None, upperlimit), (None, inclusive)) + res = np.nanmax(a, axis=axis) + if not np.any(np.isnan(res)): + # needed if input is of integer dtype + return res.astype(dtype, copy=False) + return res + + +@_axis_nan_policy_factory( + lambda x: x, n_outputs=1, result_to_tuple=lambda x: (x,) +) +def tstd(a, limits=None, inclusive=(True, True), axis=0, ddof=1): + """Compute the trimmed sample standard deviation. + + This function finds the sample standard deviation of given values, + ignoring values outside the given `limits`. + + Parameters + ---------- + a : array_like + Array of values. + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None, then all values are + used. Either of the limit values in the tuple can also be None + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over the + whole array `a`. + ddof : int, optional + Delta degrees of freedom. Default is 1. + + Returns + ------- + tstd : float + Trimmed sample standard deviation. + + Notes + ----- + `tstd` computes the unbiased sample standard deviation, i.e. it uses a + correction factor ``n / (n - 1)``. 
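# Illustrative sketch: `tmin`/`tmax` above are the nan-aware extrema after
# values outside the one-sided limit are replaced with NaN; `inclusive`
# controls whether values exactly at the limit survive.
import numpy as np
from scipy import stats

x = np.arange(20, dtype=float)
assert stats.tmin(x, 13) == np.nanmin(np.where(x < 13, np.nan, x))                    # 13.0
assert stats.tmax(x, 13, inclusive=False) == np.nanmax(np.where(x >= 13, np.nan, x))  # 12.0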
+ + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x = np.arange(20) + >>> stats.tstd(x) + 5.9160797830996161 + >>> stats.tstd(x, (3,17)) + 4.4721359549995796 + + """ + return np.sqrt(tvar(a, limits, inclusive, axis, ddof, _no_deco=True)) + + +@_axis_nan_policy_factory( + lambda x: x, n_outputs=1, result_to_tuple=lambda x: (x,) +) +def tsem(a, limits=None, inclusive=(True, True), axis=0, ddof=1): + """Compute the trimmed standard error of the mean. + + This function finds the standard error of the mean for given + values, ignoring values outside the given `limits`. + + Parameters + ---------- + a : array_like + Array of values. + limits : None or (lower limit, upper limit), optional + Values in the input array less than the lower limit or greater than the + upper limit will be ignored. When limits is None, then all values are + used. Either of the limit values in the tuple can also be None + representing a half-open interval. The default value is None. + inclusive : (bool, bool), optional + A tuple consisting of the (lower flag, upper flag). These flags + determine whether values exactly equal to the lower or upper limits + are included. The default value is (True, True). + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over the + whole array `a`. + ddof : int, optional + Delta degrees of freedom. Default is 1. + + Returns + ------- + tsem : float + Trimmed standard error of the mean. + + Notes + ----- + `tsem` uses unbiased sample standard deviation, i.e. it uses a + correction factor ``n / (n - 1)``. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x = np.arange(20) + >>> stats.tsem(x) + 1.3228756555322954 + >>> stats.tsem(x, (3,17)) + 1.1547005383792515 + + """ + a = _put_nan_to_limits(a, limits, inclusive) + sd = np.sqrt(np.nanvar(a, ddof=ddof, axis=axis)) + n_obs = (~np.isnan(a)).sum(axis=axis) + return sd / np.sqrt(n_obs, dtype=sd.dtype) + + +##################################### +# MOMENTS # +##################################### + + +def _moment_outputs(kwds): + moment = np.atleast_1d(kwds.get('order', 1)) + if moment.size == 0: + raise ValueError("'order' must be a scalar or a non-empty 1D " + "list/array.") + return len(moment) + + +def _moment_result_object(*args): + if len(args) == 1: + return args[0] + return np.asarray(args) + +# `moment` fits into the `_axis_nan_policy` pattern, but it is a bit unusual +# because the number of outputs is variable. Specifically, +# `result_to_tuple=lambda x: (x,)` may be surprising for a function that +# can produce more than one output, but it is intended here. +# When `moment is called to produce the output: +# - `result_to_tuple` packs the returned array into a single-element tuple, +# - `_moment_result_object` extracts and returns that single element. +# However, when the input array is empty, `moment` is never called. Instead, +# - `_check_empty_inputs` is used to produce an empty array with the +# appropriate dimensions. +# - A list comprehension creates the appropriate number of copies of this +# array, depending on `n_outputs`. +# - This list - which may have multiple elements - is passed into +# `_moment_result_object`. +# - If there is a single output, `_moment_result_object` extracts and returns +# the single output from the list. +# - If there are multiple outputs, and therefore multiple elements in the list, +# `_moment_result_object` converts the list of arrays to a single array and +# returns it. 
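# Illustrative sketch: `tstd` is the square root of `tvar` with the same
# limits/ddof, and `tsem` divides it by the square root of the number of
# observations that fall inside the limits (values from the docstrings).
import numpy as np
from scipy import stats

x = np.arange(20, dtype=float)
lim = (3, 17)
n = np.count_nonzero((x >= 3) & (x <= 17))
assert np.isclose(stats.tstd(x, lim), np.sqrt(stats.tvar(x, lim)))      # ~4.4721
assert np.isclose(stats.tsem(x, lim), stats.tstd(x, lim) / np.sqrt(n))  # ~1.1547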
+# Currently this leads to a slight inconsistency: when the input array is +# empty, there is no distinction between the `moment` function being called +# with parameter `order=1` and `order=[1]`; the latter *should* produce +# the same as the former but with a singleton zeroth dimension. +@_rename_parameter('moment', 'order') +@_axis_nan_policy_factory( # noqa: E302 + _moment_result_object, n_samples=1, result_to_tuple=lambda x: (x,), + n_outputs=_moment_outputs +) +def moment(a, order=1, axis=0, nan_policy='propagate', *, center=None): + r"""Calculate the nth moment about the mean for a sample. + + A moment is a specific quantitative measure of the shape of a set of + points. It is often used to calculate coefficients of skewness and kurtosis + due to its close relationship with them. + + Parameters + ---------- + a : array_like + Input array. + order : int or array_like of ints, optional + Order of central moment that is returned. Default is 1. + axis : int or None, optional + Axis along which the central moment is computed. Default is 0. + If None, compute over the whole array `a`. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + center : float or None, optional + The point about which moments are taken. This can be the sample mean, + the origin, or any other be point. If `None` (default) compute the + center as the sample mean. + + Returns + ------- + n-th moment about the `center` : ndarray or float + The appropriate moment along the given axis or over all values if axis + is None. The denominator for the moment calculation is the number of + observations, no degrees of freedom correction is done. + + See Also + -------- + kurtosis, skew, describe + + Notes + ----- + The k-th moment of a data sample is: + + .. math:: + + m_k = \frac{1}{n} \sum_{i = 1}^n (x_i - c)^k + + Where `n` is the number of samples, and `c` is the center around which the + moment is calculated. This function uses exponentiation by squares [1]_ for + efficiency. + + Note that, if `a` is an empty array (``a.size == 0``), array `moment` with + one element (`moment.size == 1`) is treated the same as scalar `moment` + (``np.isscalar(moment)``). This might produce arrays of unexpected shape. + + References + ---------- + .. [1] https://eli.thegreenplace.net/2009/03/21/efficient-integer-exponentiation-algorithms + + Examples + -------- + >>> from scipy.stats import moment + >>> moment([1, 2, 3, 4, 5], order=1) + 0.0 + >>> moment([1, 2, 3, 4, 5], order=2) + 2.0 + + """ + moment = order # parameter was renamed + a, axis = _chk_asarray(a, axis) + + # for array_like moment input, return a value for each. 
+ if not np.isscalar(moment): + # Calculated the mean once at most, and only if it will be used + calculate_mean = center is None and np.any(np.asarray(moment) > 1) + mean = a.mean(axis, keepdims=True) if calculate_mean else None + mmnt = [] + for i in moment: + if center is None and i > 1: + mmnt.append(_moment(a, i, axis, mean=mean)) + else: + mmnt.append(_moment(a, i, axis, mean=center)) + return np.array(mmnt) + else: + return _moment(a, moment, axis, mean=center) + + +# Moment with optional pre-computed mean, equal to a.mean(axis, keepdims=True) +def _moment(a, moment, axis, *, mean=None): + if np.abs(moment - np.round(moment)) > 0: + raise ValueError("All moment parameters must be integers") + + # moment of empty array is the same regardless of order + if a.size == 0: + return np.mean(a, axis=axis) + + dtype = a.dtype.type if a.dtype.kind in 'fc' else np.float64 + + if moment == 0 or (moment == 1 and mean is None): + # By definition the zeroth moment is always 1, and the first *central* + # moment is 0. + shape = list(a.shape) + del shape[axis] + + if len(shape) == 0: + return dtype(1.0 if moment == 0 else 0.0) + else: + return (np.ones(shape, dtype=dtype) if moment == 0 + else np.zeros(shape, dtype=dtype)) + else: + # Exponentiation by squares: form exponent sequence + n_list = [moment] + current_n = moment + while current_n > 2: + if current_n % 2: + current_n = (current_n - 1) / 2 + else: + current_n /= 2 + n_list.append(current_n) + + # Starting point for exponentiation by squares + mean = (a.mean(axis, keepdims=True) if mean is None + else np.asarray(mean, dtype=dtype)[()]) + a_zero_mean = a - mean + + eps = np.finfo(a_zero_mean.dtype).resolution * 10 + with np.errstate(divide='ignore', invalid='ignore'): + rel_diff = np.max(np.abs(a_zero_mean), axis=axis, + keepdims=True) / np.abs(mean) + with np.errstate(invalid='ignore'): + precision_loss = np.any(rel_diff < eps) + n = a.shape[axis] if axis is not None else a.size + if precision_loss and n > 1: + message = ("Precision loss occurred in moment calculation due to " + "catastrophic cancellation. This occurs when the data " + "are nearly identical. Results may be unreliable.") + warnings.warn(message, RuntimeWarning, stacklevel=4) + + if n_list[-1] == 1: + s = a_zero_mean.copy() + else: + s = a_zero_mean**2 + + # Perform multiplications + for n in n_list[-2::-1]: + s = s**2 + if n % 2: + s *= a_zero_mean + return np.mean(s, axis) + + +def _var(x, axis=0, ddof=0, mean=None): + # Calculate variance of sample, warning if precision is lost + var = _moment(x, 2, axis, mean=mean) + if ddof != 0: + n = x.shape[axis] if axis is not None else x.size + var *= np.divide(n, n-ddof) # to avoid error on division by zero + return var + + +@_axis_nan_policy_factory( + lambda x: x, result_to_tuple=lambda x: (x,), n_outputs=1 +) +def skew(a, axis=0, bias=True, nan_policy='propagate'): + r"""Compute the sample skewness of a data set. + + For normally distributed data, the skewness should be about zero. For + unimodal continuous distributions, a skewness value greater than zero means + that there is more weight in the right tail of the distribution. The + function `skewtest` can be used to determine if the skewness value + is close enough to zero, statistically speaking. + + Parameters + ---------- + a : ndarray + Input array. + axis : int or None, optional + Axis along which skewness is calculated. Default is 0. + If None, compute over the whole array `a`. + bias : bool, optional + If False, then the calculations are corrected for statistical bias. 
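# Illustrative sketch: the k-th central moment from its definition,
# m_k = mean((x - mean(x))**k), which `moment`/`_moment` above evaluate via
# exponentiation by squares. Data are the docstring's example values.
import numpy as np
from scipy.stats import moment

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
for k in (1, 2, 3):
    direct = np.mean((x - x.mean()) ** k)
    assert np.isclose(direct, moment(x, order=k))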
+ nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + skewness : ndarray + The skewness of values along an axis, returning NaN where all values + are equal. + + Notes + ----- + The sample skewness is computed as the Fisher-Pearson coefficient + of skewness, i.e. + + .. math:: + + g_1=\frac{m_3}{m_2^{3/2}} + + where + + .. math:: + + m_i=\frac{1}{N}\sum_{n=1}^N(x[n]-\bar{x})^i + + is the biased sample :math:`i\texttt{th}` central moment, and + :math:`\bar{x}` is + the sample mean. If ``bias`` is False, the calculations are + corrected for bias and the value computed is the adjusted + Fisher-Pearson standardized moment coefficient, i.e. + + .. math:: + + G_1=\frac{k_3}{k_2^{3/2}}= + \frac{\sqrt{N(N-1)}}{N-2}\frac{m_3}{m_2^{3/2}}. + + References + ---------- + .. [1] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probability and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. + Section 2.2.24.1 + + Examples + -------- + >>> from scipy.stats import skew + >>> skew([1, 2, 3, 4, 5]) + 0.0 + >>> skew([2, 8, 0, 4, 1, 9, 9, 0]) + 0.2650554122698573 + + """ + a, axis = _chk_asarray(a, axis) + n = a.shape[axis] + + contains_nan, nan_policy = _contains_nan(a, nan_policy) + + if contains_nan and nan_policy == 'omit': + a = ma.masked_invalid(a) + return mstats_basic.skew(a, axis, bias) + + mean = a.mean(axis, keepdims=True) + m2 = _moment(a, 2, axis, mean=mean) + m3 = _moment(a, 3, axis, mean=mean) + with np.errstate(all='ignore'): + zero = (m2 <= (np.finfo(m2.dtype).resolution * mean.squeeze(axis))**2) + vals = np.where(zero, np.nan, m3 / m2**1.5) + if not bias: + can_correct = ~zero & (n > 2) + if can_correct.any(): + m2 = np.extract(can_correct, m2) + m3 = np.extract(can_correct, m3) + nval = np.sqrt((n - 1.0) * n) / (n - 2.0) * m3 / m2**1.5 + np.place(vals, can_correct, nval) + + return vals[()] + + +@_axis_nan_policy_factory( + lambda x: x, result_to_tuple=lambda x: (x,), n_outputs=1 +) +def kurtosis(a, axis=0, fisher=True, bias=True, nan_policy='propagate'): + """Compute the kurtosis (Fisher or Pearson) of a dataset. + + Kurtosis is the fourth central moment divided by the square of the + variance. If Fisher's definition is used, then 3.0 is subtracted from + the result to give 0.0 for a normal distribution. + + If bias is False then the kurtosis is calculated using k statistics to + eliminate bias coming from biased moment estimators + + Use `kurtosistest` to see if result is close enough to normal. + + Parameters + ---------- + a : array + Data for which the kurtosis is calculated. + axis : int or None, optional + Axis along which the kurtosis is calculated. Default is 0. + If None, compute over the whole array `a`. + fisher : bool, optional + If True, Fisher's definition is used (normal ==> 0.0). If False, + Pearson's definition is used (normal ==> 3.0). + bias : bool, optional + If False, then the calculations are corrected for statistical bias. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. 'propagate' returns nan, + 'raise' throws an error, 'omit' performs the calculations ignoring nan + values. Default is 'propagate'. + + Returns + ------- + kurtosis : array + The kurtosis of values along an axis, returning NaN where all values + are equal. 
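# Illustrative sketch: skewness and kurtosis from the biased central moments,
# g1 = m3 / m2**1.5 and m4 / m2**2 (minus 3 for Fisher's definition), matching
# the default bias=True behaviour of `skew` above and `kurtosis` below.
import numpy as np
from scipy import stats

x = np.array([2.0, 8.0, 0.0, 4.0, 1.0, 9.0, 9.0, 0.0])   # docstring example data
d = x - x.mean()
m2, m3, m4 = np.mean(d**2), np.mean(d**3), np.mean(d**4)
assert np.isclose(m3 / m2**1.5, stats.skew(x))                  # ~0.2651
assert np.isclose(m4 / m2**2, stats.kurtosis(x, fisher=False))  # Pearson
assert np.isclose(m4 / m2**2 - 3, stats.kurtosis(x))            # Fisher (excess)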
+ + References + ---------- + .. [1] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probability and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. + + Examples + -------- + In Fisher's definition, the kurtosis of the normal distribution is zero. + In the following example, the kurtosis is close to zero, because it was + calculated from the dataset, not from the continuous distribution. + + >>> import numpy as np + >>> from scipy.stats import norm, kurtosis + >>> data = norm.rvs(size=1000, random_state=3) + >>> kurtosis(data) + -0.06928694200380558 + + The distribution with a higher kurtosis has a heavier tail. + The zero valued kurtosis of the normal distribution in Fisher's definition + can serve as a reference point. + + >>> import matplotlib.pyplot as plt + >>> import scipy.stats as stats + >>> from scipy.stats import kurtosis + + >>> x = np.linspace(-5, 5, 100) + >>> ax = plt.subplot() + >>> distnames = ['laplace', 'norm', 'uniform'] + + >>> for distname in distnames: + ... if distname == 'uniform': + ... dist = getattr(stats, distname)(loc=-2, scale=4) + ... else: + ... dist = getattr(stats, distname) + ... data = dist.rvs(size=1000) + ... kur = kurtosis(data, fisher=True) + ... y = dist.pdf(x) + ... ax.plot(x, y, label="{}, {}".format(distname, round(kur, 3))) + ... ax.legend() + + The Laplace distribution has a heavier tail than the normal distribution. + The uniform distribution (which has negative kurtosis) has the thinnest + tail. + + """ + a, axis = _chk_asarray(a, axis) + + contains_nan, nan_policy = _contains_nan(a, nan_policy) + + if contains_nan and nan_policy == 'omit': + a = ma.masked_invalid(a) + return mstats_basic.kurtosis(a, axis, fisher, bias) + + n = a.shape[axis] + mean = a.mean(axis, keepdims=True) + m2 = _moment(a, 2, axis, mean=mean) + m4 = _moment(a, 4, axis, mean=mean) + with np.errstate(all='ignore'): + zero = (m2 <= (np.finfo(m2.dtype).resolution * mean.squeeze(axis))**2) + vals = np.where(zero, np.nan, m4 / m2**2.0) + + if not bias: + can_correct = ~zero & (n > 3) + if can_correct.any(): + m2 = np.extract(can_correct, m2) + m4 = np.extract(can_correct, m4) + nval = 1.0/(n-2)/(n-3) * ((n**2-1.0)*m4/m2**2.0 - 3*(n-1)**2.0) + np.place(vals, can_correct, nval + 3.0) + + return vals[()] - 3 if fisher else vals[()] + + +DescribeResult = namedtuple('DescribeResult', + ('nobs', 'minmax', 'mean', 'variance', 'skewness', + 'kurtosis')) + + +def describe(a, axis=0, ddof=1, bias=True, nan_policy='propagate'): + """Compute several descriptive statistics of the passed array. + + Parameters + ---------- + a : array_like + Input data. + axis : int or None, optional + Axis along which statistics are calculated. Default is 0. + If None, compute over the whole array `a`. + ddof : int, optional + Delta degrees of freedom (only for variance). Default is 1. + bias : bool, optional + If False, then the skewness and kurtosis calculations are corrected + for statistical bias. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + nobs : int or ndarray of ints + Number of observations (length of data along `axis`). + When 'omit' is chosen as nan_policy, the length along each axis + slice is counted separately. + minmax: tuple of ndarrays or floats + Minimum and maximum value of `a` along the given axis. 
+ mean : ndarray or float + Arithmetic mean of `a` along the given axis. + variance : ndarray or float + Unbiased variance of `a` along the given axis; denominator is number + of observations minus one. + skewness : ndarray or float + Skewness of `a` along the given axis, based on moment calculations + with denominator equal to the number of observations, i.e. no degrees + of freedom correction. + kurtosis : ndarray or float + Kurtosis (Fisher) of `a` along the given axis. The kurtosis is + normalized so that it is zero for the normal distribution. No + degrees of freedom are used. + + See Also + -------- + skew, kurtosis + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> a = np.arange(10) + >>> stats.describe(a) + DescribeResult(nobs=10, minmax=(0, 9), mean=4.5, + variance=9.166666666666666, skewness=0.0, + kurtosis=-1.2242424242424244) + >>> b = [[1, 2], [3, 4]] + >>> stats.describe(b) + DescribeResult(nobs=2, minmax=(array([1, 2]), array([3, 4])), + mean=array([2., 3.]), variance=array([2., 2.]), + skewness=array([0., 0.]), kurtosis=array([-2., -2.])) + + """ + a, axis = _chk_asarray(a, axis) + + contains_nan, nan_policy = _contains_nan(a, nan_policy) + + if contains_nan and nan_policy == 'omit': + a = ma.masked_invalid(a) + return mstats_basic.describe(a, axis, ddof, bias) + + if a.size == 0: + raise ValueError("The input must not be empty.") + n = a.shape[axis] + mm = (np.min(a, axis=axis), np.max(a, axis=axis)) + m = np.mean(a, axis=axis) + v = _var(a, axis=axis, ddof=ddof) + sk = skew(a, axis, bias=bias) + kurt = kurtosis(a, axis, bias=bias) + + return DescribeResult(n, mm, m, v, sk, kurt) + +##################################### +# NORMALITY TESTS # +##################################### + + +def _get_pvalue(statistic, distribution, alternative, symmetric=True): + """Get p-value given the statistic, (continuous) distribution, and alternative""" + + if alternative == 'less': + pvalue = distribution.cdf(statistic) + elif alternative == 'greater': + pvalue = distribution.sf(statistic) + elif alternative == 'two-sided': + pvalue = 2 * (distribution.sf(np.abs(statistic)) if symmetric + else np.minimum(distribution.cdf(statistic), + distribution.sf(statistic))) + else: + message = "`alternative` must be 'less', 'greater', or 'two-sided'." + raise ValueError(message) + + return pvalue + + +SkewtestResult = namedtuple('SkewtestResult', ('statistic', 'pvalue')) + + +@_axis_nan_policy_factory(SkewtestResult, n_samples=1, too_small=7) +def skewtest(a, axis=0, nan_policy='propagate', alternative='two-sided'): + r"""Test whether the skew is different from the normal distribution. + + This function tests the null hypothesis that the skewness of + the population that the sample was drawn from is the same + as that of a corresponding normal distribution. + + Parameters + ---------- + a : array + The data to be tested. + axis : int or None, optional + Axis along which statistics are calculated. Default is 0. + If None, compute over the whole array `a`. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. 
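# Illustrative sketch: each field of `describe` above agrees with the
# corresponding standalone statistic (variance uses ddof=1 by default,
# skewness and kurtosis are the biased estimates).
import numpy as np
from scipy import stats

a = np.arange(10.0)
d = stats.describe(a)
assert d.nobs == a.size
assert d.minmax == (a.min(), a.max())
assert np.isclose(d.variance, np.var(a, ddof=1))
assert np.isclose(d.skewness, stats.skew(a))
assert np.isclose(d.kurtosis, stats.kurtosis(a))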
+ The following options are available: + + * 'two-sided': the skewness of the distribution underlying the sample + is different from that of the normal distribution (i.e. 0) + * 'less': the skewness of the distribution underlying the sample + is less than that of the normal distribution + * 'greater': the skewness of the distribution underlying the sample + is greater than that of the normal distribution + + .. versionadded:: 1.7.0 + + Returns + ------- + statistic : float + The computed z-score for this test. + pvalue : float + The p-value for the hypothesis test. + + Notes + ----- + The sample size must be at least 8. + + References + ---------- + .. [1] R. B. D'Agostino, A. J. Belanger and R. B. D'Agostino Jr., + "A suggestion for using powerful and informative tests of + normality", American Statistician 44, pp. 316-321, 1990. + .. [2] Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test + for normality (complete samples). Biometrika, 52(3/4), 591-611. + .. [3] B. Phipson and G. K. Smyth. "Permutation P-values Should Never Be + Zero: Calculating Exact P-values When Permutations Are Randomly + Drawn." Statistical Applications in Genetics and Molecular Biology + 9.1 (2010). + + Examples + -------- + Suppose we wish to infer from measurements whether the weights of adult + human males in a medical study are not normally distributed [2]_. + The weights (lbs) are recorded in the array ``x`` below. + + >>> import numpy as np + >>> x = np.array([148, 154, 158, 160, 161, 162, 166, 170, 182, 195, 236]) + + The skewness test from [1]_ begins by computing a statistic based on the + sample skewness. + + >>> from scipy import stats + >>> res = stats.skewtest(x) + >>> res.statistic + 2.7788579769903414 + + Because normal distributions have zero skewness, the magnitude of this + statistic tends to be low for samples drawn from a normal distribution. + + The test is performed by comparing the observed value of the + statistic against the null distribution: the distribution of statistic + values derived under the null hypothesis that the weights were drawn from + a normal distribution. + + For this test, the null distribution of the statistic for very large + samples is the standard normal distribution. + + >>> import matplotlib.pyplot as plt + >>> dist = stats.norm() + >>> st_val = np.linspace(-5, 5, 100) + >>> pdf = dist.pdf(st_val) + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> def st_plot(ax): # we'll reuse this + ... ax.plot(st_val, pdf) + ... ax.set_title("Skew Test Null Distribution") + ... ax.set_xlabel("statistic") + ... ax.set_ylabel("probability density") + >>> st_plot(ax) + >>> plt.show() + + The comparison is quantified by the p-value: the proportion of values in + the null distribution as extreme or more extreme than the observed + value of the statistic. In a two-sided test, elements of the null + distribution greater than the observed statistic and elements of the null + distribution less than the negative of the observed statistic are both + considered "more extreme". 
+ + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> st_plot(ax) + >>> pvalue = dist.cdf(-res.statistic) + dist.sf(res.statistic) + >>> annotation = (f'p-value={pvalue:.3f}\n(shaded area)') + >>> props = dict(facecolor='black', width=1, headwidth=5, headlength=8) + >>> _ = ax.annotate(annotation, (3, 0.005), (3.25, 0.02), arrowprops=props) + >>> i = st_val >= res.statistic + >>> ax.fill_between(st_val[i], y1=0, y2=pdf[i], color='C0') + >>> i = st_val <= -res.statistic + >>> ax.fill_between(st_val[i], y1=0, y2=pdf[i], color='C0') + >>> ax.set_xlim(-5, 5) + >>> ax.set_ylim(0, 0.1) + >>> plt.show() + >>> res.pvalue + 0.005455036974740185 + + If the p-value is "small" - that is, if there is a low probability of + sampling data from a normally distributed population that produces such an + extreme value of the statistic - this may be taken as evidence against + the null hypothesis in favor of the alternative: the weights were not + drawn from a normal distribution. Note that: + + - The inverse is not true; that is, the test is not used to provide + evidence for the null hypothesis. + - The threshold for values that will be considered "small" is a choice that + should be made before the data is analyzed [3]_ with consideration of the + risks of both false positives (incorrectly rejecting the null hypothesis) + and false negatives (failure to reject a false null hypothesis). + + Note that the standard normal distribution provides an asymptotic + approximation of the null distribution; it is only accurate for samples + with many observations. For small samples like ours, + `scipy.stats.monte_carlo_test` may provide a more accurate, albeit + stochastic, approximation of the exact p-value. + + >>> def statistic(x, axis): + ... # get just the skewtest statistic; ignore the p-value + ... return stats.skewtest(x, axis=axis).statistic + >>> res = stats.monte_carlo_test(x, stats.norm.rvs, statistic) + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> st_plot(ax) + >>> ax.hist(res.null_distribution, np.linspace(-5, 5, 50), + ... density=True) + >>> ax.legend(['aymptotic approximation\n(many observations)', + ... 'Monte Carlo approximation\n(11 observations)']) + >>> plt.show() + >>> res.pvalue + 0.0062 # may vary + + In this case, the asymptotic approximation and Monte Carlo approximation + agree fairly closely, even for our small sample. + + """ + b2 = skew(a, axis) + n = a.shape[axis] + if n < 8: + raise ValueError( + "skewtest is not valid with less than 8 samples; %i samples" + " were given." % int(n)) + y = b2 * math.sqrt(((n + 1) * (n + 3)) / (6.0 * (n - 2))) + beta2 = (3.0 * (n**2 + 27*n - 70) * (n+1) * (n+3) / + ((n-2.0) * (n+5) * (n+7) * (n+9))) + W2 = -1 + math.sqrt(2 * (beta2 - 1)) + delta = 1 / math.sqrt(0.5 * math.log(W2)) + alpha = math.sqrt(2.0 / (W2 - 1)) + y = np.where(y == 0, 1, y) + Z = delta * np.log(y / alpha + np.sqrt((y / alpha)**2 + 1)) + + pvalue = _get_pvalue(Z, distributions.norm, alternative) + return SkewtestResult(Z[()], pvalue[()]) + + +KurtosistestResult = namedtuple('KurtosistestResult', ('statistic', 'pvalue')) + + +@_axis_nan_policy_factory(KurtosistestResult, n_samples=1, too_small=4) +def kurtosistest(a, axis=0, nan_policy='propagate', alternative='two-sided'): + r"""Test whether a dataset has normal kurtosis. + + This function tests the null hypothesis that the kurtosis + of the population from which the sample was drawn is that + of the normal distribution. + + Parameters + ---------- + a : array + Array of the sample data. 
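# Illustrative sketch: `skewtest` above returns a z-score, and for the default
# two-sided alternative its p-value is the standard-normal mass beyond +/- the
# statistic (the quantity the docstring example shades). Data from the docstring.
import numpy as np
from scipy import stats

x = np.array([148, 154, 158, 160, 161, 162, 166, 170, 182, 195, 236])
res = stats.skewtest(x)
assert np.isclose(res.pvalue, 2 * stats.norm.sf(abs(res.statistic)))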
+ axis : int or None, optional + Axis along which to compute test. Default is 0. If None, + compute over the whole array `a`. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the kurtosis of the distribution underlying the sample + is different from that of the normal distribution + * 'less': the kurtosis of the distribution underlying the sample + is less than that of the normal distribution + * 'greater': the kurtosis of the distribution underlying the sample + is greater than that of the normal distribution + + .. versionadded:: 1.7.0 + + Returns + ------- + statistic : float + The computed z-score for this test. + pvalue : float + The p-value for the hypothesis test. + + Notes + ----- + Valid only for n>20. This function uses the method described in [1]_. + + References + ---------- + .. [1] see e.g. F. J. Anscombe, W. J. Glynn, "Distribution of the kurtosis + statistic b2 for normal samples", Biometrika, vol. 70, pp. 227-234, 1983. + .. [2] Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test + for normality (complete samples). Biometrika, 52(3/4), 591-611. + .. [3] B. Phipson and G. K. Smyth. "Permutation P-values Should Never Be + Zero: Calculating Exact P-values When Permutations Are Randomly + Drawn." Statistical Applications in Genetics and Molecular Biology + 9.1 (2010). + .. [4] Panagiotakos, D. B. (2008). The value of p-value in biomedical + research. The open cardiovascular medicine journal, 2, 97. + + Examples + -------- + Suppose we wish to infer from measurements whether the weights of adult + human males in a medical study are not normally distributed [2]_. + The weights (lbs) are recorded in the array ``x`` below. + + >>> import numpy as np + >>> x = np.array([148, 154, 158, 160, 161, 162, 166, 170, 182, 195, 236]) + + The kurtosis test from [1]_ begins by computing a statistic based on the + sample (excess/Fisher) kurtosis. + + >>> from scipy import stats + >>> res = stats.kurtosistest(x) + >>> res.statistic + 2.3048235214240873 + + (The test warns that our sample has too few observations to perform the + test. We'll return to this at the end of the example.) + Because normal distributions have zero excess kurtosis (by definition), + the magnitude of this statistic tends to be low for samples drawn from a + normal distribution. + + The test is performed by comparing the observed value of the + statistic against the null distribution: the distribution of statistic + values derived under the null hypothesis that the weights were drawn from + a normal distribution. + + For this test, the null distribution of the statistic for very large + samples is the standard normal distribution. + + >>> import matplotlib.pyplot as plt + >>> dist = stats.norm() + >>> kt_val = np.linspace(-5, 5, 100) + >>> pdf = dist.pdf(kt_val) + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> def kt_plot(ax): # we'll reuse this + ... ax.plot(kt_val, pdf) + ... ax.set_title("Kurtosis Test Null Distribution") + ... ax.set_xlabel("statistic") + ... 
ax.set_ylabel("probability density") + >>> kt_plot(ax) + >>> plt.show() + + The comparison is quantified by the p-value: the proportion of values in + the null distribution as extreme or more extreme than the observed + value of the statistic. In a two-sided test in which the statistic is + positive, elements of the null distribution greater than the observed + statistic and elements of the null distribution less than the negative of + the observed statistic are both considered "more extreme". + + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> kt_plot(ax) + >>> pvalue = dist.cdf(-res.statistic) + dist.sf(res.statistic) + >>> annotation = (f'p-value={pvalue:.3f}\n(shaded area)') + >>> props = dict(facecolor='black', width=1, headwidth=5, headlength=8) + >>> _ = ax.annotate(annotation, (3, 0.005), (3.25, 0.02), arrowprops=props) + >>> i = kt_val >= res.statistic + >>> ax.fill_between(kt_val[i], y1=0, y2=pdf[i], color='C0') + >>> i = kt_val <= -res.statistic + >>> ax.fill_between(kt_val[i], y1=0, y2=pdf[i], color='C0') + >>> ax.set_xlim(-5, 5) + >>> ax.set_ylim(0, 0.1) + >>> plt.show() + >>> res.pvalue + 0.0211764592113868 + + If the p-value is "small" - that is, if there is a low probability of + sampling data from a normally distributed population that produces such an + extreme value of the statistic - this may be taken as evidence against + the null hypothesis in favor of the alternative: the weights were not + drawn from a normal distribution. Note that: + + - The inverse is not true; that is, the test is not used to provide + evidence for the null hypothesis. + - The threshold for values that will be considered "small" is a choice that + should be made before the data is analyzed [3]_ with consideration of the + risks of both false positives (incorrectly rejecting the null hypothesis) + and false negatives (failure to reject a false null hypothesis). + + Note that the standard normal distribution provides an asymptotic + approximation of the null distribution; it is only accurate for samples + with many observations. This is the reason we received a warning at the + beginning of the example; our sample is quite small. In this case, + `scipy.stats.monte_carlo_test` may provide a more accurate, albeit + stochastic, approximation of the exact p-value. + + >>> def statistic(x, axis): + ... # get just the skewtest statistic; ignore the p-value + ... return stats.kurtosistest(x, axis=axis).statistic + >>> res = stats.monte_carlo_test(x, stats.norm.rvs, statistic) + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> kt_plot(ax) + >>> ax.hist(res.null_distribution, np.linspace(-5, 5, 50), + ... density=True) + >>> ax.legend(['aymptotic approximation\n(many observations)', + ... 'Monte Carlo approximation\n(11 observations)']) + >>> plt.show() + >>> res.pvalue + 0.0272 # may vary + + Furthermore, despite their stochastic nature, p-values computed in this way + can be used to exactly control the rate of false rejections of the null + hypothesis [4]_. + + """ + n = a.shape[axis] + if n < 5: + raise ValueError( + "kurtosistest requires at least 5 observations; %i observations" + " were given." % int(n)) + if n < 20: + warnings.warn("kurtosistest only valid for n>=20 ... continuing " + "anyway, n=%i" % int(n), + stacklevel=2) + b2 = kurtosis(a, axis, fisher=False) + + E = 3.0*(n-1) / (n+1) + varb2 = 24.0*n*(n-2)*(n-3) / ((n+1)*(n+1.)*(n+3)*(n+5)) # [1]_ Eq. 1 + x = (b2-E) / np.sqrt(varb2) # [1]_ Eq. 4 + # [1]_ Eq. 
2: + sqrtbeta1 = 6.0*(n*n-5*n+2)/((n+7)*(n+9)) * np.sqrt((6.0*(n+3)*(n+5)) / + (n*(n-2)*(n-3))) + # [1]_ Eq. 3: + A = 6.0 + 8.0/sqrtbeta1 * (2.0/sqrtbeta1 + np.sqrt(1+4.0/(sqrtbeta1**2))) + term1 = 1 - 2/(9.0*A) + denom = 1 + x*np.sqrt(2/(A-4.0)) + term2 = np.sign(denom) * np.where(denom == 0.0, np.nan, + np.power((1-2.0/A)/np.abs(denom), 1/3.0)) + if np.any(denom == 0): + msg = ("Test statistic not defined in some cases due to division by " + "zero. Return nan in that case...") + warnings.warn(msg, RuntimeWarning, stacklevel=2) + + Z = (term1 - term2) / np.sqrt(2/(9.0*A)) # [1]_ Eq. 5 + + pvalue = _get_pvalue(Z, distributions.norm, alternative) + return KurtosistestResult(Z[()], pvalue[()]) + + +NormaltestResult = namedtuple('NormaltestResult', ('statistic', 'pvalue')) + + +@_axis_nan_policy_factory(NormaltestResult, n_samples=1, too_small=7) +def normaltest(a, axis=0, nan_policy='propagate'): + r"""Test whether a sample differs from a normal distribution. + + This function tests the null hypothesis that a sample comes + from a normal distribution. It is based on D'Agostino and + Pearson's [1]_, [2]_ test that combines skew and kurtosis to + produce an omnibus test of normality. + + Parameters + ---------- + a : array_like + The array containing the sample to be tested. + axis : int or None, optional + Axis along which to compute test. Default is 0. If None, + compute over the whole array `a`. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + statistic : float or array + ``s^2 + k^2``, where ``s`` is the z-score returned by `skewtest` and + ``k`` is the z-score returned by `kurtosistest`. + pvalue : float or array + A 2-sided chi squared probability for the hypothesis test. + + References + ---------- + .. [1] D'Agostino, R. B. (1971), "An omnibus test of normality for + moderate and large sample size", Biometrika, 58, 341-348 + .. [2] D'Agostino, R. and Pearson, E. S. (1973), "Tests for departure from + normality", Biometrika, 60, 613-622 + .. [3] Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test + for normality (complete samples). Biometrika, 52(3/4), 591-611. + .. [4] B. Phipson and G. K. Smyth. "Permutation P-values Should Never Be + Zero: Calculating Exact P-values When Permutations Are Randomly + Drawn." Statistical Applications in Genetics and Molecular Biology + 9.1 (2010). + .. [5] Panagiotakos, D. B. (2008). The value of p-value in biomedical + research. The open cardiovascular medicine journal, 2, 97. + + Examples + -------- + Suppose we wish to infer from measurements whether the weights of adult + human males in a medical study are not normally distributed [3]_. + The weights (lbs) are recorded in the array ``x`` below. + + >>> import numpy as np + >>> x = np.array([148, 154, 158, 160, 161, 162, 166, 170, 182, 195, 236]) + + The normality test of [1]_ and [2]_ begins by computing a statistic based + on the sample skewness and kurtosis. + + >>> from scipy import stats + >>> res = stats.normaltest(x) + >>> res.statistic + 13.034263121192582 + + (The test warns that our sample has too few observations to perform the + test. We'll return to this at the end of the example.) 
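# Illustrative sketch: `kurtosistest` above likewise returns a z-score whose
# two-sided p-value comes from the standard normal null distribution. The seed
# and sample below are arbitrary; n >= 20 avoids the small-sample warning.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)
y = rng.normal(size=50)
res = stats.kurtosistest(y)
assert np.isclose(res.pvalue, 2 * stats.norm.sf(abs(res.statistic)))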
+ Because the normal distribution has zero skewness and zero + ("excess" or "Fisher") kurtosis, the value of this statistic tends to be + low for samples drawn from a normal distribution. + + The test is performed by comparing the observed value of the statistic + against the null distribution: the distribution of statistic values derived + under the null hypothesis that the weights were drawn from a normal + distribution. + For this normality test, the null distribution for very large samples is + the chi-squared distribution with two degrees of freedom. + + >>> import matplotlib.pyplot as plt + >>> dist = stats.chi2(df=2) + >>> stat_vals = np.linspace(0, 16, 100) + >>> pdf = dist.pdf(stat_vals) + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> def plot(ax): # we'll reuse this + ... ax.plot(stat_vals, pdf) + ... ax.set_title("Normality Test Null Distribution") + ... ax.set_xlabel("statistic") + ... ax.set_ylabel("probability density") + >>> plot(ax) + >>> plt.show() + + The comparison is quantified by the p-value: the proportion of values in + the null distribution greater than or equal to the observed value of the + statistic. + + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> plot(ax) + >>> pvalue = dist.sf(res.statistic) + >>> annotation = (f'p-value={pvalue:.6f}\n(shaded area)') + >>> props = dict(facecolor='black', width=1, headwidth=5, headlength=8) + >>> _ = ax.annotate(annotation, (13.5, 5e-4), (14, 5e-3), arrowprops=props) + >>> i = stat_vals >= res.statistic # index more extreme statistic values + >>> ax.fill_between(stat_vals[i], y1=0, y2=pdf[i]) + >>> ax.set_xlim(8, 16) + >>> ax.set_ylim(0, 0.01) + >>> plt.show() + >>> res.pvalue + 0.0014779023013100172 + + If the p-value is "small" - that is, if there is a low probability of + sampling data from a normally distributed population that produces such an + extreme value of the statistic - this may be taken as evidence against + the null hypothesis in favor of the alternative: the weights were not + drawn from a normal distribution. Note that: + + - The inverse is not true; that is, the test is not used to provide + evidence for the null hypothesis. + - The threshold for values that will be considered "small" is a choice that + should be made before the data is analyzed [4]_ with consideration of the + risks of both false positives (incorrectly rejecting the null hypothesis) + and false negatives (failure to reject a false null hypothesis). + + Note that the chi-squared distribution provides an asymptotic + approximation of the null distribution; it is only accurate for samples + with many observations. This is the reason we received a warning at the + beginning of the example; our sample is quite small. In this case, + `scipy.stats.monte_carlo_test` may provide a more accurate, albeit + stochastic, approximation of the exact p-value. + + >>> def statistic(x, axis): + ... # Get only the `normaltest` statistic; ignore approximate p-value + ... return stats.normaltest(x, axis=axis).statistic + >>> res = stats.monte_carlo_test(x, stats.norm.rvs, statistic, + ... alternative='greater') + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> plot(ax) + >>> ax.hist(res.null_distribution, np.linspace(0, 25, 50), + ... density=True) + >>> ax.legend(['aymptotic approximation (many observations)', + ... 
'Monte Carlo approximation (11 observations)']) + >>> ax.set_xlim(0, 14) + >>> plt.show() + >>> res.pvalue + 0.0082 # may vary + + Furthermore, despite their stochastic nature, p-values computed in this way + can be used to exactly control the rate of false rejections of the null + hypothesis [5]_. + + """ + s, _ = skewtest(a, axis) + k, _ = kurtosistest(a, axis) + k2 = s*s + k*k + + return NormaltestResult(k2, distributions.chi2.sf(k2, 2)) + + +@_axis_nan_policy_factory(SignificanceResult, default_axis=None) +def jarque_bera(x, *, axis=None): + r"""Perform the Jarque-Bera goodness of fit test on sample data. + + The Jarque-Bera test tests whether the sample data has the skewness and + kurtosis matching a normal distribution. + + Note that this test only works for a large enough number of data samples + (>2000) as the test statistic asymptotically has a Chi-squared distribution + with 2 degrees of freedom. + + Parameters + ---------- + x : array_like + Observations of a random variable. + axis : int or None, default: 0 + If an int, the axis of the input along which to compute the statistic. + The statistic of each axis-slice (e.g. row) of the input will appear in + a corresponding element of the output. + If ``None``, the input will be raveled before computing the statistic. + + Returns + ------- + result : SignificanceResult + An object with the following attributes: + + statistic : float + The test statistic. + pvalue : float + The p-value for the hypothesis test. + + References + ---------- + .. [1] Jarque, C. and Bera, A. (1980) "Efficient tests for normality, + homoscedasticity and serial independence of regression residuals", + 6 Econometric Letters 255-259. + .. [2] Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test + for normality (complete samples). Biometrika, 52(3/4), 591-611. + .. [3] B. Phipson and G. K. Smyth. "Permutation P-values Should Never Be + Zero: Calculating Exact P-values When Permutations Are Randomly + Drawn." Statistical Applications in Genetics and Molecular Biology + 9.1 (2010). + .. [4] Panagiotakos, D. B. (2008). The value of p-value in biomedical + research. The open cardiovascular medicine journal, 2, 97. + + Examples + -------- + Suppose we wish to infer from measurements whether the weights of adult + human males in a medical study are not normally distributed [2]_. + The weights (lbs) are recorded in the array ``x`` below. + + >>> import numpy as np + >>> x = np.array([148, 154, 158, 160, 161, 162, 166, 170, 182, 195, 236]) + + The Jarque-Bera test begins by computing a statistic based on the sample + skewness and kurtosis. + + >>> from scipy import stats + >>> res = stats.jarque_bera(x) + >>> res.statistic + 6.982848237344646 + + Because the normal distribution has zero skewness and zero + ("excess" or "Fisher") kurtosis, the value of this statistic tends to be + low for samples drawn from a normal distribution. + + The test is performed by comparing the observed value of the statistic + against the null distribution: the distribution of statistic values derived + under the null hypothesis that the weights were drawn from a normal + distribution. + For the Jarque-Bera test, the null distribution for very large samples is + the chi-squared distribution with two degrees of freedom. + + >>> import matplotlib.pyplot as plt + >>> dist = stats.chi2(df=2) + >>> jb_val = np.linspace(0, 11, 100) + >>> pdf = dist.pdf(jb_val) + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> def jb_plot(ax): # we'll reuse this + ... ax.plot(jb_val, pdf) + ... 
ax.set_title("Jarque-Bera Null Distribution") + ... ax.set_xlabel("statistic") + ... ax.set_ylabel("probability density") + >>> jb_plot(ax) + >>> plt.show() + + The comparison is quantified by the p-value: the proportion of values in + the null distribution greater than or equal to the observed value of the + statistic. + + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> jb_plot(ax) + >>> pvalue = dist.sf(res.statistic) + >>> annotation = (f'p-value={pvalue:.6f}\n(shaded area)') + >>> props = dict(facecolor='black', width=1, headwidth=5, headlength=8) + >>> _ = ax.annotate(annotation, (7.5, 0.01), (8, 0.05), arrowprops=props) + >>> i = jb_val >= res.statistic # indices of more extreme statistic values + >>> ax.fill_between(jb_val[i], y1=0, y2=pdf[i]) + >>> ax.set_xlim(0, 11) + >>> ax.set_ylim(0, 0.3) + >>> plt.show() + >>> res.pvalue + 0.03045746622458189 + + If the p-value is "small" - that is, if there is a low probability of + sampling data from a normally distributed population that produces such an + extreme value of the statistic - this may be taken as evidence against + the null hypothesis in favor of the alternative: the weights were not + drawn from a normal distribution. Note that: + + - The inverse is not true; that is, the test is not used to provide + evidence for the null hypothesis. + - The threshold for values that will be considered "small" is a choice that + should be made before the data is analyzed [3]_ with consideration of the + risks of both false positives (incorrectly rejecting the null hypothesis) + and false negatives (failure to reject a false null hypothesis). + + Note that the chi-squared distribution provides an asymptotic approximation + of the null distribution; it is only accurate for samples with many + observations. For small samples like ours, `scipy.stats.monte_carlo_test` + may provide a more accurate, albeit stochastic, approximation of the + exact p-value. + + >>> def statistic(x, axis): + ... # underlying calculation of the Jarque Bera statistic + ... s = stats.skew(x, axis=axis) + ... k = stats.kurtosis(x, axis=axis) + ... return x.shape[axis]/6 * (s**2 + k**2/4) + >>> res = stats.monte_carlo_test(x, stats.norm.rvs, statistic, + ... alternative='greater') + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> jb_plot(ax) + >>> ax.hist(res.null_distribution, np.linspace(0, 10, 50), + ... density=True) + >>> ax.legend(['aymptotic approximation (many observations)', + ... 'Monte Carlo approximation (11 observations)']) + >>> plt.show() + >>> res.pvalue + 0.0097 # may vary + + Furthermore, despite their stochastic nature, p-values computed in this way + can be used to exactly control the rate of false rejections of the null + hypothesis [4]_. + + """ + x = np.asarray(x) + if axis is None: + x = x.ravel() + axis = 0 + + n = x.shape[axis] + if n == 0: + raise ValueError('At least one observation is required.') + + mu = x.mean(axis=axis, keepdims=True) + diffx = x - mu + s = skew(diffx, axis=axis, _no_deco=True) + k = kurtosis(diffx, axis=axis, _no_deco=True) + statistic = n / 6 * (s**2 + k**2 / 4) + pvalue = distributions.chi2.sf(statistic, df=2) + + return SignificanceResult(statistic, pvalue) + + +##################################### +# FREQUENCY FUNCTIONS # +##################################### + + +def scoreatpercentile(a, per, limit=(), interpolation_method='fraction', + axis=None): + """Calculate the score at a given percentile of the input sequence. + + For example, the score at `per=50` is the median. 
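# Illustrative sketch: the Jarque-Bera statistic above is n/6 * (s**2 + k**2/4),
# with s the sample skewness and k the excess kurtosis, and its p-value is the
# chi-squared(2) survival function at that statistic. Data from the docstring.
import numpy as np
from scipy import stats

x = np.array([148.0, 154, 158, 160, 161, 162, 166, 170, 182, 195, 236])
s = stats.skew(x)
k = stats.kurtosis(x)                        # Fisher (excess) kurtosis
direct = x.size / 6 * (s**2 + k**2 / 4)
res = stats.jarque_bera(x)
assert np.isclose(direct, res.statistic)     # ~6.9828
assert np.isclose(res.pvalue, stats.chi2.sf(res.statistic, df=2))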
If the desired quantile + lies between two data points, we interpolate between them, according to + the value of `interpolation`. If the parameter `limit` is provided, it + should be a tuple (lower, upper) of two values. + + Parameters + ---------- + a : array_like + A 1-D array of values from which to extract score. + per : array_like + Percentile(s) at which to extract score. Values should be in range + [0,100]. + limit : tuple, optional + Tuple of two scalars, the lower and upper limits within which to + compute the percentile. Values of `a` outside + this (closed) interval will be ignored. + interpolation_method : {'fraction', 'lower', 'higher'}, optional + Specifies the interpolation method to use, + when the desired quantile lies between two data points `i` and `j` + The following options are available (default is 'fraction'): + + * 'fraction': ``i + (j - i) * fraction`` where ``fraction`` is the + fractional part of the index surrounded by ``i`` and ``j`` + * 'lower': ``i`` + * 'higher': ``j`` + + axis : int, optional + Axis along which the percentiles are computed. Default is None. If + None, compute over the whole array `a`. + + Returns + ------- + score : float or ndarray + Score at percentile(s). + + See Also + -------- + percentileofscore, numpy.percentile + + Notes + ----- + This function will become obsolete in the future. + For NumPy 1.9 and higher, `numpy.percentile` provides all the functionality + that `scoreatpercentile` provides. And it's significantly faster. + Therefore it's recommended to use `numpy.percentile` for users that have + numpy >= 1.9. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> a = np.arange(100) + >>> stats.scoreatpercentile(a, 50) + 49.5 + + """ + # adapted from NumPy's percentile function. When we require numpy >= 1.8, + # the implementation of this function can be replaced by np.percentile. + a = np.asarray(a) + if a.size == 0: + # empty array, return nan(s) with shape matching `per` + if np.isscalar(per): + return np.nan + else: + return np.full(np.asarray(per).shape, np.nan, dtype=np.float64) + + if limit: + a = a[(limit[0] <= a) & (a <= limit[1])] + + sorted_ = np.sort(a, axis=axis) + if axis is None: + axis = 0 + + return _compute_qth_percentile(sorted_, per, interpolation_method, axis) + + +# handle sequence of per's without calling sort multiple times +def _compute_qth_percentile(sorted_, per, interpolation_method, axis): + if not np.isscalar(per): + score = [_compute_qth_percentile(sorted_, i, + interpolation_method, axis) + for i in per] + return np.array(score) + + if not (0 <= per <= 100): + raise ValueError("percentile must be in the range [0, 100]") + + indexer = [slice(None)] * sorted_.ndim + idx = per / 100. 
* (sorted_.shape[axis] - 1) + + if int(idx) != idx: + # round fractional indices according to interpolation method + if interpolation_method == 'lower': + idx = int(np.floor(idx)) + elif interpolation_method == 'higher': + idx = int(np.ceil(idx)) + elif interpolation_method == 'fraction': + pass # keep idx as fraction and interpolate + else: + raise ValueError("interpolation_method can only be 'fraction', " + "'lower' or 'higher'") + + i = int(idx) + if i == idx: + indexer[axis] = slice(i, i + 1) + weights = array(1) + sumval = 1.0 + else: + indexer[axis] = slice(i, i + 2) + j = i + 1 + weights = array([(j - idx), (idx - i)], float) + wshape = [1] * sorted_.ndim + wshape[axis] = 2 + weights.shape = wshape + sumval = weights.sum() + + # Use np.add.reduce (== np.sum but a little faster) to coerce data type + return np.add.reduce(sorted_[tuple(indexer)] * weights, axis=axis) / sumval + + +def percentileofscore(a, score, kind='rank', nan_policy='propagate'): + """Compute the percentile rank of a score relative to a list of scores. + + A `percentileofscore` of, for example, 80% means that 80% of the + scores in `a` are below the given score. In the case of gaps or + ties, the exact definition depends on the optional keyword, `kind`. + + Parameters + ---------- + a : array_like + A 1-D array to which `score` is compared. + score : array_like + Scores to compute percentiles for. + kind : {'rank', 'weak', 'strict', 'mean'}, optional + Specifies the interpretation of the resulting score. + The following options are available (default is 'rank'): + + * 'rank': Average percentage ranking of score. In case of multiple + matches, average the percentage rankings of all matching scores. + * 'weak': This kind corresponds to the definition of a cumulative + distribution function. A percentileofscore of 80% means that 80% + of values are less than or equal to the provided score. + * 'strict': Similar to "weak", except that only values that are + strictly less than the given score are counted. + * 'mean': The average of the "weak" and "strict" scores, often used + in testing. See https://en.wikipedia.org/wiki/Percentile_rank + nan_policy : {'propagate', 'raise', 'omit'}, optional + Specifies how to treat `nan` values in `a`. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan (for each value in `score`). + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + pcos : float + Percentile-position of score (0-100) relative to `a`. 
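The four `kind` options reduce to counting how many elements of `a` fall strictly below, or at or below, the score. A minimal sketch of those counting rules, assuming only NumPy plus the public `percentileofscore` for a cross-check (variable names are illustrative):

import numpy as np
from scipy import stats

a = np.array([1, 2, 3, 3, 4])
score, n = 3, 5

left = np.count_nonzero(a < score)    # strictly below the score
right = np.count_nonzero(a <= score)  # at or below the score (a CDF count)

strict = left * 100.0 / n                          # 40.0
weak = right * 100.0 / n                           # 80.0
mean = (left + right) * 50.0 / n                   # 60.0
rank = (left + right + (left < right)) * 50.0 / n  # 70.0; ties share an averaged rank

assert np.isclose(rank, stats.percentileofscore(a, score, kind='rank'))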
+ + See Also + -------- + numpy.percentile + scipy.stats.scoreatpercentile, scipy.stats.rankdata + + Examples + -------- + Three-quarters of the given values lie below a given score: + + >>> import numpy as np + >>> from scipy import stats + >>> stats.percentileofscore([1, 2, 3, 4], 3) + 75.0 + + With multiple matches, note how the scores of the two matches, 0.6 + and 0.8 respectively, are averaged: + + >>> stats.percentileofscore([1, 2, 3, 3, 4], 3) + 70.0 + + Only 2/5 values are strictly less than 3: + + >>> stats.percentileofscore([1, 2, 3, 3, 4], 3, kind='strict') + 40.0 + + But 4/5 values are less than or equal to 3: + + >>> stats.percentileofscore([1, 2, 3, 3, 4], 3, kind='weak') + 80.0 + + The average between the weak and the strict scores is: + + >>> stats.percentileofscore([1, 2, 3, 3, 4], 3, kind='mean') + 60.0 + + Score arrays (of any dimensionality) are supported: + + >>> stats.percentileofscore([1, 2, 3, 3, 4], [2, 3]) + array([40., 70.]) + + The inputs can be infinite: + + >>> stats.percentileofscore([-np.inf, 0, 1, np.inf], [1, 2, np.inf]) + array([75., 75., 100.]) + + If `a` is empty, then the resulting percentiles are all `nan`: + + >>> stats.percentileofscore([], [1, 2]) + array([nan, nan]) + """ + + a = np.asarray(a) + n = len(a) + score = np.asarray(score) + + # Nan treatment + cna, npa = _contains_nan(a, nan_policy, use_summation=False) + cns, nps = _contains_nan(score, nan_policy, use_summation=False) + + if (cna or cns) and nan_policy == 'raise': + raise ValueError("The input contains nan values") + + if cns: + # If a score is nan, then the output should be nan + # (also if nan_policy is "omit", because it only applies to `a`) + score = ma.masked_where(np.isnan(score), score) + + if cna: + if nan_policy == "omit": + # Don't count nans + a = ma.masked_where(np.isnan(a), a) + n = a.count() + + if nan_policy == "propagate": + # All outputs should be nans + n = 0 + + # Cannot compare to empty list ==> nan + if n == 0: + perct = np.full_like(score, np.nan, dtype=np.float64) + + else: + # Prepare broadcasting + score = score[..., None] + + def count(x): + return np.count_nonzero(x, -1) + + # Main computations/logic + if kind == 'rank': + left = count(a < score) + right = count(a <= score) + plus1 = left < right + perct = (left + right + plus1) * (50.0 / n) + elif kind == 'strict': + perct = count(a < score) * (100.0 / n) + elif kind == 'weak': + perct = count(a <= score) * (100.0 / n) + elif kind == 'mean': + left = count(a < score) + right = count(a <= score) + perct = (left + right) * (50.0 / n) + else: + raise ValueError( + "kind can only be 'rank', 'strict', 'weak' or 'mean'") + + # Re-insert nan values + perct = ma.filled(perct, np.nan) + + if perct.ndim == 0: + return perct[()] + return perct + + +HistogramResult = namedtuple('HistogramResult', + ('count', 'lowerlimit', 'binsize', 'extrapoints')) + + +def _histogram(a, numbins=10, defaultlimits=None, weights=None, + printextras=False): + """Create a histogram. + + Separate the range into several bins and return the number of instances + in each bin. + + Parameters + ---------- + a : array_like + Array of scores which will be put into bins. + numbins : int, optional + The number of bins to use for the histogram. Default is 10. + defaultlimits : tuple (lower, upper), optional + The lower and upper values for the range of the histogram. + If no value is given, a range slightly larger than the range of the + values in a is used. 
Specifically ``(a.min() - s, a.max() + s)``, + where ``s = (1/2)(a.max() - a.min()) / (numbins - 1)``. + weights : array_like, optional + The weights for each value in `a`. Default is None, which gives each + value a weight of 1.0 + printextras : bool, optional + If True, if there are extra points (i.e. the points that fall outside + the bin limits) a warning is raised saying how many of those points + there are. Default is False. + + Returns + ------- + count : ndarray + Number of points (or sum of weights) in each bin. + lowerlimit : float + Lowest value of histogram, the lower limit of the first bin. + binsize : float + The size of the bins (all bins have the same size). + extrapoints : int + The number of points outside the range of the histogram. + + See Also + -------- + numpy.histogram + + Notes + ----- + This histogram is based on numpy's histogram but has a larger range by + default if default limits is not set. + + """ + a = np.ravel(a) + if defaultlimits is None: + if a.size == 0: + # handle empty arrays. Undetermined range, so use 0-1. + defaultlimits = (0, 1) + else: + # no range given, so use values in `a` + data_min = a.min() + data_max = a.max() + # Have bins extend past min and max values slightly + s = (data_max - data_min) / (2. * (numbins - 1.)) + defaultlimits = (data_min - s, data_max + s) + + # use numpy's histogram method to compute bins + hist, bin_edges = np.histogram(a, bins=numbins, range=defaultlimits, + weights=weights) + # hist are not always floats, convert to keep with old output + hist = np.array(hist, dtype=float) + # fixed width for bins is assumed, as numpy's histogram gives + # fixed width bins for int values for 'bins' + binsize = bin_edges[1] - bin_edges[0] + # calculate number of extra points + extrapoints = len([v for v in a + if defaultlimits[0] > v or v > defaultlimits[1]]) + if extrapoints > 0 and printextras: + warnings.warn("Points outside given histogram range = %s" % extrapoints, + stacklevel=3,) + + return HistogramResult(hist, defaultlimits[0], binsize, extrapoints) + + +CumfreqResult = namedtuple('CumfreqResult', + ('cumcount', 'lowerlimit', 'binsize', + 'extrapoints')) + + +def cumfreq(a, numbins=10, defaultreallimits=None, weights=None): + """Return a cumulative frequency histogram, using the histogram function. + + A cumulative histogram is a mapping that counts the cumulative number of + observations in all of the bins up to the specified bin. + + Parameters + ---------- + a : array_like + Input array. + numbins : int, optional + The number of bins to use for the histogram. Default is 10. + defaultreallimits : tuple (lower, upper), optional + The lower and upper values for the range of the histogram. + If no value is given, a range slightly larger than the range of the + values in `a` is used. Specifically ``(a.min() - s, a.max() + s)``, + where ``s = (1/2)(a.max() - a.min()) / (numbins - 1)``. + weights : array_like, optional + The weights for each value in `a`. Default is None, which gives each + value a weight of 1.0 + + Returns + ------- + cumcount : ndarray + Binned values of cumulative frequency. + lowerlimit : float + Lower real limit + binsize : float + Width of each bin. + extrapoints : int + Extra points. 
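Before the examples, a short sketch of how the pieces fit together: the default limits widen the data range by half a bin width on each side, and the cumulative counts are a running sum of the per-bin counts over those same bins (the sample data here is arbitrary):

import numpy as np
from scipy import stats

x = np.array([1, 4, 2, 1, 3, 1], dtype=float)
numbins = 4

# Default limits when none are given: extend min/max by half a bin width.
s = (x.max() - x.min()) / (2.0 * (numbins - 1.0))
limits = (x.min() - s, x.max() + s)

counts, edges = np.histogram(x, bins=numbins, range=limits)
res = stats.cumfreq(x, numbins=numbins)

assert np.isclose(res.lowerlimit, limits[0])
assert np.allclose(res.cumcount, np.cumsum(counts))  # running sum of bin counts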
+ + Examples + -------- + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> x = [1, 4, 2, 1, 3, 1] + >>> res = stats.cumfreq(x, numbins=4, defaultreallimits=(1.5, 5)) + >>> res.cumcount + array([ 1., 2., 3., 3.]) + >>> res.extrapoints + 3 + + Create a normal distribution with 1000 random values + + >>> samples = stats.norm.rvs(size=1000, random_state=rng) + + Calculate cumulative frequencies + + >>> res = stats.cumfreq(samples, numbins=25) + + Calculate space of values for x + + >>> x = res.lowerlimit + np.linspace(0, res.binsize*res.cumcount.size, + ... res.cumcount.size) + + Plot histogram and cumulative histogram + + >>> fig = plt.figure(figsize=(10, 4)) + >>> ax1 = fig.add_subplot(1, 2, 1) + >>> ax2 = fig.add_subplot(1, 2, 2) + >>> ax1.hist(samples, bins=25) + >>> ax1.set_title('Histogram') + >>> ax2.bar(x, res.cumcount, width=res.binsize) + >>> ax2.set_title('Cumulative histogram') + >>> ax2.set_xlim([x.min(), x.max()]) + + >>> plt.show() + + """ + h, l, b, e = _histogram(a, numbins, defaultreallimits, weights=weights) + cumhist = np.cumsum(h * 1, axis=0) + return CumfreqResult(cumhist, l, b, e) + + +RelfreqResult = namedtuple('RelfreqResult', + ('frequency', 'lowerlimit', 'binsize', + 'extrapoints')) + + +def relfreq(a, numbins=10, defaultreallimits=None, weights=None): + """Return a relative frequency histogram, using the histogram function. + + A relative frequency histogram is a mapping of the number of + observations in each of the bins relative to the total of observations. + + Parameters + ---------- + a : array_like + Input array. + numbins : int, optional + The number of bins to use for the histogram. Default is 10. + defaultreallimits : tuple (lower, upper), optional + The lower and upper values for the range of the histogram. + If no value is given, a range slightly larger than the range of the + values in a is used. Specifically ``(a.min() - s, a.max() + s)``, + where ``s = (1/2)(a.max() - a.min()) / (numbins - 1)``. + weights : array_like, optional + The weights for each value in `a`. Default is None, which gives each + value a weight of 1.0 + + Returns + ------- + frequency : ndarray + Binned values of relative frequency. + lowerlimit : float + Lower real limit. + binsize : float + Width of each bin. + extrapoints : int + Extra points. + + Examples + -------- + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> a = np.array([2, 4, 1, 2, 3, 2]) + >>> res = stats.relfreq(a, numbins=4) + >>> res.frequency + array([ 0.16666667, 0.5 , 0.16666667, 0.16666667]) + >>> np.sum(res.frequency) # relative frequencies should add up to 1 + 1.0 + + Create a normal distribution with 1000 random values + + >>> samples = stats.norm.rvs(size=1000, random_state=rng) + + Calculate relative frequencies + + >>> res = stats.relfreq(samples, numbins=25) + + Calculate space of values for x + + >>> x = res.lowerlimit + np.linspace(0, res.binsize*res.frequency.size, + ... 
res.frequency.size) + + Plot relative frequency histogram + + >>> fig = plt.figure(figsize=(5, 4)) + >>> ax = fig.add_subplot(1, 1, 1) + >>> ax.bar(x, res.frequency, width=res.binsize) + >>> ax.set_title('Relative frequency histogram') + >>> ax.set_xlim([x.min(), x.max()]) + + >>> plt.show() + + """ + a = np.asanyarray(a) + h, l, b, e = _histogram(a, numbins, defaultreallimits, weights=weights) + h = h / a.shape[0] + + return RelfreqResult(h, l, b, e) + + +##################################### +# VARIABILITY FUNCTIONS # +##################################### + +def obrientransform(*samples): + """Compute the O'Brien transform on input data (any number of arrays). + + Used to test for homogeneity of variance prior to running one-way stats. + Each array in ``*samples`` is one level of a factor. + If `f_oneway` is run on the transformed data and found significant, + the variances are unequal. From Maxwell and Delaney [1]_, p.112. + + Parameters + ---------- + sample1, sample2, ... : array_like + Any number of arrays. + + Returns + ------- + obrientransform : ndarray + Transformed data for use in an ANOVA. The first dimension + of the result corresponds to the sequence of transformed + arrays. If the arrays given are all 1-D of the same length, + the return value is a 2-D array; otherwise it is a 1-D array + of type object, with each element being an ndarray. + + References + ---------- + .. [1] S. E. Maxwell and H. D. Delaney, "Designing Experiments and + Analyzing Data: A Model Comparison Perspective", Wadsworth, 1990. + + Examples + -------- + We'll test the following data sets for differences in their variance. + + >>> x = [10, 11, 13, 9, 7, 12, 12, 9, 10] + >>> y = [13, 21, 5, 10, 8, 14, 10, 12, 7, 15] + + Apply the O'Brien transform to the data. + + >>> from scipy.stats import obrientransform + >>> tx, ty = obrientransform(x, y) + + Use `scipy.stats.f_oneway` to apply a one-way ANOVA test to the + transformed data. + + >>> from scipy.stats import f_oneway + >>> F, p = f_oneway(tx, ty) + >>> p + 0.1314139477040335 + + If we require that ``p < 0.05`` for significance, we cannot conclude + that the variances are different. + + """ + TINY = np.sqrt(np.finfo(float).eps) + + # `arrays` will hold the transformed arguments. + arrays = [] + sLast = None + + for sample in samples: + a = np.asarray(sample) + n = len(a) + mu = np.mean(a) + sq = (a - mu)**2 + sumsq = sq.sum() + + # The O'Brien transform. + t = ((n - 1.5) * n * sq - 0.5 * sumsq) / ((n - 1) * (n - 2)) + + # Check that the mean of the transformed data is equal to the + # original variance. + var = sumsq / (n - 1) + if abs(var - np.mean(t)) > TINY: + raise ValueError('Lack of convergence in obrientransform.') + + arrays.append(t) + sLast = a.shape + + if sLast: + for arr in arrays[:-1]: + if sLast != arr.shape: + return np.array(arrays, dtype=object) + return np.array(arrays) + + +@_axis_nan_policy_factory( + lambda x: x, result_to_tuple=lambda x: (x,), n_outputs=1, too_small=1 +) +def sem(a, axis=0, ddof=1, nan_policy='propagate'): + """Compute standard error of the mean. + + Calculate the standard error of the mean (or standard error of + measurement) of the values in the input array. + + Parameters + ---------- + a : array_like + An array containing the values for which the standard error is + returned. + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over + the whole array `a`. + ddof : int, optional + Delta degrees-of-freedom. 
How many degrees of freedom to adjust + for bias in limited samples relative to the population estimate + of variance. Defaults to 1. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + s : ndarray or float + The standard error of the mean in the sample(s), along the input axis. + + Notes + ----- + The default value for `ddof` is different to the default (0) used by other + ddof containing routines, such as np.std and np.nanstd. + + Examples + -------- + Find standard error along the first axis: + + >>> import numpy as np + >>> from scipy import stats + >>> a = np.arange(20).reshape(5,4) + >>> stats.sem(a) + array([ 2.8284, 2.8284, 2.8284, 2.8284]) + + Find standard error across the whole array, using n degrees of freedom: + + >>> stats.sem(a, axis=None, ddof=0) + 1.2893796958227628 + + """ + n = a.shape[axis] + s = np.std(a, axis=axis, ddof=ddof) / np.sqrt(n) + return s + + +def _isconst(x): + """ + Check if all values in x are the same. nans are ignored. + + x must be a 1d array. + + The return value is a 1d array with length 1, so it can be used + in np.apply_along_axis. + """ + y = x[~np.isnan(x)] + if y.size == 0: + return np.array([True]) + else: + return (y[0] == y).all(keepdims=True) + + +def _quiet_nanmean(x): + """ + Compute nanmean for the 1d array x, but quietly return nan if x is all nan. + + The return value is a 1d array with length 1, so it can be used + in np.apply_along_axis. + """ + y = x[~np.isnan(x)] + if y.size == 0: + return np.array([np.nan]) + else: + return np.mean(y, keepdims=True) + + +def _quiet_nanstd(x, ddof=0): + """ + Compute nanstd for the 1d array x, but quietly return nan if x is all nan. + + The return value is a 1d array with length 1, so it can be used + in np.apply_along_axis. + """ + y = x[~np.isnan(x)] + if y.size == 0: + return np.array([np.nan]) + else: + return np.std(y, keepdims=True, ddof=ddof) + + +def zscore(a, axis=0, ddof=0, nan_policy='propagate'): + """ + Compute the z score. + + Compute the z score of each value in the sample, relative to the + sample mean and standard deviation. + + Parameters + ---------- + a : array_like + An array like object containing the sample data. + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over + the whole array `a`. + ddof : int, optional + Degrees of freedom correction in the calculation of the + standard deviation. Default is 0. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. 'propagate' returns nan, + 'raise' throws an error, 'omit' performs the calculations ignoring nan + values. Default is 'propagate'. Note that when the value is 'omit', + nans in the input also propagate to the output, but they do not affect + the z-scores computed for the non-nan values. + + Returns + ------- + zscore : array_like + The z-scores, standardized by mean and standard deviation of + input array `a`. + + See Also + -------- + numpy.mean : Arithmetic average + numpy.std : Arithmetic standard deviation + scipy.stats.gzscore : Geometric standard score + + Notes + ----- + This function preserves ndarray subclasses, and works also with + matrices and masked arrays (it uses `asanyarray` instead of + `asarray` for parameters). + + References + ---------- + .. 
[1] "Standard score", *Wikipedia*, + https://en.wikipedia.org/wiki/Standard_score. + .. [2] Huck, S. W., Cross, T. L., Clark, S. B, "Overcoming misconceptions + about Z-scores", Teaching Statistics, vol. 8, pp. 38-40, 1986 + + Examples + -------- + >>> import numpy as np + >>> a = np.array([ 0.7972, 0.0767, 0.4383, 0.7866, 0.8091, + ... 0.1954, 0.6307, 0.6599, 0.1065, 0.0508]) + >>> from scipy import stats + >>> stats.zscore(a) + array([ 1.1273, -1.247 , -0.0552, 1.0923, 1.1664, -0.8559, 0.5786, + 0.6748, -1.1488, -1.3324]) + + Computing along a specified axis, using n-1 degrees of freedom + (``ddof=1``) to calculate the standard deviation: + + >>> b = np.array([[ 0.3148, 0.0478, 0.6243, 0.4608], + ... [ 0.7149, 0.0775, 0.6072, 0.9656], + ... [ 0.6341, 0.1403, 0.9759, 0.4064], + ... [ 0.5918, 0.6948, 0.904 , 0.3721], + ... [ 0.0921, 0.2481, 0.1188, 0.1366]]) + >>> stats.zscore(b, axis=1, ddof=1) + array([[-0.19264823, -1.28415119, 1.07259584, 0.40420358], + [ 0.33048416, -1.37380874, 0.04251374, 1.00081084], + [ 0.26796377, -1.12598418, 1.23283094, -0.37481053], + [-0.22095197, 0.24468594, 1.19042819, -1.21416216], + [-0.82780366, 1.4457416 , -0.43867764, -0.1792603 ]]) + + An example with `nan_policy='omit'`: + + >>> x = np.array([[25.11, 30.10, np.nan, 32.02, 43.15], + ... [14.95, 16.06, 121.25, 94.35, 29.81]]) + >>> stats.zscore(x, axis=1, nan_policy='omit') + array([[-1.13490897, -0.37830299, nan, -0.08718406, 1.60039602], + [-0.91611681, -0.89090508, 1.4983032 , 0.88731639, -0.5785977 ]]) + """ + return zmap(a, a, axis=axis, ddof=ddof, nan_policy=nan_policy) + + +def gzscore(a, *, axis=0, ddof=0, nan_policy='propagate'): + """ + Compute the geometric standard score. + + Compute the geometric z score of each strictly positive value in the + sample, relative to the geometric mean and standard deviation. + Mathematically the geometric z score can be evaluated as:: + + gzscore = log(a/gmu) / log(gsigma) + + where ``gmu`` (resp. ``gsigma``) is the geometric mean (resp. standard + deviation). + + Parameters + ---------- + a : array_like + Sample data. + axis : int or None, optional + Axis along which to operate. Default is 0. If None, compute over + the whole array `a`. + ddof : int, optional + Degrees of freedom correction in the calculation of the + standard deviation. Default is 0. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. 'propagate' returns nan, + 'raise' throws an error, 'omit' performs the calculations ignoring nan + values. Default is 'propagate'. Note that when the value is 'omit', + nans in the input also propagate to the output, but they do not affect + the geometric z scores computed for the non-nan values. + + Returns + ------- + gzscore : array_like + The geometric z scores, standardized by geometric mean and geometric + standard deviation of input array `a`. + + See Also + -------- + gmean : Geometric mean + gstd : Geometric standard deviation + zscore : Standard score + + Notes + ----- + This function preserves ndarray subclasses, and works also with + matrices and masked arrays (it uses ``asanyarray`` instead of + ``asarray`` for parameters). + + .. versionadded:: 1.8 + + References + ---------- + .. [1] "Geometric standard score", *Wikipedia*, + https://en.wikipedia.org/wiki/Geometric_standard_deviation#Geometric_standard_score. 
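As the formula above suggests and the implementation below makes explicit, the geometric z-score is the ordinary z-score of the log-transformed data. A quick cross-check on a strictly positive sample:

import numpy as np
from scipy.stats import zscore, gzscore

rng = np.random.default_rng(12345)
x = rng.lognormal(mean=3.0, sigma=1.0, size=100)  # strictly positive sample

assert np.allclose(gzscore(x), zscore(np.log(x)))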
+ + Examples + -------- + Draw samples from a log-normal distribution: + + >>> import numpy as np + >>> from scipy.stats import zscore, gzscore + >>> import matplotlib.pyplot as plt + + >>> rng = np.random.default_rng() + >>> mu, sigma = 3., 1. # mean and standard deviation + >>> x = rng.lognormal(mu, sigma, size=500) + + Display the histogram of the samples: + + >>> fig, ax = plt.subplots() + >>> ax.hist(x, 50) + >>> plt.show() + + Display the histogram of the samples standardized by the classical zscore. + Distribution is rescaled but its shape is unchanged. + + >>> fig, ax = plt.subplots() + >>> ax.hist(zscore(x), 50) + >>> plt.show() + + Demonstrate that the distribution of geometric zscores is rescaled and + quasinormal: + + >>> fig, ax = plt.subplots() + >>> ax.hist(gzscore(x), 50) + >>> plt.show() + + """ + a = np.asanyarray(a) + log = ma.log if isinstance(a, ma.MaskedArray) else np.log + + return zscore(log(a), axis=axis, ddof=ddof, nan_policy=nan_policy) + + +def zmap(scores, compare, axis=0, ddof=0, nan_policy='propagate'): + """ + Calculate the relative z-scores. + + Return an array of z-scores, i.e., scores that are standardized to + zero mean and unit variance, where mean and variance are calculated + from the comparison array. + + Parameters + ---------- + scores : array_like + The input for which z-scores are calculated. + compare : array_like + The input from which the mean and standard deviation of the + normalization are taken; assumed to have the same dimension as + `scores`. + axis : int or None, optional + Axis over which mean and variance of `compare` are calculated. + Default is 0. If None, compute over the whole array `scores`. + ddof : int, optional + Degrees of freedom correction in the calculation of the + standard deviation. Default is 0. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle the occurrence of nans in `compare`. + 'propagate' returns nan, 'raise' raises an exception, 'omit' + performs the calculations ignoring nan values. Default is + 'propagate'. Note that when the value is 'omit', nans in `scores` + also propagate to the output, but they do not affect the z-scores + computed for the non-nan values. + + Returns + ------- + zscore : array_like + Z-scores, in the same shape as `scores`. + + Notes + ----- + This function preserves ndarray subclasses, and works also with + matrices and masked arrays (it uses `asanyarray` instead of + `asarray` for parameters). + + Examples + -------- + >>> from scipy.stats import zmap + >>> a = [0.5, 2.0, 2.5, 3] + >>> b = [0, 1, 2, 3, 4] + >>> zmap(a, b) + array([-1.06066017, 0. , 0.35355339, 0.70710678]) + + """ + a = np.asanyarray(compare) + + if a.size == 0: + return np.empty(a.shape) + + contains_nan, nan_policy = _contains_nan(a, nan_policy) + + if contains_nan and nan_policy == 'omit': + if axis is None: + mn = _quiet_nanmean(a.ravel()) + std = _quiet_nanstd(a.ravel(), ddof=ddof) + isconst = _isconst(a.ravel()) + else: + mn = np.apply_along_axis(_quiet_nanmean, axis, a) + std = np.apply_along_axis(_quiet_nanstd, axis, a, ddof=ddof) + isconst = np.apply_along_axis(_isconst, axis, a) + else: + mn = a.mean(axis=axis, keepdims=True) + std = a.std(axis=axis, ddof=ddof, keepdims=True) + # The intent is to check whether all elements of `a` along `axis` are + # identical. Due to finite precision arithmetic, comparing elements + # against `mn` doesn't work. Previously, this compared elements to + # `_first`, but that extracts the element at index 0 regardless of + # whether it is masked. 
As a simple fix, compare against `min`. + a0 = a.min(axis=axis, keepdims=True) + isconst = (a == a0).all(axis=axis, keepdims=True) + + # Set std deviations that are 0 to 1 to avoid division by 0. + std[isconst] = 1.0 + z = (scores - mn) / std + # Set the outputs associated with a constant input to nan. + z[np.broadcast_to(isconst, z.shape)] = np.nan + return z + + +def gstd(a, axis=0, ddof=1): + """ + Calculate the geometric standard deviation of an array. + + The geometric standard deviation describes the spread of a set of numbers + where the geometric mean is preferred. It is a multiplicative factor, and + so a dimensionless quantity. + + It is defined as the exponent of the standard deviation of ``log(a)``. + Mathematically the population geometric standard deviation can be + evaluated as:: + + gstd = exp(std(log(a))) + + .. versionadded:: 1.3.0 + + Parameters + ---------- + a : array_like + An array like object containing the sample data. + axis : int, tuple or None, optional + Axis along which to operate. Default is 0. If None, compute over + the whole array `a`. + ddof : int, optional + Degree of freedom correction in the calculation of the + geometric standard deviation. Default is 1. + + Returns + ------- + gstd : ndarray or float + An array of the geometric standard deviation. If `axis` is None or `a` + is a 1d array a float is returned. + + See Also + -------- + gmean : Geometric mean + numpy.std : Standard deviation + gzscore : Geometric standard score + + Notes + ----- + As the calculation requires the use of logarithms the geometric standard + deviation only supports strictly positive values. Any non-positive or + infinite values will raise a `ValueError`. + The geometric standard deviation is sometimes confused with the exponent of + the standard deviation, ``exp(std(a))``. Instead the geometric standard + deviation is ``exp(std(log(a)))``. + The default value for `ddof` is different to the default value (0) used + by other ddof containing functions, such as ``np.std`` and ``np.nanstd``. + + References + ---------- + .. [1] "Geometric standard deviation", *Wikipedia*, + https://en.wikipedia.org/wiki/Geometric_standard_deviation. + .. [2] Kirkwood, T. B., "Geometric means and measures of dispersion", + Biometrics, vol. 35, pp. 908-909, 1979 + + Examples + -------- + Find the geometric standard deviation of a log-normally distributed sample. + Note that the standard deviation of the distribution is one, on a + log scale this evaluates to approximately ``exp(1)``. + + >>> import numpy as np + >>> from scipy.stats import gstd + >>> rng = np.random.default_rng() + >>> sample = rng.lognormal(mean=0, sigma=1, size=1000) + >>> gstd(sample) + 2.810010162475324 + + Compute the geometric standard deviation of a multidimensional array and + of a given axis. + + >>> a = np.arange(1, 25).reshape(2, 3, 4) + >>> gstd(a, axis=None) + 2.2944076136018947 + >>> gstd(a, axis=2) + array([[1.82424757, 1.22436866, 1.13183117], + [1.09348306, 1.07244798, 1.05914985]]) + >>> gstd(a, axis=(1,2)) + array([2.12939215, 1.22120169]) + + The geometric standard deviation further handles masked arrays. 
+ + >>> a = np.arange(1, 25).reshape(2, 3, 4) + >>> ma = np.ma.masked_where(a > 16, a) + >>> ma + masked_array( + data=[[[1, 2, 3, 4], + [5, 6, 7, 8], + [9, 10, 11, 12]], + [[13, 14, 15, 16], + [--, --, --, --], + [--, --, --, --]]], + mask=[[[False, False, False, False], + [False, False, False, False], + [False, False, False, False]], + [[False, False, False, False], + [ True, True, True, True], + [ True, True, True, True]]], + fill_value=999999) + >>> gstd(ma, axis=2) + masked_array( + data=[[1.8242475707663655, 1.2243686572447428, 1.1318311657788478], + [1.0934830582350938, --, --]], + mask=[[False, False, False], + [False, True, True]], + fill_value=999999) + + """ + a = np.asanyarray(a) + log = ma.log if isinstance(a, ma.MaskedArray) else np.log + + try: + with warnings.catch_warnings(): + warnings.simplefilter("error", RuntimeWarning) + return np.exp(np.std(log(a), axis=axis, ddof=ddof)) + except RuntimeWarning as w: + if np.isinf(a).any(): + raise ValueError( + 'Infinite value encountered. The geometric standard deviation ' + 'is defined for strictly positive values only.' + ) from w + a_nan = np.isnan(a) + a_nan_any = a_nan.any() + # exclude NaN's from negativity check, but + # avoid expensive masking for arrays with no NaN + if ((a_nan_any and np.less_equal(np.nanmin(a), 0)) or + (not a_nan_any and np.less_equal(a, 0).any())): + raise ValueError( + 'Non positive value encountered. The geometric standard ' + 'deviation is defined for strictly positive values only.' + ) from w + elif 'Degrees of freedom <= 0 for slice' == str(w): + raise ValueError(w) from w + else: + # Remaining warnings don't need to be exceptions. + return np.exp(np.std(log(a, where=~a_nan), axis=axis, ddof=ddof)) + except TypeError as e: + raise ValueError( + 'Invalid array input. The inputs could not be ' + 'safely coerced to any supported types') from e + + +# Private dictionary initialized only once at module level +# See https://en.wikipedia.org/wiki/Robust_measures_of_scale +_scale_conversions = {'normal': special.erfinv(0.5) * 2.0 * math.sqrt(2.0)} + + +@_axis_nan_policy_factory( + lambda x: x, result_to_tuple=lambda x: (x,), n_outputs=1, + default_axis=None, override={'nan_propagation': False} +) +def iqr(x, axis=None, rng=(25, 75), scale=1.0, nan_policy='propagate', + interpolation='linear', keepdims=False): + r""" + Compute the interquartile range of the data along the specified axis. + + The interquartile range (IQR) is the difference between the 75th and + 25th percentile of the data. It is a measure of the dispersion + similar to standard deviation or variance, but is much more robust + against outliers [2]_. + + The ``rng`` parameter allows this function to compute other + percentile ranges than the actual IQR. For example, setting + ``rng=(0, 100)`` is equivalent to `numpy.ptp`. + + The IQR of an empty array is `np.nan`. + + .. versionadded:: 0.18.0 + + Parameters + ---------- + x : array_like + Input array or object that can be converted to an array. + axis : int or sequence of int, optional + Axis along which the range is computed. The default is to + compute the IQR for the entire array. + rng : Two-element sequence containing floats in range of [0,100] optional + Percentiles over which to compute the range. Each must be + between 0 and 100, inclusive. The default is the true IQR: + ``(25, 75)``. The order of the elements is not important. + scale : scalar or str or array_like of reals, optional + The numerical value of scale will be divided out of the final + result. 
The following string value is also recognized: + + * 'normal' : Scale by + :math:`2 \sqrt{2} erf^{-1}(\frac{1}{2}) \approx 1.349`. + + The default is 1.0. + Array-like `scale` of real dtype is also allowed, as long + as it broadcasts correctly to the output such that + ``out / scale`` is a valid operation. The output dimensions + depend on the input array, `x`, the `axis` argument, and the + `keepdims` flag. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + interpolation : str, optional + + Specifies the interpolation method to use when the percentile + boundaries lie between two data points ``i`` and ``j``. + The following options are available (default is 'linear'): + + * 'linear': ``i + (j - i)*fraction``, where ``fraction`` is the + fractional part of the index surrounded by ``i`` and ``j``. + * 'lower': ``i``. + * 'higher': ``j``. + * 'nearest': ``i`` or ``j`` whichever is nearest. + * 'midpoint': ``(i + j)/2``. + + For NumPy >= 1.22.0, the additional options provided by the ``method`` + keyword of `numpy.percentile` are also valid. + + keepdims : bool, optional + If this is set to True, the reduced axes are left in the + result as dimensions with size one. With this option, the result + will broadcast correctly against the original array `x`. + + Returns + ------- + iqr : scalar or ndarray + If ``axis=None``, a scalar is returned. If the input contains + integers or floats of smaller precision than ``np.float64``, then the + output data-type is ``np.float64``. Otherwise, the output data-type is + the same as that of the input. + + See Also + -------- + numpy.std, numpy.var + + References + ---------- + .. [1] "Interquartile range" https://en.wikipedia.org/wiki/Interquartile_range + .. [2] "Robust measures of scale" https://en.wikipedia.org/wiki/Robust_measures_of_scale + .. [3] "Quantile" https://en.wikipedia.org/wiki/Quantile + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import iqr + >>> x = np.array([[10, 7, 4], [3, 2, 1]]) + >>> x + array([[10, 7, 4], + [ 3, 2, 1]]) + >>> iqr(x) + 4.0 + >>> iqr(x, axis=0) + array([ 3.5, 2.5, 1.5]) + >>> iqr(x, axis=1) + array([ 3., 1.]) + >>> iqr(x, axis=1, keepdims=True) + array([[ 3.], + [ 1.]]) + + """ + x = asarray(x) + + # This check prevents percentile from raising an error later. Also, it is + # consistent with `np.var` and `np.std`. 
+ if not x.size: + return _get_nan(x) + + # An error may be raised here, so fail-fast, before doing lengthy + # computations, even though `scale` is not used until later + if isinstance(scale, str): + scale_key = scale.lower() + if scale_key not in _scale_conversions: + raise ValueError(f"{scale} not a valid scale for `iqr`") + scale = _scale_conversions[scale_key] + + # Select the percentile function to use based on nans and policy + contains_nan, nan_policy = _contains_nan(x, nan_policy) + + if contains_nan and nan_policy == 'omit': + percentile_func = np.nanpercentile + else: + percentile_func = np.percentile + + if len(rng) != 2: + raise TypeError("quantile range must be two element sequence") + + if np.isnan(rng).any(): + raise ValueError("range must not contain NaNs") + + rng = sorted(rng) + pct = percentile_func(x, rng, axis=axis, method=interpolation, + keepdims=keepdims) + out = np.subtract(pct[1], pct[0]) + + if scale != 1.0: + out /= scale + + return out + + +def _mad_1d(x, center, nan_policy): + # Median absolute deviation for 1-d array x. + # This is a helper function for `median_abs_deviation`; it assumes its + # arguments have been validated already. In particular, x must be a + # 1-d numpy array, center must be callable, and if nan_policy is not + # 'propagate', it is assumed to be 'omit', because 'raise' is handled + # in `median_abs_deviation`. + # No warning is generated if x is empty or all nan. + isnan = np.isnan(x) + if isnan.any(): + if nan_policy == 'propagate': + return np.nan + x = x[~isnan] + if x.size == 0: + # MAD of an empty array is nan. + return np.nan + # Edge cases have been handled, so do the basic MAD calculation. + med = center(x) + mad = np.median(np.abs(x - med)) + return mad + + +def median_abs_deviation(x, axis=0, center=np.median, scale=1.0, + nan_policy='propagate'): + r""" + Compute the median absolute deviation of the data along the given axis. + + The median absolute deviation (MAD, [1]_) computes the median over the + absolute deviations from the median. It is a measure of dispersion + similar to the standard deviation but more robust to outliers [2]_. + + The MAD of an empty array is ``np.nan``. + + .. versionadded:: 1.5.0 + + Parameters + ---------- + x : array_like + Input array or object that can be converted to an array. + axis : int or None, optional + Axis along which the range is computed. Default is 0. If None, compute + the MAD over the entire array. + center : callable, optional + A function that will return the central value. The default is to use + np.median. Any user defined function used will need to have the + function signature ``func(arr, axis)``. + scale : scalar or str, optional + The numerical value of scale will be divided out of the final + result. The default is 1.0. The string "normal" is also accepted, + and results in `scale` being the inverse of the standard normal + quantile function at 0.75, which is approximately 0.67449. + Array-like scale is also allowed, as long as it broadcasts correctly + to the output such that ``out / scale`` is a valid operation. The + output dimensions depend on the input array, `x`, and the `axis` + argument. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + mad : scalar or ndarray + If ``axis=None``, a scalar is returned. 
If the input contains + integers or floats of smaller precision than ``np.float64``, then the + output data-type is ``np.float64``. Otherwise, the output data-type is + the same as that of the input. + + See Also + -------- + numpy.std, numpy.var, numpy.median, scipy.stats.iqr, scipy.stats.tmean, + scipy.stats.tstd, scipy.stats.tvar + + Notes + ----- + The `center` argument only affects the calculation of the central value + around which the MAD is calculated. That is, passing in ``center=np.mean`` + will calculate the MAD around the mean - it will not calculate the *mean* + absolute deviation. + + The input array may contain `inf`, but if `center` returns `inf`, the + corresponding MAD for that data will be `nan`. + + References + ---------- + .. [1] "Median absolute deviation", + https://en.wikipedia.org/wiki/Median_absolute_deviation + .. [2] "Robust measures of scale", + https://en.wikipedia.org/wiki/Robust_measures_of_scale + + Examples + -------- + When comparing the behavior of `median_abs_deviation` with ``np.std``, + the latter is affected when we change a single value of an array to have an + outlier value while the MAD hardly changes: + + >>> import numpy as np + >>> from scipy import stats + >>> x = stats.norm.rvs(size=100, scale=1, random_state=123456) + >>> x.std() + 0.9973906394005013 + >>> stats.median_abs_deviation(x) + 0.82832610097857 + >>> x[0] = 345.6 + >>> x.std() + 34.42304872314415 + >>> stats.median_abs_deviation(x) + 0.8323442311590675 + + Axis handling example: + + >>> x = np.array([[10, 7, 4], [3, 2, 1]]) + >>> x + array([[10, 7, 4], + [ 3, 2, 1]]) + >>> stats.median_abs_deviation(x) + array([3.5, 2.5, 1.5]) + >>> stats.median_abs_deviation(x, axis=None) + 2.0 + + Scale normal example: + + >>> x = stats.norm.rvs(size=1000000, scale=2, random_state=123456) + >>> stats.median_abs_deviation(x) + 1.3487398527041636 + >>> stats.median_abs_deviation(x, scale='normal') + 1.9996446978061115 + + """ + if not callable(center): + raise TypeError("The argument 'center' must be callable. The given " + f"value {repr(center)} is not callable.") + + # An error may be raised here, so fail-fast, before doing lengthy + # computations, even though `scale` is not used until later + if isinstance(scale, str): + if scale.lower() == 'normal': + scale = 0.6744897501960817 # special.ndtri(0.75) + else: + raise ValueError(f"{scale} is not a valid scale value.") + + x = asarray(x) + + # Consistent with `np.var` and `np.std`. + if not x.size: + if axis is None: + return np.nan + nan_shape = tuple(item for i, item in enumerate(x.shape) if i != axis) + if nan_shape == (): + # Return nan, not array(nan) + return np.nan + return np.full(nan_shape, np.nan) + + contains_nan, nan_policy = _contains_nan(x, nan_policy) + + if contains_nan: + if axis is None: + mad = _mad_1d(x.ravel(), center, nan_policy) + else: + mad = np.apply_along_axis(_mad_1d, axis, x, center, nan_policy) + else: + if axis is None: + med = center(x, axis=None) + mad = np.median(np.abs(x - med)) + else: + # Wrap the call to center() in expand_dims() so it acts like + # keepdims=True was used. + med = np.expand_dims(center(x, axis=axis), axis) + mad = np.median(np.abs(x - med), axis=axis) + + return mad / scale + + +##################################### +# TRIMMING FUNCTIONS # +##################################### + + +SigmaclipResult = namedtuple('SigmaclipResult', ('clipped', 'lower', 'upper')) + + +def sigmaclip(a, low=4., high=4.): + """Perform iterative sigma-clipping of array elements. 
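A small property check of the iteration described below: after convergence, every retained element lies within the final lower and upper thresholds. The sample data here is made up for illustration:

import numpy as np
from scipy.stats import sigmaclip

rng = np.random.default_rng(0)
a = np.concatenate([rng.normal(10.0, 0.1, 200), [0.0, 25.0]])  # two gross outliers

clipped, lower, upper = sigmaclip(a, low=3.0, high=3.0)
assert clipped.min() >= lower and clipped.max() <= upper  # nothing left to clip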
+ + Starting from the full sample, all elements outside the critical range are + removed, i.e. all elements of the input array `c` that satisfy either of + the following conditions:: + + c < mean(c) - std(c)*low + c > mean(c) + std(c)*high + + The iteration continues with the updated sample until no + elements are outside the (updated) range. + + Parameters + ---------- + a : array_like + Data array, will be raveled if not 1-D. + low : float, optional + Lower bound factor of sigma clipping. Default is 4. + high : float, optional + Upper bound factor of sigma clipping. Default is 4. + + Returns + ------- + clipped : ndarray + Input array with clipped elements removed. + lower : float + Lower threshold value use for clipping. + upper : float + Upper threshold value use for clipping. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import sigmaclip + >>> a = np.concatenate((np.linspace(9.5, 10.5, 31), + ... np.linspace(0, 20, 5))) + >>> fact = 1.5 + >>> c, low, upp = sigmaclip(a, fact, fact) + >>> c + array([ 9.96666667, 10. , 10.03333333, 10. ]) + >>> c.var(), c.std() + (0.00055555555555555165, 0.023570226039551501) + >>> low, c.mean() - fact*c.std(), c.min() + (9.9646446609406727, 9.9646446609406727, 9.9666666666666668) + >>> upp, c.mean() + fact*c.std(), c.max() + (10.035355339059327, 10.035355339059327, 10.033333333333333) + + >>> a = np.concatenate((np.linspace(9.5, 10.5, 11), + ... np.linspace(-100, -50, 3))) + >>> c, low, upp = sigmaclip(a, 1.8, 1.8) + >>> (c == np.linspace(9.5, 10.5, 11)).all() + True + + """ + c = np.asarray(a).ravel() + delta = 1 + while delta: + c_std = c.std() + c_mean = c.mean() + size = c.size + critlower = c_mean - c_std * low + critupper = c_mean + c_std * high + c = c[(c >= critlower) & (c <= critupper)] + delta = size - c.size + + return SigmaclipResult(c, critlower, critupper) + + +def trimboth(a, proportiontocut, axis=0): + """Slice off a proportion of items from both ends of an array. + + Slice off the passed proportion of items from both ends of the passed + array (i.e., with `proportiontocut` = 0.1, slices leftmost 10% **and** + rightmost 10% of scores). The trimmed values are the lowest and + highest ones. + Slice off less if proportion results in a non-integer slice index (i.e. + conservatively slices off `proportiontocut`). + + Parameters + ---------- + a : array_like + Data to trim. + proportiontocut : float + Proportion (in range 0-1) of total data set to trim of each end. + axis : int or None, optional + Axis along which to trim data. Default is 0. If None, compute over + the whole array `a`. + + Returns + ------- + out : ndarray + Trimmed version of array `a`. The order of the trimmed content + is undefined. + + See Also + -------- + trim_mean + + Examples + -------- + Create an array of 10 values and trim 10% of those values from each end: + + >>> import numpy as np + >>> from scipy import stats + >>> a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] + >>> stats.trimboth(a, 0.1) + array([1, 3, 2, 4, 5, 6, 7, 8]) + + Note that the elements of the input array are trimmed by value, but the + output array is not necessarily sorted. + + The proportion to trim is rounded down to the nearest integer. 
For + instance, trimming 25% of the values from each end of an array of 10 + values will return an array of 6 values: + + >>> b = np.arange(10) + >>> stats.trimboth(b, 1/4).shape + (6,) + + Multidimensional arrays can be trimmed along any axis or across the entire + array: + + >>> c = [2, 4, 6, 8, 0, 1, 3, 5, 7, 9] + >>> d = np.array([a, b, c]) + >>> stats.trimboth(d, 0.4, axis=0).shape + (1, 10) + >>> stats.trimboth(d, 0.4, axis=1).shape + (3, 2) + >>> stats.trimboth(d, 0.4, axis=None).shape + (6,) + + """ + a = np.asarray(a) + + if a.size == 0: + return a + + if axis is None: + a = a.ravel() + axis = 0 + + nobs = a.shape[axis] + lowercut = int(proportiontocut * nobs) + uppercut = nobs - lowercut + if (lowercut >= uppercut): + raise ValueError("Proportion too big.") + + atmp = np.partition(a, (lowercut, uppercut - 1), axis) + + sl = [slice(None)] * atmp.ndim + sl[axis] = slice(lowercut, uppercut) + return atmp[tuple(sl)] + + +def trim1(a, proportiontocut, tail='right', axis=0): + """Slice off a proportion from ONE end of the passed array distribution. + + If `proportiontocut` = 0.1, slices off 'leftmost' or 'rightmost' + 10% of scores. The lowest or highest values are trimmed (depending on + the tail). + Slice off less if proportion results in a non-integer slice index + (i.e. conservatively slices off `proportiontocut` ). + + Parameters + ---------- + a : array_like + Input array. + proportiontocut : float + Fraction to cut off of 'left' or 'right' of distribution. + tail : {'left', 'right'}, optional + Defaults to 'right'. + axis : int or None, optional + Axis along which to trim data. Default is 0. If None, compute over + the whole array `a`. + + Returns + ------- + trim1 : ndarray + Trimmed version of array `a`. The order of the trimmed content is + undefined. + + Examples + -------- + Create an array of 10 values and trim 20% of its lowest values: + + >>> import numpy as np + >>> from scipy import stats + >>> a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] + >>> stats.trim1(a, 0.2, 'left') + array([2, 4, 3, 5, 6, 7, 8, 9]) + + Note that the elements of the input array are trimmed by value, but the + output array is not necessarily sorted. + + The proportion to trim is rounded down to the nearest integer. For + instance, trimming 25% of the values from an array of 10 values will + return an array of 8 values: + + >>> b = np.arange(10) + >>> stats.trim1(b, 1/4).shape + (8,) + + Multidimensional arrays can be trimmed along any axis or across the entire + array: + + >>> c = [2, 4, 6, 8, 0, 1, 3, 5, 7, 9] + >>> d = np.array([a, b, c]) + >>> stats.trim1(d, 0.8, axis=0).shape + (1, 10) + >>> stats.trim1(d, 0.8, axis=1).shape + (3, 2) + >>> stats.trim1(d, 0.8, axis=None).shape + (6,) + + """ + a = np.asarray(a) + if axis is None: + a = a.ravel() + axis = 0 + + nobs = a.shape[axis] + + # avoid possible corner case + if proportiontocut >= 1: + return [] + + if tail.lower() == 'right': + lowercut = 0 + uppercut = nobs - int(proportiontocut * nobs) + + elif tail.lower() == 'left': + lowercut = int(proportiontocut * nobs) + uppercut = nobs + + atmp = np.partition(a, (lowercut, uppercut - 1), axis) + + sl = [slice(None)] * atmp.ndim + sl[axis] = slice(lowercut, uppercut) + return atmp[tuple(sl)] + + +def trim_mean(a, proportiontocut, axis=0): + """Return mean of array after trimming a specified fraction of extreme values + + Removes the specified proportion of elements from *each* end of the + sorted array, then computes the mean of the remaining elements. 
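For 1-D input this is equivalent to sorting, dropping ``int(proportiontocut * n)`` elements from each end, and averaging the rest, as the Notes below also spell out. A minimal cross-check with made-up data:

import numpy as np
from scipy import stats

a = np.array([3, 9, 1, 7, 5, 2, 8, 4, 6, 10], dtype=float)
proportiontocut = 0.25

m = int(proportiontocut * a.size)           # number trimmed from each end
by_hand = np.sort(a)[m:a.size - m].mean()   # 5.5 for this data

assert np.isclose(stats.trim_mean(a, proportiontocut), by_hand)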
+ + Parameters + ---------- + a : array_like + Input array. + proportiontocut : float + Fraction of the most positive and most negative elements to remove. + When the specified proportion does not result in an integer number of + elements, the number of elements to trim is rounded down. + axis : int or None, default: 0 + Axis along which the trimmed means are computed. + If None, compute over the raveled array. + + Returns + ------- + trim_mean : ndarray + Mean of trimmed array. + + See Also + -------- + trimboth : Remove a proportion of elements from each end of an array. + tmean : Compute the mean after trimming values outside specified limits. + + Notes + ----- + For 1-D array `a`, `trim_mean` is approximately equivalent to the following + calculation:: + + import numpy as np + a = np.sort(a) + m = int(proportiontocut * len(a)) + np.mean(a[m: len(a) - m]) + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x = [1, 2, 3, 5] + >>> stats.trim_mean(x, 0.25) + 2.5 + + When the specified proportion does not result in an integer number of + elements, the number of elements to trim is rounded down. + + >>> stats.trim_mean(x, 0.24999) == np.mean(x) + True + + Use `axis` to specify the axis along which the calculation is performed. + + >>> x2 = [[1, 2, 3, 5], + ... [10, 20, 30, 50]] + >>> stats.trim_mean(x2, 0.25) + array([ 5.5, 11. , 16.5, 27.5]) + >>> stats.trim_mean(x2, 0.25, axis=1) + array([ 2.5, 25. ]) + + """ + a = np.asarray(a) + + if a.size == 0: + return np.nan + + if axis is None: + a = a.ravel() + axis = 0 + + nobs = a.shape[axis] + lowercut = int(proportiontocut * nobs) + uppercut = nobs - lowercut + if (lowercut > uppercut): + raise ValueError("Proportion too big.") + + atmp = np.partition(a, (lowercut, uppercut - 1), axis) + + sl = [slice(None)] * atmp.ndim + sl[axis] = slice(lowercut, uppercut) + return np.mean(atmp[tuple(sl)], axis=axis) + + +F_onewayResult = namedtuple('F_onewayResult', ('statistic', 'pvalue')) + + +def _create_f_oneway_nan_result(shape, axis, samples): + """ + This is a helper function for f_oneway for creating the return values + in certain degenerate conditions. It creates return values that are + all nan with the appropriate shape for the given `shape` and `axis`. + """ + axis = normalize_axis_index(axis, len(shape)) + shp = shape[:axis] + shape[axis+1:] + f = np.full(shp, fill_value=_get_nan(*samples)) + prob = f.copy() + return F_onewayResult(f[()], prob[()]) + + +def _first(arr, axis): + """Return arr[..., 0:1, ...] where 0:1 is in the `axis` position.""" + return np.take_along_axis(arr, np.array(0, ndmin=arr.ndim), axis) + + +def _f_oneway_is_too_small(samples, kwargs={}, axis=-1): + # Check this after forming alldata, so shape errors are detected + # and reported before checking for 0 length inputs. + if any(sample.shape[axis] == 0 for sample in samples): + msg = 'at least one input has length 0' + warnings.warn(stats.DegenerateDataWarning(msg), stacklevel=2) + return True + + # Must have at least one group with length greater than 1. + if all(sample.shape[axis] == 1 for sample in samples): + msg = ('all input arrays have length 1. f_oneway requires that at ' + 'least one input has length greater than 1.') + warnings.warn(stats.DegenerateDataWarning(msg), stacklevel=2) + return True + + return False + + +@_axis_nan_policy_factory( + F_onewayResult, n_samples=None, too_small=_f_oneway_is_too_small +) +def f_oneway(*samples, axis=0): + """Perform one-way ANOVA. 
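The statistic is the ratio of the between-group mean square to the within-group mean square. A 1-D sketch of that decomposition, computed directly from group means and residuals and checked against `f_oneway` (the group data is made up):

import numpy as np
from scipy import stats

groups = [np.array([10.1, 9.8, 10.4, 10.0]),
          np.array([11.2, 10.9, 11.5]),
          np.array([9.5, 9.9, 9.7, 9.4, 9.8])]

alldata = np.concatenate(groups)
grand_mean = alldata.mean()
k, n = len(groups), alldata.size

ssb = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)  # between groups
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)            # within groups

f = (ssb / (k - 1)) / (ssw / (n - k))
p = stats.f.sf(f, k - 1, n - k)

res = stats.f_oneway(*groups)
assert np.isclose(f, res.statistic) and np.isclose(p, res.pvalue)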
+ + The one-way ANOVA tests the null hypothesis that two or more groups have + the same population mean. The test is applied to samples from two or + more groups, possibly with differing sizes. + + Parameters + ---------- + sample1, sample2, ... : array_like + The sample measurements for each group. There must be at least + two arguments. If the arrays are multidimensional, then all the + dimensions of the array must be the same except for `axis`. + axis : int, optional + Axis of the input arrays along which the test is applied. + Default is 0. + + Returns + ------- + statistic : float + The computed F statistic of the test. + pvalue : float + The associated p-value from the F distribution. + + Warns + ----- + `~scipy.stats.ConstantInputWarning` + Raised if all values within each of the input arrays are identical. + In this case the F statistic is either infinite or isn't defined, + so ``np.inf`` or ``np.nan`` is returned. + + `~scipy.stats.DegenerateDataWarning` + Raised if the length of any input array is 0, or if all the input + arrays have length 1. ``np.nan`` is returned for the F statistic + and the p-value in these cases. + + Notes + ----- + The ANOVA test has important assumptions that must be satisfied in order + for the associated p-value to be valid. + + 1. The samples are independent. + 2. Each sample is from a normally distributed population. + 3. The population standard deviations of the groups are all equal. This + property is known as homoscedasticity. + + If these assumptions are not true for a given set of data, it may still + be possible to use the Kruskal-Wallis H-test (`scipy.stats.kruskal`) or + the Alexander-Govern test (`scipy.stats.alexandergovern`) although with + some loss of power. + + The length of each group must be at least one, and there must be at + least one group with length greater than one. If these conditions + are not satisfied, a warning is generated and (``np.nan``, ``np.nan``) + is returned. + + If all values in each group are identical, and there exist at least two + groups with different values, the function generates a warning and + returns (``np.inf``, 0). + + If all values in all groups are the same, function generates a warning + and returns (``np.nan``, ``np.nan``). + + The algorithm is from Heiman [2]_, pp.394-7. + + References + ---------- + .. [1] R. Lowry, "Concepts and Applications of Inferential Statistics", + Chapter 14, 2014, http://vassarstats.net/textbook/ + + .. [2] G.W. Heiman, "Understanding research methods and statistics: An + integrated introduction for psychology", Houghton, Mifflin and + Company, 2001. + + .. [3] G.H. McDonald, "Handbook of Biological Statistics", One-way ANOVA. + http://www.biostathandbook.com/onewayanova.html + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import f_oneway + + Here are some data [3]_ on a shell measurement (the length of the anterior + adductor muscle scar, standardized by dividing by length) in the mussel + Mytilus trossulus from five locations: Tillamook, Oregon; Newport, Oregon; + Petersburg, Alaska; Magadan, Russia; and Tvarminne, Finland, taken from a + much larger data set used in McDonald et al. (1991). + + >>> tillamook = [0.0571, 0.0813, 0.0831, 0.0976, 0.0817, 0.0859, 0.0735, + ... 0.0659, 0.0923, 0.0836] + >>> newport = [0.0873, 0.0662, 0.0672, 0.0819, 0.0749, 0.0649, 0.0835, + ... 0.0725] + >>> petersburg = [0.0974, 0.1352, 0.0817, 0.1016, 0.0968, 0.1064, 0.105] + >>> magadan = [0.1033, 0.0915, 0.0781, 0.0685, 0.0677, 0.0697, 0.0764, + ... 
0.0689] + >>> tvarminne = [0.0703, 0.1026, 0.0956, 0.0973, 0.1039, 0.1045] + >>> f_oneway(tillamook, newport, petersburg, magadan, tvarminne) + F_onewayResult(statistic=7.121019471642447, pvalue=0.0002812242314534544) + + `f_oneway` accepts multidimensional input arrays. When the inputs + are multidimensional and `axis` is not given, the test is performed + along the first axis of the input arrays. For the following data, the + test is performed three times, once for each column. + + >>> a = np.array([[9.87, 9.03, 6.81], + ... [7.18, 8.35, 7.00], + ... [8.39, 7.58, 7.68], + ... [7.45, 6.33, 9.35], + ... [6.41, 7.10, 9.33], + ... [8.00, 8.24, 8.44]]) + >>> b = np.array([[6.35, 7.30, 7.16], + ... [6.65, 6.68, 7.63], + ... [5.72, 7.73, 6.72], + ... [7.01, 9.19, 7.41], + ... [7.75, 7.87, 8.30], + ... [6.90, 7.97, 6.97]]) + >>> c = np.array([[3.31, 8.77, 1.01], + ... [8.25, 3.24, 3.62], + ... [6.32, 8.81, 5.19], + ... [7.48, 8.83, 8.91], + ... [8.59, 6.01, 6.07], + ... [3.07, 9.72, 7.48]]) + >>> F, p = f_oneway(a, b, c) + >>> F + array([1.75676344, 0.03701228, 3.76439349]) + >>> p + array([0.20630784, 0.96375203, 0.04733157]) + + """ + if len(samples) < 2: + raise TypeError('at least two inputs are required;' + f' got {len(samples)}.') + + # ANOVA on N groups, each in its own array + num_groups = len(samples) + + # We haven't explicitly validated axis, but if it is bad, this call of + # np.concatenate will raise np.exceptions.AxisError. The call will raise + # ValueError if the dimensions of all the arrays, except the axis + # dimension, are not the same. + alldata = np.concatenate(samples, axis=axis) + bign = alldata.shape[axis] + + # Check if the inputs are too small + if _f_oneway_is_too_small(samples): + return _create_f_oneway_nan_result(alldata.shape, axis, samples) + + # Check if all values within each group are identical, and if the common + # value in at least one group is different from that in another group. + # Based on https://github.com/scipy/scipy/issues/11669 + + # If axis=0, say, and the groups have shape (n0, ...), (n1, ...), ..., + # then is_const is a boolean array with shape (num_groups, ...). + # It is True if the values within the groups along the axis slice are + # identical. In the typical case where each input array is 1-d, is_const is + # a 1-d array with length num_groups. + is_const = np.concatenate( + [(_first(sample, axis) == sample).all(axis=axis, + keepdims=True) + for sample in samples], + axis=axis + ) + + # all_const is a boolean array with shape (...) (see previous comment). + # It is True if the values within each group along the axis slice are + # the same (e.g. [[3, 3, 3], [5, 5, 5, 5], [4, 4, 4]]). + all_const = is_const.all(axis=axis) + if all_const.any(): + msg = ("Each of the input arrays is constant; " + "the F statistic is not defined or infinite") + warnings.warn(stats.ConstantInputWarning(msg), stacklevel=2) + + # all_same_const is True if all the values in the groups along the axis=0 + # slice are the same (e.g. [[3, 3, 3], [3, 3, 3, 3], [3, 3, 3]]). + all_same_const = (_first(alldata, axis) == alldata).all(axis=axis) + + # Determine the mean of the data, and subtract that from all inputs to a + # variance (via sum_of_sq / sq_of_sum) calculation. Variance is invariant + # to a shift in location, and centering all data around zero vastly + # improves numerical stability. 
+ offset = alldata.mean(axis=axis, keepdims=True) + alldata = alldata - offset + + normalized_ss = _square_of_sums(alldata, axis=axis) / bign + + sstot = _sum_of_squares(alldata, axis=axis) - normalized_ss + + ssbn = 0 + for sample in samples: + smo_ss = _square_of_sums(sample - offset, axis=axis) + ssbn = ssbn + smo_ss / sample.shape[axis] + + # Naming: variables ending in bn/b are for "between treatments", wn/w are + # for "within treatments" + ssbn = ssbn - normalized_ss + sswn = sstot - ssbn + dfbn = num_groups - 1 + dfwn = bign - num_groups + msb = ssbn / dfbn + msw = sswn / dfwn + with np.errstate(divide='ignore', invalid='ignore'): + f = msb / msw + + prob = special.fdtrc(dfbn, dfwn, f) # equivalent to stats.f.sf + + # Fix any f values that should be inf or nan because the corresponding + # inputs were constant. + if np.isscalar(f): + if all_same_const: + f = np.nan + prob = np.nan + elif all_const: + f = np.inf + prob = 0.0 + else: + f[all_const] = np.inf + prob[all_const] = 0.0 + f[all_same_const] = np.nan + prob[all_same_const] = np.nan + + return F_onewayResult(f, prob) + + +@dataclass +class AlexanderGovernResult: + statistic: float + pvalue: float + + +@_axis_nan_policy_factory( + AlexanderGovernResult, n_samples=None, + result_to_tuple=lambda x: (x.statistic, x.pvalue), + too_small=1 +) +def alexandergovern(*samples, nan_policy='propagate'): + """Performs the Alexander Govern test. + + The Alexander-Govern approximation tests the equality of k independent + means in the face of heterogeneity of variance. The test is applied to + samples from two or more groups, possibly with differing sizes. + + Parameters + ---------- + sample1, sample2, ... : array_like + The sample measurements for each group. There must be at least + two samples. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + res : AlexanderGovernResult + An object with attributes: + + statistic : float + The computed A statistic of the test. + pvalue : float + The associated p-value from the chi-squared distribution. + + Warns + ----- + `~scipy.stats.ConstantInputWarning` + Raised if an input is a constant array. The statistic is not defined + in this case, so ``np.nan`` is returned. + + See Also + -------- + f_oneway : one-way ANOVA + + Notes + ----- + The use of this test relies on several assumptions. + + 1. The samples are independent. + 2. Each sample is from a normally distributed population. + 3. Unlike `f_oneway`, this test does not assume on homoscedasticity, + instead relaxing the assumption of equal variances. + + Input samples must be finite, one dimensional, and with size greater than + one. + + References + ---------- + .. [1] Alexander, Ralph A., and Diane M. Govern. "A New and Simpler + Approximation for ANOVA under Variance Heterogeneity." Journal + of Educational Statistics, vol. 19, no. 2, 1994, pp. 91-101. + JSTOR, www.jstor.org/stable/1165140. Accessed 12 Sept. 2020. + + Examples + -------- + >>> from scipy.stats import alexandergovern + + Here are some data on annual percentage rate of interest charged on + new car loans at nine of the largest banks in four American cities + taken from the National Institute of Standards and Technology's + ANOVA dataset. 
+ + We use `alexandergovern` to test the null hypothesis that all cities + have the same mean APR against the alternative that the cities do not + all have the same mean APR. We decide that a significance level of 5% + is required to reject the null hypothesis in favor of the alternative. + + >>> atlanta = [13.75, 13.75, 13.5, 13.5, 13.0, 13.0, 13.0, 12.75, 12.5] + >>> chicago = [14.25, 13.0, 12.75, 12.5, 12.5, 12.4, 12.3, 11.9, 11.9] + >>> houston = [14.0, 14.0, 13.51, 13.5, 13.5, 13.25, 13.0, 12.5, 12.5] + >>> memphis = [15.0, 14.0, 13.75, 13.59, 13.25, 12.97, 12.5, 12.25, + ... 11.89] + >>> alexandergovern(atlanta, chicago, houston, memphis) + AlexanderGovernResult(statistic=4.65087071883494, + pvalue=0.19922132490385214) + + The p-value is 0.1992, indicating a nearly 20% chance of observing + such an extreme value of the test statistic under the null hypothesis. + This exceeds 5%, so we do not reject the null hypothesis in favor of + the alternative. + + """ + samples = _alexandergovern_input_validation(samples, nan_policy) + + if np.any([(sample == sample[0]).all() for sample in samples]): + msg = "An input array is constant; the statistic is not defined." + warnings.warn(stats.ConstantInputWarning(msg), stacklevel=2) + return AlexanderGovernResult(np.nan, np.nan) + + # The following formula numbers reference the equation described on + # page 92 by Alexander, Govern. Formulas 5, 6, and 7 describe other + # tests that serve as the basis for equation (8) but are not needed + # to perform the test. + + # precalculate mean and length of each sample + lengths = np.array([len(sample) for sample in samples]) + means = np.array([np.mean(sample) for sample in samples]) + + # (1) determine standard error of the mean for each sample + standard_errors = [np.std(sample, ddof=1) / np.sqrt(length) + for sample, length in zip(samples, lengths)] + + # (2) define a weight for each sample + inv_sq_se = 1 / np.square(standard_errors) + weights = inv_sq_se / np.sum(inv_sq_se) + + # (3) determine variance-weighted estimate of the common mean + var_w = np.sum(weights * means) + + # (4) determine one-sample t statistic for each group + t_stats = (means - var_w)/standard_errors + + # calculate parameters to be used in transformation + v = lengths - 1 + a = v - .5 + b = 48 * a**2 + c = (a * np.log(1 + (t_stats ** 2)/v))**.5 + + # (8) perform a normalizing transformation on t statistic + z = (c + ((c**3 + 3*c)/b) - + ((4*c**7 + 33*c**5 + 240*c**3 + 855*c) / + (b**2*10 + 8*b*c**4 + 1000*b))) + + # (9) calculate statistic + A = np.sum(np.square(z)) + + # "[the p value is determined from] central chi-square random deviates + # with k - 1 degrees of freedom". Alexander, Govern (94) + p = distributions.chi2.sf(A, len(samples) - 1) + return AlexanderGovernResult(A, p) + + +def _alexandergovern_input_validation(samples, nan_policy): + if len(samples) < 2: + raise TypeError(f"2 or more inputs required, got {len(samples)}") + + for sample in samples: + if np.size(sample) <= 1: + raise ValueError("Input sample size must be greater than one.") + if np.isinf(sample).any(): + raise ValueError("Input samples must be finite.") + + return samples + + +def _pearsonr_fisher_ci(r, n, confidence_level, alternative): + """ + Compute the confidence interval for Pearson's R. + + Fisher's transformation is used to compute the confidence interval + (https://en.wikipedia.org/wiki/Fisher_transformation). 
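# Illustrative sketch (not part of the SciPy source): the Alexander-Govern A
# statistic can be reproduced with plain NumPy by following the numbered steps
# above. The helper name `alexandergovern_sketch` is hypothetical; the result
# should match `scipy.stats.alexandergovern` on the same data.
import numpy as np
from scipy import stats

def alexandergovern_sketch(*samples):
    samples = [np.asarray(s, dtype=float) for s in samples]
    lengths = np.array([s.size for s in samples])
    means = np.array([s.mean() for s in samples])
    se = np.array([s.std(ddof=1) / np.sqrt(s.size) for s in samples])  # (1)
    weights = (1 / se**2) / np.sum(1 / se**2)                          # (2)
    common_mean = np.sum(weights * means)                              # (3)
    t = (means - common_mean) / se                                     # (4)
    v = lengths - 1
    a = v - 0.5
    b = 48 * a**2
    c = np.sqrt(a * np.log(1 + t**2 / v))
    z = (c + (c**3 + 3*c) / b                                          # (8)
         - (4*c**7 + 33*c**5 + 240*c**3 + 855*c) / (10*b**2 + 8*b*c**4 + 1000*b))
    A = np.sum(z**2)                                                   # (9)
    return A, stats.chi2.sf(A, len(samples) - 1)

atlanta = [13.75, 13.75, 13.5, 13.5, 13.0, 13.0, 13.0, 12.75, 12.5]
chicago = [14.25, 13.0, 12.75, 12.5, 12.5, 12.4, 12.3, 11.9, 11.9]
print(alexandergovern_sketch(atlanta, chicago))
print(stats.alexandergovern(atlanta, chicago))   # should agree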
+ """ + if r == 1: + zr = np.inf + elif r == -1: + zr = -np.inf + else: + zr = np.arctanh(r) + + if n > 3: + se = np.sqrt(1 / (n - 3)) + if alternative == "two-sided": + h = special.ndtri(0.5 + confidence_level/2) + zlo = zr - h*se + zhi = zr + h*se + rlo = np.tanh(zlo) + rhi = np.tanh(zhi) + elif alternative == "less": + h = special.ndtri(confidence_level) + zhi = zr + h*se + rhi = np.tanh(zhi) + rlo = -1.0 + else: + # alternative == "greater": + h = special.ndtri(confidence_level) + zlo = zr - h*se + rlo = np.tanh(zlo) + rhi = 1.0 + else: + rlo, rhi = -1.0, 1.0 + + return ConfidenceInterval(low=rlo, high=rhi) + + +def _pearsonr_bootstrap_ci(confidence_level, method, x, y, alternative): + """ + Compute the confidence interval for Pearson's R using the bootstrap. + """ + def statistic(x, y): + statistic, _ = pearsonr(x, y) + return statistic + + res = bootstrap((x, y), statistic, confidence_level=confidence_level, + paired=True, alternative=alternative, **method._asdict()) + # for one-sided confidence intervals, bootstrap gives +/- inf on one side + res.confidence_interval = np.clip(res.confidence_interval, -1, 1) + + return ConfidenceInterval(*res.confidence_interval) + + +ConfidenceInterval = namedtuple('ConfidenceInterval', ['low', 'high']) + +PearsonRResultBase = _make_tuple_bunch('PearsonRResultBase', + ['statistic', 'pvalue'], []) + + +class PearsonRResult(PearsonRResultBase): + """ + Result of `scipy.stats.pearsonr` + + Attributes + ---------- + statistic : float + Pearson product-moment correlation coefficient. + pvalue : float + The p-value associated with the chosen alternative. + + Methods + ------- + confidence_interval + Computes the confidence interval of the correlation + coefficient `statistic` for the given confidence level. + + """ + def __init__(self, statistic, pvalue, alternative, n, x, y): + super().__init__(statistic, pvalue) + self._alternative = alternative + self._n = n + self._x = x + self._y = y + + # add alias for consistency with other correlation functions + self.correlation = statistic + + def confidence_interval(self, confidence_level=0.95, method=None): + """ + The confidence interval for the correlation coefficient. + + Compute the confidence interval for the correlation coefficient + ``statistic`` with the given confidence level. + + If `method` is not provided, + The confidence interval is computed using the Fisher transformation + F(r) = arctanh(r) [1]_. When the sample pairs are drawn from a + bivariate normal distribution, F(r) approximately follows a normal + distribution with standard error ``1/sqrt(n - 3)``, where ``n`` is the + length of the original samples along the calculation axis. When + ``n <= 3``, this approximation does not yield a finite, real standard + error, so we define the confidence interval to be -1 to 1. + + If `method` is an instance of `BootstrapMethod`, the confidence + interval is computed using `scipy.stats.bootstrap` with the provided + configuration options and other appropriate settings. In some cases, + confidence limits may be NaN due to a degenerate resample, and this is + typical for very small samples (~6 observations). + + Parameters + ---------- + confidence_level : float + The confidence level for the calculation of the correlation + coefficient confidence interval. Default is 0.95. + + method : BootstrapMethod, optional + Defines the method used to compute the confidence interval. See + method description for details. + + .. 
versionadded:: 1.11.0 + + Returns + ------- + ci : namedtuple + The confidence interval is returned in a ``namedtuple`` with + fields `low` and `high`. + + References + ---------- + .. [1] "Pearson correlation coefficient", Wikipedia, + https://en.wikipedia.org/wiki/Pearson_correlation_coefficient + """ + if isinstance(method, BootstrapMethod): + ci = _pearsonr_bootstrap_ci(confidence_level, method, + self._x, self._y, self._alternative) + elif method is None: + ci = _pearsonr_fisher_ci(self.statistic, self._n, confidence_level, + self._alternative) + else: + message = ('`method` must be an instance of `BootstrapMethod` ' + 'or None.') + raise ValueError(message) + return ci + +def pearsonr(x, y, *, alternative='two-sided', method=None): + r""" + Pearson correlation coefficient and p-value for testing non-correlation. + + The Pearson correlation coefficient [1]_ measures the linear relationship + between two datasets. Like other correlation + coefficients, this one varies between -1 and +1 with 0 implying no + correlation. Correlations of -1 or +1 imply an exact linear relationship. + Positive correlations imply that as x increases, so does y. Negative + correlations imply that as x increases, y decreases. + + This function also performs a test of the null hypothesis that the + distributions underlying the samples are uncorrelated and normally + distributed. (See Kowalski [3]_ + for a discussion of the effects of non-normality of the input on the + distribution of the correlation coefficient.) + The p-value roughly indicates the probability of an uncorrelated system + producing datasets that have a Pearson correlation at least as extreme + as the one computed from these datasets. + + Parameters + ---------- + x : (N,) array_like + Input array. + y : (N,) array_like + Input array. + alternative : {'two-sided', 'greater', 'less'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + The following options are available: + + * 'two-sided': the correlation is nonzero + * 'less': the correlation is negative (less than zero) + * 'greater': the correlation is positive (greater than zero) + + .. versionadded:: 1.9.0 + method : ResamplingMethod, optional + Defines the method used to compute the p-value. If `method` is an + instance of `PermutationMethod`/`MonteCarloMethod`, the p-value is + computed using + `scipy.stats.permutation_test`/`scipy.stats.monte_carlo_test` with the + provided configuration options and other appropriate settings. + Otherwise, the p-value is computed as documented in the notes. + + .. versionadded:: 1.11.0 + + Returns + ------- + result : `~scipy.stats._result_classes.PearsonRResult` + An object with the following attributes: + + statistic : float + Pearson product-moment correlation coefficient. + pvalue : float + The p-value associated with the chosen alternative. + + The object has the following method: + + confidence_interval(confidence_level, method) + This computes the confidence interval of the correlation + coefficient `statistic` for the given confidence level. + The confidence interval is returned in a ``namedtuple`` with + fields `low` and `high`. If `method` is not provided, the + confidence interval is computed using the Fisher transformation + [1]_. If `method` is an instance of `BootstrapMethod`, the + confidence interval is computed using `scipy.stats.bootstrap` with + the provided configuration options and other appropriate settings. 
+ In some cases, confidence limits may be NaN due to a degenerate + resample, and this is typical for very small samples (~6 + observations). + + Warns + ----- + `~scipy.stats.ConstantInputWarning` + Raised if an input is a constant array. The correlation coefficient + is not defined in this case, so ``np.nan`` is returned. + + `~scipy.stats.NearConstantInputWarning` + Raised if an input is "nearly" constant. The array ``x`` is considered + nearly constant if ``norm(x - mean(x)) < 1e-13 * abs(mean(x))``. + Numerical errors in the calculation ``x - mean(x)`` in this case might + result in an inaccurate calculation of r. + + See Also + -------- + spearmanr : Spearman rank-order correlation coefficient. + kendalltau : Kendall's tau, a correlation measure for ordinal data. + + Notes + ----- + The correlation coefficient is calculated as follows: + + .. math:: + + r = \frac{\sum (x - m_x) (y - m_y)} + {\sqrt{\sum (x - m_x)^2 \sum (y - m_y)^2}} + + where :math:`m_x` is the mean of the vector x and :math:`m_y` is + the mean of the vector y. + + Under the assumption that x and y are drawn from + independent normal distributions (so the population correlation coefficient + is 0), the probability density function of the sample correlation + coefficient r is ([1]_, [2]_): + + .. math:: + f(r) = \frac{{(1-r^2)}^{n/2-2}}{\mathrm{B}(\frac{1}{2},\frac{n}{2}-1)} + + where n is the number of samples, and B is the beta function. This + is sometimes referred to as the exact distribution of r. This is + the distribution that is used in `pearsonr` to compute the p-value when + the `method` parameter is left at its default value (None). + The distribution is a beta distribution on the interval [-1, 1], + with equal shape parameters a = b = n/2 - 1. In terms of SciPy's + implementation of the beta distribution, the distribution of r is:: + + dist = scipy.stats.beta(n/2 - 1, n/2 - 1, loc=-1, scale=2) + + The default p-value returned by `pearsonr` is a two-sided p-value. For a + given sample with correlation coefficient r, the p-value is + the probability that abs(r') of a random sample x' and y' drawn from + the population with zero correlation would be greater than or equal + to abs(r). In terms of the object ``dist`` shown above, the p-value + for a given r and length n can be computed as:: + + p = 2*dist.cdf(-abs(r)) + + When n is 2, the above continuous distribution is not well-defined. + One can interpret the limit of the beta distribution as the shape + parameters a and b approach a = b = 0 as a discrete distribution with + equal probability masses at r = 1 and r = -1. More directly, one + can observe that, given the data x = [x1, x2] and y = [y1, y2], and + assuming x1 != x2 and y1 != y2, the only possible values for r are 1 + and -1. Because abs(r') for any sample x' and y' with length 2 will + be 1, the two-sided p-value for a sample of length 2 is always 1. + + For backwards compatibility, the object that is returned also behaves + like a tuple of length two that holds the statistic and the p-value. + + References + ---------- + .. [1] "Pearson correlation coefficient", Wikipedia, + https://en.wikipedia.org/wiki/Pearson_correlation_coefficient + .. [2] Student, "Probable error of a correlation coefficient", + Biometrika, Volume 6, Issue 2-3, 1 September 1908, pp. 302-310. + .. [3] C. J. Kowalski, "On the Effects of Non-Normality on the Distribution + of the Sample Product-Moment Correlation Coefficient" + Journal of the Royal Statistical Society. Series C (Applied + Statistics), Vol. 21, No. 
1 (1972), pp. 1-12. + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x, y = [1, 2, 3, 4, 5, 6, 7], [10, 9, 2.5, 6, 4, 3, 2] + >>> res = stats.pearsonr(x, y) + >>> res + PearsonRResult(statistic=-0.828503883588428, pvalue=0.021280260007523286) + + To perform an exact permutation version of the test: + + >>> rng = np.random.default_rng(7796654889291491997) + >>> method = stats.PermutationMethod(n_resamples=np.inf, random_state=rng) + >>> stats.pearsonr(x, y, method=method) + PearsonRResult(statistic=-0.828503883588428, pvalue=0.028174603174603175) + + To perform the test under the null hypothesis that the data were drawn from + *uniform* distributions: + + >>> method = stats.MonteCarloMethod(rvs=(rng.uniform, rng.uniform)) + >>> stats.pearsonr(x, y, method=method) + PearsonRResult(statistic=-0.828503883588428, pvalue=0.0188) + + To produce an asymptotic 90% confidence interval: + + >>> res.confidence_interval(confidence_level=0.9) + ConfidenceInterval(low=-0.9644331982722841, high=-0.3460237473272273) + + And for a bootstrap confidence interval: + + >>> method = stats.BootstrapMethod(method='BCa', random_state=rng) + >>> res.confidence_interval(confidence_level=0.9, method=method) + ConfidenceInterval(low=-0.9983163756488651, high=-0.22771001702132443) # may vary + + There is a linear dependence between x and y if y = a + b*x + e, where + a,b are constants and e is a random error term, assumed to be independent + of x. For simplicity, assume that x is standard normal, a=0, b=1 and let + e follow a normal distribution with mean zero and standard deviation s>0. + + >>> rng = np.random.default_rng() + >>> s = 0.5 + >>> x = stats.norm.rvs(size=500, random_state=rng) + >>> e = stats.norm.rvs(scale=s, size=500, random_state=rng) + >>> y = x + e + >>> stats.pearsonr(x, y).statistic + 0.9001942438244763 + + This should be close to the exact value given by + + >>> 1/np.sqrt(1 + s**2) + 0.8944271909999159 + + For s=0.5, we observe a high level of correlation. In general, a large + variance of the noise reduces the correlation, while the correlation + approaches one as the variance of the error goes to zero. + + It is important to keep in mind that no correlation does not imply + independence unless (x, y) is jointly normal. Correlation can even be zero + when there is a very simple dependence structure: if X follows a + standard normal distribution, let y = abs(x). Note that the correlation + between x and y is zero. Indeed, since the expectation of x is zero, + cov(x, y) = E[x*y]. By definition, this equals E[x*abs(x)] which is zero + by symmetry. The following lines of code illustrate this observation: + + >>> y = np.abs(x) + >>> stats.pearsonr(x, y) + PearsonRResult(statistic=-0.05444919272687482, pvalue=0.22422294836207743) + + A non-zero correlation coefficient can be misleading. For example, if X has + a standard normal distribution, define y = x if x < 0 and y = 0 otherwise. + A simple calculation shows that corr(x, y) = sqrt(2/Pi) = 0.797..., + implying a high level of correlation: + + >>> y = np.where(x < 0, x, 0) + >>> stats.pearsonr(x, y) + PearsonRResult(statistic=0.861985781588, pvalue=4.813432002751103e-149) + + This is unintuitive since there is no dependence of x and y if x is larger + than zero which happens in about half of the cases if we sample x and y. 
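# Illustrative sketch (not part of the SciPy source): the asymptotic 90%
# confidence interval shown above can be reproduced by hand with the Fisher
# transformation described in `confidence_interval`; variable names here are
# arbitrary.
import numpy as np
from scipy import stats
from scipy.special import ndtri

x = [1, 2, 3, 4, 5, 6, 7]
y = [10, 9, 2.5, 6, 4, 3, 2]
res = stats.pearsonr(x, y)

n = len(x)
zr = np.arctanh(res.statistic)       # Fisher z-transform of r
se = 1 / np.sqrt(n - 3)              # its approximate standard error (valid for n > 3)
h = ndtri(0.5 + 0.9 / 2)             # two-sided critical value for 90% confidence
print(np.tanh(zr - h * se), np.tanh(zr + h * se))
print(res.confidence_interval(confidence_level=0.9))   # should agree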
+ + """ + n = len(x) + if n != len(y): + raise ValueError('x and y must have the same length.') + + if n < 2: + raise ValueError('x and y must have length at least 2.') + + x = np.asarray(x) + y = np.asarray(y) + + if (np.issubdtype(x.dtype, np.complexfloating) + or np.issubdtype(y.dtype, np.complexfloating)): + raise ValueError('This function does not support complex data') + + # If an input is constant, the correlation coefficient is not defined. + if (x == x[0]).all() or (y == y[0]).all(): + msg = ("An input array is constant; the correlation coefficient " + "is not defined.") + warnings.warn(stats.ConstantInputWarning(msg), stacklevel=2) + result = PearsonRResult(statistic=np.nan, pvalue=np.nan, n=n, + alternative=alternative, x=x, y=y) + return result + + if isinstance(method, PermutationMethod): + def statistic(y): + statistic, _ = pearsonr(x, y, alternative=alternative) + return statistic + + res = permutation_test((y,), statistic, permutation_type='pairings', + alternative=alternative, **method._asdict()) + + return PearsonRResult(statistic=res.statistic, pvalue=res.pvalue, n=n, + alternative=alternative, x=x, y=y) + elif isinstance(method, MonteCarloMethod): + def statistic(x, y): + statistic, _ = pearsonr(x, y, alternative=alternative) + return statistic + + if method.rvs is None: + rng = np.random.default_rng() + method.rvs = rng.normal, rng.normal + + res = monte_carlo_test((x, y,), statistic=statistic, + alternative=alternative, **method._asdict()) + + return PearsonRResult(statistic=res.statistic, pvalue=res.pvalue, n=n, + alternative=alternative, x=x, y=y) + elif method is not None: + message = ('`method` must be an instance of `PermutationMethod`,' + '`MonteCarloMethod`, or None.') + raise ValueError(message) + + # dtype is the data type for the calculations. This expression ensures + # that the data type is at least 64 bit floating point. It might have + # more precision if the input is, for example, np.longdouble. + dtype = type(1.0 + x[0] + y[0]) + + if n == 2: + r = dtype(np.sign(x[1] - x[0])*np.sign(y[1] - y[0])) + result = PearsonRResult(statistic=r, pvalue=1.0, n=n, + alternative=alternative, x=x, y=y) + return result + + xmean = x.mean(dtype=dtype) + ymean = y.mean(dtype=dtype) + + # By using `astype(dtype)`, we ensure that the intermediate calculations + # use at least 64 bit floating point. + xm = x.astype(dtype) - xmean + ym = y.astype(dtype) - ymean + + # Unlike np.linalg.norm or the expression sqrt((xm*xm).sum()), + # scipy.linalg.norm(xm) does not overflow if xm is, for example, + # [-5e210, 5e210, 3e200, -3e200] + normxm = linalg.norm(xm) + normym = linalg.norm(ym) + + threshold = 1e-13 + if normxm < threshold*abs(xmean) or normym < threshold*abs(ymean): + # If all the values in x (likewise y) are very close to the mean, + # the loss of precision that occurs in the subtraction xm = x - xmean + # might result in large errors in r. + msg = ("An input array is nearly constant; the computed " + "correlation coefficient may be inaccurate.") + warnings.warn(stats.NearConstantInputWarning(msg), stacklevel=2) + + r = np.dot(xm/normxm, ym/normym) + + # Presumably, if abs(r) > 1, then it is only some small artifact of + # floating point arithmetic. + r = max(min(r, 1.0), -1.0) + + # As explained in the docstring, the distribution of `r` under the null + # hypothesis is the beta distribution on (-1, 1) with a = b = n/2 - 1. 
+ ab = n/2 - 1 + dist = stats.beta(ab, ab, loc=-1, scale=2) + pvalue = _get_pvalue(r, dist, alternative) + + return PearsonRResult(statistic=r, pvalue=pvalue, n=n, + alternative=alternative, x=x, y=y) + + +def fisher_exact(table, alternative='two-sided'): + """Perform a Fisher exact test on a 2x2 contingency table. + + The null hypothesis is that the true odds ratio of the populations + underlying the observations is one, and the observations were sampled + from these populations under a condition: the marginals of the + resulting table must equal those of the observed table. The statistic + returned is the unconditional maximum likelihood estimate of the odds + ratio, and the p-value is the probability under the null hypothesis of + obtaining a table at least as extreme as the one that was actually + observed. There are other possible choices of statistic and two-sided + p-value definition associated with Fisher's exact test; please see the + Notes for more information. + + Parameters + ---------- + table : array_like of ints + A 2x2 contingency table. Elements must be non-negative integers. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the odds ratio of the underlying population is not one + * 'less': the odds ratio of the underlying population is less than one + * 'greater': the odds ratio of the underlying population is greater + than one + + See the Notes for more details. + + Returns + ------- + res : SignificanceResult + An object containing attributes: + + statistic : float + This is the prior odds ratio, not a posterior estimate. + pvalue : float + The probability under the null hypothesis of obtaining a + table at least as extreme as the one that was actually observed. + + See Also + -------- + chi2_contingency : Chi-square test of independence of variables in a + contingency table. This can be used as an alternative to + `fisher_exact` when the numbers in the table are large. + contingency.odds_ratio : Compute the odds ratio (sample or conditional + MLE) for a 2x2 contingency table. + barnard_exact : Barnard's exact test, which is a more powerful alternative + than Fisher's exact test for 2x2 contingency tables. + boschloo_exact : Boschloo's exact test, which is a more powerful + alternative than Fisher's exact test for 2x2 contingency tables. + + Notes + ----- + *Null hypothesis and p-values* + + The null hypothesis is that the true odds ratio of the populations + underlying the observations is one, and the observations were sampled at + random from these populations under a condition: the marginals of the + resulting table must equal those of the observed table. Equivalently, + the null hypothesis is that the input table is from the hypergeometric + distribution with parameters (as used in `hypergeom`) + ``M = a + b + c + d``, ``n = a + b`` and ``N = a + c``, where the + input table is ``[[a, b], [c, d]]``. This distribution has support + ``max(0, N + n - M) <= x <= min(N, n)``, or, in terms of the values + in the input table, ``min(0, a - d) <= x <= a + min(b, c)``. 
``x`` + can be interpreted as the upper-left element of a 2x2 table, so the + tables in the distribution have form:: + + [ x n - x ] + [N - x M - (n + N) + x] + + For example, if:: + + table = [6 2] + [1 4] + + then the support is ``2 <= x <= 7``, and the tables in the distribution + are:: + + [2 6] [3 5] [4 4] [5 3] [6 2] [7 1] + [5 0] [4 1] [3 2] [2 3] [1 4] [0 5] + + The probability of each table is given by the hypergeometric distribution + ``hypergeom.pmf(x, M, n, N)``. For this example, these are (rounded to + three significant digits):: + + x 2 3 4 5 6 7 + p 0.0163 0.163 0.408 0.326 0.0816 0.00466 + + These can be computed with:: + + >>> import numpy as np + >>> from scipy.stats import hypergeom + >>> table = np.array([[6, 2], [1, 4]]) + >>> M = table.sum() + >>> n = table[0].sum() + >>> N = table[:, 0].sum() + >>> start, end = hypergeom.support(M, n, N) + >>> hypergeom.pmf(np.arange(start, end+1), M, n, N) + array([0.01631702, 0.16317016, 0.40792541, 0.32634033, 0.08158508, + 0.004662 ]) + + The two-sided p-value is the probability that, under the null hypothesis, + a random table would have a probability equal to or less than the + probability of the input table. For our example, the probability of + the input table (where ``x = 6``) is 0.0816. The x values where the + probability does not exceed this are 2, 6 and 7, so the two-sided p-value + is ``0.0163 + 0.0816 + 0.00466 ~= 0.10256``:: + + >>> from scipy.stats import fisher_exact + >>> res = fisher_exact(table, alternative='two-sided') + >>> res.pvalue + 0.10256410256410257 + + The one-sided p-value for ``alternative='greater'`` is the probability + that a random table has ``x >= a``, which in our example is ``x >= 6``, + or ``0.0816 + 0.00466 ~= 0.08626``:: + + >>> res = fisher_exact(table, alternative='greater') + >>> res.pvalue + 0.08624708624708627 + + This is equivalent to computing the survival function of the + distribution at ``x = 5`` (one less than ``x`` from the input table, + because we want to include the probability of ``x = 6`` in the sum):: + + >>> hypergeom.sf(5, M, n, N) + 0.08624708624708627 + + For ``alternative='less'``, the one-sided p-value is the probability + that a random table has ``x <= a``, (i.e. ``x <= 6`` in our example), + or ``0.0163 + 0.163 + 0.408 + 0.326 + 0.0816 ~= 0.9949``:: + + >>> res = fisher_exact(table, alternative='less') + >>> res.pvalue + 0.9953379953379957 + + This is equivalent to computing the cumulative distribution function + of the distribution at ``x = 6``: + + >>> hypergeom.cdf(6, M, n, N) + 0.9953379953379957 + + *Odds ratio* + + The calculated odds ratio is different from the value computed by the + R function ``fisher.test``. This implementation returns the "sample" + or "unconditional" maximum likelihood estimate, while ``fisher.test`` + in R uses the conditional maximum likelihood estimate. To compute the + conditional maximum likelihood estimate of the odds ratio, use + `scipy.stats.contingency.odds_ratio`. + + References + ---------- + .. [1] Fisher, Sir Ronald A, "The Design of Experiments: + Mathematics of a Lady Tasting Tea." ISBN 978-0-486-41151-4, 1935. + .. [2] "Fisher's exact test", + https://en.wikipedia.org/wiki/Fisher's_exact_test + .. [3] Emma V. Low et al. "Identifying the lowest effective dose of + acetazolamide for the prophylaxis of acute mountain sickness: + systematic review and meta-analysis." + BMJ, 345, :doi:`10.1136/bmj.e6779`, 2012. 
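# Illustrative sketch (not part of the SciPy source): the two-sided p-value
# described above can be reproduced by summing the hypergeometric probabilities
# of all tables that are no more likely than the observed one; variable names
# are arbitrary.
import numpy as np
from scipy.stats import hypergeom, fisher_exact

table = np.array([[6, 2], [1, 4]])
M, n, N = table.sum(), table[0].sum(), table[:, 0].sum()
start, end = hypergeom.support(M, n, N)
xs = np.arange(start, end + 1)
probs = hypergeom.pmf(xs, M, n, N)
p_obs = hypergeom.pmf(table[0, 0], M, n, N)
# A small relative tolerance guards against floating-point ties, analogous to
# the gamma factor used in the implementation below.
print(probs[probs <= p_obs * (1 + 1e-14)].sum())
print(fisher_exact(table, alternative='two-sided').pvalue)   # should agree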
+ + Examples + -------- + In [3]_, the effective dose of acetazolamide for the prophylaxis of acute + mountain sickness was investigated. The study notably concluded: + + Acetazolamide 250 mg, 500 mg, and 750 mg daily were all efficacious for + preventing acute mountain sickness. Acetazolamide 250 mg was the lowest + effective dose with available evidence for this indication. + + The following table summarizes the results of the experiment in which + some participants took a daily dose of acetazolamide 250 mg while others + took a placebo. + Cases of acute mountain sickness were recorded:: + + Acetazolamide Control/Placebo + Acute mountain sickness 7 17 + No 15 5 + + + Is there evidence that acetazolamide 250 mg reduces the risk of + acute mountain sickness? + We begin by formulating a null hypothesis :math:`H_0`: + + The odds of experiencing acute mountain sickness are the same with + the acetazolamide treatment as they are with placebo. + + Let's assess the plausibility of this hypothesis with + Fisher's test. + + >>> from scipy.stats import fisher_exact + >>> res = fisher_exact([[7, 17], [15, 5]], alternative='less') + >>> res.statistic + 0.13725490196078433 + >>> res.pvalue + 0.0028841933752349743 + + Using a significance level of 5%, we would reject the null hypothesis in + favor of the alternative hypothesis: "The odds of experiencing acute + mountain sickness with acetazolamide treatment are less than the odds of + experiencing acute mountain sickness with placebo." + + .. note:: + + Because the null distribution of Fisher's exact test is formed under + the assumption that both row and column sums are fixed, the result of + the test is conservative when applied to an experiment in which the + row sums are not fixed. + + In this case, the column sums are fixed; there are 22 subjects in each + group. But the number of cases of acute mountain sickness is not + (and cannot be) fixed before conducting the experiment. It is a + consequence. + + Boschloo's test does not depend on the assumption that the row sums + are fixed, and consequently, it provides a more powerful test in this + situation. + + >>> from scipy.stats import boschloo_exact + >>> res = boschloo_exact([[7, 17], [15, 5]], alternative='less') + >>> res.statistic + 0.0028841933752349743 + >>> res.pvalue + 0.0015141406667567101 + + We verify that the p-value is less than that of `fisher_exact`. + + """ + hypergeom = distributions.hypergeom + # int32 is not enough for the algorithm + c = np.asarray(table, dtype=np.int64) + if not c.shape == (2, 2): + raise ValueError("The input `table` must be of shape (2, 2).") + + if np.any(c < 0): + raise ValueError("All values in `table` must be nonnegative.") + + if 0 in c.sum(axis=0) or 0 in c.sum(axis=1): + # If both values in a row or column are zero, the p-value is 1 and + # the odds ratio is NaN. + return SignificanceResult(np.nan, 1.0) + + if c[1, 0] > 0 and c[0, 1] > 0: + oddsratio = c[0, 0] * c[1, 1] / (c[1, 0] * c[0, 1]) + else: + oddsratio = np.inf + + n1 = c[0, 0] + c[0, 1] + n2 = c[1, 0] + c[1, 1] + n = c[0, 0] + c[1, 0] + + def pmf(x): + return hypergeom.pmf(x, n1 + n2, n1, n) + + if alternative == 'less': + pvalue = hypergeom.cdf(c[0, 0], n1 + n2, n1, n) + elif alternative == 'greater': + # Same formula as the 'less' case, but with the second column.
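# Equivalence note for the statement below (with table [[a, b], [c, d]]): it
# evaluates P(X <= b) for X ~ hypergeom(M=a+b+c+d, n=a+b, N=b+d), i.e. the
# number of row-1 observations that fall in the second column. Because the
# row-1 total a + b is fixed, X <= b holds exactly when the upper-left count is
# >= a, so this equals hypergeom.sf(a - 1, M, a + b, a + c), the one-sided
# 'greater' p-value described in the Notes above.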
+ pvalue = hypergeom.cdf(c[0, 1], n1 + n2, n1, c[0, 1] + c[1, 1]) + elif alternative == 'two-sided': + mode = int((n + 1) * (n1 + 1) / (n1 + n2 + 2)) + pexact = hypergeom.pmf(c[0, 0], n1 + n2, n1, n) + pmode = hypergeom.pmf(mode, n1 + n2, n1, n) + + epsilon = 1e-14 + gamma = 1 + epsilon + + if np.abs(pexact - pmode) / np.maximum(pexact, pmode) <= epsilon: + return SignificanceResult(oddsratio, 1.) + + elif c[0, 0] < mode: + plower = hypergeom.cdf(c[0, 0], n1 + n2, n1, n) + if hypergeom.pmf(n, n1 + n2, n1, n) > pexact * gamma: + return SignificanceResult(oddsratio, plower) + + guess = _binary_search(lambda x: -pmf(x), -pexact * gamma, mode, n) + pvalue = plower + hypergeom.sf(guess, n1 + n2, n1, n) + else: + pupper = hypergeom.sf(c[0, 0] - 1, n1 + n2, n1, n) + if hypergeom.pmf(0, n1 + n2, n1, n) > pexact * gamma: + return SignificanceResult(oddsratio, pupper) + + guess = _binary_search(pmf, pexact * gamma, 0, mode) + pvalue = pupper + hypergeom.cdf(guess, n1 + n2, n1, n) + else: + msg = "`alternative` should be one of {'two-sided', 'less', 'greater'}" + raise ValueError(msg) + + pvalue = min(pvalue, 1.0) + + return SignificanceResult(oddsratio, pvalue) + + +def spearmanr(a, b=None, axis=0, nan_policy='propagate', + alternative='two-sided'): + r"""Calculate a Spearman correlation coefficient with associated p-value. + + The Spearman rank-order correlation coefficient is a nonparametric measure + of the monotonicity of the relationship between two datasets. + Like other correlation coefficients, + this one varies between -1 and +1 with 0 implying no correlation. + Correlations of -1 or +1 imply an exact monotonic relationship. Positive + correlations imply that as x increases, so does y. Negative correlations + imply that as x increases, y decreases. + + The p-value roughly indicates the probability of an uncorrelated system + producing datasets that have a Spearman correlation at least as extreme + as the one computed from these datasets. Although calculation of the + p-value does not make strong assumptions about the distributions underlying + the samples, it is only accurate for very large samples (>500 + observations). For smaller sample sizes, consider a permutation test (see + Examples section below). + + Parameters + ---------- + a, b : 1D or 2D array_like, b is optional + One or two 1-D or 2-D arrays containing multiple variables and + observations. When these are 1-D, each represents a vector of + observations of a single variable. For the behavior in the 2-D case, + see under ``axis``, below. + Both arrays need to have the same length in the ``axis`` dimension. + axis : int or None, optional + If axis=0 (default), then each column represents a variable, with + observations in the rows. If axis=1, the relationship is transposed: + each row represents a variable, while the columns contain observations. + If axis=None, then both arrays will be raveled. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + The following options are available: + + * 'two-sided': the correlation is nonzero + * 'less': the correlation is negative (less than zero) + * 'greater': the correlation is positive (greater than zero) + + .. 
versionadded:: 1.7.0 + + Returns + ------- + res : SignificanceResult + An object containing attributes: + + statistic : float or ndarray (2-D square) + Spearman correlation matrix or correlation coefficient (if only 2 + variables are given as parameters). Correlation matrix is square + with length equal to total number of variables (columns or rows) in + ``a`` and ``b`` combined. + pvalue : float + The p-value for a hypothesis test whose null hypothesis + is that two samples have no ordinal correlation. See + `alternative` above for alternative hypotheses. `pvalue` has the + same shape as `statistic`. + + Warns + ----- + `~scipy.stats.ConstantInputWarning` + Raised if an input is a constant array. The correlation coefficient + is not defined in this case, so ``np.nan`` is returned. + + References + ---------- + .. [1] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probability and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. + Section 14.7 + .. [2] Kendall, M. G. and Stuart, A. (1973). + The Advanced Theory of Statistics, Volume 2: Inference and Relationship. + Griffin. 1973. + Section 31.18 + .. [3] Kershenobich, D., Fierro, F. J., & Rojkind, M. (1970). The + relationship between the free pool of proline and collagen content in + human liver cirrhosis. The Journal of Clinical Investigation, 49(12), + 2246-2249. + .. [4] Hollander, M., Wolfe, D. A., & Chicken, E. (2013). Nonparametric + statistical methods. John Wiley & Sons. + .. [5] B. Phipson and G. K. Smyth. "Permutation P-values Should Never Be + Zero: Calculating Exact P-values When Permutations Are Randomly Drawn." + Statistical Applications in Genetics and Molecular Biology 9.1 (2010). + .. [6] Ludbrook, J., & Dudley, H. (1998). Why permutation tests are + superior to t and F tests in biomedical research. The American + Statistician, 52(2), 127-132. + + Examples + -------- + Consider the following data from [3]_, which studied the relationship + between free proline (an amino acid) and total collagen (a protein often + found in connective tissue) in unhealthy human livers. + + The ``x`` and ``y`` arrays below record measurements of the two compounds. + The observations are paired: each free proline measurement was taken from + the same liver as the total collagen measurement at the same index. + + >>> import numpy as np + >>> # total collagen (mg/g dry weight of liver) + >>> x = np.array([7.1, 7.1, 7.2, 8.3, 9.4, 10.5, 11.4]) + >>> # free proline (μ mole/g dry weight of liver) + >>> y = np.array([2.8, 2.9, 2.8, 2.6, 3.5, 4.6, 5.0]) + + These data were analyzed in [4]_ using Spearman's correlation coefficient, + a statistic sensitive to monotonic correlation between the samples. + + >>> from scipy import stats + >>> res = stats.spearmanr(x, y) + >>> res.statistic + 0.7000000000000001 + + The value of this statistic tends to be high (close to 1) for samples with + a strongly positive ordinal correlation, low (close to -1) for samples with + a strongly negative ordinal correlation, and small in magnitude (close to + zero) for samples with weak ordinal correlation. + + The test is performed by comparing the observed value of the + statistic against the null distribution: the distribution of statistic + values derived under the null hypothesis that total collagen and free + proline measurements are independent. + + For this test, the statistic can be transformed such that the null + distribution for large samples is Student's t distribution with + ``len(x) - 2`` degrees of freedom. 
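# Illustrative sketch (not part of the SciPy source): Spearman's rho is the
# Pearson correlation of the ranks, and the asymptotic p-value follows from the
# t transformation described above; variable names are arbitrary.
import numpy as np
from scipy import stats

x = np.array([7.1, 7.1, 7.2, 8.3, 9.4, 10.5, 11.4])
y = np.array([2.8, 2.9, 2.8, 2.6, 3.5, 4.6, 5.0])

rs = np.corrcoef(stats.rankdata(x), stats.rankdata(y))[0, 1]
dof = len(x) - 2
t = rs * np.sqrt(dof / ((1 + rs) * (1 - rs)))
p = 2 * stats.t.sf(abs(t), dof)          # two-sided tail of the t distribution
print(rs, p)
print(stats.spearmanr(x, y))             # should agree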
+ + >>> import matplotlib.pyplot as plt + >>> dof = len(x)-2 # len(x) == len(y) + >>> dist = stats.t(df=dof) + >>> t_vals = np.linspace(-5, 5, 100) + >>> pdf = dist.pdf(t_vals) + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> def plot(ax): # we'll reuse this + ... ax.plot(t_vals, pdf) + ... ax.set_title("Spearman's Rho Test Null Distribution") + ... ax.set_xlabel("statistic") + ... ax.set_ylabel("probability density") + >>> plot(ax) + >>> plt.show() + + The comparison is quantified by the p-value: the proportion of values in + the null distribution as extreme or more extreme than the observed + value of the statistic. In a two-sided test in which the statistic is + positive, elements of the null distribution greater than the transformed + statistic and elements of the null distribution less than the negative of + the observed statistic are both considered "more extreme". + + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> plot(ax) + >>> rs = res.statistic # original statistic + >>> transformed = rs * np.sqrt(dof / ((rs+1.0)*(1.0-rs))) + >>> pvalue = dist.cdf(-transformed) + dist.sf(transformed) + >>> annotation = (f'p-value={pvalue:.4f}\n(shaded area)') + >>> props = dict(facecolor='black', width=1, headwidth=5, headlength=8) + >>> _ = ax.annotate(annotation, (2.7, 0.025), (3, 0.03), arrowprops=props) + >>> i = t_vals >= transformed + >>> ax.fill_between(t_vals[i], y1=0, y2=pdf[i], color='C0') + >>> i = t_vals <= -transformed + >>> ax.fill_between(t_vals[i], y1=0, y2=pdf[i], color='C0') + >>> ax.set_xlim(-5, 5) + >>> ax.set_ylim(0, 0.1) + >>> plt.show() + >>> res.pvalue + 0.07991669030889909 # two-sided p-value + + If the p-value is "small" - that is, if there is a low probability of + sampling data from independent distributions that produces such an extreme + value of the statistic - this may be taken as evidence against the null + hypothesis in favor of the alternative: the distribution of total collagen + and free proline are *not* independent. Note that: + + - The inverse is not true; that is, the test is not used to provide + evidence for the null hypothesis. + - The threshold for values that will be considered "small" is a choice that + should be made before the data is analyzed [5]_ with consideration of the + risks of both false positives (incorrectly rejecting the null hypothesis) + and false negatives (failure to reject a false null hypothesis). + - Small p-values are not evidence for a *large* effect; rather, they can + only provide evidence for a "significant" effect, meaning that they are + unlikely to have occurred under the null hypothesis. + + Suppose that before performing the experiment, the authors had reason + to predict a positive correlation between the total collagen and free + proline measurements, and that they had chosen to assess the plausibility + of the null hypothesis against a one-sided alternative: free proline has a + positive ordinal correlation with total collagen. In this case, only those + values in the null distribution that are as great or greater than the + observed statistic are considered to be more extreme. 
+ + >>> res = stats.spearmanr(x, y, alternative='greater') + >>> res.statistic + 0.7000000000000001 # same statistic + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> plot(ax) + >>> pvalue = dist.sf(transformed) + >>> annotation = (f'p-value={pvalue:.6f}\n(shaded area)') + >>> props = dict(facecolor='black', width=1, headwidth=5, headlength=8) + >>> _ = ax.annotate(annotation, (3, 0.018), (3.5, 0.03), arrowprops=props) + >>> i = t_vals >= transformed + >>> ax.fill_between(t_vals[i], y1=0, y2=pdf[i], color='C0') + >>> ax.set_xlim(1, 5) + >>> ax.set_ylim(0, 0.1) + >>> plt.show() + >>> res.pvalue + 0.03995834515444954 # one-sided p-value; half of the two-sided p-value + + Note that the t-distribution provides an asymptotic approximation of the + null distribution; it is only accurate for samples with many observations. + For small samples, it may be more appropriate to perform a permutation + test: Under the null hypothesis that total collagen and free proline are + independent, each of the free proline measurements was equally likely to + have been observed with any of the total collagen measurements. Therefore, + we can form an *exact* null distribution by calculating the statistic under + each possible pairing of elements between ``x`` and ``y``. + + >>> def statistic(x): # explore all possible pairings by permuting `x` + ... rs = stats.spearmanr(x, y).statistic # ignore pvalue + ... transformed = rs * np.sqrt(dof / ((rs+1.0)*(1.0-rs))) + ... return transformed + >>> ref = stats.permutation_test((x,), statistic, alternative='greater', + ... permutation_type='pairings') + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> plot(ax) + >>> ax.hist(ref.null_distribution, np.linspace(-5, 5, 26), + ... density=True) + >>> ax.legend(['asymptotic approximation\n(many observations)', + ... f'exact \n({len(ref.null_distribution)} permutations)']) + >>> plt.show() + >>> ref.pvalue + 0.04563492063492063 # exact one-sided p-value + + """ + if axis is not None and axis > 1: + raise ValueError("spearmanr only handles 1-D or 2-D arrays, " + f"supplied axis argument {axis}, please use only " + "values 0, 1 or None for axis") + + a, axisout = _chk_asarray(a, axis) + if a.ndim > 2: + raise ValueError("spearmanr only handles 1-D or 2-D arrays") + + if b is None: + if a.ndim < 2: + raise ValueError("`spearmanr` needs at least 2 " + "variables to compare") + else: + # Concatenate a and b, so that we now only have to handle the case + # of a 2-D `a`. + b, _ = _chk_asarray(b, axis) + if axisout == 0: + a = np.column_stack((a, b)) + else: + a = np.vstack((a, b)) + + n_vars = a.shape[1 - axisout] + n_obs = a.shape[axisout] + if n_obs <= 1: + # Handle empty arrays or single observations. + res = SignificanceResult(np.nan, np.nan) + res.correlation = np.nan + return res + + warn_msg = ("An input array is constant; the correlation coefficient " + "is not defined.") + if axisout == 0: + if (a[:, 0][0] == a[:, 0]).all() or (a[:, 1][0] == a[:, 1]).all(): + # If an input is constant, the correlation coefficient + # is not defined. + warnings.warn(stats.ConstantInputWarning(warn_msg), stacklevel=2) + res = SignificanceResult(np.nan, np.nan) + res.correlation = np.nan + return res + + else: # case when axisout == 1 b/c a is 2 dim only + if (a[0, :][0] == a[0, :]).all() or (a[1, :][0] == a[1, :]).all(): + # If an input is constant, the correlation coefficient + # is not defined.
+ warnings.warn(stats.ConstantInputWarning(warn_msg), stacklevel=2) + res = SignificanceResult(np.nan, np.nan) + res.correlation = np.nan + return res + + a_contains_nan, nan_policy = _contains_nan(a, nan_policy) + variable_has_nan = np.zeros(n_vars, dtype=bool) + if a_contains_nan: + if nan_policy == 'omit': + return mstats_basic.spearmanr(a, axis=axis, nan_policy=nan_policy, + alternative=alternative) + elif nan_policy == 'propagate': + if a.ndim == 1 or n_vars <= 2: + res = SignificanceResult(np.nan, np.nan) + res.correlation = np.nan + return res + else: + # Keep track of variables with NaNs, set the outputs to NaN + # only for those variables + variable_has_nan = np.isnan(a).any(axis=axisout) + + a_ranked = np.apply_along_axis(rankdata, axisout, a) + rs = np.corrcoef(a_ranked, rowvar=axisout) + dof = n_obs - 2 # degrees of freedom + + # rs can have elements equal to 1, so avoid zero division warnings + with np.errstate(divide='ignore'): + # clip the small negative values possibly caused by rounding + # errors before taking the square root + t = rs * np.sqrt((dof/((rs+1.0)*(1.0-rs))).clip(0)) + + prob = _get_pvalue(t, distributions.t(dof), alternative) + + # For backwards compatibility, return scalars when comparing 2 columns + if rs.shape == (2, 2): + res = SignificanceResult(rs[1, 0], prob[1, 0]) + res.correlation = rs[1, 0] + return res + else: + rs[variable_has_nan, :] = np.nan + rs[:, variable_has_nan] = np.nan + res = SignificanceResult(rs[()], prob[()]) + res.correlation = rs + return res + + +def pointbiserialr(x, y): + r"""Calculate a point biserial correlation coefficient and its p-value. + + The point biserial correlation is used to measure the relationship + between a binary variable, x, and a continuous variable, y. Like other + correlation coefficients, this one varies between -1 and +1 with 0 + implying no correlation. Correlations of -1 or +1 imply a determinative + relationship. + + This function may be computed using a shortcut formula but produces the + same result as `pearsonr`. + + Parameters + ---------- + x : array_like of bools + Input array. + y : array_like + Input array. + + Returns + ------- + res: SignificanceResult + An object containing attributes: + + statistic : float + The R value. + pvalue : float + The two-sided p-value. + + Notes + ----- + `pointbiserialr` uses a t-test with ``n-1`` degrees of freedom. + It is equivalent to `pearsonr`. + + The value of the point-biserial correlation can be calculated from: + + .. math:: + + r_{pb} = \frac{\overline{Y_1} - \overline{Y_0}} + {s_y} + \sqrt{\frac{N_0 N_1} + {N (N - 1)}} + + Where :math:`\overline{Y_{0}}` and :math:`\overline{Y_{1}}` are means + of the metric observations coded 0 and 1 respectively; :math:`N_{0}` and + :math:`N_{1}` are number of observations coded 0 and 1 respectively; + :math:`N` is the total number of observations and :math:`s_{y}` is the + standard deviation of all the metric observations. + + A value of :math:`r_{pb}` that is significantly different from zero is + completely equivalent to a significant difference in means between the two + groups. Thus, an independent groups t Test with :math:`N-2` degrees of + freedom may be used to test whether :math:`r_{pb}` is nonzero. The + relation between the t-statistic for comparing two independent groups and + :math:`r_{pb}` is given by: + + .. math:: + + t = \sqrt{N - 2}\frac{r_{pb}}{\sqrt{1 - r^{2}_{pb}}} + + References + ---------- + .. [1] J. Lev, "The Point Biserial Coefficient of Correlation", Ann. Math. + Statist., Vol. 
20, no.1, pp. 125-126, 1949. + + .. [2] R.F. Tate, "Correlation Between a Discrete and a Continuous + Variable. Point-Biserial Correlation.", Ann. Math. Statist., Vol. 25, + np. 3, pp. 603-607, 1954. + + .. [3] D. Kornbrot "Point Biserial Correlation", In Wiley StatsRef: + Statistics Reference Online (eds N. Balakrishnan, et al.), 2014. + :doi:`10.1002/9781118445112.stat06227` + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> a = np.array([0, 0, 0, 1, 1, 1, 1]) + >>> b = np.arange(7) + >>> stats.pointbiserialr(a, b) + (0.8660254037844386, 0.011724811003954652) + >>> stats.pearsonr(a, b) + (0.86602540378443871, 0.011724811003954626) + >>> np.corrcoef(a, b) + array([[ 1. , 0.8660254], + [ 0.8660254, 1. ]]) + + """ + rpb, prob = pearsonr(x, y) + # create result object with alias for backward compatibility + res = SignificanceResult(rpb, prob) + res.correlation = rpb + return res + + +@_deprecate_positional_args(version="1.14") +def kendalltau(x, y, *, initial_lexsort=_NoValue, nan_policy='propagate', + method='auto', variant='b', alternative='two-sided'): + r"""Calculate Kendall's tau, a correlation measure for ordinal data. + + Kendall's tau is a measure of the correspondence between two rankings. + Values close to 1 indicate strong agreement, and values close to -1 + indicate strong disagreement. This implements two variants of Kendall's + tau: tau-b (the default) and tau-c (also known as Stuart's tau-c). These + differ only in how they are normalized to lie within the range -1 to 1; + the hypothesis tests (their p-values) are identical. Kendall's original + tau-a is not implemented separately because both tau-b and tau-c reduce + to tau-a in the absence of ties. + + Parameters + ---------- + x, y : array_like + Arrays of rankings, of the same shape. If arrays are not 1-D, they + will be flattened to 1-D. + initial_lexsort : bool, optional, deprecated + This argument is unused. + + .. deprecated:: 1.10.0 + `kendalltau` keyword argument `initial_lexsort` is deprecated as it + is unused and will be removed in SciPy 1.14.0. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + method : {'auto', 'asymptotic', 'exact'}, optional + Defines which method is used to calculate the p-value [5]_. + The following options are available (default is 'auto'): + + * 'auto': selects the appropriate method based on a trade-off + between speed and accuracy + * 'asymptotic': uses a normal approximation valid for large samples + * 'exact': computes the exact p-value, but can only be used if no ties + are present. As the sample size increases, the 'exact' computation + time may grow and the result may lose some precision. + variant : {'b', 'c'}, optional + Defines which variant of Kendall's tau is returned. Default is 'b'. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + The following options are available: + + * 'two-sided': the rank correlation is nonzero + * 'less': the rank correlation is negative (less than zero) + * 'greater': the rank correlation is positive (greater than zero) + + Returns + ------- + res : SignificanceResult + An object containing attributes: + + statistic : float + The tau statistic. 
+ pvalue : float + The p-value for a hypothesis test whose null hypothesis is + an absence of association, tau = 0. + + See Also + -------- + spearmanr : Calculates a Spearman rank-order correlation coefficient. + theilslopes : Computes the Theil-Sen estimator for a set of points (x, y). + weightedtau : Computes a weighted version of Kendall's tau. + + Notes + ----- + The definition of Kendall's tau that is used is [2]_:: + + tau_b = (P - Q) / sqrt((P + Q + T) * (P + Q + U)) + + tau_c = 2 (P - Q) / (n**2 * (m - 1) / m) + + where P is the number of concordant pairs, Q the number of discordant + pairs, T the number of ties only in `x`, and U the number of ties only in + `y`. If a tie occurs for the same pair in both `x` and `y`, it is not + added to either T or U. n is the total number of samples, and m is the + number of unique values in either `x` or `y`, whichever is smaller. + + References + ---------- + .. [1] Maurice G. Kendall, "A New Measure of Rank Correlation", Biometrika + Vol. 30, No. 1/2, pp. 81-93, 1938. + .. [2] Maurice G. Kendall, "The treatment of ties in ranking problems", + Biometrika Vol. 33, No. 3, pp. 239-251. 1945. + .. [3] Gottfried E. Noether, "Elements of Nonparametric Statistics", John + Wiley & Sons, 1967. + .. [4] Peter M. Fenwick, "A new data structure for cumulative frequency + tables", Software: Practice and Experience, Vol. 24, No. 3, + pp. 327-336, 1994. + .. [5] Maurice G. Kendall, "Rank Correlation Methods" (4th Edition), + Charles Griffin & Co., 1970. + .. [6] Kershenobich, D., Fierro, F. J., & Rojkind, M. (1970). The + relationship between the free pool of proline and collagen content + in human liver cirrhosis. The Journal of Clinical Investigation, + 49(12), 2246-2249. + .. [7] Hollander, M., Wolfe, D. A., & Chicken, E. (2013). Nonparametric + statistical methods. John Wiley & Sons. + .. [8] B. Phipson and G. K. Smyth. "Permutation P-values Should Never Be + Zero: Calculating Exact P-values When Permutations Are Randomly + Drawn." Statistical Applications in Genetics and Molecular Biology + 9.1 (2010). + + Examples + -------- + Consider the following data from [6]_, which studied the relationship + between free proline (an amino acid) and total collagen (a protein often + found in connective tissue) in unhealthy human livers. + + The ``x`` and ``y`` arrays below record measurements of the two compounds. + The observations are paired: each free proline measurement was taken from + the same liver as the total collagen measurement at the same index. + + >>> import numpy as np + >>> # total collagen (mg/g dry weight of liver) + >>> x = np.array([7.1, 7.1, 7.2, 8.3, 9.4, 10.5, 11.4]) + >>> # free proline (μ mole/g dry weight of liver) + >>> y = np.array([2.8, 2.9, 2.8, 2.6, 3.5, 4.6, 5.0]) + + These data were analyzed in [7]_ using Spearman's correlation coefficient, + a statistic similar to Kendall's tau in that it is also sensitive to + ordinal correlation between the samples. Let's perform an analogous study + using Kendall's tau. + + >>> from scipy import stats + >>> res = stats.kendalltau(x, y) + >>> res.statistic + 0.5499999999999999 + + The value of this statistic tends to be high (close to 1) for samples with + a strongly positive ordinal correlation, low (close to -1) for samples with + a strongly negative ordinal correlation, and small in magnitude (close to + zero) for samples with weak ordinal correlation.
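# Illustrative sketch (not part of the SciPy source): tau-b can be reproduced
# by brute-force counting of concordant pairs, discordant pairs, and ties,
# following the definition in the Notes; variable names are arbitrary.
import numpy as np
from itertools import combinations
from scipy import stats

x = np.array([7.1, 7.1, 7.2, 8.3, 9.4, 10.5, 11.4])
y = np.array([2.8, 2.9, 2.8, 2.6, 3.5, 4.6, 5.0])

P = Q = T = U = 0
for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
    if xi == xj and yi == yj:
        continue                                  # tied in both: counts toward neither T nor U
    elif xi == xj:
        T += 1                                    # tie only in x
    elif yi == yj:
        U += 1                                    # tie only in y
    elif np.sign(xi - xj) == np.sign(yi - yj):
        P += 1                                    # concordant pair
    else:
        Q += 1                                    # discordant pair

tau_b = (P - Q) / np.sqrt((P + Q + T) * (P + Q + U))
print(tau_b)
print(stats.kendalltau(x, y).statistic)           # should agree (~0.55)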
+ + The test is performed by comparing the observed value of the + statistic against the null distribution: the distribution of statistic + values derived under the null hypothesis that total collagen and free + proline measurements are independent. + + For this test, the null distribution for large samples without ties is + approximated as the normal distribution with variance + ``(2*(2*n + 5))/(9*n*(n - 1))``, where ``n = len(x)``. + + >>> import matplotlib.pyplot as plt + >>> n = len(x) # len(x) == len(y) + >>> var = (2*(2*n + 5))/(9*n*(n - 1)) + >>> dist = stats.norm(scale=np.sqrt(var)) + >>> z_vals = np.linspace(-1.25, 1.25, 100) + >>> pdf = dist.pdf(z_vals) + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> def plot(ax): # we'll reuse this + ... ax.plot(z_vals, pdf) + ... ax.set_title("Kendall Tau Test Null Distribution") + ... ax.set_xlabel("statistic") + ... ax.set_ylabel("probability density") + >>> plot(ax) + >>> plt.show() + + The comparison is quantified by the p-value: the proportion of values in + the null distribution as extreme or more extreme than the observed + value of the statistic. In a two-sided test in which the statistic is + positive, elements of the null distribution greater than the transformed + statistic and elements of the null distribution less than the negative of + the observed statistic are both considered "more extreme". + + >>> fig, ax = plt.subplots(figsize=(8, 5)) + >>> plot(ax) + >>> pvalue = dist.cdf(-res.statistic) + dist.sf(res.statistic) + >>> annotation = (f'p-value={pvalue:.4f}\n(shaded area)') + >>> props = dict(facecolor='black', width=1, headwidth=5, headlength=8) + >>> _ = ax.annotate(annotation, (0.65, 0.15), (0.8, 0.3), arrowprops=props) + >>> i = z_vals >= res.statistic + >>> ax.fill_between(z_vals[i], y1=0, y2=pdf[i], color='C0') + >>> i = z_vals <= -res.statistic + >>> ax.fill_between(z_vals[i], y1=0, y2=pdf[i], color='C0') + >>> ax.set_xlim(-1.25, 1.25) + >>> ax.set_ylim(0, 0.5) + >>> plt.show() + >>> res.pvalue + 0.09108705741631495 # approximate p-value + + Note that there is slight disagreement between the shaded area of the curve + and the p-value returned by `kendalltau`. This is because our data has + ties, and we have neglected a tie correction to the null distribution + variance that `kendalltau` performs. For samples without ties, the shaded + areas of our plot and p-value returned by `kendalltau` would match exactly. + + If the p-value is "small" - that is, if there is a low probability of + sampling data from independent distributions that produces such an extreme + value of the statistic - this may be taken as evidence against the null + hypothesis in favor of the alternative: the distribution of total collagen + and free proline are *not* independent. Note that: + + - The inverse is not true; that is, the test is not used to provide + evidence for the null hypothesis. + - The threshold for values that will be considered "small" is a choice that + should be made before the data is analyzed [8]_ with consideration of the + risks of both false positives (incorrectly rejecting the null hypothesis) + and false negatives (failure to reject a false null hypothesis). + - Small p-values are not evidence for a *large* effect; rather, they can + only provide evidence for a "significant" effect, meaning that they are + unlikely to have occurred under the null hypothesis. + + For samples without ties of moderate size, `kendalltau` can compute the + p-value exactly. 
However, in the presence of ties, `kendalltau` resorts
+ to an asymptotic approximation. Nonetheless, we can use a permutation test
+ to compute the null distribution exactly: Under the null hypothesis that
+ total collagen and free proline are independent, each of the free proline
+ measurements was equally likely to have been observed with any of the
+ total collagen measurements. Therefore, we can form an *exact* null
+ distribution by calculating the statistic under each possible pairing of
+ elements between ``x`` and ``y``.
+
+ >>> def statistic(x): # explore all possible pairings by permuting `x`
+ ... return stats.kendalltau(x, y).statistic # ignore pvalue
+ >>> ref = stats.permutation_test((x,), statistic,
+ ... permutation_type='pairings')
+ >>> fig, ax = plt.subplots(figsize=(8, 5))
+ >>> plot(ax)
+ >>> bins = np.linspace(-1.25, 1.25, 25)
+ >>> ax.hist(ref.null_distribution, bins=bins, density=True)
+ >>> ax.legend(['asymptotic approximation\n(many observations)',
+ ... 'exact null distribution'])
+ >>> plot(ax)
+ >>> plt.show()
+ >>> ref.pvalue
+ 0.12222222222222222 # exact p-value
+
+ Note that there is significant disagreement between the exact p-value
+ calculated here and the approximation returned by `kendalltau` above. For
+ small samples with ties, consider performing a permutation test for more
+ accurate results.
+
+ """
+ if initial_lexsort is not _NoValue:
+ msg = ("'kendalltau' keyword argument 'initial_lexsort' is deprecated"
+ " as it is unused and will be removed in SciPy 1.14.0.")
+ warnings.warn(msg, DeprecationWarning, stacklevel=2)
+
+ x = np.asarray(x).ravel()
+ y = np.asarray(y).ravel()
+
+ if x.size != y.size:
+ raise ValueError("All inputs to `kendalltau` must be of the same "
+ f"size, found x-size {x.size} and y-size {y.size}")
+ elif not x.size or not y.size:
+ # Return NaN if arrays are empty
+ res = SignificanceResult(np.nan, np.nan)
+ res.correlation = np.nan
+ return res
+
+ # check both x and y
+ cnx, npx = _contains_nan(x, nan_policy)
+ cny, npy = _contains_nan(y, nan_policy)
+ contains_nan = cnx or cny
+ if npx == 'omit' or npy == 'omit':
+ nan_policy = 'omit'
+
+ if contains_nan and nan_policy == 'propagate':
+ res = SignificanceResult(np.nan, np.nan)
+ res.correlation = np.nan
+ return res
+
+ elif contains_nan and nan_policy == 'omit':
+ x = ma.masked_invalid(x)
+ y = ma.masked_invalid(y)
+ if variant == 'b':
+ return mstats_basic.kendalltau(x, y, method=method, use_ties=True,
+ alternative=alternative)
+ else:
+ message = ("nan_policy='omit' is currently compatible only with "
+ "variant='b'.")
+ raise ValueError(message)
+
+ def count_rank_tie(ranks):
+ cnt = np.bincount(ranks).astype('int64', copy=False)
+ cnt = cnt[cnt > 1]
+ # Python ints to avoid overflow down the line
+ return (int((cnt * (cnt - 1) // 2).sum()),
+ int((cnt * (cnt - 1.) * (cnt - 2)).sum()),
+ int((cnt * (cnt - 1.)
* (2*cnt + 5)).sum())) + + size = x.size + perm = np.argsort(y) # sort on y and convert y to dense ranks + x, y = x[perm], y[perm] + y = np.r_[True, y[1:] != y[:-1]].cumsum(dtype=np.intp) + + # stable sort on x and convert x to dense ranks + perm = np.argsort(x, kind='mergesort') + x, y = x[perm], y[perm] + x = np.r_[True, x[1:] != x[:-1]].cumsum(dtype=np.intp) + + dis = _kendall_dis(x, y) # discordant pairs + + obs = np.r_[True, (x[1:] != x[:-1]) | (y[1:] != y[:-1]), True] + cnt = np.diff(np.nonzero(obs)[0]).astype('int64', copy=False) + + ntie = int((cnt * (cnt - 1) // 2).sum()) # joint ties + xtie, x0, x1 = count_rank_tie(x) # ties in x, stats + ytie, y0, y1 = count_rank_tie(y) # ties in y, stats + + tot = (size * (size - 1)) // 2 + + if xtie == tot or ytie == tot: + res = SignificanceResult(np.nan, np.nan) + res.correlation = np.nan + return res + + # Note that tot = con + dis + (xtie - ntie) + (ytie - ntie) + ntie + # = con + dis + xtie + ytie - ntie + con_minus_dis = tot - xtie - ytie + ntie - 2 * dis + if variant == 'b': + tau = con_minus_dis / np.sqrt(tot - xtie) / np.sqrt(tot - ytie) + elif variant == 'c': + minclasses = min(len(set(x)), len(set(y))) + tau = 2*con_minus_dis / (size**2 * (minclasses-1)/minclasses) + else: + raise ValueError(f"Unknown variant of the method chosen: {variant}. " + "variant must be 'b' or 'c'.") + + # Limit range to fix computational errors + tau = np.minimum(1., max(-1., tau)) + + # The p-value calculation is the same for all variants since the p-value + # depends only on con_minus_dis. + if method == 'exact' and (xtie != 0 or ytie != 0): + raise ValueError("Ties found, exact method cannot be used.") + + if method == 'auto': + if (xtie == 0 and ytie == 0) and (size <= 33 or + min(dis, tot-dis) <= 1): + method = 'exact' + else: + method = 'asymptotic' + + if xtie == 0 and ytie == 0 and method == 'exact': + pvalue = mstats_basic._kendall_p_exact(size, tot-dis, alternative) + elif method == 'asymptotic': + # con_minus_dis is approx normally distributed with this variance [3]_ + m = size * (size - 1.) + var = ((m * (2*size + 5) - x1 - y1) / 18 + + (2 * xtie * ytie) / m + x0 * y0 / (9 * m * (size - 2))) + z = con_minus_dis / np.sqrt(var) + pvalue = _get_pvalue(z, distributions.norm, alternative) + else: + raise ValueError(f"Unknown method {method} specified. Use 'auto', " + "'exact' or 'asymptotic'.") + + # create result object with alias for backward compatibility + res = SignificanceResult(tau[()], pvalue[()]) + res.correlation = tau[()] + return res + + +def weightedtau(x, y, rank=True, weigher=None, additive=True): + r"""Compute a weighted version of Kendall's :math:`\tau`. + + The weighted :math:`\tau` is a weighted version of Kendall's + :math:`\tau` in which exchanges of high weight are more influential than + exchanges of low weight. The default parameters compute the additive + hyperbolic version of the index, :math:`\tau_\mathrm h`, which has + been shown to provide the best balance between important and + unimportant elements [1]_. + + The weighting is defined by means of a rank array, which assigns a + nonnegative rank to each element (higher importance ranks being + associated with smaller values, e.g., 0 is the highest possible rank), + and a weigher function, which assigns a weight based on the rank to + each element. The weight of an exchange is then the sum or the product + of the weights of the ranks of the exchanged elements. 
The default + parameters compute :math:`\tau_\mathrm h`: an exchange between + elements with rank :math:`r` and :math:`s` (starting from zero) has + weight :math:`1/(r+1) + 1/(s+1)`. + + Specifying a rank array is meaningful only if you have in mind an + external criterion of importance. If, as it usually happens, you do + not have in mind a specific rank, the weighted :math:`\tau` is + defined by averaging the values obtained using the decreasing + lexicographical rank by (`x`, `y`) and by (`y`, `x`). This is the + behavior with default parameters. Note that the convention used + here for ranking (lower values imply higher importance) is opposite + to that used by other SciPy statistical functions. + + Parameters + ---------- + x, y : array_like + Arrays of scores, of the same shape. If arrays are not 1-D, they will + be flattened to 1-D. + rank : array_like of ints or bool, optional + A nonnegative rank assigned to each element. If it is None, the + decreasing lexicographical rank by (`x`, `y`) will be used: elements of + higher rank will be those with larger `x`-values, using `y`-values to + break ties (in particular, swapping `x` and `y` will give a different + result). If it is False, the element indices will be used + directly as ranks. The default is True, in which case this + function returns the average of the values obtained using the + decreasing lexicographical rank by (`x`, `y`) and by (`y`, `x`). + weigher : callable, optional + The weigher function. Must map nonnegative integers (zero + representing the most important element) to a nonnegative weight. + The default, None, provides hyperbolic weighing, that is, + rank :math:`r` is mapped to weight :math:`1/(r+1)`. + additive : bool, optional + If True, the weight of an exchange is computed by adding the + weights of the ranks of the exchanged elements; otherwise, the weights + are multiplied. The default is True. + + Returns + ------- + res: SignificanceResult + An object containing attributes: + + statistic : float + The weighted :math:`\tau` correlation index. + pvalue : float + Presently ``np.nan``, as the null distribution of the statistic is + unknown (even in the additive hyperbolic case). + + See Also + -------- + kendalltau : Calculates Kendall's tau. + spearmanr : Calculates a Spearman rank-order correlation coefficient. + theilslopes : Computes the Theil-Sen estimator for a set of points (x, y). + + Notes + ----- + This function uses an :math:`O(n \log n)`, mergesort-based algorithm + [1]_ that is a weighted extension of Knight's algorithm for Kendall's + :math:`\tau` [2]_. It can compute Shieh's weighted :math:`\tau` [3]_ + between rankings without ties (i.e., permutations) by setting + `additive` and `rank` to False, as the definition given in [1]_ is a + generalization of Shieh's. + + NaNs are considered the smallest possible score. + + .. versionadded:: 0.19.0 + + References + ---------- + .. [1] Sebastiano Vigna, "A weighted correlation index for rankings with + ties", Proceedings of the 24th international conference on World + Wide Web, pp. 1166-1176, ACM, 2015. + .. [2] W.R. Knight, "A Computer Method for Calculating Kendall's Tau with + Ungrouped Data", Journal of the American Statistical Association, + Vol. 61, No. 314, Part 1, pp. 436-439, 1966. + .. [3] Grace S. Shieh. "A weighted Kendall's tau statistic", Statistics & + Probability Letters, Vol. 39, No. 1, pp. 17-24, 1998. 
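+
+ A small, purely illustrative check (not part of the original examples
+ below): per the description of `weigher` above, passing the hyperbolic
+ weigher explicitly is expected to reproduce the default behaviour.
+
+ >>> import numpy as np
+ >>> from scipy import stats
+ >>> x, y = [12, 2, 1, 12, 2], [1, 4, 7, 1, 0]
+ >>> default = stats.weightedtau(x, y).statistic
+ >>> explicit = stats.weightedtau(x, y, weigher=lambda r: 1/(r + 1)).statistic
+ >>> bool(np.isclose(default, explicit))
+ True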
+ + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> x = [12, 2, 1, 12, 2] + >>> y = [1, 4, 7, 1, 0] + >>> res = stats.weightedtau(x, y) + >>> res.statistic + -0.56694968153682723 + >>> res.pvalue + nan + >>> res = stats.weightedtau(x, y, additive=False) + >>> res.statistic + -0.62205716951801038 + + NaNs are considered the smallest possible score: + + >>> x = [12, 2, 1, 12, 2] + >>> y = [1, 4, 7, 1, np.nan] + >>> res = stats.weightedtau(x, y) + >>> res.statistic + -0.56694968153682723 + + This is exactly Kendall's tau: + + >>> x = [12, 2, 1, 12, 2] + >>> y = [1, 4, 7, 1, 0] + >>> res = stats.weightedtau(x, y, weigher=lambda x: 1) + >>> res.statistic + -0.47140452079103173 + + >>> x = [12, 2, 1, 12, 2] + >>> y = [1, 4, 7, 1, 0] + >>> stats.weightedtau(x, y, rank=None) + SignificanceResult(statistic=-0.4157652301037516, pvalue=nan) + >>> stats.weightedtau(y, x, rank=None) + SignificanceResult(statistic=-0.7181341329699028, pvalue=nan) + + """ + x = np.asarray(x).ravel() + y = np.asarray(y).ravel() + + if x.size != y.size: + raise ValueError("All inputs to `weightedtau` must be " + "of the same size, " + f"found x-size {x.size} and y-size {y.size}") + if not x.size: + # Return NaN if arrays are empty + res = SignificanceResult(np.nan, np.nan) + res.correlation = np.nan + return res + + # If there are NaNs we apply _toint64() + if np.isnan(np.sum(x)): + x = _toint64(x) + if np.isnan(np.sum(y)): + y = _toint64(y) + + # Reduce to ranks unsupported types + if x.dtype != y.dtype: + if x.dtype != np.int64: + x = _toint64(x) + if y.dtype != np.int64: + y = _toint64(y) + else: + if x.dtype not in (np.int32, np.int64, np.float32, np.float64): + x = _toint64(x) + y = _toint64(y) + + if rank is True: + tau = ( + _weightedrankedtau(x, y, None, weigher, additive) + + _weightedrankedtau(y, x, None, weigher, additive) + ) / 2 + res = SignificanceResult(tau, np.nan) + res.correlation = tau + return res + + if rank is False: + rank = np.arange(x.size, dtype=np.intp) + elif rank is not None: + rank = np.asarray(rank).ravel() + if rank.size != x.size: + raise ValueError( + "All inputs to `weightedtau` must be of the same size, " + f"found x-size {x.size} and rank-size {rank.size}" + ) + + tau = _weightedrankedtau(x, y, rank, weigher, additive) + res = SignificanceResult(tau, np.nan) + res.correlation = tau + return res + + +# FROM MGCPY: https://github.com/neurodata/mgcpy + + +class _ParallelP: + """Helper function to calculate parallel p-value.""" + + def __init__(self, x, y, random_states): + self.x = x + self.y = y + self.random_states = random_states + + def __call__(self, index): + order = self.random_states[index].permutation(self.y.shape[0]) + permy = self.y[order][:, order] + + # calculate permuted stats, store in null distribution + perm_stat = _mgc_stat(self.x, permy)[0] + + return perm_stat + + +def _perm_test(x, y, stat, reps=1000, workers=-1, random_state=None): + r"""Helper function that calculates the p-value. See below for uses. + + Parameters + ---------- + x, y : ndarray + `x` and `y` have shapes `(n, p)` and `(n, q)`. + stat : float + The sample test statistic. + reps : int, optional + The number of replications used to estimate the null when using the + permutation test. The default is 1000 replications. + workers : int or map-like callable, optional + If `workers` is an int the population is subdivided into `workers` + sections and evaluated in parallel (uses + `multiprocessing.Pool `). Supply `-1` to use all cores + available to the Process. 
Alternatively supply a map-like callable,
+ such as `multiprocessing.Pool.map` for evaluating the population in
+ parallel. This evaluation is carried out as `workers(func, iterable)`.
+ Requires that `func` be pickleable.
+ random_state : {None, int, `numpy.random.Generator`,
+ `numpy.random.RandomState`}, optional
+
+ If `seed` is None (or `np.random`), the `numpy.random.RandomState`
+ singleton is used.
+ If `seed` is an int, a new ``RandomState`` instance is used,
+ seeded with `seed`.
+ If `seed` is already a ``Generator`` or ``RandomState`` instance then
+ that instance is used.
+
+ Returns
+ -------
+ pvalue : float
+ The sample test p-value.
+ null_dist : list
+ The approximated null distribution.
+
+ """
+ # generate seeds for each rep (change to new parallel random number
+ # capabilities in numpy >= 1.17+)
+ random_state = check_random_state(random_state)
+ random_states = [np.random.RandomState(rng_integers(random_state, 1 << 32,
+ size=4, dtype=np.uint32)) for _ in range(reps)]
+
+ # parallelizes with specified workers over number of reps and set seeds
+ parallelp = _ParallelP(x=x, y=y, random_states=random_states)
+ with MapWrapper(workers) as mapwrapper:
+ null_dist = np.array(list(mapwrapper(parallelp, range(reps))))
+
+ # calculate p-value and significant permutation map through list
+ pvalue = (1 + (null_dist >= stat).sum()) / (1 + reps)
+
+ return pvalue, null_dist
+
+
+ def _euclidean_dist(x):
+ return cdist(x, x)
+
+
+ MGCResult = _make_tuple_bunch('MGCResult',
+ ['statistic', 'pvalue', 'mgc_dict'], [])
+
+
+ def multiscale_graphcorr(x, y, compute_distance=_euclidean_dist, reps=1000,
+ workers=1, is_twosamp=False, random_state=None):
+ r"""Computes the Multiscale Graph Correlation (MGC) test statistic.
+
+ Specifically, for each point, MGC finds the :math:`k`-nearest neighbors for
+ one property (e.g. cloud density), and the :math:`l`-nearest neighbors for
+ the other property (e.g. grass wetness) [1]_. This pair :math:`(k, l)` is
+ called the "scale". A priori, however, it is not known which scales will be
+ most informative. So, MGC computes all distance pairs, and then efficiently
+ computes the distance correlations for all scales. The local correlations
+ illustrate which scales are relatively informative about the relationship.
+ The key, therefore, to successfully discover and decipher relationships
+ between disparate data modalities is to adaptively determine which scales
+ are the most informative, and the geometric implication for the most
+ informative scales. Doing so not only provides an estimate of whether the
+ modalities are related, but also provides insight into how the
+ determination was made. This is especially important in high-dimensional
+ data, where simple visualizations do not reveal relationships to the
+ unaided human eye. Characterizations of this implementation in particular
+ have been derived from and benchmarked in [2]_.
+
+ Parameters
+ ----------
+ x, y : ndarray
+ If ``x`` and ``y`` have shapes ``(n, p)`` and ``(n, q)`` where `n` is
+ the number of samples and `p` and `q` are the number of dimensions,
+ then the MGC independence test will be run. Alternatively, ``x`` and
+ ``y`` can have shapes ``(n, n)`` if they are distance or similarity
+ matrices, and ``compute_distance`` must be set to ``None``. If ``x``
+ and ``y`` have shapes ``(n, p)`` and ``(m, p)``, an unpaired
+ two-sample MGC test will be run.
+ compute_distance : callable, optional
+ A function that computes the distance or similarity among the samples
+ within each data matrix. Set to ``None`` if ``x`` and ``y`` are
+ already distance matrices. The default uses the Euclidean norm metric.
+ If you are calling a custom function, either create the distance
+ matrix before-hand or create a function of the form
+ ``compute_distance(x)`` where `x` is the data matrix for which
+ pairwise distances are calculated.
+ reps : int, optional
+ The number of replications used to estimate the null when using the
+ permutation test. The default is ``1000``.
+ workers : int or map-like callable, optional
+ If ``workers`` is an int the population is subdivided into ``workers``
+ sections and evaluated in parallel (uses ``multiprocessing.Pool``).
+ Supply ``-1`` to use all cores available to the
+ Process. Alternatively supply a map-like callable, such as
+ ``multiprocessing.Pool.map`` for evaluating the p-value in parallel.
+ This evaluation is carried out as ``workers(func, iterable)``.
+ Requires that `func` be pickleable. The default is ``1``.
+ is_twosamp : bool, optional
+ If `True`, a two sample test will be run. If ``x`` and ``y`` have
+ shapes ``(n, p)`` and ``(m, p)``, this option will be overridden and
+ set to ``True``. Set to ``True`` if ``x`` and ``y`` both have shapes
+ ``(n, p)`` and a two sample test is desired. The default is ``False``.
+ Note that this will not run if inputs are distance matrices.
+ random_state : {None, int, `numpy.random.Generator`,
+ `numpy.random.RandomState`}, optional
+
+ If `seed` is None (or `np.random`), the `numpy.random.RandomState`
+ singleton is used.
+ If `seed` is an int, a new ``RandomState`` instance is used,
+ seeded with `seed`.
+ If `seed` is already a ``Generator`` or ``RandomState`` instance then
+ that instance is used.
+
+ Returns
+ -------
+ res : MGCResult
+ An object containing attributes:
+
+ statistic : float
+ The sample MGC test statistic within `[-1, 1]`.
+ pvalue : float
+ The p-value obtained via permutation.
+ mgc_dict : dict
+ Contains additional useful results:
+
+ - mgc_map : ndarray
+ A 2D representation of the latent geometry of the
+ relationship.
+ - opt_scale : (int, int)
+ The estimated optimal scale as a `(x, y)` pair.
+ - null_dist : list
+ The null distribution derived from the permuted matrices.
+
+ See Also
+ --------
+ pearsonr : Pearson correlation coefficient and p-value for testing
+ non-correlation.
+ kendalltau : Calculates Kendall's tau.
+ spearmanr : Calculates a Spearman rank-order correlation coefficient.
+
+ Notes
+ -----
+ A description of the process of MGC and applications on neuroscience data
+ can be found in [1]_. It is performed using the following steps:
+
+ #. Two distance matrices :math:`D^X` and :math:`D^Y` are computed and
+ modified to be mean zero columnwise. This results in two
+ :math:`n \times n` distance matrices :math:`A` and :math:`B` (the
+ centering and unbiased modification) [3]_.
+
+ #. For all values :math:`k` and :math:`l` from :math:`1, ..., n`,
+
+ * The :math:`k`-nearest neighbor and :math:`l`-nearest neighbor graphs
+ are calculated for each property. Here, :math:`G_k (i, j)` indicates
+ the :math:`k`-smallest values of the :math:`i`-th row of :math:`A`
+ and :math:`H_l (i, j)` indicates the :math:`l`-smallest values of
+ the :math:`i`-th row of :math:`B`
+
+ * Let :math:`\circ` denote the entry-wise matrix product; then local
+ correlations are summed and normalized using the following statistic:
+
+ .. math::
+
+ c^{kl} = \frac{\sum_{ij} A G_k B H_l}
+ {\sqrt{\sum_{ij} A^2 G_k \times \sum_{ij} B^2 H_l}}
+
+ #. The MGC test statistic is the smoothed optimal local correlation of
+ :math:`\{ c^{kl} \}`. Denote the smoothing operation as :math:`R(\cdot)`
+ (which essentially sets all isolated large correlations to 0 and leaves
+ connected large correlations the same as before, see [3]_). MGC is,
+
+ .. math::
+
+ MGC_n (x, y) = \max_{(k, l)} R \left(c^{kl} \left( x_n, y_n \right)
+ \right)
+
+ The test statistic returns a value between :math:`(-1, 1)` since it is
+ normalized.
+
+ The p-value returned is calculated using a permutation test. This process
+ is completed by first randomly permuting :math:`y` to estimate the null
+ distribution and then calculating the probability of observing a test
+ statistic, under the null, at least as extreme as the observed test
+ statistic.
+
+ MGC requires at least 5 samples to run with reliable results. It can also
+ handle high-dimensional data sets.
+ In addition, by manipulating the input data matrices, the two-sample
+ testing problem can be reduced to the independence testing problem [4]_.
+ Given sample data :math:`U` and :math:`V` of sizes :math:`p \times n` and
+ :math:`p \times m`, data matrices :math:`X` and :math:`Y` can be created as
+ follows:
+
+ .. math::
+
+ X = [U | V] \in \mathcal{R}^{p \times (n + m)}
+ Y = [0_{1 \times n} | 1_{1 \times m}] \in \mathcal{R}^{(n + m)}
+
+ Then, the MGC statistic can be calculated as normal. This methodology can
+ be extended to similar tests such as distance correlation [4]_.
+
+ .. versionadded:: 1.4.0
+
+ References
+ ----------
+ .. [1] Vogelstein, J. T., Bridgeford, E. W., Wang, Q., Priebe, C. E.,
+ Maggioni, M., & Shen, C. (2019). Discovering and deciphering
+ relationships across disparate data modalities. eLife.
+ .. [2] Panda, S., Palaniappan, S., Xiong, J., Swaminathan, A.,
+ Ramachandran, S., Bridgeford, E. W., ... Vogelstein, J. T. (2019).
+ mgcpy: A Comprehensive High Dimensional Independence Testing Python
+ Package. :arXiv:`1907.02088`
+ .. [3] Shen, C., Priebe, C.E., & Vogelstein, J. T. (2019). From distance
+ correlation to multiscale graph correlation. Journal of the American
+ Statistical Association.
+ .. [4] Shen, C. & Vogelstein, J. T. (2018). The Exact Equivalence of
+ Distance and Kernel Methods for Hypothesis Testing.
+ :arXiv:`1806.05514` + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import multiscale_graphcorr + >>> x = np.arange(100) + >>> y = x + >>> res = multiscale_graphcorr(x, y) + >>> res.statistic, res.pvalue + (1.0, 0.001) + + To run an unpaired two-sample test, + + >>> x = np.arange(100) + >>> y = np.arange(79) + >>> res = multiscale_graphcorr(x, y) + >>> res.statistic, res.pvalue # doctest: +SKIP + (0.033258146255703246, 0.023) + + or, if shape of the inputs are the same, + + >>> x = np.arange(100) + >>> y = x + >>> res = multiscale_graphcorr(x, y, is_twosamp=True) + >>> res.statistic, res.pvalue # doctest: +SKIP + (-0.008021809890200488, 1.0) + + """ + if not isinstance(x, np.ndarray) or not isinstance(y, np.ndarray): + raise ValueError("x and y must be ndarrays") + + # convert arrays of type (n,) to (n, 1) + if x.ndim == 1: + x = x[:, np.newaxis] + elif x.ndim != 2: + raise ValueError(f"Expected a 2-D array `x`, found shape {x.shape}") + if y.ndim == 1: + y = y[:, np.newaxis] + elif y.ndim != 2: + raise ValueError(f"Expected a 2-D array `y`, found shape {y.shape}") + + nx, px = x.shape + ny, py = y.shape + + # check for NaNs + _contains_nan(x, nan_policy='raise') + _contains_nan(y, nan_policy='raise') + + # check for positive or negative infinity and raise error + if np.sum(np.isinf(x)) > 0 or np.sum(np.isinf(y)) > 0: + raise ValueError("Inputs contain infinities") + + if nx != ny: + if px == py: + # reshape x and y for two sample testing + is_twosamp = True + else: + raise ValueError("Shape mismatch, x and y must have shape [n, p] " + "and [n, q] or have shape [n, p] and [m, p].") + + if nx < 5 or ny < 5: + raise ValueError("MGC requires at least 5 samples to give reasonable " + "results.") + + # convert x and y to float + x = x.astype(np.float64) + y = y.astype(np.float64) + + # check if compute_distance_matrix if a callable() + if not callable(compute_distance) and compute_distance is not None: + raise ValueError("Compute_distance must be a function.") + + # check if number of reps exists, integer, or > 0 (if under 1000 raises + # warning) + if not isinstance(reps, int) or reps < 0: + raise ValueError("Number of reps must be an integer greater than 0.") + elif reps < 1000: + msg = ("The number of replications is low (under 1000), and p-value " + "calculations may be unreliable. Use the p-value result, with " + "caution!") + warnings.warn(msg, RuntimeWarning, stacklevel=2) + + if is_twosamp: + if compute_distance is None: + raise ValueError("Cannot run if inputs are distance matrices") + x, y = _two_sample_transform(x, y) + + if compute_distance is not None: + # compute distance matrices for x and y + x = compute_distance(x) + y = compute_distance(y) + + # calculate MGC stat + stat, stat_dict = _mgc_stat(x, y) + stat_mgc_map = stat_dict["stat_mgc_map"] + opt_scale = stat_dict["opt_scale"] + + # calculate permutation MGC p-value + pvalue, null_dist = _perm_test(x, y, stat, reps=reps, workers=workers, + random_state=random_state) + + # save all stats (other than stat/p-value) in dictionary + mgc_dict = {"mgc_map": stat_mgc_map, + "opt_scale": opt_scale, + "null_dist": null_dist} + + # create result object with alias for backward compatibility + res = MGCResult(stat, pvalue, mgc_dict) + res.stat = stat + return res + + +def _mgc_stat(distx, disty): + r"""Helper function that calculates the MGC stat. See above for use. 
+ + Parameters + ---------- + distx, disty : ndarray + `distx` and `disty` have shapes `(n, p)` and `(n, q)` or + `(n, n)` and `(n, n)` + if distance matrices. + + Returns + ------- + stat : float + The sample MGC test statistic within `[-1, 1]`. + stat_dict : dict + Contains additional useful additional returns containing the following + keys: + + - stat_mgc_map : ndarray + MGC-map of the statistics. + - opt_scale : (float, float) + The estimated optimal scale as a `(x, y)` pair. + + """ + # calculate MGC map and optimal scale + stat_mgc_map = _local_correlations(distx, disty, global_corr='mgc') + + n, m = stat_mgc_map.shape + if m == 1 or n == 1: + # the global scale at is the statistic calculated at maximial nearest + # neighbors. There is not enough local scale to search over, so + # default to global scale + stat = stat_mgc_map[m - 1][n - 1] + opt_scale = m * n + else: + samp_size = len(distx) - 1 + + # threshold to find connected region of significant local correlations + sig_connect = _threshold_mgc_map(stat_mgc_map, samp_size) + + # maximum within the significant region + stat, opt_scale = _smooth_mgc_map(sig_connect, stat_mgc_map) + + stat_dict = {"stat_mgc_map": stat_mgc_map, + "opt_scale": opt_scale} + + return stat, stat_dict + + +def _threshold_mgc_map(stat_mgc_map, samp_size): + r""" + Finds a connected region of significance in the MGC-map by thresholding. + + Parameters + ---------- + stat_mgc_map : ndarray + All local correlations within `[-1,1]`. + samp_size : int + The sample size of original data. + + Returns + ------- + sig_connect : ndarray + A binary matrix with 1's indicating the significant region. + + """ + m, n = stat_mgc_map.shape + + # 0.02 is simply an empirical threshold, this can be set to 0.01 or 0.05 + # with varying levels of performance. Threshold is based on a beta + # approximation. + per_sig = 1 - (0.02 / samp_size) # Percentile to consider as significant + threshold = samp_size * (samp_size - 3)/4 - 1/2 # Beta approximation + threshold = distributions.beta.ppf(per_sig, threshold, threshold) * 2 - 1 + + # the global scale at is the statistic calculated at maximial nearest + # neighbors. Threshold is the maximum on the global and local scales + threshold = max(threshold, stat_mgc_map[m - 1][n - 1]) + + # find the largest connected component of significant correlations + sig_connect = stat_mgc_map > threshold + if np.sum(sig_connect) > 0: + sig_connect, _ = _measurements.label(sig_connect) + _, label_counts = np.unique(sig_connect, return_counts=True) + + # skip the first element in label_counts, as it is count(zeros) + max_label = np.argmax(label_counts[1:]) + 1 + sig_connect = sig_connect == max_label + else: + sig_connect = np.array([[False]]) + + return sig_connect + + +def _smooth_mgc_map(sig_connect, stat_mgc_map): + """Finds the smoothed maximal within the significant region R. + + If area of R is too small it returns the last local correlation. Otherwise, + returns the maximum within significant_connected_region. + + Parameters + ---------- + sig_connect : ndarray + A binary matrix with 1's indicating the significant region. + stat_mgc_map : ndarray + All local correlations within `[-1, 1]`. + + Returns + ------- + stat : float + The sample MGC statistic within `[-1, 1]`. + opt_scale: (float, float) + The estimated optimal scale as an `(x, y)` pair. + + """ + m, n = stat_mgc_map.shape + + # the global scale at is the statistic calculated at maximial nearest + # neighbors. By default, statistic and optimal scale are global. 
+ stat = stat_mgc_map[m - 1][n - 1] + opt_scale = [m, n] + + if np.linalg.norm(sig_connect) != 0: + # proceed only when the connected region's area is sufficiently large + # 0.02 is simply an empirical threshold, this can be set to 0.01 or 0.05 + # with varying levels of performance + if np.sum(sig_connect) >= np.ceil(0.02 * max(m, n)) * min(m, n): + max_corr = max(stat_mgc_map[sig_connect]) + + # find all scales within significant_connected_region that maximize + # the local correlation + max_corr_index = np.where((stat_mgc_map >= max_corr) & sig_connect) + + if max_corr >= stat: + stat = max_corr + + k, l = max_corr_index + one_d_indices = k * n + l # 2D to 1D indexing + k = np.max(one_d_indices) // n + l = np.max(one_d_indices) % n + opt_scale = [k+1, l+1] # adding 1s to match R indexing + + return stat, opt_scale + + +def _two_sample_transform(u, v): + """Helper function that concatenates x and y for two sample MGC stat. + + See above for use. + + Parameters + ---------- + u, v : ndarray + `u` and `v` have shapes `(n, p)` and `(m, p)`. + + Returns + ------- + x : ndarray + Concatenate `u` and `v` along the `axis = 0`. `x` thus has shape + `(2n, p)`. + y : ndarray + Label matrix for `x` where 0 refers to samples that comes from `u` and + 1 refers to samples that come from `v`. `y` thus has shape `(2n, 1)`. + + """ + nx = u.shape[0] + ny = v.shape[0] + x = np.concatenate([u, v], axis=0) + y = np.concatenate([np.zeros(nx), np.ones(ny)], axis=0).reshape(-1, 1) + return x, y + + +##################################### +# INFERENTIAL STATISTICS # +##################################### + +TtestResultBase = _make_tuple_bunch('TtestResultBase', + ['statistic', 'pvalue'], ['df']) + + +class TtestResult(TtestResultBase): + """ + Result of a t-test. + + See the documentation of the particular t-test function for more + information about the definition of the statistic and meaning of + the confidence interval. + + Attributes + ---------- + statistic : float or array + The t-statistic of the sample. + pvalue : float or array + The p-value associated with the given alternative. + df : float or array + The number of degrees of freedom used in calculation of the + t-statistic; this is one less than the size of the sample + (``a.shape[axis]-1`` if there are no masked elements or omitted NaNs). + + Methods + ------- + confidence_interval + Computes a confidence interval around the population statistic + for the given confidence level. + The confidence interval is returned in a ``namedtuple`` with + fields `low` and `high`. + + """ + + def __init__(self, statistic, pvalue, df, # public + alternative, standard_error, estimate): # private + super().__init__(statistic, pvalue, df=df) + self._alternative = alternative + self._standard_error = standard_error # denominator of t-statistic + self._estimate = estimate # point estimate of sample mean + + def confidence_interval(self, confidence_level=0.95): + """ + Parameters + ---------- + confidence_level : float + The confidence level for the calculation of the population mean + confidence interval. Default is 0.95. + + Returns + ------- + ci : namedtuple + The confidence interval is returned in a ``namedtuple`` with + fields `low` and `high`. 
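+
+ A minimal usage sketch (added for illustration; not part of the
+ original docstring):
+
+ >>> import numpy as np
+ >>> from scipy import stats
+ >>> rng = np.random.default_rng()
+ >>> sample = stats.norm.rvs(size=30, random_state=rng)
+ >>> res = stats.ttest_1samp(sample, popmean=0.0)
+ >>> ci = res.confidence_interval(confidence_level=0.9)
+ >>> bool(ci.low < ci.high)
+ True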
+ + """ + low, high = _t_confidence_interval(self.df, self.statistic, + confidence_level, self._alternative) + low = low * self._standard_error + self._estimate + high = high * self._standard_error + self._estimate + return ConfidenceInterval(low=low, high=high) + + +def pack_TtestResult(statistic, pvalue, df, alternative, standard_error, + estimate): + # this could be any number of dimensions (including 0d), but there is + # at most one unique non-NaN value + alternative = np.atleast_1d(alternative) # can't index 0D object + alternative = alternative[np.isfinite(alternative)] + alternative = alternative[0] if alternative.size else np.nan + return TtestResult(statistic, pvalue, df=df, alternative=alternative, + standard_error=standard_error, estimate=estimate) + + +def unpack_TtestResult(res): + return (res.statistic, res.pvalue, res.df, res._alternative, + res._standard_error, res._estimate) + + +@_axis_nan_policy_factory(pack_TtestResult, default_axis=0, n_samples=2, + result_to_tuple=unpack_TtestResult, n_outputs=6) +def ttest_1samp(a, popmean, axis=0, nan_policy='propagate', + alternative="two-sided"): + """Calculate the T-test for the mean of ONE group of scores. + + This is a test for the null hypothesis that the expected value + (mean) of a sample of independent observations `a` is equal to the given + population mean, `popmean`. + + Parameters + ---------- + a : array_like + Sample observations. + popmean : float or array_like + Expected value in null hypothesis. If array_like, then its length along + `axis` must equal 1, and it must otherwise be broadcastable with `a`. + axis : int or None, optional + Axis along which to compute test; default is 0. If None, compute over + the whole array `a`. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the mean of the underlying distribution of the sample + is different than the given population mean (`popmean`) + * 'less': the mean of the underlying distribution of the sample is + less than the given population mean (`popmean`) + * 'greater': the mean of the underlying distribution of the sample is + greater than the given population mean (`popmean`) + + Returns + ------- + result : `~scipy.stats._result_classes.TtestResult` + An object with the following attributes: + + statistic : float or array + The t-statistic. + pvalue : float or array + The p-value associated with the given alternative. + df : float or array + The number of degrees of freedom used in calculation of the + t-statistic; this is one less than the size of the sample + (``a.shape[axis]``). + + .. versionadded:: 1.10.0 + + The object also has the following method: + + confidence_interval(confidence_level=0.95) + Computes a confidence interval around the population + mean for the given confidence level. + The confidence interval is returned in a ``namedtuple`` with + fields `low` and `high`. + + .. versionadded:: 1.10.0 + + Notes + ----- + The statistic is calculated as ``(np.mean(a) - popmean)/se``, where + ``se`` is the standard error. 
Therefore, the statistic will be positive + when the sample mean is greater than the population mean and negative when + the sample mean is less than the population mean. + + Examples + -------- + Suppose we wish to test the null hypothesis that the mean of a population + is equal to 0.5. We choose a confidence level of 99%; that is, we will + reject the null hypothesis in favor of the alternative if the p-value is + less than 0.01. + + When testing random variates from the standard uniform distribution, which + has a mean of 0.5, we expect the data to be consistent with the null + hypothesis most of the time. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> rvs = stats.uniform.rvs(size=50, random_state=rng) + >>> stats.ttest_1samp(rvs, popmean=0.5) + TtestResult(statistic=2.456308468440, pvalue=0.017628209047638, df=49) + + As expected, the p-value of 0.017 is not below our threshold of 0.01, so + we cannot reject the null hypothesis. + + When testing data from the standard *normal* distribution, which has a mean + of 0, we would expect the null hypothesis to be rejected. + + >>> rvs = stats.norm.rvs(size=50, random_state=rng) + >>> stats.ttest_1samp(rvs, popmean=0.5) + TtestResult(statistic=-7.433605518875, pvalue=1.416760157221e-09, df=49) + + Indeed, the p-value is lower than our threshold of 0.01, so we reject the + null hypothesis in favor of the default "two-sided" alternative: the mean + of the population is *not* equal to 0.5. + + However, suppose we were to test the null hypothesis against the + one-sided alternative that the mean of the population is *greater* than + 0.5. Since the mean of the standard normal is less than 0.5, we would not + expect the null hypothesis to be rejected. + + >>> stats.ttest_1samp(rvs, popmean=0.5, alternative='greater') + TtestResult(statistic=-7.433605518875, pvalue=0.99999999929, df=49) + + Unsurprisingly, with a p-value greater than our threshold, we would not + reject the null hypothesis. + + Note that when working with a confidence level of 99%, a true null + hypothesis will be rejected approximately 1% of the time. + + >>> rvs = stats.uniform.rvs(size=(100, 50), random_state=rng) + >>> res = stats.ttest_1samp(rvs, popmean=0.5, axis=1) + >>> np.sum(res.pvalue < 0.01) + 1 + + Indeed, even though all 100 samples above were drawn from the standard + uniform distribution, which *does* have a population mean of 0.5, we would + mistakenly reject the null hypothesis for one of them. + + `ttest_1samp` can also compute a confidence interval around the population + mean. + + >>> rvs = stats.norm.rvs(size=50, random_state=rng) + >>> res = stats.ttest_1samp(rvs, popmean=0) + >>> ci = res.confidence_interval(confidence_level=0.95) + >>> ci + ConfidenceInterval(low=-0.3193887540880017, high=0.2898583388980972) + + The bounds of the 95% confidence interval are the + minimum and maximum values of the parameter `popmean` for which the + p-value of the test would be 0.05. + + >>> res = stats.ttest_1samp(rvs, popmean=ci.low) + >>> np.testing.assert_allclose(res.pvalue, 0.05) + >>> res = stats.ttest_1samp(rvs, popmean=ci.high) + >>> np.testing.assert_allclose(res.pvalue, 0.05) + + Under certain assumptions about the population from which a sample + is drawn, the confidence interval with confidence level 95% is expected + to contain the true population mean in 95% of sample replications. 
+ + >>> rvs = stats.norm.rvs(size=(50, 1000), loc=1, random_state=rng) + >>> res = stats.ttest_1samp(rvs, popmean=0) + >>> ci = res.confidence_interval() + >>> contains_pop_mean = (ci.low < 1) & (ci.high > 1) + >>> contains_pop_mean.sum() + 953 + + """ + a, axis = _chk_asarray(a, axis) + + n = a.shape[axis] + df = n - 1 + + mean = np.mean(a, axis) + try: + popmean = np.squeeze(popmean, axis=axis) + except ValueError as e: + raise ValueError("`popmean.shape[axis]` must equal 1.") from e + d = mean - popmean + v = _var(a, axis, ddof=1) + denom = np.sqrt(v / n) + + with np.errstate(divide='ignore', invalid='ignore'): + t = np.divide(d, denom)[()] + prob = _get_pvalue(t, distributions.t(df), alternative) + + # when nan_policy='omit', `df` can be different for different axis-slices + df = np.broadcast_to(df, t.shape)[()] + # _axis_nan_policy decorator doesn't play well with strings + alternative_num = {"less": -1, "two-sided": 0, "greater": 1}[alternative] + return TtestResult(t, prob, df=df, alternative=alternative_num, + standard_error=denom, estimate=mean) + + +def _t_confidence_interval(df, t, confidence_level, alternative): + # Input validation on `alternative` is already done + # We just need IV on confidence_level + if confidence_level < 0 or confidence_level > 1: + message = "`confidence_level` must be a number between 0 and 1." + raise ValueError(message) + + if alternative < 0: # 'less' + p = confidence_level + low, high = np.broadcast_arrays(-np.inf, special.stdtrit(df, p)) + elif alternative > 0: # 'greater' + p = 1 - confidence_level + low, high = np.broadcast_arrays(special.stdtrit(df, p), np.inf) + elif alternative == 0: # 'two-sided' + tail_probability = (1 - confidence_level)/2 + p = tail_probability, 1-tail_probability + # axis of p must be the zeroth and orthogonal to all the rest + p = np.reshape(p, [2] + [1]*np.asarray(df).ndim) + low, high = special.stdtrit(df, p) + else: # alternative is NaN when input is empty (see _axis_nan_policy) + p, nans = np.broadcast_arrays(t, np.nan) + low, high = nans, nans + + return low[()], high[()] + +def _ttest_ind_from_stats(mean1, mean2, denom, df, alternative): + + d = mean1 - mean2 + with np.errstate(divide='ignore', invalid='ignore'): + t = np.divide(d, denom)[()] + prob = _get_pvalue(t, distributions.t(df), alternative) + + return (t, prob) + + +def _unequal_var_ttest_denom(v1, n1, v2, n2): + vn1 = v1 / n1 + vn2 = v2 / n2 + with np.errstate(divide='ignore', invalid='ignore'): + df = (vn1 + vn2)**2 / (vn1**2 / (n1 - 1) + vn2**2 / (n2 - 1)) + + # If df is undefined, variances are zero (assumes n1 > 0 & n2 > 0). + # Hence it doesn't matter what df is as long as it's not NaN. + df = np.where(np.isnan(df), 1, df) + denom = np.sqrt(vn1 + vn2) + return df, denom + + +def _equal_var_ttest_denom(v1, n1, v2, n2): + # If there is a single observation in one sample, this formula for pooled + # variance breaks down because the variance of that sample is undefined. + # The pooled variance is still defined, though, because the (n-1) in the + # numerator should cancel with the (n-1) in the denominator, leaving only + # the sum of squared differences from the mean: zero. 
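+ # In the common case (n1 > 1 and n2 > 1) the lines below compute the
+ # classic pooled estimate svar = ((n1-1)*v1 + (n2-1)*v2) / (n1 + n2 - 2)
+ # and the t denominator sqrt(svar * (1/n1 + 1/n2)) with df = n1 + n2 - 2.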
+ v1 = np.where(n1 == 1, 0, v1)[()] + v2 = np.where(n2 == 1, 0, v2)[()] + + df = n1 + n2 - 2.0 + svar = ((n1 - 1) * v1 + (n2 - 1) * v2) / df + denom = np.sqrt(svar * (1.0 / n1 + 1.0 / n2)) + return df, denom + + +Ttest_indResult = namedtuple('Ttest_indResult', ('statistic', 'pvalue')) + + +def ttest_ind_from_stats(mean1, std1, nobs1, mean2, std2, nobs2, + equal_var=True, alternative="two-sided"): + r""" + T-test for means of two independent samples from descriptive statistics. + + This is a test for the null hypothesis that two independent + samples have identical average (expected) values. + + Parameters + ---------- + mean1 : array_like + The mean(s) of sample 1. + std1 : array_like + The corrected sample standard deviation of sample 1 (i.e. ``ddof=1``). + nobs1 : array_like + The number(s) of observations of sample 1. + mean2 : array_like + The mean(s) of sample 2. + std2 : array_like + The corrected sample standard deviation of sample 2 (i.e. ``ddof=1``). + nobs2 : array_like + The number(s) of observations of sample 2. + equal_var : bool, optional + If True (default), perform a standard independent 2 sample test + that assumes equal population variances [1]_. + If False, perform Welch's t-test, which does not assume equal + population variance [2]_. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the means of the distributions are unequal. + * 'less': the mean of the first distribution is less than the + mean of the second distribution. + * 'greater': the mean of the first distribution is greater than the + mean of the second distribution. + + .. versionadded:: 1.6.0 + + Returns + ------- + statistic : float or array + The calculated t-statistics. + pvalue : float or array + The two-tailed p-value. + + See Also + -------- + scipy.stats.ttest_ind + + Notes + ----- + The statistic is calculated as ``(mean1 - mean2)/se``, where ``se`` is the + standard error. Therefore, the statistic will be positive when `mean1` is + greater than `mean2` and negative when `mean1` is less than `mean2`. + + This method does not check whether any of the elements of `std1` or `std2` + are negative. If any elements of the `std1` or `std2` parameters are + negative in a call to this method, this method will return the same result + as if it were passed ``numpy.abs(std1)`` and ``numpy.abs(std2)``, + respectively, instead; no exceptions or warnings will be emitted. + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/T-test#Independent_two-sample_t-test + + .. [2] https://en.wikipedia.org/wiki/Welch%27s_t-test + + Examples + -------- + Suppose we have the summary data for two samples, as follows (with the + Sample Variance being the corrected sample variance):: + + Sample Sample + Size Mean Variance + Sample 1 13 15.0 87.5 + Sample 2 11 12.0 39.0 + + Apply the t-test to this data (with the assumption that the population + variances are equal): + + >>> import numpy as np + >>> from scipy.stats import ttest_ind_from_stats + >>> ttest_ind_from_stats(mean1=15.0, std1=np.sqrt(87.5), nobs1=13, + ... mean2=12.0, std2=np.sqrt(39.0), nobs2=11) + Ttest_indResult(statistic=0.9051358093310269, pvalue=0.3751996797581487) + + For comparison, here is the data from which those summary statistics + were taken. 
With this data, we can compute the same result using
+ `scipy.stats.ttest_ind`:
+
+ >>> a = np.array([1, 3, 4, 6, 11, 13, 15, 19, 22, 24, 25, 26, 26])
+ >>> b = np.array([2, 4, 6, 9, 11, 13, 14, 15, 18, 19, 21])
+ >>> from scipy.stats import ttest_ind
+ >>> ttest_ind(a, b)
+ Ttest_indResult(statistic=0.905135809331027, pvalue=0.3751996797581486)
+
+ Suppose we instead have binary data and would like to apply a t-test to
+ compare the proportion of 1s in two independent groups::
+
+                       Number of    Sample     Sample
+                 Size    ones        Mean     Variance
+     Sample 1    150      30         0.2      0.161073
+     Sample 2    200      45         0.225    0.175251
+
+ The sample mean :math:`\hat{p}` is the proportion of ones in the sample
+ and the variance for a binary observation is estimated by
+ :math:`\hat{p}(1-\hat{p})`.
+
+ >>> ttest_ind_from_stats(mean1=0.2, std1=np.sqrt(0.161073), nobs1=150,
+ ... mean2=0.225, std2=np.sqrt(0.175251), nobs2=200)
+ Ttest_indResult(statistic=-0.5627187905196761, pvalue=0.5739887114209541)
+
+ For comparison, we could compute the t statistic and p-value using
+ arrays of 0s and 1s and `scipy.stats.ttest_ind`, as above.
+
+ >>> group1 = np.array([1]*30 + [0]*(150-30))
+ >>> group2 = np.array([1]*45 + [0]*(200-45))
+ >>> ttest_ind(group1, group2)
+ Ttest_indResult(statistic=-0.5627179589855622, pvalue=0.573989277115258)
+
+ """
+ mean1 = np.asarray(mean1)
+ std1 = np.asarray(std1)
+ mean2 = np.asarray(mean2)
+ std2 = np.asarray(std2)
+ if equal_var:
+ df, denom = _equal_var_ttest_denom(std1**2, nobs1, std2**2, nobs2)
+ else:
+ df, denom = _unequal_var_ttest_denom(std1**2, nobs1,
+ std2**2, nobs2)
+
+ res = _ttest_ind_from_stats(mean1, mean2, denom, df, alternative)
+ return Ttest_indResult(*res)
+
+
+ @_axis_nan_policy_factory(pack_TtestResult, default_axis=0, n_samples=2,
+ result_to_tuple=unpack_TtestResult, n_outputs=6)
+ def ttest_ind(a, b, axis=0, equal_var=True, nan_policy='propagate',
+ permutations=None, random_state=None, alternative="two-sided",
+ trim=0):
+ """
+ Calculate the T-test for the means of *two independent* samples of scores.
+
+ This is a test for the null hypothesis that 2 independent samples
+ have identical average (expected) values. This test assumes that the
+ populations have identical variances by default.
+
+ Parameters
+ ----------
+ a, b : array_like
+ The arrays must have the same shape, except in the dimension
+ corresponding to `axis` (the first, by default).
+ axis : int or None, optional
+ Axis along which to compute test. If None, compute over the whole
+ arrays, `a`, and `b`.
+ equal_var : bool, optional
+ If True (default), perform a standard independent 2 sample test
+ that assumes equal population variances [1]_.
+ If False, perform Welch's t-test, which does not assume equal
+ population variance [2]_.
+
+ .. versionadded:: 0.11.0
+
+ nan_policy : {'propagate', 'raise', 'omit'}, optional
+ Defines how to handle when input contains nan.
+ The following options are available (default is 'propagate'):
+
+ * 'propagate': returns nan
+ * 'raise': throws an error
+ * 'omit': performs the calculations ignoring nan values
+
+ The 'omit' option is not currently available for permutation tests or
+ one-sided asymptotic tests.
+
+ permutations : non-negative int, np.inf, or None (default), optional
+ If 0 or None (default), use the t-distribution to calculate p-values.
+ Otherwise, `permutations` is the number of random permutations that
+ will be used to estimate p-values using a permutation test.
If + `permutations` equals or exceeds the number of distinct partitions of + the pooled data, an exact test is performed instead (i.e. each + distinct partition is used exactly once). See Notes for details. + + .. versionadded:: 1.7.0 + + random_state : {None, int, `numpy.random.Generator`, + `numpy.random.RandomState`}, optional + + If `seed` is None (or `np.random`), the `numpy.random.RandomState` + singleton is used. + If `seed` is an int, a new ``RandomState`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` or ``RandomState`` instance then + that instance is used. + + Pseudorandom number generator state used to generate permutations + (used only when `permutations` is not None). + + .. versionadded:: 1.7.0 + + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the means of the distributions underlying the samples + are unequal. + * 'less': the mean of the distribution underlying the first sample + is less than the mean of the distribution underlying the second + sample. + * 'greater': the mean of the distribution underlying the first + sample is greater than the mean of the distribution underlying + the second sample. + + .. versionadded:: 1.6.0 + + trim : float, optional + If nonzero, performs a trimmed (Yuen's) t-test. + Defines the fraction of elements to be trimmed from each end of the + input samples. If 0 (default), no elements will be trimmed from either + side. The number of trimmed elements from each tail is the floor of the + trim times the number of elements. Valid range is [0, .5). + + .. versionadded:: 1.7 + + Returns + ------- + result : `~scipy.stats._result_classes.TtestResult` + An object with the following attributes: + + statistic : float or ndarray + The t-statistic. + pvalue : float or ndarray + The p-value associated with the given alternative. + df : float or ndarray + The number of degrees of freedom used in calculation of the + t-statistic. This is always NaN for a permutation t-test. + + .. versionadded:: 1.11.0 + + The object also has the following method: + + confidence_interval(confidence_level=0.95) + Computes a confidence interval around the difference in + population means for the given confidence level. + The confidence interval is returned in a ``namedtuple`` with + fields ``low`` and ``high``. + When a permutation t-test is performed, the confidence interval + is not computed, and fields ``low`` and ``high`` contain NaN. + + .. versionadded:: 1.11.0 + + Notes + ----- + Suppose we observe two independent samples, e.g. flower petal lengths, and + we are considering whether the two samples were drawn from the same + population (e.g. the same species of flower or two species with similar + petal characteristics) or two different populations. + + The t-test quantifies the difference between the arithmetic means + of the two samples. The p-value quantifies the probability of observing + as or more extreme values assuming the null hypothesis, that the + samples are drawn from populations with the same population means, is true. + A p-value larger than a chosen threshold (e.g. 5% or 1%) indicates that + our observation is not so unlikely to have occurred by chance. Therefore, + we do not reject the null hypothesis of equal population means. + If the p-value is smaller than our threshold, then we have evidence + against the null hypothesis of equal population means. 
+ + By default, the p-value is determined by comparing the t-statistic of the + observed data against a theoretical t-distribution. + When ``1 < permutations < binom(n, k)``, where + + * ``k`` is the number of observations in `a`, + * ``n`` is the total number of observations in `a` and `b`, and + * ``binom(n, k)`` is the binomial coefficient (``n`` choose ``k``), + + the data are pooled (concatenated), randomly assigned to either group `a` + or `b`, and the t-statistic is calculated. This process is performed + repeatedly (`permutation` times), generating a distribution of the + t-statistic under the null hypothesis, and the t-statistic of the observed + data is compared to this distribution to determine the p-value. + Specifically, the p-value reported is the "achieved significance level" + (ASL) as defined in 4.4 of [3]_. Note that there are other ways of + estimating p-values using randomized permutation tests; for other + options, see the more general `permutation_test`. + + When ``permutations >= binom(n, k)``, an exact test is performed: the data + are partitioned between the groups in each distinct way exactly once. + + The permutation test can be computationally expensive and not necessarily + more accurate than the analytical test, but it does not make strong + assumptions about the shape of the underlying distribution. + + Use of trimming is commonly referred to as the trimmed t-test. At times + called Yuen's t-test, this is an extension of Welch's t-test, with the + difference being the use of winsorized means in calculation of the variance + and the trimmed sample size in calculation of the statistic. Trimming is + recommended if the underlying distribution is long-tailed or contaminated + with outliers [4]_. + + The statistic is calculated as ``(np.mean(a) - np.mean(b))/se``, where + ``se`` is the standard error. Therefore, the statistic will be positive + when the sample mean of `a` is greater than the sample mean of `b` and + negative when the sample mean of `a` is less than the sample mean of + `b`. + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/T-test#Independent_two-sample_t-test + + .. [2] https://en.wikipedia.org/wiki/Welch%27s_t-test + + .. [3] B. Efron and T. Hastie. Computer Age Statistical Inference. (2016). + + .. [4] Yuen, Karen K. "The Two-Sample Trimmed t for Unequal Population + Variances." Biometrika, vol. 61, no. 1, 1974, pp. 165-170. JSTOR, + www.jstor.org/stable/2334299. Accessed 30 Mar. 2021. + + .. [5] Yuen, Karen K., and W. J. Dixon. "The Approximate Behaviour and + Performance of the Two-Sample Trimmed t." Biometrika, vol. 60, + no. 2, 1973, pp. 369-374. JSTOR, www.jstor.org/stable/2334550. + Accessed 30 Mar. 2021. 
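+
+ As a brief illustrative check (not part of the original examples below),
+ the equal-variance statistic described above can be reproduced by hand from
+ the pooled variance; the two arrays are arbitrary data used only for this
+ sketch.
+
+ >>> import numpy as np
+ >>> from scipy import stats
+ >>> a = np.array([3.1, 2.8, 3.4, 2.9, 3.3, 3.0])
+ >>> b = np.array([2.5, 2.9, 2.6, 2.8, 2.7])
+ >>> n1, n2 = len(a), len(b)
+ >>> v1, v2 = np.var(a, ddof=1), np.var(b, ddof=1)
+ >>> sp2 = ((n1 - 1)*v1 + (n2 - 1)*v2) / (n1 + n2 - 2)  # pooled variance
+ >>> se = np.sqrt(sp2 * (1/n1 + 1/n2))                  # standard error
+ >>> t_manual = (np.mean(a) - np.mean(b)) / se
+ >>> bool(np.isclose(t_manual, stats.ttest_ind(a, b).statistic))
+ True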
+ + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + + Test with sample with identical means: + + >>> rvs1 = stats.norm.rvs(loc=5, scale=10, size=500, random_state=rng) + >>> rvs2 = stats.norm.rvs(loc=5, scale=10, size=500, random_state=rng) + >>> stats.ttest_ind(rvs1, rvs2) + Ttest_indResult(statistic=-0.4390847099199348, pvalue=0.6606952038870015) + >>> stats.ttest_ind(rvs1, rvs2, equal_var=False) + Ttest_indResult(statistic=-0.4390847099199348, pvalue=0.6606952553131064) + + `ttest_ind` underestimates p for unequal variances: + + >>> rvs3 = stats.norm.rvs(loc=5, scale=20, size=500, random_state=rng) + >>> stats.ttest_ind(rvs1, rvs3) + Ttest_indResult(statistic=-1.6370984482905417, pvalue=0.1019251574705033) + >>> stats.ttest_ind(rvs1, rvs3, equal_var=False) + Ttest_indResult(statistic=-1.637098448290542, pvalue=0.10202110497954867) + + When ``n1 != n2``, the equal variance t-statistic is no longer equal to the + unequal variance t-statistic: + + >>> rvs4 = stats.norm.rvs(loc=5, scale=20, size=100, random_state=rng) + >>> stats.ttest_ind(rvs1, rvs4) + Ttest_indResult(statistic=-1.9481646859513422, pvalue=0.05186270935842703) + >>> stats.ttest_ind(rvs1, rvs4, equal_var=False) + Ttest_indResult(statistic=-1.3146566100751664, pvalue=0.1913495266513811) + + T-test with different means, variance, and n: + + >>> rvs5 = stats.norm.rvs(loc=8, scale=20, size=100, random_state=rng) + >>> stats.ttest_ind(rvs1, rvs5) + Ttest_indResult(statistic=-2.8415950600298774, pvalue=0.0046418707568707885) + >>> stats.ttest_ind(rvs1, rvs5, equal_var=False) + Ttest_indResult(statistic=-1.8686598649188084, pvalue=0.06434714193919686) + + When performing a permutation test, more permutations typically yields + more accurate results. Use a ``np.random.Generator`` to ensure + reproducibility: + + >>> stats.ttest_ind(rvs1, rvs5, permutations=10000, + ... random_state=rng) + Ttest_indResult(statistic=-2.8415950600298774, pvalue=0.0052994700529947) + + Take these two samples, one of which has an extreme tail. + + >>> a = (56, 128.6, 12, 123.8, 64.34, 78, 763.3) + >>> b = (1.1, 2.9, 4.2) + + Use the `trim` keyword to perform a trimmed (Yuen) t-test. For example, + using 20% trimming, ``trim=.2``, the test will reduce the impact of one + (``np.floor(trim*len(a))``) element from each tail of sample `a`. It will + have no effect on sample `b` because ``np.floor(trim*len(b))`` is 0. 
+ + >>> stats.ttest_ind(a, b, trim=.2) + Ttest_indResult(statistic=3.4463884028073513, + pvalue=0.01369338726499547) + """ + if not (0 <= trim < .5): + raise ValueError("Trimming percentage should be 0 <= `trim` < .5.") + + NaN = _get_nan(a, b) + + if a.size == 0 or b.size == 0: + # _axis_nan_policy decorator ensures this only happens with 1d input + return TtestResult(NaN, NaN, df=NaN, alternative=NaN, + standard_error=NaN, estimate=NaN) + + if permutations is not None and permutations != 0: + if trim != 0: + raise ValueError("Permutations are currently not supported " + "with trimming.") + if permutations < 0 or (np.isfinite(permutations) and + int(permutations) != permutations): + raise ValueError("Permutations must be a non-negative integer.") + + t, prob = _permutation_ttest(a, b, permutations=permutations, + axis=axis, equal_var=equal_var, + nan_policy=nan_policy, + random_state=random_state, + alternative=alternative) + df, denom, estimate = NaN, NaN, NaN + + else: + n1 = a.shape[axis] + n2 = b.shape[axis] + + if trim == 0: + if equal_var: + old_errstate = np.geterr() + np.seterr(divide='ignore', invalid='ignore') + v1 = _var(a, axis, ddof=1) + v2 = _var(b, axis, ddof=1) + if equal_var: + np.seterr(**old_errstate) + m1 = np.mean(a, axis) + m2 = np.mean(b, axis) + else: + v1, m1, n1 = _ttest_trim_var_mean_len(a, trim, axis) + v2, m2, n2 = _ttest_trim_var_mean_len(b, trim, axis) + + if equal_var: + df, denom = _equal_var_ttest_denom(v1, n1, v2, n2) + else: + df, denom = _unequal_var_ttest_denom(v1, n1, v2, n2) + t, prob = _ttest_ind_from_stats(m1, m2, denom, df, alternative) + + # when nan_policy='omit', `df` can be different for different axis-slices + df = np.broadcast_to(df, t.shape)[()] + estimate = m1-m2 + + # _axis_nan_policy decorator doesn't play well with strings + alternative_num = {"less": -1, "two-sided": 0, "greater": 1}[alternative] + return TtestResult(t, prob, df=df, alternative=alternative_num, + standard_error=denom, estimate=estimate) + + +def _ttest_trim_var_mean_len(a, trim, axis): + """Variance, mean, and length of winsorized input along specified axis""" + # for use with `ttest_ind` when trimming. + # further calculations in this test assume that the inputs are sorted. + # From [4] Section 1 "Let x_1, ..., x_n be n ordered observations..." + a = np.sort(a, axis=axis) + + # `g` is the number of elements to be replaced on each tail, converted + # from a percentage amount of trimming + n = a.shape[axis] + g = int(n * trim) + + # Calculate the Winsorized variance of the input samples according to + # specified `g` + v = _calculate_winsorized_variance(a, g, axis) + + # the total number of elements in the trimmed samples + n -= 2 * g + + # calculate the g-times trimmed mean, as defined in [4] (1-1) + m = trim_mean(a, trim, axis=axis) + return v, m, n + + +def _calculate_winsorized_variance(a, g, axis): + """Calculates g-times winsorized variance along specified axis""" + # it is expected that the input `a` is sorted along the correct axis + if g == 0: + return _var(a, ddof=1, axis=axis) + # move the intended axis to the end that way it is easier to manipulate + a_win = np.moveaxis(a, axis, -1) + + # save where NaNs are for later use. + nans_indices = np.any(np.isnan(a_win), axis=-1) + + # Winsorization and variance calculation are done in one step in [4] + # (1-3), but here winsorization is done first; replace the left and + # right sides with the repeating value. 
This can be see in effect in ( + # 1-3) in [4], where the leftmost and rightmost tails are replaced with + # `(g + 1) * x_{g + 1}` on the left and `(g + 1) * x_{n - g}` on the + # right. Zero-indexing turns `g + 1` to `g`, and `n - g` to `- g - 1` in + # array indexing. + a_win[..., :g] = a_win[..., [g]] + a_win[..., -g:] = a_win[..., [-g - 1]] + + # Determine the variance. In [4], the degrees of freedom is expressed as + # `h - 1`, where `h = n - 2g` (unnumbered equations in Section 1, end of + # page 369, beginning of page 370). This is converted to NumPy's format, + # `n - ddof` for use with `np.var`. The result is converted to an + # array to accommodate indexing later. + var_win = np.asarray(_var(a_win, ddof=(2 * g + 1), axis=-1)) + + # with `nan_policy='propagate'`, NaNs may be completely trimmed out + # because they were sorted into the tail of the array. In these cases, + # replace computed variances with `np.nan`. + var_win[nans_indices] = np.nan + return var_win + + +def _permutation_distribution_t(data, permutations, size_a, equal_var, + random_state=None): + """Generation permutation distribution of t statistic""" + + random_state = check_random_state(random_state) + + # prepare permutation indices + size = data.shape[-1] + # number of distinct combinations + n_max = special.comb(size, size_a) + + if permutations < n_max: + perm_generator = (random_state.permutation(size) + for i in range(permutations)) + else: + permutations = n_max + perm_generator = (np.concatenate(z) + for z in _all_partitions(size_a, size-size_a)) + + t_stat = [] + for indices in _batch_generator(perm_generator, batch=50): + # get one batch from perm_generator at a time as a list + indices = np.array(indices) + # generate permutations + data_perm = data[..., indices] + # move axis indexing permutations to position 0 to broadcast + # nicely with t_stat_observed, which doesn't have this dimension + data_perm = np.moveaxis(data_perm, -2, 0) + + a = data_perm[..., :size_a] + b = data_perm[..., size_a:] + t_stat.append(_calc_t_stat(a, b, equal_var)) + + t_stat = np.concatenate(t_stat, axis=0) + + return t_stat, permutations, n_max + + +def _calc_t_stat(a, b, equal_var, axis=-1): + """Calculate the t statistic along the given dimension.""" + na = a.shape[axis] + nb = b.shape[axis] + avg_a = np.mean(a, axis=axis) + avg_b = np.mean(b, axis=axis) + var_a = _var(a, axis=axis, ddof=1) + var_b = _var(b, axis=axis, ddof=1) + + if not equal_var: + denom = _unequal_var_ttest_denom(var_a, na, var_b, nb)[1] + else: + denom = _equal_var_ttest_denom(var_a, na, var_b, nb)[1] + + return (avg_a-avg_b)/denom + + +def _permutation_ttest(a, b, permutations, axis=0, equal_var=True, + nan_policy='propagate', random_state=None, + alternative="two-sided"): + """ + Calculates the T-test for the means of TWO INDEPENDENT samples of scores + using permutation methods. + + This test is similar to `stats.ttest_ind`, except it doesn't rely on an + approximate normality assumption since it uses a permutation test. + This function is only called from ttest_ind when permutations is not None. + + Parameters + ---------- + a, b : array_like + The arrays must be broadcastable, except along the dimension + corresponding to `axis` (the zeroth, by default). + axis : int, optional + The axis over which to operate on a and b. + permutations : int, optional + Number of permutations used to calculate p-value. If greater than or + equal to the number of distinct permutations, perform an exact test. 
+ equal_var : bool, optional + If False, an equal variance (Welch's) t-test is conducted. Otherwise, + an ordinary t-test is conducted. + random_state : {None, int, `numpy.random.Generator`}, optional + If `seed` is None the `numpy.random.Generator` singleton is used. + If `seed` is an int, a new ``Generator`` instance is used, + seeded with `seed`. + If `seed` is already a ``Generator`` instance then that instance is + used. + Pseudorandom number generator state used for generating random + permutations. + + Returns + ------- + statistic : float or array + The calculated t-statistic. + pvalue : float or array + The p-value. + + """ + random_state = check_random_state(random_state) + + t_stat_observed = _calc_t_stat(a, b, equal_var, axis=axis) + + na = a.shape[axis] + mat = _broadcast_concatenate((a, b), axis=axis) + mat = np.moveaxis(mat, axis, -1) + + t_stat, permutations, n_max = _permutation_distribution_t( + mat, permutations, size_a=na, equal_var=equal_var, + random_state=random_state) + + compare = {"less": np.less_equal, + "greater": np.greater_equal, + "two-sided": lambda x, y: (x <= -np.abs(y)) | (x >= np.abs(y))} + + # Calculate the p-values + cmps = compare[alternative](t_stat, t_stat_observed) + # Randomized test p-value calculation should use biased estimate; see e.g. + # https://www.degruyter.com/document/doi/10.2202/1544-6115.1585/ + adjustment = 1 if n_max > permutations else 0 + pvalues = (cmps.sum(axis=0) + adjustment) / (permutations + adjustment) + + # nans propagate naturally in statistic calculation, but need to be + # propagated manually into pvalues + if nan_policy == 'propagate' and np.isnan(t_stat_observed).any(): + if np.ndim(pvalues) == 0: + pvalues = np.float64(np.nan) + else: + pvalues[np.isnan(t_stat_observed)] = np.nan + + return (t_stat_observed, pvalues) + + +def _get_len(a, axis, msg): + try: + n = a.shape[axis] + except IndexError: + raise AxisError(axis, a.ndim, msg) from None + return n + + +@_axis_nan_policy_factory(pack_TtestResult, default_axis=0, n_samples=2, + result_to_tuple=unpack_TtestResult, n_outputs=6, + paired=True) +def ttest_rel(a, b, axis=0, nan_policy='propagate', alternative="two-sided"): + """Calculate the t-test on TWO RELATED samples of scores, a and b. + + This is a test for the null hypothesis that two related or + repeated samples have identical average (expected) values. + + Parameters + ---------- + a, b : array_like + The arrays must have the same shape. + axis : int or None, optional + Axis along which to compute test. If None, compute over the whole + arrays, `a`, and `b`. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the means of the distributions underlying the samples + are unequal. + * 'less': the mean of the distribution underlying the first sample + is less than the mean of the distribution underlying the second + sample. + * 'greater': the mean of the distribution underlying the first + sample is greater than the mean of the distribution underlying + the second sample. + + .. 
versionadded:: 1.6.0 + + Returns + ------- + result : `~scipy.stats._result_classes.TtestResult` + An object with the following attributes: + + statistic : float or array + The t-statistic. + pvalue : float or array + The p-value associated with the given alternative. + df : float or array + The number of degrees of freedom used in calculation of the + t-statistic; this is one less than the size of the sample + (``a.shape[axis]``). + + .. versionadded:: 1.10.0 + + The object also has the following method: + + confidence_interval(confidence_level=0.95) + Computes a confidence interval around the difference in + population means for the given confidence level. + The confidence interval is returned in a ``namedtuple`` with + fields `low` and `high`. + + .. versionadded:: 1.10.0 + + Notes + ----- + Examples for use are scores of the same set of student in + different exams, or repeated sampling from the same units. The + test measures whether the average score differs significantly + across samples (e.g. exams). If we observe a large p-value, for + example greater than 0.05 or 0.1 then we cannot reject the null + hypothesis of identical average scores. If the p-value is smaller + than the threshold, e.g. 1%, 5% or 10%, then we reject the null + hypothesis of equal averages. Small p-values are associated with + large t-statistics. + + The t-statistic is calculated as ``np.mean(a - b)/se``, where ``se`` is the + standard error. Therefore, the t-statistic will be positive when the sample + mean of ``a - b`` is greater than zero and negative when the sample mean of + ``a - b`` is less than zero. + + References + ---------- + https://en.wikipedia.org/wiki/T-test#Dependent_t-test_for_paired_samples + + Examples + -------- + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + + >>> rvs1 = stats.norm.rvs(loc=5, scale=10, size=500, random_state=rng) + >>> rvs2 = (stats.norm.rvs(loc=5, scale=10, size=500, random_state=rng) + ... + stats.norm.rvs(scale=0.2, size=500, random_state=rng)) + >>> stats.ttest_rel(rvs1, rvs2) + TtestResult(statistic=-0.4549717054410304, pvalue=0.6493274702088672, df=499) + >>> rvs3 = (stats.norm.rvs(loc=8, scale=10, size=500, random_state=rng) + ... + stats.norm.rvs(scale=0.2, size=500, random_state=rng)) + >>> stats.ttest_rel(rvs1, rvs3) + TtestResult(statistic=-5.879467544540889, pvalue=7.540777129099917e-09, df=499) + + """ + a, b, axis = _chk2_asarray(a, b, axis) + + na = _get_len(a, axis, "first argument") + nb = _get_len(b, axis, "second argument") + if na != nb: + raise ValueError('unequal length arrays') + + if na == 0 or nb == 0: + # _axis_nan_policy decorator ensures this only happens with 1d input + NaN = _get_nan(a, b) + return TtestResult(NaN, NaN, df=NaN, alternative=NaN, + standard_error=NaN, estimate=NaN) + + n = a.shape[axis] + df = n - 1 + + d = (a - b).astype(np.float64) + v = _var(d, axis, ddof=1) + dm = np.mean(d, axis) + denom = np.sqrt(v / n) + + with np.errstate(divide='ignore', invalid='ignore'): + t = np.divide(dm, denom)[()] + prob = _get_pvalue(t, distributions.t(df), alternative) + + # when nan_policy='omit', `df` can be different for different axis-slices + df = np.broadcast_to(df, t.shape)[()] + + # _axis_nan_policy decorator doesn't play well with strings + alternative_num = {"less": -1, "two-sided": 0, "greater": 1}[alternative] + return TtestResult(t, prob, df=df, alternative=alternative_num, + standard_error=denom, estimate=dm) + + +# Map from names to lambda_ values used in power_divergence(). 
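+# For a general lambda_, power_divergence() computes
+#     stat = 2 * sum(f_obs * ((f_obs / f_exp)**lambda_ - 1)) / (lambda_ * (lambda_ + 1)),
+# using dedicated branches for the special cases lambda_ = 1 (Pearson),
+# 0 (log-likelihood / G-test) and -1 (modified log-likelihood).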
+_power_div_lambda_names = { + "pearson": 1, + "log-likelihood": 0, + "freeman-tukey": -0.5, + "mod-log-likelihood": -1, + "neyman": -2, + "cressie-read": 2/3, +} + + +def _count(a, axis=None): + """Count the number of non-masked elements of an array. + + This function behaves like `np.ma.count`, but is much faster + for ndarrays. + """ + if hasattr(a, 'count'): + num = a.count(axis=axis) + if isinstance(num, np.ndarray) and num.ndim == 0: + # In some cases, the `count` method returns a scalar array (e.g. + # np.array(3)), but we want a plain integer. + num = int(num) + else: + if axis is None: + num = a.size + else: + num = a.shape[axis] + return num + + +def _m_broadcast_to(a, shape): + if np.ma.isMaskedArray(a): + return np.ma.masked_array(np.broadcast_to(a, shape), + mask=np.broadcast_to(a.mask, shape)) + return np.broadcast_to(a, shape, subok=True) + + +Power_divergenceResult = namedtuple('Power_divergenceResult', + ('statistic', 'pvalue')) + + +def power_divergence(f_obs, f_exp=None, ddof=0, axis=0, lambda_=None): + """Cressie-Read power divergence statistic and goodness of fit test. + + This function tests the null hypothesis that the categorical data + has the given frequencies, using the Cressie-Read power divergence + statistic. + + Parameters + ---------- + f_obs : array_like + Observed frequencies in each category. + f_exp : array_like, optional + Expected frequencies in each category. By default the categories are + assumed to be equally likely. + ddof : int, optional + "Delta degrees of freedom": adjustment to the degrees of freedom + for the p-value. The p-value is computed using a chi-squared + distribution with ``k - 1 - ddof`` degrees of freedom, where `k` + is the number of observed frequencies. The default value of `ddof` + is 0. + axis : int or None, optional + The axis of the broadcast result of `f_obs` and `f_exp` along which to + apply the test. If axis is None, all values in `f_obs` are treated + as a single data set. Default is 0. + lambda_ : float or str, optional + The power in the Cressie-Read power divergence statistic. The default + is 1. For convenience, `lambda_` may be assigned one of the following + strings, in which case the corresponding numerical value is used: + + * ``"pearson"`` (value 1) + Pearson's chi-squared statistic. In this case, the function is + equivalent to `chisquare`. + * ``"log-likelihood"`` (value 0) + Log-likelihood ratio. Also known as the G-test [3]_. + * ``"freeman-tukey"`` (value -1/2) + Freeman-Tukey statistic. + * ``"mod-log-likelihood"`` (value -1) + Modified log-likelihood ratio. + * ``"neyman"`` (value -2) + Neyman's statistic. + * ``"cressie-read"`` (value 2/3) + The power recommended in [5]_. + + Returns + ------- + res: Power_divergenceResult + An object containing attributes: + + statistic : float or ndarray + The Cressie-Read power divergence test statistic. The value is + a float if `axis` is None or if` `f_obs` and `f_exp` are 1-D. + pvalue : float or ndarray + The p-value of the test. The value is a float if `ddof` and the + return value `stat` are scalars. + + See Also + -------- + chisquare + + Notes + ----- + This test is invalid when the observed or expected frequencies in each + category are too small. A typical rule is that all of the observed + and expected frequencies should be at least 5. + + Also, the sum of the observed and expected frequencies must be the same + for the test to be valid; `power_divergence` raises an error if the sums + do not agree within a relative tolerance of ``1e-8``. 
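+
+    For a general value of `lambda_` (the cases ``lambda_`` equal to 1, 0 and
+    -1 are handled by dedicated branches), the statistic is computed as in the
+    following sketch, where ``f_obs`` and ``f_exp`` stand for the broadcast
+    observed and expected frequencies::
+
+        terms = f_obs * ((f_obs / f_exp)**lambda_ - 1)
+        terms /= 0.5 * lambda_ * (lambda_ + 1)
+        stat = terms.sum(axis=axis)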
+ + When `lambda_` is less than zero, the formula for the statistic involves + dividing by `f_obs`, so a warning or error may be generated if any value + in `f_obs` is 0. + + Similarly, a warning or error may be generated if any value in `f_exp` is + zero when `lambda_` >= 0. + + The default degrees of freedom, k-1, are for the case when no parameters + of the distribution are estimated. If p parameters are estimated by + efficient maximum likelihood then the correct degrees of freedom are + k-1-p. If the parameters are estimated in a different way, then the + dof can be between k-1-p and k-1. However, it is also possible that + the asymptotic distribution is not a chisquare, in which case this + test is not appropriate. + + This function handles masked arrays. If an element of `f_obs` or `f_exp` + is masked, then data at that position is ignored, and does not count + towards the size of the data set. + + .. versionadded:: 0.13.0 + + References + ---------- + .. [1] Lowry, Richard. "Concepts and Applications of Inferential + Statistics". Chapter 8. + https://web.archive.org/web/20171015035606/http://faculty.vassar.edu/lowry/ch8pt1.html + .. [2] "Chi-squared test", https://en.wikipedia.org/wiki/Chi-squared_test + .. [3] "G-test", https://en.wikipedia.org/wiki/G-test + .. [4] Sokal, R. R. and Rohlf, F. J. "Biometry: the principles and + practice of statistics in biological research", New York: Freeman + (1981) + .. [5] Cressie, N. and Read, T. R. C., "Multinomial Goodness-of-Fit + Tests", J. Royal Stat. Soc. Series B, Vol. 46, No. 3 (1984), + pp. 440-464. + + Examples + -------- + (See `chisquare` for more examples.) + + When just `f_obs` is given, it is assumed that the expected frequencies + are uniform and given by the mean of the observed frequencies. Here we + perform a G-test (i.e. use the log-likelihood ratio statistic): + + >>> import numpy as np + >>> from scipy.stats import power_divergence + >>> power_divergence([16, 18, 16, 14, 12, 12], lambda_='log-likelihood') + (2.006573162632538, 0.84823476779463769) + + The expected frequencies can be given with the `f_exp` argument: + + >>> power_divergence([16, 18, 16, 14, 12, 12], + ... f_exp=[16, 16, 16, 16, 16, 8], + ... lambda_='log-likelihood') + (3.3281031458963746, 0.6495419288047497) + + When `f_obs` is 2-D, by default the test is applied to each column. + + >>> obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T + >>> obs.shape + (6, 2) + >>> power_divergence(obs, lambda_="log-likelihood") + (array([ 2.00657316, 6.77634498]), array([ 0.84823477, 0.23781225])) + + By setting ``axis=None``, the test is applied to all data in the array, + which is equivalent to applying the test to the flattened array. + + >>> power_divergence(obs, axis=None) + (23.31034482758621, 0.015975692534127565) + >>> power_divergence(obs.ravel()) + (23.31034482758621, 0.015975692534127565) + + `ddof` is the change to make to the default degrees of freedom. + + >>> power_divergence([16, 18, 16, 14, 12, 12], ddof=1) + (2.0, 0.73575888234288467) + + The calculation of the p-values is done by broadcasting the + test statistic with `ddof`. + + >>> power_divergence([16, 18, 16, 14, 12, 12], ddof=[0,1,2]) + (2.0, array([ 0.84914504, 0.73575888, 0.5724067 ])) + + `f_obs` and `f_exp` are also broadcast. In the following, `f_obs` has + shape (6,) and `f_exp` has shape (2, 6), so the result of broadcasting + `f_obs` and `f_exp` has shape (2, 6). 
To compute the desired chi-squared + statistics, we must use ``axis=1``: + + >>> power_divergence([16, 18, 16, 14, 12, 12], + ... f_exp=[[16, 16, 16, 16, 16, 8], + ... [8, 20, 20, 16, 12, 12]], + ... axis=1) + (array([ 3.5 , 9.25]), array([ 0.62338763, 0.09949846])) + + """ + # Convert the input argument `lambda_` to a numerical value. + if isinstance(lambda_, str): + if lambda_ not in _power_div_lambda_names: + names = repr(list(_power_div_lambda_names.keys()))[1:-1] + raise ValueError(f"invalid string for lambda_: {lambda_!r}. " + f"Valid strings are {names}") + lambda_ = _power_div_lambda_names[lambda_] + elif lambda_ is None: + lambda_ = 1 + + f_obs = np.asanyarray(f_obs) + f_obs_float = f_obs.astype(np.float64) + + if f_exp is not None: + f_exp = np.asanyarray(f_exp) + bshape = np.broadcast_shapes(f_obs_float.shape, f_exp.shape) + f_obs_float = _m_broadcast_to(f_obs_float, bshape) + f_exp = _m_broadcast_to(f_exp, bshape) + rtol = 1e-8 # to pass existing tests + with np.errstate(invalid='ignore'): + f_obs_sum = f_obs_float.sum(axis=axis) + f_exp_sum = f_exp.sum(axis=axis) + relative_diff = (np.abs(f_obs_sum - f_exp_sum) / + np.minimum(f_obs_sum, f_exp_sum)) + diff_gt_tol = (relative_diff > rtol).any() + if diff_gt_tol: + msg = (f"For each axis slice, the sum of the observed " + f"frequencies must agree with the sum of the " + f"expected frequencies to a relative tolerance " + f"of {rtol}, but the percent differences are:\n" + f"{relative_diff}") + raise ValueError(msg) + + else: + # Ignore 'invalid' errors so the edge case of a data set with length 0 + # is handled without spurious warnings. + with np.errstate(invalid='ignore'): + f_exp = f_obs.mean(axis=axis, keepdims=True) + + # `terms` is the array of terms that are summed along `axis` to create + # the test statistic. We use some specialized code for a few special + # cases of lambda_. + if lambda_ == 1: + # Pearson's chi-squared statistic + terms = (f_obs_float - f_exp)**2 / f_exp + elif lambda_ == 0: + # Log-likelihood ratio (i.e. G-test) + terms = 2.0 * special.xlogy(f_obs, f_obs / f_exp) + elif lambda_ == -1: + # Modified log-likelihood ratio + terms = 2.0 * special.xlogy(f_exp, f_exp / f_obs) + else: + # General Cressie-Read power divergence. + terms = f_obs * ((f_obs / f_exp)**lambda_ - 1) + terms /= 0.5 * lambda_ * (lambda_ + 1) + + stat = terms.sum(axis=axis) + + num_obs = _count(terms, axis=axis) + ddof = asarray(ddof) + p = distributions.chi2.sf(stat, num_obs - 1 - ddof) + + return Power_divergenceResult(stat, p) + + +def chisquare(f_obs, f_exp=None, ddof=0, axis=0): + """Calculate a one-way chi-square test. + + The chi-square test tests the null hypothesis that the categorical data + has the given frequencies. + + Parameters + ---------- + f_obs : array_like + Observed frequencies in each category. + f_exp : array_like, optional + Expected frequencies in each category. By default the categories are + assumed to be equally likely. + ddof : int, optional + "Delta degrees of freedom": adjustment to the degrees of freedom + for the p-value. The p-value is computed using a chi-squared + distribution with ``k - 1 - ddof`` degrees of freedom, where `k` + is the number of observed frequencies. The default value of `ddof` + is 0. + axis : int or None, optional + The axis of the broadcast result of `f_obs` and `f_exp` along which to + apply the test. If axis is None, all values in `f_obs` are treated + as a single data set. Default is 0. 
+ + Returns + ------- + res: Power_divergenceResult + An object containing attributes: + + statistic : float or ndarray + The chi-squared test statistic. The value is a float if `axis` is + None or `f_obs` and `f_exp` are 1-D. + pvalue : float or ndarray + The p-value of the test. The value is a float if `ddof` and the + result attribute `statistic` are scalars. + + See Also + -------- + scipy.stats.power_divergence + scipy.stats.fisher_exact : Fisher exact test on a 2x2 contingency table. + scipy.stats.barnard_exact : An unconditional exact test. An alternative + to chi-squared test for small sample sizes. + + Notes + ----- + This test is invalid when the observed or expected frequencies in each + category are too small. A typical rule is that all of the observed + and expected frequencies should be at least 5. According to [3]_, the + total number of samples is recommended to be greater than 13, + otherwise exact tests (such as Barnard's Exact test) should be used + because they do not overreject. + + Also, the sum of the observed and expected frequencies must be the same + for the test to be valid; `chisquare` raises an error if the sums do not + agree within a relative tolerance of ``1e-8``. + + The default degrees of freedom, k-1, are for the case when no parameters + of the distribution are estimated. If p parameters are estimated by + efficient maximum likelihood then the correct degrees of freedom are + k-1-p. If the parameters are estimated in a different way, then the + dof can be between k-1-p and k-1. However, it is also possible that + the asymptotic distribution is not chi-square, in which case this test + is not appropriate. + + References + ---------- + .. [1] Lowry, Richard. "Concepts and Applications of Inferential + Statistics". Chapter 8. + https://web.archive.org/web/20171022032306/http://vassarstats.net:80/textbook/ch8pt1.html + .. [2] "Chi-squared test", https://en.wikipedia.org/wiki/Chi-squared_test + .. [3] Pearson, Karl. "On the criterion that a given system of deviations from the probable + in the case of a correlated system of variables is such that it can be reasonably + supposed to have arisen from random sampling", Philosophical Magazine. Series 5. 50 + (1900), pp. 157-175. + .. [4] Mannan, R. William and E. Charles. Meslow. "Bird populations and + vegetation characteristics in managed and old-growth forests, + northeastern Oregon." Journal of Wildlife Management + 48, 1219-1238, :doi:`10.2307/3801783`, 1984. + + Examples + -------- + In [4]_, bird foraging behavior was investigated in an old-growth forest + of Oregon. + In the forest, 44% of the canopy volume was Douglas fir, + 24% was ponderosa pine, 29% was grand fir, and 3% was western larch. + The authors observed the behavior of several species of birds, one of + which was the red-breasted nuthatch. They made 189 observations of this + species foraging, recording 43 ("23%") of observations in Douglas fir, + 52 ("28%") in ponderosa pine, 54 ("29%") in grand fir, and 40 ("21%") in + western larch. + + Using a chi-square test, we can test the null hypothesis that the + proportions of foraging events are equal to the proportions of canopy + volume. The authors of the paper considered a p-value less than 1% to be + significant. + + Using the above proportions of canopy volume and observed events, we can + infer expected frequencies. 
+ + >>> import numpy as np + >>> f_exp = np.array([44, 24, 29, 3]) / 100 * 189 + + The observed frequencies of foraging were: + + >>> f_obs = np.array([43, 52, 54, 40]) + + We can now compare the observed frequencies with the expected frequencies. + + >>> from scipy.stats import chisquare + >>> chisquare(f_obs=f_obs, f_exp=f_exp) + Power_divergenceResult(statistic=228.23515947653874, pvalue=3.3295585338846486e-49) + + The p-value is well below the chosen significance level. Hence, the + authors considered the difference to be significant and concluded + that the relative proportions of foraging events were not the same + as the relative proportions of tree canopy volume. + + Following are other generic examples to demonstrate how the other + parameters can be used. + + When just `f_obs` is given, it is assumed that the expected frequencies + are uniform and given by the mean of the observed frequencies. + + >>> chisquare([16, 18, 16, 14, 12, 12]) + Power_divergenceResult(statistic=2.0, pvalue=0.84914503608460956) + + With `f_exp` the expected frequencies can be given. + + >>> chisquare([16, 18, 16, 14, 12, 12], f_exp=[16, 16, 16, 16, 16, 8]) + Power_divergenceResult(statistic=3.5, pvalue=0.62338762774958223) + + When `f_obs` is 2-D, by default the test is applied to each column. + + >>> obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T + >>> obs.shape + (6, 2) + >>> chisquare(obs) + Power_divergenceResult(statistic=array([2. , 6.66666667]), pvalue=array([0.84914504, 0.24663415])) + + By setting ``axis=None``, the test is applied to all data in the array, + which is equivalent to applying the test to the flattened array. + + >>> chisquare(obs, axis=None) + Power_divergenceResult(statistic=23.31034482758621, pvalue=0.015975692534127565) + >>> chisquare(obs.ravel()) + Power_divergenceResult(statistic=23.310344827586206, pvalue=0.01597569253412758) + + `ddof` is the change to make to the default degrees of freedom. + + >>> chisquare([16, 18, 16, 14, 12, 12], ddof=1) + Power_divergenceResult(statistic=2.0, pvalue=0.7357588823428847) + + The calculation of the p-values is done by broadcasting the + chi-squared statistic with `ddof`. + + >>> chisquare([16, 18, 16, 14, 12, 12], ddof=[0,1,2]) + Power_divergenceResult(statistic=2.0, pvalue=array([0.84914504, 0.73575888, 0.5724067 ])) + + `f_obs` and `f_exp` are also broadcast. In the following, `f_obs` has + shape (6,) and `f_exp` has shape (2, 6), so the result of broadcasting + `f_obs` and `f_exp` has shape (2, 6). To compute the desired chi-squared + statistics, we use ``axis=1``: + + >>> chisquare([16, 18, 16, 14, 12, 12], + ... f_exp=[[16, 16, 16, 16, 16, 8], [8, 20, 20, 16, 12, 12]], + ... axis=1) + Power_divergenceResult(statistic=array([3.5 , 9.25]), pvalue=array([0.62338763, 0.09949846])) + + """ # noqa: E501 + return power_divergence(f_obs, f_exp=f_exp, ddof=ddof, axis=axis, + lambda_="pearson") + + +KstestResult = _make_tuple_bunch('KstestResult', ['statistic', 'pvalue'], + ['statistic_location', 'statistic_sign']) + + +def _compute_dplus(cdfvals, x): + """Computes D+ as used in the Kolmogorov-Smirnov test. + + Parameters + ---------- + cdfvals : array_like + Sorted array of CDF values between 0 and 1 + x: array_like + Sorted array of the stochastic variable itself + + Returns + ------- + res: Pair with the following elements: + - The maximum distance of the CDF values below Uniform(0, 1). + - The location at which the maximum is reached. 
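+
+    Notes
+    -----
+    With `cdfvals` sorted, the statistic is equivalent to the scalar sketch
+    ``max((i + 1) / n - cdfvals[i] for i in range(n))``; the vectorized
+    computation below additionally records the index at which the maximum
+    is attained.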
+ + """ + n = len(cdfvals) + dplus = (np.arange(1.0, n + 1) / n - cdfvals) + amax = dplus.argmax() + loc_max = x[amax] + return (dplus[amax], loc_max) + + +def _compute_dminus(cdfvals, x): + """Computes D- as used in the Kolmogorov-Smirnov test. + + Parameters + ---------- + cdfvals : array_like + Sorted array of CDF values between 0 and 1 + x: array_like + Sorted array of the stochastic variable itself + + Returns + ------- + res: Pair with the following elements: + - Maximum distance of the CDF values above Uniform(0, 1) + - The location at which the maximum is reached. + """ + n = len(cdfvals) + dminus = (cdfvals - np.arange(0.0, n)/n) + amax = dminus.argmax() + loc_max = x[amax] + return (dminus[amax], loc_max) + + +def _tuple_to_KstestResult(statistic, pvalue, + statistic_location, statistic_sign): + return KstestResult(statistic, pvalue, + statistic_location=statistic_location, + statistic_sign=statistic_sign) + + +def _KstestResult_to_tuple(res): + return *res, res.statistic_location, res.statistic_sign + + +@_axis_nan_policy_factory(_tuple_to_KstestResult, n_samples=1, n_outputs=4, + result_to_tuple=_KstestResult_to_tuple) +@_rename_parameter("mode", "method") +def ks_1samp(x, cdf, args=(), alternative='two-sided', method='auto'): + """ + Performs the one-sample Kolmogorov-Smirnov test for goodness of fit. + + This test compares the underlying distribution F(x) of a sample + against a given continuous distribution G(x). See Notes for a description + of the available null and alternative hypotheses. + + Parameters + ---------- + x : array_like + a 1-D array of observations of iid random variables. + cdf : callable + callable used to calculate the cdf. + args : tuple, sequence, optional + Distribution parameters, used with `cdf`. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the null and alternative hypotheses. Default is 'two-sided'. + Please see explanations in the Notes below. + method : {'auto', 'exact', 'approx', 'asymp'}, optional + Defines the distribution used for calculating the p-value. + The following options are available (default is 'auto'): + + * 'auto' : selects one of the other options. + * 'exact' : uses the exact distribution of test statistic. + * 'approx' : approximates the two-sided probability with twice + the one-sided probability + * 'asymp': uses asymptotic distribution of test statistic + + Returns + ------- + res: KstestResult + An object containing attributes: + + statistic : float + KS test statistic, either D+, D-, or D (the maximum of the two) + pvalue : float + One-tailed or two-tailed p-value. + statistic_location : float + Value of `x` corresponding with the KS statistic; i.e., the + distance between the empirical distribution function and the + hypothesized cumulative distribution function is measured at this + observation. + statistic_sign : int + +1 if the KS statistic is the maximum positive difference between + the empirical distribution function and the hypothesized cumulative + distribution function (D+); -1 if the KS statistic is the maximum + negative difference (D-). + + + See Also + -------- + ks_2samp, kstest + + Notes + ----- + There are three options for the null and corresponding alternative + hypothesis that can be selected using the `alternative` parameter. + + - `two-sided`: The null hypothesis is that the two distributions are + identical, F(x)=G(x) for all x; the alternative is that they are not + identical. 
+ + - `less`: The null hypothesis is that F(x) >= G(x) for all x; the + alternative is that F(x) < G(x) for at least one x. + + - `greater`: The null hypothesis is that F(x) <= G(x) for all x; the + alternative is that F(x) > G(x) for at least one x. + + Note that the alternative hypotheses describe the *CDFs* of the + underlying distributions, not the observed values. For example, + suppose x1 ~ F and x2 ~ G. If F(x) > G(x) for all x, the values in + x1 tend to be less than those in x2. + + Examples + -------- + Suppose we wish to test the null hypothesis that a sample is distributed + according to the standard normal. + We choose a confidence level of 95%; that is, we will reject the null + hypothesis in favor of the alternative if the p-value is less than 0.05. + + When testing uniformly distributed data, we would expect the + null hypothesis to be rejected. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> stats.ks_1samp(stats.uniform.rvs(size=100, random_state=rng), + ... stats.norm.cdf) + KstestResult(statistic=0.5001899973268688, pvalue=1.1616392184763533e-23) + + Indeed, the p-value is lower than our threshold of 0.05, so we reject the + null hypothesis in favor of the default "two-sided" alternative: the data + are *not* distributed according to the standard normal. + + When testing random variates from the standard normal distribution, we + expect the data to be consistent with the null hypothesis most of the time. + + >>> x = stats.norm.rvs(size=100, random_state=rng) + >>> stats.ks_1samp(x, stats.norm.cdf) + KstestResult(statistic=0.05345882212970396, pvalue=0.9227159037744717) + + As expected, the p-value of 0.92 is not below our threshold of 0.05, so + we cannot reject the null hypothesis. + + Suppose, however, that the random variates are distributed according to + a normal distribution that is shifted toward greater values. In this case, + the cumulative density function (CDF) of the underlying distribution tends + to be *less* than the CDF of the standard normal. Therefore, we would + expect the null hypothesis to be rejected with ``alternative='less'``: + + >>> x = stats.norm.rvs(size=100, loc=0.5, random_state=rng) + >>> stats.ks_1samp(x, stats.norm.cdf, alternative='less') + KstestResult(statistic=0.17482387821055168, pvalue=0.001913921057766743) + + and indeed, with p-value smaller than our threshold, we reject the null + hypothesis in favor of the alternative. 
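+
+    As a further illustration, distribution parameters can be passed to `cdf`
+    through `args` instead of freezing the distribution by hand. For example,
+    to test the shifted sample above against a normal distribution with mean
+    0.5 (the distribution from which it was actually drawn):
+
+    >>> res = stats.ks_1samp(x, stats.norm.cdf, args=(0.5,))
+
+    Here ``args=(0.5,)`` is passed to `stats.norm.cdf` as ``loc=0.5``; because
+    the hypothesized distribution now matches the true one, we would typically
+    *not* reject the null hypothesis in this case.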
+ + """ + mode = method + + alternative = {'t': 'two-sided', 'g': 'greater', 'l': 'less'}.get( + alternative.lower()[0], alternative) + if alternative not in ['two-sided', 'greater', 'less']: + raise ValueError("Unexpected alternative %s" % alternative) + + N = len(x) + x = np.sort(x) + cdfvals = cdf(x, *args) + np_one = np.int8(1) + + if alternative == 'greater': + Dplus, d_location = _compute_dplus(cdfvals, x) + return KstestResult(Dplus, distributions.ksone.sf(Dplus, N), + statistic_location=d_location, + statistic_sign=np_one) + + if alternative == 'less': + Dminus, d_location = _compute_dminus(cdfvals, x) + return KstestResult(Dminus, distributions.ksone.sf(Dminus, N), + statistic_location=d_location, + statistic_sign=-np_one) + + # alternative == 'two-sided': + Dplus, dplus_location = _compute_dplus(cdfvals, x) + Dminus, dminus_location = _compute_dminus(cdfvals, x) + if Dplus > Dminus: + D = Dplus + d_location = dplus_location + d_sign = np_one + else: + D = Dminus + d_location = dminus_location + d_sign = -np_one + + if mode == 'auto': # Always select exact + mode = 'exact' + if mode == 'exact': + prob = distributions.kstwo.sf(D, N) + elif mode == 'asymp': + prob = distributions.kstwobign.sf(D * np.sqrt(N)) + else: + # mode == 'approx' + prob = 2 * distributions.ksone.sf(D, N) + prob = np.clip(prob, 0, 1) + return KstestResult(D, prob, + statistic_location=d_location, + statistic_sign=d_sign) + + +Ks_2sampResult = KstestResult + + +def _compute_prob_outside_square(n, h): + """ + Compute the proportion of paths that pass outside the two diagonal lines. + + Parameters + ---------- + n : integer + n > 0 + h : integer + 0 <= h <= n + + Returns + ------- + p : float + The proportion of paths that pass outside the lines x-y = +/-h. + + """ + # Compute Pr(D_{n,n} >= h/n) + # Prob = 2 * ( binom(2n, n-h) - binom(2n, n-2a) + binom(2n, n-3a) - ... ) + # / binom(2n, n) + # This formulation exhibits subtractive cancellation. + # Instead divide each term by binom(2n, n), then factor common terms + # and use a Horner-like algorithm + # P = 2 * A0 * (1 - A1*(1 - A2*(1 - A3*(1 - A4*(...))))) + + P = 0.0 + k = int(np.floor(n / h)) + while k >= 0: + p1 = 1.0 + # Each of the Ai terms has numerator and denominator with + # h simple terms. + for j in range(h): + p1 = (n - k * h - j) * p1 / (n + k * h + j + 1) + P = p1 * (1.0 - P) + k -= 1 + return 2 * P + + +def _count_paths_outside_method(m, n, g, h): + """Count the number of paths that pass outside the specified diagonal. + + Parameters + ---------- + m : integer + m > 0 + n : integer + n > 0 + g : integer + g is greatest common divisor of m and n + h : integer + 0 <= h <= lcm(m,n) + + Returns + ------- + p : float + The number of paths that go low. + The calculation may overflow - check for a finite answer. + + Notes + ----- + Count the integer lattice paths from (0, 0) to (m, n), which at some + point (x, y) along the path, satisfy: + m*y <= n*x - h*g + The paths make steps of size +1 in either positive x or positive y + directions. + + We generally follow Hodges' treatment of Drion/Gnedenko/Korolyuk. + Hodges, J.L. Jr., + "The Significance Probability of the Smirnov Two-Sample Test," + Arkiv fiur Matematik, 3, No. 43 (1958), 469-86. + + """ + # Compute #paths which stay lower than x/m-y/n = h/lcm(m,n) + # B(x, y) = #{paths from (0,0) to (x,y) without + # previously crossing the boundary} + # = binom(x, y) - #{paths which already reached the boundary} + # Multiply by the number of path extensions going from (x, y) to (m, n) + # Sum. 
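+    # The B(x_j, j) values are obtained by inclusion-exclusion: start from
+    # binom(x_j + j, j), the number of monotone lattice paths to (x_j, j), and
+    # subtract, for each earlier boundary point (x_i, i), the paths that first
+    # reached the boundary there: binom(x_j - x_i + j - i, j - i) * B[i].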
+ + # Probability is symmetrical in m, n. Computation below assumes m >= n. + if m < n: + m, n = n, m + mg = m // g + ng = n // g + + # Not every x needs to be considered. + # xj holds the list of x values to be checked. + # Wherever n*x/m + ng*h crosses an integer + lxj = n + (mg-h)//mg + xj = [(h + mg * j + ng-1)//ng for j in range(lxj)] + # B is an array just holding a few values of B(x,y), the ones needed. + # B[j] == B(x_j, j) + if lxj == 0: + return special.binom(m + n, n) + B = np.zeros(lxj) + B[0] = 1 + # Compute the B(x, y) terms + for j in range(1, lxj): + Bj = special.binom(xj[j] + j, j) + for i in range(j): + bin = special.binom(xj[j] - xj[i] + j - i, j-i) + Bj -= bin * B[i] + B[j] = Bj + # Compute the number of path extensions... + num_paths = 0 + for j in range(lxj): + bin = special.binom((m-xj[j]) + (n - j), n-j) + term = B[j] * bin + num_paths += term + return num_paths + + +def _attempt_exact_2kssamp(n1, n2, g, d, alternative): + """Attempts to compute the exact 2sample probability. + + n1, n2 are the sample sizes + g is the gcd(n1, n2) + d is the computed max difference in ECDFs + + Returns (success, d, probability) + """ + lcm = (n1 // g) * n2 + h = int(np.round(d * lcm)) + d = h * 1.0 / lcm + if h == 0: + return True, d, 1.0 + saw_fp_error, prob = False, np.nan + try: + with np.errstate(invalid="raise", over="raise"): + if alternative == 'two-sided': + if n1 == n2: + prob = _compute_prob_outside_square(n1, h) + else: + prob = _compute_outer_prob_inside_method(n1, n2, g, h) + else: + if n1 == n2: + # prob = binom(2n, n-h) / binom(2n, n) + # Evaluating in that form incurs roundoff errors + # from special.binom. Instead calculate directly + jrange = np.arange(h) + prob = np.prod((n1 - jrange) / (n1 + jrange + 1.0)) + else: + with np.errstate(over='raise'): + num_paths = _count_paths_outside_method(n1, n2, g, h) + bin = special.binom(n1 + n2, n1) + if num_paths > bin or np.isinf(bin): + saw_fp_error = True + else: + prob = num_paths / bin + + except (FloatingPointError, OverflowError): + saw_fp_error = True + + if saw_fp_error: + return False, d, np.nan + if not (0 <= prob <= 1): + return False, d, prob + return True, d, prob + + +@_axis_nan_policy_factory(_tuple_to_KstestResult, n_samples=2, n_outputs=4, + result_to_tuple=_KstestResult_to_tuple) +@_rename_parameter("mode", "method") +def ks_2samp(data1, data2, alternative='two-sided', method='auto'): + """ + Performs the two-sample Kolmogorov-Smirnov test for goodness of fit. + + This test compares the underlying continuous distributions F(x) and G(x) + of two independent samples. See Notes for a description of the available + null and alternative hypotheses. + + Parameters + ---------- + data1, data2 : array_like, 1-Dimensional + Two arrays of sample observations assumed to be drawn from a continuous + distribution, sample sizes can be different. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the null and alternative hypotheses. Default is 'two-sided'. + Please see explanations in the Notes below. + method : {'auto', 'exact', 'asymp'}, optional + Defines the method used for calculating the p-value. + The following options are available (default is 'auto'): + + * 'auto' : use 'exact' for small size arrays, 'asymp' for large + * 'exact' : use exact distribution of test statistic + * 'asymp' : use asymptotic distribution of test statistic + + Returns + ------- + res: KstestResult + An object containing attributes: + + statistic : float + KS test statistic. 
+ pvalue : float + One-tailed or two-tailed p-value. + statistic_location : float + Value from `data1` or `data2` corresponding with the KS statistic; + i.e., the distance between the empirical distribution functions is + measured at this observation. + statistic_sign : int + +1 if the empirical distribution function of `data1` exceeds + the empirical distribution function of `data2` at + `statistic_location`, otherwise -1. + + See Also + -------- + kstest, ks_1samp, epps_singleton_2samp, anderson_ksamp + + Notes + ----- + There are three options for the null and corresponding alternative + hypothesis that can be selected using the `alternative` parameter. + + - `less`: The null hypothesis is that F(x) >= G(x) for all x; the + alternative is that F(x) < G(x) for at least one x. The statistic + is the magnitude of the minimum (most negative) difference between the + empirical distribution functions of the samples. + + - `greater`: The null hypothesis is that F(x) <= G(x) for all x; the + alternative is that F(x) > G(x) for at least one x. The statistic + is the maximum (most positive) difference between the empirical + distribution functions of the samples. + + - `two-sided`: The null hypothesis is that the two distributions are + identical, F(x)=G(x) for all x; the alternative is that they are not + identical. The statistic is the maximum absolute difference between the + empirical distribution functions of the samples. + + Note that the alternative hypotheses describe the *CDFs* of the + underlying distributions, not the observed values of the data. For example, + suppose x1 ~ F and x2 ~ G. If F(x) > G(x) for all x, the values in + x1 tend to be less than those in x2. + + If the KS statistic is large, then the p-value will be small, and this may + be taken as evidence against the null hypothesis in favor of the + alternative. + + If ``method='exact'``, `ks_2samp` attempts to compute an exact p-value, + that is, the probability under the null hypothesis of obtaining a test + statistic value as extreme as the value computed from the data. + If ``method='asymp'``, the asymptotic Kolmogorov-Smirnov distribution is + used to compute an approximate p-value. + If ``method='auto'``, an exact p-value computation is attempted if both + sample sizes are less than 10000; otherwise, the asymptotic method is used. + In any case, if an exact p-value calculation is attempted and fails, a + warning will be emitted, and the asymptotic p-value will be returned. + + The 'two-sided' 'exact' computation computes the complementary probability + and then subtracts from 1. As such, the minimum probability it can return + is about 1e-16. While the algorithm itself is exact, numerical + errors may accumulate for large sample sizes. It is most suited to + situations in which one of the sample sizes is only a few thousand. + + We generally follow Hodges' treatment of Drion/Gnedenko/Korolyuk [1]_. + + References + ---------- + .. [1] Hodges, J.L. Jr., "The Significance Probability of the Smirnov + Two-Sample Test," Arkiv fiur Matematik, 3, No. 43 (1958), 469-486. + + Examples + -------- + Suppose we wish to test the null hypothesis that two samples were drawn + from the same distribution. + We choose a confidence level of 95%; that is, we will reject the null + hypothesis in favor of the alternative if the p-value is less than 0.05. + + If the first sample were drawn from a uniform distribution and the second + were drawn from the standard normal, we would expect the null hypothesis + to be rejected. 
+ + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> sample1 = stats.uniform.rvs(size=100, random_state=rng) + >>> sample2 = stats.norm.rvs(size=110, random_state=rng) + >>> stats.ks_2samp(sample1, sample2) + KstestResult(statistic=0.5454545454545454, pvalue=7.37417839555191e-15) + + Indeed, the p-value is lower than our threshold of 0.05, so we reject the + null hypothesis in favor of the default "two-sided" alternative: the data + were *not* drawn from the same distribution. + + When both samples are drawn from the same distribution, we expect the data + to be consistent with the null hypothesis most of the time. + + >>> sample1 = stats.norm.rvs(size=105, random_state=rng) + >>> sample2 = stats.norm.rvs(size=95, random_state=rng) + >>> stats.ks_2samp(sample1, sample2) + KstestResult(statistic=0.10927318295739348, pvalue=0.5438289009927495) + + As expected, the p-value of 0.54 is not below our threshold of 0.05, so + we cannot reject the null hypothesis. + + Suppose, however, that the first sample were drawn from + a normal distribution shifted toward greater values. In this case, + the cumulative density function (CDF) of the underlying distribution tends + to be *less* than the CDF underlying the second sample. Therefore, we would + expect the null hypothesis to be rejected with ``alternative='less'``: + + >>> sample1 = stats.norm.rvs(size=105, loc=0.5, random_state=rng) + >>> stats.ks_2samp(sample1, sample2, alternative='less') + KstestResult(statistic=0.4055137844611529, pvalue=3.5474563068855554e-08) + + and indeed, with p-value smaller than our threshold, we reject the null + hypothesis in favor of the alternative. + + """ + mode = method + + if mode not in ['auto', 'exact', 'asymp']: + raise ValueError(f'Invalid value for mode: {mode}') + alternative = {'t': 'two-sided', 'g': 'greater', 'l': 'less'}.get( + alternative.lower()[0], alternative) + if alternative not in ['two-sided', 'less', 'greater']: + raise ValueError(f'Invalid value for alternative: {alternative}') + MAX_AUTO_N = 10000 # 'auto' will attempt to be exact if n1,n2 <= MAX_AUTO_N + if np.ma.is_masked(data1): + data1 = data1.compressed() + if np.ma.is_masked(data2): + data2 = data2.compressed() + data1 = np.sort(data1) + data2 = np.sort(data2) + n1 = data1.shape[0] + n2 = data2.shape[0] + if min(n1, n2) == 0: + raise ValueError('Data passed to ks_2samp must not be empty') + + data_all = np.concatenate([data1, data2]) + # using searchsorted solves equal data problem + cdf1 = np.searchsorted(data1, data_all, side='right') / n1 + cdf2 = np.searchsorted(data2, data_all, side='right') / n2 + cddiffs = cdf1 - cdf2 + + # Identify the location of the statistic + argminS = np.argmin(cddiffs) + argmaxS = np.argmax(cddiffs) + loc_minS = data_all[argminS] + loc_maxS = data_all[argmaxS] + + # Ensure sign of minS is not negative. + minS = np.clip(-cddiffs[argminS], 0, 1) + maxS = cddiffs[argmaxS] + + if alternative == 'less' or (alternative == 'two-sided' and minS > maxS): + d = minS + d_location = loc_minS + d_sign = -1 + else: + d = maxS + d_location = loc_maxS + d_sign = 1 + g = gcd(n1, n2) + n1g = n1 // g + n2g = n2 // g + prob = -np.inf + if mode == 'auto': + mode = 'exact' if max(n1, n2) <= MAX_AUTO_N else 'asymp' + elif mode == 'exact': + # If lcm(n1, n2) is too big, switch from exact to asymp + if n1g >= np.iinfo(np.int32).max / n2g: + mode = 'asymp' + warnings.warn( + f"Exact ks_2samp calculation not possible with samples sizes " + f"{n1} and {n2}. 
Switching to 'asymp'.", RuntimeWarning, + stacklevel=3) + + if mode == 'exact': + success, d, prob = _attempt_exact_2kssamp(n1, n2, g, d, alternative) + if not success: + mode = 'asymp' + warnings.warn(f"ks_2samp: Exact calculation unsuccessful. " + f"Switching to method={mode}.", RuntimeWarning, + stacklevel=3) + + if mode == 'asymp': + # The product n1*n2 is large. Use Smirnov's asymptoptic formula. + # Ensure float to avoid overflow in multiplication + # sorted because the one-sided formula is not symmetric in n1, n2 + m, n = sorted([float(n1), float(n2)], reverse=True) + en = m * n / (m + n) + if alternative == 'two-sided': + prob = distributions.kstwo.sf(d, np.round(en)) + else: + z = np.sqrt(en) * d + # Use Hodges' suggested approximation Eqn 5.3 + # Requires m to be the larger of (n1, n2) + expt = -2 * z**2 - 2 * z * (m + 2*n)/np.sqrt(m*n*(m+n))/3.0 + prob = np.exp(expt) + + prob = np.clip(prob, 0, 1) + # Currently, `d` is a Python float. We want it to be a NumPy type, so + # float64 is appropriate. An enhancement would be for `d` to respect the + # dtype of the input. + return KstestResult(np.float64(d), prob, statistic_location=d_location, + statistic_sign=np.int8(d_sign)) + + +def _parse_kstest_args(data1, data2, args, N): + # kstest allows many different variations of arguments. + # Pull out the parsing into a separate function + # (xvals, yvals, ) # 2sample + # (xvals, cdf function,..) + # (xvals, name of distribution, ...) + # (name of distribution, name of distribution, ...) + + # Returns xvals, yvals, cdf + # where cdf is a cdf function, or None + # and yvals is either an array_like of values, or None + # and xvals is array_like. + rvsfunc, cdf = None, None + if isinstance(data1, str): + rvsfunc = getattr(distributions, data1).rvs + elif callable(data1): + rvsfunc = data1 + + if isinstance(data2, str): + cdf = getattr(distributions, data2).cdf + data2 = None + elif callable(data2): + cdf = data2 + data2 = None + + data1 = np.sort(rvsfunc(*args, size=N) if rvsfunc else data1) + return data1, data2, cdf + + +def _kstest_n_samples(kwargs): + cdf = kwargs['cdf'] + return 1 if (isinstance(cdf, str) or callable(cdf)) else 2 + + +@_axis_nan_policy_factory(_tuple_to_KstestResult, n_samples=_kstest_n_samples, + n_outputs=4, result_to_tuple=_KstestResult_to_tuple) +@_rename_parameter("mode", "method") +def kstest(rvs, cdf, args=(), N=20, alternative='two-sided', method='auto'): + """ + Performs the (one-sample or two-sample) Kolmogorov-Smirnov test for + goodness of fit. + + The one-sample test compares the underlying distribution F(x) of a sample + against a given distribution G(x). The two-sample test compares the + underlying distributions of two independent samples. Both tests are valid + only for continuous distributions. + + Parameters + ---------- + rvs : str, array_like, or callable + If an array, it should be a 1-D array of observations of random + variables. + If a callable, it should be a function to generate random variables; + it is required to have a keyword argument `size`. + If a string, it should be the name of a distribution in `scipy.stats`, + which will be used to generate random variables. + cdf : str, array_like or callable + If array_like, it should be a 1-D array of observations of random + variables, and the two-sample test is performed + (and rvs must be array_like). + If a callable, that callable is used to calculate the cdf. + If a string, it should be the name of a distribution in `scipy.stats`, + which will be used as the cdf function. 
+ args : tuple, sequence, optional + Distribution parameters, used if `rvs` or `cdf` are strings or + callables. + N : int, optional + Sample size if `rvs` is string or callable. Default is 20. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the null and alternative hypotheses. Default is 'two-sided'. + Please see explanations in the Notes below. + method : {'auto', 'exact', 'approx', 'asymp'}, optional + Defines the distribution used for calculating the p-value. + The following options are available (default is 'auto'): + + * 'auto' : selects one of the other options. + * 'exact' : uses the exact distribution of test statistic. + * 'approx' : approximates the two-sided probability with twice the + one-sided probability + * 'asymp': uses asymptotic distribution of test statistic + + Returns + ------- + res: KstestResult + An object containing attributes: + + statistic : float + KS test statistic, either D+, D-, or D (the maximum of the two) + pvalue : float + One-tailed or two-tailed p-value. + statistic_location : float + In a one-sample test, this is the value of `rvs` + corresponding with the KS statistic; i.e., the distance between + the empirical distribution function and the hypothesized cumulative + distribution function is measured at this observation. + + In a two-sample test, this is the value from `rvs` or `cdf` + corresponding with the KS statistic; i.e., the distance between + the empirical distribution functions is measured at this + observation. + statistic_sign : int + In a one-sample test, this is +1 if the KS statistic is the + maximum positive difference between the empirical distribution + function and the hypothesized cumulative distribution function + (D+); it is -1 if the KS statistic is the maximum negative + difference (D-). + + In a two-sample test, this is +1 if the empirical distribution + function of `rvs` exceeds the empirical distribution + function of `cdf` at `statistic_location`, otherwise -1. + + See Also + -------- + ks_1samp, ks_2samp + + Notes + ----- + There are three options for the null and corresponding alternative + hypothesis that can be selected using the `alternative` parameter. + + - `two-sided`: The null hypothesis is that the two distributions are + identical, F(x)=G(x) for all x; the alternative is that they are not + identical. + + - `less`: The null hypothesis is that F(x) >= G(x) for all x; the + alternative is that F(x) < G(x) for at least one x. + + - `greater`: The null hypothesis is that F(x) <= G(x) for all x; the + alternative is that F(x) > G(x) for at least one x. + + Note that the alternative hypotheses describe the *CDFs* of the + underlying distributions, not the observed values. For example, + suppose x1 ~ F and x2 ~ G. If F(x) > G(x) for all x, the values in + x1 tend to be less than those in x2. + + + Examples + -------- + Suppose we wish to test the null hypothesis that a sample is distributed + according to the standard normal. + We choose a confidence level of 95%; that is, we will reject the null + hypothesis in favor of the alternative if the p-value is less than 0.05. + + When testing uniformly distributed data, we would expect the + null hypothesis to be rejected. + + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng() + >>> stats.kstest(stats.uniform.rvs(size=100, random_state=rng), + ... 
stats.norm.cdf) + KstestResult(statistic=0.5001899973268688, pvalue=1.1616392184763533e-23) + + Indeed, the p-value is lower than our threshold of 0.05, so we reject the + null hypothesis in favor of the default "two-sided" alternative: the data + are *not* distributed according to the standard normal. + + When testing random variates from the standard normal distribution, we + expect the data to be consistent with the null hypothesis most of the time. + + >>> x = stats.norm.rvs(size=100, random_state=rng) + >>> stats.kstest(x, stats.norm.cdf) + KstestResult(statistic=0.05345882212970396, pvalue=0.9227159037744717) + + As expected, the p-value of 0.92 is not below our threshold of 0.05, so + we cannot reject the null hypothesis. + + Suppose, however, that the random variates are distributed according to + a normal distribution that is shifted toward greater values. In this case, + the cumulative density function (CDF) of the underlying distribution tends + to be *less* than the CDF of the standard normal. Therefore, we would + expect the null hypothesis to be rejected with ``alternative='less'``: + + >>> x = stats.norm.rvs(size=100, loc=0.5, random_state=rng) + >>> stats.kstest(x, stats.norm.cdf, alternative='less') + KstestResult(statistic=0.17482387821055168, pvalue=0.001913921057766743) + + and indeed, with p-value smaller than our threshold, we reject the null + hypothesis in favor of the alternative. + + For convenience, the previous test can be performed using the name of the + distribution as the second argument. + + >>> stats.kstest(x, "norm", alternative='less') + KstestResult(statistic=0.17482387821055168, pvalue=0.001913921057766743) + + The examples above have all been one-sample tests identical to those + performed by `ks_1samp`. Note that `kstest` can also perform two-sample + tests identical to those performed by `ks_2samp`. For example, when two + samples are drawn from the same distribution, we expect the data to be + consistent with the null hypothesis most of the time. + + >>> sample1 = stats.laplace.rvs(size=105, random_state=rng) + >>> sample2 = stats.laplace.rvs(size=95, random_state=rng) + >>> stats.kstest(sample1, sample2) + KstestResult(statistic=0.11779448621553884, pvalue=0.4494256912629795) + + As expected, the p-value of 0.45 is not below our threshold of 0.05, so + we cannot reject the null hypothesis. + + """ + # to not break compatibility with existing code + if alternative == 'two_sided': + alternative = 'two-sided' + if alternative not in ['two-sided', 'greater', 'less']: + raise ValueError("Unexpected alternative %s" % alternative) + xvals, yvals, cdf = _parse_kstest_args(rvs, cdf, args, N) + if cdf: + return ks_1samp(xvals, cdf, args=args, alternative=alternative, + method=method, _no_deco=True) + return ks_2samp(xvals, yvals, alternative=alternative, method=method, + _no_deco=True) + + +def tiecorrect(rankvals): + """Tie correction factor for Mann-Whitney U and Kruskal-Wallis H tests. + + Parameters + ---------- + rankvals : array_like + A 1-D sequence of ranks. Typically this will be the array + returned by `~scipy.stats.rankdata`. + + Returns + ------- + factor : float + Correction factor for U or H. + + See Also + -------- + rankdata : Assign ranks to the data + mannwhitneyu : Mann-Whitney rank test + kruskal : Kruskal-Wallis H test + + References + ---------- + .. [1] Siegel, S. (1956) Nonparametric Statistics for the Behavioral + Sciences. New York: McGraw-Hill. 
+ + Examples + -------- + >>> from scipy.stats import tiecorrect, rankdata + >>> tiecorrect([1, 2.5, 2.5, 4]) + 0.9 + >>> ranks = rankdata([1, 3, 2, 4, 5, 7, 2, 8, 4]) + >>> ranks + array([ 1. , 4. , 2.5, 5.5, 7. , 8. , 2.5, 9. , 5.5]) + >>> tiecorrect(ranks) + 0.9833333333333333 + + """ + arr = np.sort(rankvals) + idx = np.nonzero(np.r_[True, arr[1:] != arr[:-1], True])[0] + cnt = np.diff(idx).astype(np.float64) + + size = np.float64(arr.size) + return 1.0 if size < 2 else 1.0 - (cnt**3 - cnt).sum() / (size**3 - size) + + +RanksumsResult = namedtuple('RanksumsResult', ('statistic', 'pvalue')) + + +@_axis_nan_policy_factory(RanksumsResult, n_samples=2) +def ranksums(x, y, alternative='two-sided'): + """Compute the Wilcoxon rank-sum statistic for two samples. + + The Wilcoxon rank-sum test tests the null hypothesis that two sets + of measurements are drawn from the same distribution. The alternative + hypothesis is that values in one sample are more likely to be + larger than the values in the other sample. + + This test should be used to compare two samples from continuous + distributions. It does not handle ties between measurements + in x and y. For tie-handling and an optional continuity correction + see `scipy.stats.mannwhitneyu`. + + Parameters + ---------- + x,y : array_like + The data from the two samples. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. Default is 'two-sided'. + The following options are available: + + * 'two-sided': one of the distributions (underlying `x` or `y`) is + stochastically greater than the other. + * 'less': the distribution underlying `x` is stochastically less + than the distribution underlying `y`. + * 'greater': the distribution underlying `x` is stochastically greater + than the distribution underlying `y`. + + .. versionadded:: 1.7.0 + + Returns + ------- + statistic : float + The test statistic under the large-sample approximation that the + rank sum statistic is normally distributed. + pvalue : float + The p-value of the test. + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Wilcoxon_rank-sum_test + + Examples + -------- + We can test the hypothesis that two independent unequal-sized samples are + drawn from the same distribution with computing the Wilcoxon rank-sum + statistic. + + >>> import numpy as np + >>> from scipy.stats import ranksums + >>> rng = np.random.default_rng() + >>> sample1 = rng.uniform(-1, 1, 200) + >>> sample2 = rng.uniform(-0.5, 1.5, 300) # a shifted distribution + >>> ranksums(sample1, sample2) + RanksumsResult(statistic=-7.887059, + pvalue=3.09390448e-15) # may vary + >>> ranksums(sample1, sample2, alternative='less') + RanksumsResult(statistic=-7.750585297581713, + pvalue=4.573497606342543e-15) # may vary + >>> ranksums(sample1, sample2, alternative='greater') + RanksumsResult(statistic=-7.750585297581713, + pvalue=0.9999999999999954) # may vary + + The p-value of less than ``0.05`` indicates that this test rejects the + hypothesis at the 5% significance level. 
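+
+ The one-sided p-values for ``alternative='less'`` and
+ ``alternative='greater'`` are complementary under the normal approximation
+ used by this test; a small illustrative check with hypothetical, tie-free
+ data (imports as above):
+
+ >>> a = [1.1, 2.3, 3.5, 4.7]
+ >>> b = [0.8, 2.9, 5.1, 6.2, 7.4]
+ >>> p_less = ranksums(a, b, alternative='less').pvalue
+ >>> p_greater = ranksums(a, b, alternative='greater').pvalue
+ >>> bool(np.isclose(p_less + p_greater, 1.0))
+ True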
+ + """ + x, y = map(np.asarray, (x, y)) + n1 = len(x) + n2 = len(y) + alldata = np.concatenate((x, y)) + ranked = rankdata(alldata) + x = ranked[:n1] + s = np.sum(x, axis=0) + expected = n1 * (n1+n2+1) / 2.0 + z = (s - expected) / np.sqrt(n1*n2*(n1+n2+1)/12.0) + pvalue = _get_pvalue(z, distributions.norm, alternative) + + return RanksumsResult(z[()], pvalue[()]) + + +KruskalResult = namedtuple('KruskalResult', ('statistic', 'pvalue')) + + +@_axis_nan_policy_factory(KruskalResult, n_samples=None) +def kruskal(*samples, nan_policy='propagate'): + """Compute the Kruskal-Wallis H-test for independent samples. + + The Kruskal-Wallis H-test tests the null hypothesis that the population + median of all of the groups are equal. It is a non-parametric version of + ANOVA. The test works on 2 or more independent samples, which may have + different sizes. Note that rejecting the null hypothesis does not + indicate which of the groups differs. Post hoc comparisons between + groups are required to determine which groups are different. + + Parameters + ---------- + sample1, sample2, ... : array_like + Two or more arrays with the sample measurements can be given as + arguments. Samples must be one-dimensional. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + statistic : float + The Kruskal-Wallis H statistic, corrected for ties. + pvalue : float + The p-value for the test using the assumption that H has a chi + square distribution. The p-value returned is the survival function of + the chi square distribution evaluated at H. + + See Also + -------- + f_oneway : 1-way ANOVA. + mannwhitneyu : Mann-Whitney rank test on two samples. + friedmanchisquare : Friedman test for repeated measurements. + + Notes + ----- + Due to the assumption that H has a chi square distribution, the number + of samples in each group must not be too small. A typical rule is + that each sample must have at least 5 measurements. + + References + ---------- + .. [1] W. H. Kruskal & W. W. Wallis, "Use of Ranks in + One-Criterion Variance Analysis", Journal of the American Statistical + Association, Vol. 47, Issue 260, pp. 583-621, 1952. + .. 
[2] https://en.wikipedia.org/wiki/Kruskal-Wallis_one-way_analysis_of_variance + + Examples + -------- + >>> from scipy import stats + >>> x = [1, 3, 5, 7, 9] + >>> y = [2, 4, 6, 8, 10] + >>> stats.kruskal(x, y) + KruskalResult(statistic=0.2727272727272734, pvalue=0.6015081344405895) + + >>> x = [1, 1, 1] + >>> y = [2, 2, 2] + >>> z = [2, 2] + >>> stats.kruskal(x, y, z) + KruskalResult(statistic=7.0, pvalue=0.0301973834223185) + + """ + samples = list(map(np.asarray, samples)) + + num_groups = len(samples) + if num_groups < 2: + raise ValueError("Need at least two groups in stats.kruskal()") + + for sample in samples: + if sample.size == 0: + NaN = _get_nan(*samples) + return KruskalResult(NaN, NaN) + elif sample.ndim != 1: + raise ValueError("Samples must be one-dimensional.") + + n = np.asarray(list(map(len, samples))) + + if nan_policy not in ('propagate', 'raise', 'omit'): + raise ValueError("nan_policy must be 'propagate', 'raise' or 'omit'") + + contains_nan = False + for sample in samples: + cn = _contains_nan(sample, nan_policy) + if cn[0]: + contains_nan = True + break + + if contains_nan and nan_policy == 'omit': + for sample in samples: + sample = ma.masked_invalid(sample) + return mstats_basic.kruskal(*samples) + + if contains_nan and nan_policy == 'propagate': + return KruskalResult(np.nan, np.nan) + + alldata = np.concatenate(samples) + ranked = rankdata(alldata) + ties = tiecorrect(ranked) + if ties == 0: + raise ValueError('All numbers are identical in kruskal') + + # Compute sum^2/n for each group and sum + j = np.insert(np.cumsum(n), 0, 0) + ssbn = 0 + for i in range(num_groups): + ssbn += _square_of_sums(ranked[j[i]:j[i+1]]) / n[i] + + totaln = np.sum(n, dtype=float) + h = 12.0 / (totaln * (totaln + 1)) * ssbn - 3 * (totaln + 1) + df = num_groups - 1 + h /= ties + + return KruskalResult(h, distributions.chi2.sf(h, df)) + + +FriedmanchisquareResult = namedtuple('FriedmanchisquareResult', + ('statistic', 'pvalue')) + + +@_axis_nan_policy_factory(FriedmanchisquareResult, n_samples=None, paired=True) +def friedmanchisquare(*samples): + """Compute the Friedman test for repeated samples. + + The Friedman test tests the null hypothesis that repeated samples of + the same individuals have the same distribution. It is often used + to test for consistency among samples obtained in different ways. + For example, if two sampling techniques are used on the same set of + individuals, the Friedman test can be used to determine if the two + sampling techniques are consistent. + + Parameters + ---------- + sample1, sample2, sample3... : array_like + Arrays of observations. All of the arrays must have the same number + of elements. At least three samples must be given. + + Returns + ------- + statistic : float + The test statistic, correcting for ties. + pvalue : float + The associated p-value assuming that the test statistic has a chi + squared distribution. + + Notes + ----- + Due to the assumption that the test statistic has a chi squared + distribution, the p-value is only reliable for n > 10 and more than + 6 repeated samples. + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Friedman_test + .. [2] P. Sprent and N.C. Smeeton, "Applied Nonparametric Statistical + Methods, Third Edition". Chapter 6, Section 6.3.2. + + Examples + -------- + In [2]_, the pulse rate (per minute) of a group of seven students was + measured before exercise, immediately after exercise and 5 minutes + after exercise. 
Is there evidence to suggest that the pulse rates on + these three occasions are similar? + + We begin by formulating a null hypothesis :math:`H_0`: + + The pulse rates are identical on these three occasions. + + Let's assess the plausibility of this hypothesis with a Friedman test. + + >>> from scipy.stats import friedmanchisquare + >>> before = [72, 96, 88, 92, 74, 76, 82] + >>> immediately_after = [120, 120, 132, 120, 101, 96, 112] + >>> five_min_after = [76, 95, 104, 96, 84, 72, 76] + >>> res = friedmanchisquare(before, immediately_after, five_min_after) + >>> res.statistic + 10.57142857142857 + >>> res.pvalue + 0.005063414171757498 + + Using a significance level of 5%, we would reject the null hypothesis in + favor of the alternative hypothesis: "the pulse rates are different on + these three occasions". + + """ + k = len(samples) + if k < 3: + raise ValueError('At least 3 sets of samples must be given ' + f'for Friedman test, got {k}.') + + n = len(samples[0]) + for i in range(1, k): + if len(samples[i]) != n: + raise ValueError('Unequal N in friedmanchisquare. Aborting.') + + # Rank data + data = np.vstack(samples).T + data = data.astype(float) + for i in range(len(data)): + data[i] = rankdata(data[i]) + + # Handle ties + ties = 0 + for d in data: + replist, repnum = find_repeats(array(d)) + for t in repnum: + ties += t * (t*t - 1) + c = 1 - ties / (k*(k*k - 1)*n) + + ssbn = np.sum(data.sum(axis=0)**2) + chisq = (12.0 / (k*n*(k+1)) * ssbn - 3*n*(k+1)) / c + + return FriedmanchisquareResult(chisq, distributions.chi2.sf(chisq, k - 1)) + + + BrunnerMunzelResult = namedtuple('BrunnerMunzelResult', + ('statistic', 'pvalue')) + + + @_axis_nan_policy_factory(BrunnerMunzelResult, n_samples=2) + def brunnermunzel(x, y, alternative="two-sided", distribution="t", + nan_policy='propagate'): + """Compute the Brunner-Munzel test on samples x and y. + + The Brunner-Munzel test is a nonparametric test of the null hypothesis that + when values are taken one by one from each group, the probabilities of + getting large values in both groups are equal. + Unlike the Wilcoxon-Mann-Whitney U test, this does not require the + assumption of equal variances in the two groups. Note that this does not + assume that the distributions are the same. This test works on two + independent samples, which may have different sizes. + + Parameters + ---------- + x, y : array_like + Array of samples, should be one-dimensional. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided' + * 'less': one-sided + * 'greater': one-sided + distribution : {'t', 'normal'}, optional + Defines how to get the p-value. + The following options are available (default is 't'): + + * 't': get the p-value from the t-distribution + * 'normal': get the p-value from the standard normal distribution. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': returns nan + * 'raise': throws an error + * 'omit': performs the calculations ignoring nan values + + Returns + ------- + statistic : float + The Brunner-Munzel W statistic. + pvalue : float + p-value assuming a t distribution. One-sided or + two-sided, depending on the choice of `alternative` and `distribution`. + + See Also + -------- + mannwhitneyu : Mann-Whitney rank test on two samples.
+ + Notes + ----- + Brunner and Munzel recommended estimating the p-value from the + t-distribution when the size of the data is 50 or less. If the size is + smaller than 10, it would be better to use the permuted Brunner-Munzel test + (see [2]_). + + References + ---------- + .. [1] Brunner, E. and Munzel, U. "The nonparametric Behrens-Fisher + problem: Asymptotic theory and a small-sample approximation". + Biometrical Journal. Vol. 42(2000): 17-25. + .. [2] Neubert, K. and Brunner, E. "A studentized permutation test for the + non-parametric Behrens-Fisher problem". Computational Statistics and + Data Analysis. Vol. 51(2007): 5192-5204. + + Examples + -------- + >>> from scipy import stats + >>> x1 = [1,2,1,1,1,1,1,1,1,1,2,4,1,1] + >>> x2 = [3,3,4,3,1,2,3,1,1,5,4] + >>> w, p_value = stats.brunnermunzel(x1, x2) + >>> w + 3.1374674823029505 + >>> p_value + 0.0057862086661515377 + + """ + + nx = len(x) + ny = len(y) + if nx == 0 or ny == 0: + NaN = _get_nan(x, y) + return BrunnerMunzelResult(NaN, NaN) + rankc = rankdata(np.concatenate((x, y))) + rankcx = rankc[0:nx] + rankcy = rankc[nx:nx+ny] + rankcx_mean = np.mean(rankcx) + rankcy_mean = np.mean(rankcy) + rankx = rankdata(x) + ranky = rankdata(y) + rankx_mean = np.mean(rankx) + ranky_mean = np.mean(ranky) + + Sx = np.sum(np.power(rankcx - rankx - rankcx_mean + rankx_mean, 2.0)) + Sx /= nx - 1 + Sy = np.sum(np.power(rankcy - ranky - rankcy_mean + ranky_mean, 2.0)) + Sy /= ny - 1 + + wbfn = nx * ny * (rankcy_mean - rankcx_mean) + wbfn /= (nx + ny) * np.sqrt(nx * Sx + ny * Sy) + + if distribution == "t": + df_numer = np.power(nx * Sx + ny * Sy, 2.0) + df_denom = np.power(nx * Sx, 2.0) / (nx - 1) + df_denom += np.power(ny * Sy, 2.0) / (ny - 1) + df = df_numer / df_denom + + if (df_numer == 0) and (df_denom == 0): + message = ("p-value cannot be estimated with `distribution='t'` " + "because the degrees of freedom parameter is undefined " + "(0/0). Try using `distribution='normal'`") + warnings.warn(message, RuntimeWarning, stacklevel=2) + + distribution = distributions.t(df) + elif distribution == "normal": + distribution = distributions.norm() + else: + raise ValueError( + "distribution should be 't' or 'normal'") + + p = _get_pvalue(-wbfn, distribution, alternative) + + return BrunnerMunzelResult(wbfn, p) + + + @_axis_nan_policy_factory(SignificanceResult, kwd_samples=['weights'], paired=True) + def combine_pvalues(pvalues, method='fisher', weights=None): + """ + Combine p-values from independent tests that bear upon the same hypothesis. + + These methods are intended only for combining p-values from hypothesis + tests based upon continuous distributions. + + Each method assumes that under the null hypothesis, the p-values are + sampled independently and uniformly from the interval [0, 1]. A test + statistic (different for each method) is computed and a combined + p-value is calculated based upon the distribution of this test statistic + under the null hypothesis. + + Parameters + ---------- + pvalues : array_like + Array of p-values assumed to come from independent tests based on + continuous distributions. + method : {'fisher', 'pearson', 'tippett', 'stouffer', 'mudholkar_george'} + + Name of method to use to combine p-values.
+ + The available methods are (see Notes for details): + + * 'fisher': Fisher's method (Fisher's combined probability test) + * 'pearson': Pearson's method + * 'mudholkar_george': Mudholkar's and George's method + * 'tippett': Tippett's method + * 'stouffer': Stouffer's Z-score method + weights : array_like, optional + Optional array of weights used only for Stouffer's Z-score method. + Ignored by other methods. + + Returns + ------- + res : SignificanceResult + An object containing attributes: + + statistic : float + The statistic calculated by the specified method. + pvalue : float + The combined p-value. + + Examples + -------- + Suppose we wish to combine p-values from four independent tests + of the same null hypothesis using Fisher's method (default). + + >>> from scipy.stats import combine_pvalues + >>> pvalues = [0.1, 0.05, 0.02, 0.3] + >>> combine_pvalues(pvalues) + SignificanceResult(statistic=20.828626352604235, pvalue=0.007616871850449092) + + When the individual p-values carry different weights, consider Stouffer's + method. + + >>> weights = [1, 2, 3, 4] + >>> res = combine_pvalues(pvalues, method='stouffer', weights=weights) + >>> res.pvalue + 0.009578891494533616 + + Notes + ----- + If this function is applied to tests with a discrete statistics such as + any rank test or contingency-table test, it will yield systematically + wrong results, e.g. Fisher's method will systematically overestimate the + p-value [1]_. This problem becomes less severe for large sample sizes + when the discrete distributions become approximately continuous. + + The differences between the methods can be best illustrated by their + statistics and what aspects of a combination of p-values they emphasise + when considering significance [2]_. For example, methods emphasising large + p-values are more sensitive to strong false and true negatives; conversely + methods focussing on small p-values are sensitive to positives. + + * The statistics of Fisher's method (also known as Fisher's combined + probability test) [3]_ is :math:`-2\\sum_i \\log(p_i)`, which is + equivalent (as a test statistics) to the product of individual p-values: + :math:`\\prod_i p_i`. Under the null hypothesis, this statistics follows + a :math:`\\chi^2` distribution. This method emphasises small p-values. + * Pearson's method uses :math:`-2\\sum_i\\log(1-p_i)`, which is equivalent + to :math:`\\prod_i \\frac{1}{1-p_i}` [2]_. + It thus emphasises large p-values. + * Mudholkar and George compromise between Fisher's and Pearson's method by + averaging their statistics [4]_. Their method emphasises extreme + p-values, both close to 1 and 0. + * Stouffer's method [5]_ uses Z-scores and the statistic: + :math:`\\sum_i \\Phi^{-1} (p_i)`, where :math:`\\Phi` is the CDF of the + standard normal distribution. The advantage of this method is that it is + straightforward to introduce weights, which can make Stouffer's method + more powerful than Fisher's method when the p-values are from studies + of different size [6]_ [7]_. + * Tippett's method uses the smallest p-value as a statistic. + (Mind that this minimum is not the combined p-value.) + + Fisher's method may be extended to combine p-values from dependent tests + [8]_. Extensions such as Brown's method and Kost's method are not currently + implemented. + + .. versionadded:: 0.15.0 + + References + ---------- + .. [1] Kincaid, W. M., "The Combination of Tests Based on Discrete + Distributions." Journal of the American Statistical Association 57, + no. 297 (1962), 10-19. + .. 
[2] Heard, N. and Rubin-Delanchey, P. "Choosing between methods of + combining p-values." Biometrika 105.1 (2018): 239-246. + .. [3] https://en.wikipedia.org/wiki/Fisher%27s_method + .. [4] George, E. O., and G. S. Mudholkar. "On the convolution of logistic + random variables." Metrika 30.1 (1983): 1-13. + .. [5] https://en.wikipedia.org/wiki/Fisher%27s_method#Relation_to_Stouffer.27s_Z-score_method + .. [6] Whitlock, M. C. "Combining probability from independent tests: the + weighted Z-method is superior to Fisher's approach." Journal of + Evolutionary Biology 18, no. 5 (2005): 1368-1373. + .. [7] Zaykin, Dmitri V. "Optimally weighted Z-test is a powerful method + for combining probabilities in meta-analysis." Journal of + Evolutionary Biology 24, no. 8 (2011): 1836-1841. + .. [8] https://en.wikipedia.org/wiki/Extensions_of_Fisher%27s_method + + """ + if pvalues.size == 0: + NaN = _get_nan(pvalues) + return SignificanceResult(NaN, NaN) + + if method == 'fisher': + statistic = -2 * np.sum(np.log(pvalues)) + pval = distributions.chi2.sf(statistic, 2 * len(pvalues)) + elif method == 'pearson': + statistic = 2 * np.sum(np.log1p(-pvalues)) + pval = distributions.chi2.cdf(-statistic, 2 * len(pvalues)) + elif method == 'mudholkar_george': + normalizing_factor = np.sqrt(3/len(pvalues))/np.pi + statistic = -np.sum(np.log(pvalues)) + np.sum(np.log1p(-pvalues)) + nu = 5 * len(pvalues) + 4 + approx_factor = np.sqrt(nu / (nu - 2)) + pval = distributions.t.sf(statistic * normalizing_factor + * approx_factor, nu) + elif method == 'tippett': + statistic = np.min(pvalues) + pval = distributions.beta.cdf(statistic, 1, len(pvalues)) + elif method == 'stouffer': + if weights is None: + weights = np.ones_like(pvalues) + elif len(weights) != len(pvalues): + raise ValueError("pvalues and weights must be of the same size.") + + Zi = distributions.norm.isf(pvalues) + statistic = np.dot(weights, Zi) / np.linalg.norm(weights) + pval = distributions.norm.sf(statistic) + + else: + raise ValueError( + f"Invalid method {method!r}. Valid methods are 'fisher', " + "'pearson', 'mudholkar_george', 'tippett', and 'stouffer'" + ) + + return SignificanceResult(statistic, pval) + + +@dataclass +class QuantileTestResult: + r""" + Result of `scipy.stats.quantile_test`. + + Attributes + ---------- + statistic: float + The statistic used to calculate the p-value; either ``T1``, the + number of observations less than or equal to the hypothesized quantile, + or ``T2``, the number of observations strictly less than the + hypothesized quantile. Two test statistics are required to handle the + possibility the data was generated from a discrete or mixed + distribution. + + statistic_type : int + ``1`` or ``2`` depending on which of ``T1`` or ``T2`` was used to + calculate the p-value respectively. ``T1`` corresponds to the + ``"greater"`` alternative hypothesis and ``T2`` to the ``"less"``. For + the ``"two-sided"`` case, the statistic type that leads to smallest + p-value is used. For significant tests, ``statistic_type = 1`` means + there is evidence that the population quantile is significantly greater + than the hypothesized value and ``statistic_type = 2`` means there is + evidence that it is significantly less than the hypothesized value. + + pvalue : float + The p-value of the hypothesis test. 
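+
+ Examples
+ --------
+ A minimal illustration with hypothetical data; see `scipy.stats.quantile_test`
+ for fuller examples.
+
+ >>> from scipy import stats
+ >>> x = [1.2, 2.4, 1.6, 2.4, 0.9, 2.7]  # hypothetical sample
+ >>> res = stats.quantile_test(x, q=2.0, p=0.5)
+ >>> int(res.statistic), res.statistic_type, float(res.pvalue)
+ (3, 1, 1.0)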
+ """ + statistic: float + statistic_type: int + pvalue: float + _alternative: list[str] = field(repr=False) + _x : np.ndarray = field(repr=False) + _p : float = field(repr=False) + + def confidence_interval(self, confidence_level=0.95): + """ + Compute the confidence interval of the quantile. + + Parameters + ---------- + confidence_level : float, default: 0.95 + Confidence level for the computed confidence interval + of the quantile. Default is 0.95. + + Returns + ------- + ci : ``ConfidenceInterval`` object + The object has attributes ``low`` and ``high`` that hold the + lower and upper bounds of the confidence interval. + + Examples + -------- + >>> import numpy as np + >>> import scipy.stats as stats + >>> p = 0.75 # quantile of interest + >>> q = 0 # hypothesized value of the quantile + >>> x = np.exp(np.arange(0, 1.01, 0.01)) + >>> res = stats.quantile_test(x, q=q, p=p, alternative='less') + >>> lb, ub = res.confidence_interval() + >>> lb, ub + (-inf, 2.293318740264183) + >>> res = stats.quantile_test(x, q=q, p=p, alternative='two-sided') + >>> lb, ub = res.confidence_interval(0.9) + >>> lb, ub + (1.9542373206359396, 2.293318740264183) + """ + + alternative = self._alternative + p = self._p + x = np.sort(self._x) + n = len(x) + bd = stats.binom(n, p) + + if confidence_level <= 0 or confidence_level >= 1: + message = "`confidence_level` must be a number between 0 and 1." + raise ValueError(message) + + low_index = np.nan + high_index = np.nan + + if alternative == 'less': + p = 1 - confidence_level + low = -np.inf + high_index = int(bd.isf(p)) + high = x[high_index] if high_index < n else np.nan + elif alternative == 'greater': + p = 1 - confidence_level + low_index = int(bd.ppf(p)) - 1 + low = x[low_index] if low_index >= 0 else np.nan + high = np.inf + elif alternative == 'two-sided': + p = (1 - confidence_level) / 2 + low_index = int(bd.ppf(p)) - 1 + low = x[low_index] if low_index >= 0 else np.nan + high_index = int(bd.isf(p)) + high = x[high_index] if high_index < n else np.nan + + return ConfidenceInterval(low, high) + + +def quantile_test_iv(x, q, p, alternative): + + x = np.atleast_1d(x) + message = '`x` must be a one-dimensional array of numbers.' + if x.ndim != 1 or not np.issubdtype(x.dtype, np.number): + raise ValueError(message) + + q = np.array(q)[()] + message = "`q` must be a scalar." + if q.ndim != 0 or not np.issubdtype(q.dtype, np.number): + raise ValueError(message) + + p = np.array(p)[()] + message = "`p` must be a float strictly between 0 and 1." + if p.ndim != 0 or p >= 1 or p <= 0: + raise ValueError(message) + + alternatives = {'two-sided', 'less', 'greater'} + message = f"`alternative` must be one of {alternatives}" + if alternative not in alternatives: + raise ValueError(message) + + return x, q, p, alternative + + +def quantile_test(x, *, q=0, p=0.5, alternative='two-sided'): + r""" + Perform a quantile test and compute a confidence interval of the quantile. + + This function tests the null hypothesis that `q` is the value of the + quantile associated with probability `p` of the population underlying + sample `x`. For example, with default parameters, it tests that the + median of the population underlying `x` is zero. The function returns an + object including the test statistic, a p-value, and a method for computing + the confidence interval around the quantile. + + Parameters + ---------- + x : array_like + A one-dimensional sample. + q : float, default: 0 + The hypothesized value of the quantile. 
+ p : float, default: 0.5 + The probability associated with the quantile; i.e. the proportion of + the population less than `q` is `p`. Must be strictly between 0 and + 1. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + The following options are available (default is 'two-sided'): + + * 'two-sided': the quantile associated with the probability `p` + is not `q`. + * 'less': the quantile associated with the probability `p` is less + than `q`. + * 'greater': the quantile associated with the probability `p` is + greater than `q`. + + Returns + ------- + result : QuantileTestResult + An object with the following attributes: + + statistic : float + One of two test statistics that may be used in the quantile test. + The first test statistic, ``T1``, is the proportion of samples in + `x` that are less than or equal to the hypothesized quantile + `q`. The second test statistic, ``T2``, is the proportion of + samples in `x` that are strictly less than the hypothesized + quantile `q`. + + When ``alternative = 'greater'``, ``T1`` is used to calculate the + p-value and ``statistic`` is set to ``T1``. + + When ``alternative = 'less'``, ``T2`` is used to calculate the + p-value and ``statistic`` is set to ``T2``. + + When ``alternative = 'two-sided'``, both ``T1`` and ``T2`` are + considered, and the one that leads to the smallest p-value is used. + + statistic_type : int + Either `1` or `2` depending on which of ``T1`` or ``T2`` was + used to calculate the p-value. + + pvalue : float + The p-value associated with the given alternative. + + The object also has the following method: + + confidence_interval(confidence_level=0.95) + Computes a confidence interval around the the + population quantile associated with the probability `p`. The + confidence interval is returned in a ``namedtuple`` with + fields `low` and `high`. Values are `nan` when there are + not enough observations to compute the confidence interval at + the desired confidence. + + Notes + ----- + This test and its method for computing confidence intervals are + non-parametric. They are valid if and only if the observations are i.i.d. + + The implementation of the test follows Conover [1]_. Two test statistics + are considered. + + ``T1``: The number of observations in `x` less than or equal to `q`. + + ``T1 = (x <= q).sum()`` + + ``T2``: The number of observations in `x` strictly less than `q`. + + ``T2 = (x < q).sum()`` + + The use of two test statistics is necessary to handle the possibility that + `x` was generated from a discrete or mixed distribution. + + The null hypothesis for the test is: + + H0: The :math:`p^{\mathrm{th}}` population quantile is `q`. + + and the null distribution for each test statistic is + :math:`\mathrm{binom}\left(n, p\right)`. When ``alternative='less'``, + the alternative hypothesis is: + + H1: The :math:`p^{\mathrm{th}}` population quantile is less than `q`. + + and the p-value is the probability that the binomial random variable + + .. math:: + Y \sim \mathrm{binom}\left(n, p\right) + + is greater than or equal to the observed value ``T2``. + + When ``alternative='greater'``, the alternative hypothesis is: + + H1: The :math:`p^{\mathrm{th}}` population quantile is greater than `q` + + and the p-value is the probability that the binomial random variable Y + is less than or equal to the observed value ``T1``. + + When ``alternative='two-sided'``, the alternative hypothesis is + + H1: `q` is not the :math:`p^{\mathrm{th}}` population quantile. 
+ + and the p-value is twice the smaller of the p-values for the ``'less'`` + and ``'greater'`` cases. Both of these p-values can exceed 0.5 for the same + data, so the value is clipped into the interval :math:`[0, 1]`. + + The approach for confidence intervals is attributed to Thompson [2]_ and + later proven to be applicable to any set of i.i.d. samples [3]_. The + computation is based on the observation that the probability of a quantile + :math:`q` to be larger than any observations :math:`x_m (1\leq m \leq N)` + can be computed as + + .. math:: + + \mathbb{P}(x_m \leq q) = 1 - \sum_{k=0}^{m-1} \binom{N}{k} + q^k(1-q)^{N-k} + + By default, confidence intervals are computed for a 95% confidence level. + A common interpretation of a 95% confidence intervals is that if i.i.d. + samples are drawn repeatedly from the same population and confidence + intervals are formed each time, the confidence interval will contain the + true value of the specified quantile in approximately 95% of trials. + + A similar function is available in the QuantileNPCI R package [4]_. The + foundation is the same, but it computes the confidence interval bounds by + doing interpolations between the sample values, whereas this function uses + only sample values as bounds. Thus, ``quantile_test.confidence_interval`` + returns more conservative intervals (i.e., larger). + + The same computation of confidence intervals for quantiles is included in + the confintr package [5]_. + + Two-sided confidence intervals are not guaranteed to be optimal; i.e., + there may exist a tighter interval that may contain the quantile of + interest with probability larger than the confidence level. + Without further assumption on the samples (e.g., the nature of the + underlying distribution), the one-sided intervals are optimally tight. + + References + ---------- + .. [1] W. J. Conover. Practical Nonparametric Statistics, 3rd Ed. 1999. + .. [2] W. R. Thompson, "On Confidence Ranges for the Median and Other + Expectation Distributions for Populations of Unknown Distribution + Form," The Annals of Mathematical Statistics, vol. 7, no. 3, + pp. 122-128, 1936, Accessed: Sep. 18, 2019. [Online]. Available: + https://www.jstor.org/stable/2957563. + .. [3] H. A. David and H. N. Nagaraja, "Order Statistics in Nonparametric + Inference" in Order Statistics, John Wiley & Sons, Ltd, 2005, pp. + 159-170. Available: + https://onlinelibrary.wiley.com/doi/10.1002/0471722162.ch7. + .. [4] N. Hutson, A. Hutson, L. Yan, "QuantileNPCI: Nonparametric + Confidence Intervals for Quantiles," R package, + https://cran.r-project.org/package=QuantileNPCI + .. [5] M. Mayer, "confintr: Confidence Intervals," R package, + https://cran.r-project.org/package=confintr + + + Examples + -------- + + Suppose we wish to test the null hypothesis that the median of a population + is equal to 0.5. We choose a confidence level of 99%; that is, we will + reject the null hypothesis in favor of the alternative if the p-value is + less than 0.01. + + When testing random variates from the standard uniform distribution, which + has a median of 0.5, we expect the data to be consistent with the null + hypothesis most of the time. 
+ + >>> import numpy as np + >>> from scipy import stats + >>> rng = np.random.default_rng(6981396440634228121) + >>> rvs = stats.uniform.rvs(size=100, random_state=rng) + >>> stats.quantile_test(rvs, q=0.5, p=0.5) + QuantileTestResult(statistic=45, statistic_type=1, pvalue=0.36820161732669576) + + As expected, the p-value is not below our threshold of 0.01, so + we cannot reject the null hypothesis. + + When testing data from the standard *normal* distribution, which has a + median of 0, we would expect the null hypothesis to be rejected. + + >>> rvs = stats.norm.rvs(size=100, random_state=rng) + >>> stats.quantile_test(rvs, q=0.5, p=0.5) + QuantileTestResult(statistic=67, statistic_type=2, pvalue=0.0008737198369123724) + + Indeed, the p-value is lower than our threshold of 0.01, so we reject the + null hypothesis in favor of the default "two-sided" alternative: the median + of the population is *not* equal to 0.5. + + However, suppose we were to test the null hypothesis against the + one-sided alternative that the median of the population is *greater* than + 0.5. Since the median of the standard normal is less than 0.5, we would not + expect the null hypothesis to be rejected. + + >>> stats.quantile_test(rvs, q=0.5, p=0.5, alternative='greater') + QuantileTestResult(statistic=67, statistic_type=1, pvalue=0.9997956114162866) + + Unsurprisingly, with a p-value greater than our threshold, we would not + reject the null hypothesis in favor of the chosen alternative. + + The quantile test can be used for any quantile, not only the median. For + example, we can test whether the third quartile of the distribution + underlying the sample is greater than 0.6. + + >>> rvs = stats.uniform.rvs(size=100, random_state=rng) + >>> stats.quantile_test(rvs, q=0.6, p=0.75, alternative='greater') + QuantileTestResult(statistic=64, statistic_type=1, pvalue=0.00940696592998271) + + The p-value is lower than the threshold. We reject the null hypothesis in + favor of the alternative: the third quartile of the distribution underlying + our sample is greater than 0.6. + + `quantile_test` can also compute confidence intervals for any quantile. + + >>> rvs = stats.norm.rvs(size=100, random_state=rng) + >>> res = stats.quantile_test(rvs, q=0.6, p=0.75) + >>> ci = res.confidence_interval(confidence_level=0.95) + >>> ci + ConfidenceInterval(low=0.284491604437432, high=0.8912531024914844) + + When testing a one-sided alternative, the confidence interval contains + all observations such that if passed as `q`, the p-value of the + test would be greater than 0.05, and therefore the null hypothesis + would not be rejected. For example: + + >>> rvs.sort() + >>> q, p, alpha = 0.6, 0.75, 0.95 + >>> res = stats.quantile_test(rvs, q=q, p=p, alternative='less') + >>> ci = res.confidence_interval(confidence_level=alpha) + >>> for x in rvs[rvs <= ci.high]: + ... res = stats.quantile_test(rvs, q=x, p=p, alternative='less') + ... assert res.pvalue > 1-alpha + >>> for x in rvs[rvs > ci.high]: + ... res = stats.quantile_test(rvs, q=x, p=p, alternative='less') + ... assert res.pvalue < 1-alpha + + Also, if a 95% confidence interval is repeatedly generated for random + samples, the confidence interval will contain the true quantile value in + approximately 95% of replications. + + >>> dist = stats.rayleigh() # our "unknown" distribution + >>> p = 0.2 + >>> true_stat = dist.ppf(p) # the true value of the statistic + >>> n_trials = 1000 + >>> quantile_ci_contains_true_stat = 0 + >>> for i in range(n_trials): + ... 
data = dist.rvs(size=100, random_state=rng) + ... res = stats.quantile_test(data, p=p) + ... ci = res.confidence_interval(0.95) + ... if ci[0] < true_stat < ci[1]: + ... quantile_ci_contains_true_stat += 1 + >>> quantile_ci_contains_true_stat >= 950 + True + + This works with any distribution and any quantile, as long as the samples + are i.i.d. + """ + # Implementation carefully follows [1] 3.2 + # "H0: the p*th quantile of X is x*" + # To facilitate comparison with [1], we'll use variable names that + # best match Conover's notation + X, x_star, p_star, H1 = quantile_test_iv(x, q, p, alternative) + + # "We will use two test statistics in this test. Let T1 equal " + # "the number of observations less than or equal to x*, and " + # "let T2 equal the number of observations less than x*." + T1 = (X <= x_star).sum() + T2 = (X < x_star).sum() + + # "The null distribution of the test statistics T1 and T2 is " + # "the binomial distribution, with parameters n = sample size, and " + # "p = p* as given in the null hypothesis.... Y has the binomial " + # "distribution with parameters n and p*." + n = len(X) + Y = stats.binom(n=n, p=p_star) + + # "H1: the p* population quantile is less than x*" + if H1 == 'less': + # "The p-value is the probability that a binomial random variable Y " + # "is greater than *or equal to* the observed value of T2...using p=p*" + pvalue = Y.sf(T2-1) # Y.pmf(T2) + Y.sf(T2) + statistic = T2 + statistic_type = 2 + # "H1: the p* population quantile is greater than x*" + elif H1 == 'greater': + # "The p-value is the probability that a binomial random variable Y " + # "is less than or equal to the observed value of T1... using p = p*" + pvalue = Y.cdf(T1) + statistic = T1 + statistic_type = 1 + # "H1: x* is not the p*th population quantile" + elif H1 == 'two-sided': + # "The p-value is twice the smaller of the probabilities that a + # binomial random variable Y is less than or equal to the observed + # value of T1 or greater than or equal to the observed value of T2 + # using p=p*." + # Note: both one-sided p-values can exceed 0.5 for the same data, so + # `clip` + pvalues = [Y.cdf(T1), Y.sf(T2 - 1)] # [greater, less] + sorted_idx = np.argsort(pvalues) + pvalue = np.clip(2*pvalues[sorted_idx[0]], 0, 1) + if sorted_idx[0]: + statistic, statistic_type = T2, 2 + else: + statistic, statistic_type = T1, 1 + + return QuantileTestResult( + statistic=statistic, + statistic_type=statistic_type, + pvalue=pvalue, + _alternative=H1, + _x=X, + _p=p_star + ) + + +##################################### +# STATISTICAL DISTANCES # +##################################### + + +def wasserstein_distance_nd(u_values, v_values, u_weights=None, v_weights=None): + r""" + Compute the Wasserstein-1 distance between two N-D discrete distributions. + + The Wasserstein distance, also called the Earth mover's distance or the + optimal transport distance, is a similarity metric between two probability + distributions [1]_. In the discrete case, the Wasserstein distance can be + understood as the cost of an optimal transport plan to convert one + distribution into the other. The cost is calculated as the product of the + amount of probability mass being moved and the distance it is being moved. + A brief and intuitive introduction can be found at [2]_. + + .. versionadded:: 1.13.0 + + Parameters + ---------- + u_values : 2d array_like + A sample from a probability distribution or the support (set of all + possible values) of a probability distribution. 
Each element along + axis 0 is an observation or possible value, and axis 1 represents the + dimensionality of the distribution; i.e., each row is a vector + observation or possible value. + + v_values : 2d array_like + A sample from or the support of a second distribution. + + u_weights, v_weights : 1d array_like, optional + Weights or counts corresponding with the sample or probability masses + corresponding with the support values. Sum of elements must be positive + and finite. If unspecified, each value is assigned the same weight. + + Returns + ------- + distance : float + The computed distance between the distributions. + + Notes + ----- + Given two probability mass functions, :math:`u` + and :math:`v`, the first Wasserstein distance between the distributions + using the Euclidean norm is: + + .. math:: + + l_1 (u, v) = \inf_{\pi \in \Gamma (u, v)} \int \| x-y \|_2 \mathrm{d} \pi (x, y) + + where :math:`\Gamma (u, v)` is the set of (probability) distributions on + :math:`\mathbb{R}^n \times \mathbb{R}^n` whose marginals are :math:`u` and + :math:`v` on the first and second factors respectively. For a given value + :math:`x`, :math:`u(x)` gives the probability of :math:`u` at position + :math:`x`, and the same for :math:`v(x)`. + + This is also called the optimal transport problem or the Monge problem. + Let the finite point sets :math:`\{x_i\}` and :math:`\{y_j\}` denote + the support sets of the probability mass functions :math:`u` and :math:`v` + respectively. The Monge problem can be expressed as follows. + + Let :math:`\Gamma` denote the transport plan, :math:`D` denote the + distance matrix and, + + .. math:: + + x = \text{vec}(\Gamma) \\ + c = \text{vec}(D) \\ + b = \begin{bmatrix} + u\\ + v\\ + \end{bmatrix} + + The :math:`\text{vec}()` function denotes the vectorization function + that transforms a matrix into a column vector by vertically stacking + the columns of the matrix. + The transport plan :math:`\Gamma` is a matrix :math:`[\gamma_{ij}]` in + which :math:`\gamma_{ij}` is a positive value representing the amount of + probability mass transported from :math:`u(x_i)` to :math:`v(y_j)`. + Summing over the rows of :math:`\Gamma` should give the source distribution + :math:`u` : :math:`\sum_j \gamma_{ij} = u(x_i)` holds for all :math:`i` + and summing over the columns of :math:`\Gamma` should give the target + distribution :math:`v`: :math:`\sum_i \gamma_{ij} = v(y_j)` holds for all + :math:`j`. + The distance matrix :math:`D` is a matrix :math:`[d_{ij}]`, in which + :math:`d_{ij} = d(x_i, y_j)`. + + Given :math:`\Gamma`, :math:`D`, :math:`b`, the Monge problem can be + transformed into a linear programming problem by + taking :math:`A x = b` as constraints and :math:`z = c^T x` as the + minimization target (sum of costs), where matrix :math:`A` has the form + + ..
math:: + + \begin{array} {rrrr|rrrr|r|rrrr} + 1 & 1 & \dots & 1 & 0 & 0 & \dots & 0 & \dots & 0 & 0 & \dots & + 0 \cr + 0 & 0 & \dots & 0 & 1 & 1 & \dots & 1 & \dots & 0 & 0 &\dots & + 0 \cr + \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots + & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \cr + 0 & 0 & \dots & 0 & 0 & 0 & \dots & 0 & \dots & 1 & 1 & \dots & + 1 \cr \hline + + 1 & 0 & \dots & 0 & 1 & 0 & \dots & \dots & \dots & 1 & 0 & \dots & + 0 \cr + 0 & 1 & \dots & 0 & 0 & 1 & \dots & \dots & \dots & 0 & 1 & \dots & + 0 \cr + \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & + \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \cr + 0 & 0 & \dots & 1 & 0 & 0 & \dots & 1 & \dots & 0 & 0 & \dots & 1 + \end{array} + + By solving the dual form of the above linear programming problem (with + solution :math:`y^*`), the Wasserstein distance :math:`l_1 (u, v)` can + be computed as :math:`b^T y^*`. + + The above solution is inspired by Vincent Herrmann's blog [3]_ . For a + more thorough explanation, see [4]_ . + + The input distributions can be empirical, therefore coming from samples + whose values are effectively inputs of the function, or they can be seen as + generalized functions, in which case they are weighted sums of Dirac delta + functions located at the specified values. + + References + ---------- + .. [1] "Wasserstein metric", + https://en.wikipedia.org/wiki/Wasserstein_metric + .. [2] Lili Weng, "What is Wasserstein distance?", Lil'log, + https://lilianweng.github.io/posts/2017-08-20-gan/#what-is-wasserstein-distance. + .. [3] Hermann, Vincent. "Wasserstein GAN and the Kantorovich-Rubinstein + Duality". https://vincentherrmann.github.io/blog/wasserstein/. + .. [4] Peyré, Gabriel, and Marco Cuturi. "Computational optimal + transport." Center for Research in Economics and Statistics + Working Papers 2017-86 (2017). + + See Also + -------- + wasserstein_distance: Compute the Wasserstein-1 distance between two + 1D discrete distributions. + + Examples + -------- + Compute the Wasserstein distance between two three-dimensional samples, + each with two observations. + + >>> from scipy.stats import wasserstein_distance_nd + >>> wasserstein_distance_nd([[0, 2, 3], [1, 2, 5]], [[3, 2, 3], [4, 2, 5]]) + 3.0 + + Compute the Wasserstein distance between two two-dimensional distributions + with three and two weighted observations, respectively. + + >>> wasserstein_distance_nd([[0, 2.75], [2, 209.3], [0, 0]], + ... [[0.2, 0.322], [4.5, 25.1808]], + ... [0.4, 5.2, 0.114], [0.8, 1.5]) + 174.15840245217169 + """ + m, n = len(u_values), len(v_values) + u_values = asarray(u_values) + v_values = asarray(v_values) + + if u_values.ndim > 2 or v_values.ndim > 2: + raise ValueError('Invalid input values. The inputs must have either ' + 'one or two dimensions.') + # if dimensions are not equal throw error + if u_values.ndim != v_values.ndim: + raise ValueError('Invalid input values. Dimensions of inputs must be ' + 'equal.') + # if data is 1D then call the cdf_distance function + if u_values.ndim == 1 and v_values.ndim == 1: + return _cdf_distance(1, u_values, v_values, u_weights, v_weights) + + u_values, u_weights = _validate_distribution(u_values, u_weights) + v_values, v_weights = _validate_distribution(v_values, v_weights) + # if number of columns is not equal throw error + if u_values.shape[1] != v_values.shape[1]: + raise ValueError('Invalid input values. 
If two-dimensional, ' + '`u_values` and `v_values` must have the same ' + 'number of columns.') + + # if data contains np.inf then return inf or nan + if np.any(np.isinf(u_values)) ^ np.any(np.isinf(v_values)): + return np.inf + elif np.any(np.isinf(u_values)) and np.any(np.isinf(v_values)): + return np.nan + + # create constraints + A_upper_part = sparse.block_diag((np.ones((1, n)), ) * m) + A_lower_part = sparse.hstack((sparse.eye(n), ) * m) + # sparse constraint matrix of size (m + n)*(m * n) + A = sparse.vstack((A_upper_part, A_lower_part)) + A = sparse.coo_array(A) + + # get cost matrix + D = distance_matrix(u_values, v_values, p=2) + cost = D.ravel() + + # create the minimization target + p_u = np.full(m, 1/m) if u_weights is None else u_weights/np.sum(u_weights) + p_v = np.full(n, 1/n) if v_weights is None else v_weights/np.sum(v_weights) + b = np.concatenate((p_u, p_v), axis=0) + + # solving LP + constraints = LinearConstraint(A=A.T, ub=cost) + opt_res = milp(c=-b, constraints=constraints, bounds=(-np.inf, np.inf)) + return -opt_res.fun + + +def wasserstein_distance(u_values, v_values, u_weights=None, v_weights=None): + r""" + Compute the Wasserstein-1 distance between two 1D discrete distributions. + + The Wasserstein distance, also called the Earth mover's distance or the + optimal transport distance, is a similarity metric between two probability + distributions [1]_. In the discrete case, the Wasserstein distance can be + understood as the cost of an optimal transport plan to convert one + distribution into the other. The cost is calculated as the product of the + amount of probability mass being moved and the distance it is being moved. + A brief and intuitive introduction can be found at [2]_. + + .. versionadded:: 1.0.0 + + Parameters + ---------- + u_values : 1d array_like + A sample from a probability distribution or the support (set of all + possible values) of a probability distribution. Each element is an + observation or possible value. + + v_values : 1d array_like + A sample from or the support of a second distribution. + + u_weights, v_weights : 1d array_like, optional + Weights or counts corresponding with the sample or probability masses + corresponding with the support values. Sum of elements must be positive + and finite. If unspecified, each value is assigned the same weight. + + Returns + ------- + distance : float + The computed distance between the distributions. + + Notes + ----- + Given two 1D probability mass functions, :math:`u` and :math:`v`, the first + Wasserstein distance between the distributions is: + + .. math:: + + l_1 (u, v) = \inf_{\pi \in \Gamma (u, v)} \int_{\mathbb{R} \times + \mathbb{R}} |x-y| \mathrm{d} \pi (x, y) + + where :math:`\Gamma (u, v)` is the set of (probability) distributions on + :math:`\mathbb{R} \times \mathbb{R}` whose marginals are :math:`u` and + :math:`v` on the first and second factors respectively. For a given value + :math:`x`, :math:`u(x)` gives the probabilty of :math:`u` at position + :math:`x`, and the same for :math:`v(x)`. + + If :math:`U` and :math:`V` are the respective CDFs of :math:`u` and + :math:`v`, this distance also equals to: + + .. math:: + + l_1(u, v) = \int_{-\infty}^{+\infty} |U-V| + + See [3]_ for a proof of the equivalence of both definitions. 
+ + The input distributions can be empirical, therefore coming from samples + whose values are effectively inputs of the function, or they can be seen as + generalized functions, in which case they are weighted sums of Dirac delta + functions located at the specified values. + + References + ---------- + .. [1] "Wasserstein metric", https://en.wikipedia.org/wiki/Wasserstein_metric + .. [2] Lili Weng, "What is Wasserstein distance?", Lil'log, + https://lilianweng.github.io/posts/2017-08-20-gan/#what-is-wasserstein-distance. + .. [3] Ramdas, Garcia, Cuturi "On Wasserstein Two Sample Testing and Related + Families of Nonparametric Tests" (2015). :arXiv:`1509.02237`. + + See Also + -------- + wasserstein_distance_nd: Compute the Wasserstein-1 distance between two N-D + discrete distributions. + + Examples + -------- + >>> from scipy.stats import wasserstein_distance + >>> wasserstein_distance([0, 1, 3], [5, 6, 8]) + 5.0 + >>> wasserstein_distance([0, 1], [0, 1], [3, 1], [2, 2]) + 0.25 + >>> wasserstein_distance([3.4, 3.9, 7.5, 7.8], [4.5, 1.4], + ... [1.4, 0.9, 3.1, 7.2], [3.2, 3.5]) + 4.0781331438047861 + + """ + return _cdf_distance(1, u_values, v_values, u_weights, v_weights) + + +def energy_distance(u_values, v_values, u_weights=None, v_weights=None): + r"""Compute the energy distance between two 1D distributions. + + .. versionadded:: 1.0.0 + + Parameters + ---------- + u_values, v_values : array_like + Values observed in the (empirical) distribution. + u_weights, v_weights : array_like, optional + Weight for each value. If unspecified, each value is assigned the same + weight. + `u_weights` (resp. `v_weights`) must have the same length as + `u_values` (resp. `v_values`). If the weight sum differs from 1, it + must still be positive and finite so that the weights can be normalized + to sum to 1. + + Returns + ------- + distance : float + The computed distance between the distributions. + + Notes + ----- + The energy distance between two distributions :math:`u` and :math:`v`, whose + respective CDFs are :math:`U` and :math:`V`, equals to: + + .. math:: + + D(u, v) = \left( 2\mathbb E|X - Y| - \mathbb E|X - X'| - + \mathbb E|Y - Y'| \right)^{1/2} + + where :math:`X` and :math:`X'` (resp. :math:`Y` and :math:`Y'`) are + independent random variables whose probability distribution is :math:`u` + (resp. :math:`v`). + + Sometimes the square of this quantity is referred to as the "energy + distance" (e.g. in [2]_, [4]_), but as noted in [1]_ and [3]_, only the + definition above satisfies the axioms of a distance function (metric). + + As shown in [2]_, for one-dimensional real-valued variables, the energy + distance is linked to the non-distribution-free version of the Cramér-von + Mises distance: + + .. math:: + + D(u, v) = \sqrt{2} l_2(u, v) = \left( 2 \int_{-\infty}^{+\infty} (U-V)^2 + \right)^{1/2} + + Note that the common Cramér-von Mises criterion uses the distribution-free + version of the distance. See [2]_ (section 2), for more details about both + versions of the distance. + + The input distributions can be empirical, therefore coming from samples + whose values are effectively inputs of the function, or they can be seen as + generalized functions, in which case they are weighted sums of Dirac delta + functions located at the specified values. + + References + ---------- + .. [1] Rizzo, Szekely "Energy distance." Wiley Interdisciplinary Reviews: + Computational Statistics, 8(1):27-38 (2015). + .. [2] Szekely "E-statistics: The energy of statistical samples." 
Bowling + Green State University, Department of Mathematics and Statistics, + Technical Report 02-16 (2002). + .. [3] "Energy distance", https://en.wikipedia.org/wiki/Energy_distance + .. [4] Bellemare, Danihelka, Dabney, Mohamed, Lakshminarayanan, Hoyer, + Munos "The Cramer Distance as a Solution to Biased Wasserstein + Gradients" (2017). :arXiv:`1705.10743`. + + Examples + -------- + >>> from scipy.stats import energy_distance + >>> energy_distance([0], [2]) + 2.0000000000000004 + >>> energy_distance([0, 8], [0, 8], [3, 1], [2, 2]) + 1.0000000000000002 + >>> energy_distance([0.7, 7.4, 2.4, 6.8], [1.4, 8. ], + ... [2.1, 4.2, 7.4, 8. ], [7.6, 8.8]) + 0.88003340976158217 + + """ + return np.sqrt(2) * _cdf_distance(2, u_values, v_values, + u_weights, v_weights) + + +def _cdf_distance(p, u_values, v_values, u_weights=None, v_weights=None): + r""" + Compute, between two one-dimensional distributions :math:`u` and + :math:`v`, whose respective CDFs are :math:`U` and :math:`V`, the + statistical distance that is defined as: + + .. math:: + + l_p(u, v) = \left( \int_{-\infty}^{+\infty} |U-V|^p \right)^{1/p} + + p is a positive parameter; p = 1 gives the Wasserstein distance, p = 2 + gives the energy distance. + + Parameters + ---------- + u_values, v_values : array_like + Values observed in the (empirical) distribution. + u_weights, v_weights : array_like, optional + Weight for each value. If unspecified, each value is assigned the same + weight. + `u_weights` (resp. `v_weights`) must have the same length as + `u_values` (resp. `v_values`). If the weight sum differs from 1, it + must still be positive and finite so that the weights can be normalized + to sum to 1. + + Returns + ------- + distance : float + The computed distance between the distributions. + + Notes + ----- + The input distributions can be empirical, therefore coming from samples + whose values are effectively inputs of the function, or they can be seen as + generalized functions, in which case they are weighted sums of Dirac delta + functions located at the specified values. + + References + ---------- + .. [1] Bellemare, Danihelka, Dabney, Mohamed, Lakshminarayanan, Hoyer, + Munos "The Cramer Distance as a Solution to Biased Wasserstein + Gradients" (2017). :arXiv:`1705.10743`. + + """ + u_values, u_weights = _validate_distribution(u_values, u_weights) + v_values, v_weights = _validate_distribution(v_values, v_weights) + + u_sorter = np.argsort(u_values) + v_sorter = np.argsort(v_values) + + all_values = np.concatenate((u_values, v_values)) + all_values.sort(kind='mergesort') + + # Compute the differences between pairs of successive values of u and v. + deltas = np.diff(all_values) + + # Get the respective positions of the values of u and v among the values of + # both distributions. + u_cdf_indices = u_values[u_sorter].searchsorted(all_values[:-1], 'right') + v_cdf_indices = v_values[v_sorter].searchsorted(all_values[:-1], 'right') + + # Calculate the CDFs of u and v using their weights, if specified. + if u_weights is None: + u_cdf = u_cdf_indices / u_values.size + else: + u_sorted_cumweights = np.concatenate(([0], + np.cumsum(u_weights[u_sorter]))) + u_cdf = u_sorted_cumweights[u_cdf_indices] / u_sorted_cumweights[-1] + + if v_weights is None: + v_cdf = v_cdf_indices / v_values.size + else: + v_sorted_cumweights = np.concatenate(([0], + np.cumsum(v_weights[v_sorter]))) + v_cdf = v_sorted_cumweights[v_cdf_indices] / v_sorted_cumweights[-1] + + # Compute the value of the integral based on the CDFs. 
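+ # Both CDFs are step functions, constant on each interval between + # consecutive pooled values, so the integral of |U - V|**p reduces to the + # finite sum of |u_cdf - v_cdf|**p weighted by the interval widths `deltas`.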
+ # If p = 1 or p = 2, we avoid using np.power, which introduces an overhead + # of about 15%. + if p == 1: + return np.sum(np.multiply(np.abs(u_cdf - v_cdf), deltas)) + if p == 2: + return np.sqrt(np.sum(np.multiply(np.square(u_cdf - v_cdf), deltas))) + return np.power(np.sum(np.multiply(np.power(np.abs(u_cdf - v_cdf), p), + deltas)), 1/p) + + +def _validate_distribution(values, weights): + """ + Validate the values and weights from a distribution input of `cdf_distance` + and return them as ndarray objects. + + Parameters + ---------- + values : array_like + Values observed in the (empirical) distribution. + weights : array_like + Weight for each value. + + Returns + ------- + values : ndarray + Values as ndarray. + weights : ndarray + Weights as ndarray. + + """ + # Validate the value array. + values = np.asarray(values, dtype=float) + if len(values) == 0: + raise ValueError("Distribution can't be empty.") + + # Validate the weight array, if specified. + if weights is not None: + weights = np.asarray(weights, dtype=float) + if len(weights) != len(values): + raise ValueError('Value and weight array-likes for the same ' + 'empirical distribution must be of the same size.') + if np.any(weights < 0): + raise ValueError('All weights must be non-negative.') + if not 0 < np.sum(weights) < np.inf: + raise ValueError('Weight array-like sum must be positive and ' + 'finite. Set as None for an equal distribution of ' + 'weight.') + + return values, weights + + return values, None + + +##################################### +# SUPPORT FUNCTIONS # +##################################### + +RepeatedResults = namedtuple('RepeatedResults', ('values', 'counts')) + + +def find_repeats(arr): + """Find repeats and repeat counts. + + Parameters + ---------- + arr : array_like + Input array. This is cast to float64. + + Returns + ------- + values : ndarray + The unique values from the (flattened) input that are repeated. + + counts : ndarray + Number of times the corresponding 'value' is repeated. + + Notes + ----- + In numpy >= 1.9 `numpy.unique` provides similar functionality. The main + difference is that `find_repeats` only returns repeated values. + + Examples + -------- + >>> from scipy import stats + >>> stats.find_repeats([2, 1, 2, 3, 2, 2, 5]) + RepeatedResults(values=array([2.]), counts=array([4])) + + >>> stats.find_repeats([[10, 20, 1, 2], [5, 5, 4, 4]]) + RepeatedResults(values=array([4., 5.]), counts=array([2, 2])) + + """ + # Note: always copies. + return RepeatedResults(*_find_repeats(np.array(arr, dtype=np.float64))) + + +def _sum_of_squares(a, axis=0): + """Square each element of the input array, and return the sum(s) of that. + + Parameters + ---------- + a : array_like + Input array. + axis : int or None, optional + Axis along which to calculate. Default is 0. If None, compute over + the whole array `a`. + + Returns + ------- + sum_of_squares : ndarray + The sum along the given axis for (a**2). + + See Also + -------- + _square_of_sums : The square(s) of the sum(s) (the opposite of + `_sum_of_squares`). + + """ + a, axis = _chk_asarray(a, axis) + return np.sum(a*a, axis) + + +def _square_of_sums(a, axis=0): + """Sum elements of the input array, and return the square(s) of that sum. + + Parameters + ---------- + a : array_like + Input array. + axis : int or None, optional + Axis along which to calculate. Default is 0. If None, compute over + the whole array `a`. + + Returns + ------- + square_of_sums : float or ndarray + The square of the sum over `axis`. 
+ + See Also + -------- + _sum_of_squares : The sum of squares (the opposite of `square_of_sums`). + + """ + a, axis = _chk_asarray(a, axis) + s = np.sum(a, axis) + if not np.isscalar(s): + return s.astype(float) * s + else: + return float(s) * s + + +def rankdata(a, method='average', *, axis=None, nan_policy='propagate'): + """Assign ranks to data, dealing with ties appropriately. + + By default (``axis=None``), the data array is first flattened, and a flat + array of ranks is returned. Separately reshape the rank array to the + shape of the data array if desired (see Examples). + + Ranks begin at 1. The `method` argument controls how ranks are assigned + to equal values. See [1]_ for further discussion of ranking methods. + + Parameters + ---------- + a : array_like + The array of values to be ranked. + method : {'average', 'min', 'max', 'dense', 'ordinal'}, optional + The method used to assign ranks to tied elements. + The following methods are available (default is 'average'): + + * 'average': The average of the ranks that would have been assigned to + all the tied values is assigned to each value. + * 'min': The minimum of the ranks that would have been assigned to all + the tied values is assigned to each value. (This is also + referred to as "competition" ranking.) + * 'max': The maximum of the ranks that would have been assigned to all + the tied values is assigned to each value. + * 'dense': Like 'min', but the rank of the next highest element is + assigned the rank immediately after those assigned to the tied + elements. + * 'ordinal': All values are given a distinct rank, corresponding to + the order that the values occur in `a`. + axis : {None, int}, optional + Axis along which to perform the ranking. If ``None``, the data array + is first flattened. + nan_policy : {'propagate', 'omit', 'raise'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + + * 'propagate': propagates nans through the rank calculation + * 'omit': performs the calculations ignoring nan values + * 'raise': raises an error + + .. note:: + + When `nan_policy` is 'propagate', the output is an array of *all* + nans because ranks relative to nans in the input are undefined. + When `nan_policy` is 'omit', nans in `a` are ignored when ranking + the other values, and the corresponding locations of the output + are nan. + + .. versionadded:: 1.10 + + Returns + ------- + ranks : ndarray + An array of size equal to the size of `a`, containing rank + scores. + + References + ---------- + .. [1] "Ranking", https://en.wikipedia.org/wiki/Ranking + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import rankdata + >>> rankdata([0, 2, 3, 2]) + array([ 1. , 2.5, 4. , 2.5]) + >>> rankdata([0, 2, 3, 2], method='min') + array([ 1, 2, 4, 2]) + >>> rankdata([0, 2, 3, 2], method='max') + array([ 1, 3, 4, 3]) + >>> rankdata([0, 2, 3, 2], method='dense') + array([ 1, 2, 3, 2]) + >>> rankdata([0, 2, 3, 2], method='ordinal') + array([ 1, 2, 4, 3]) + >>> rankdata([[0, 2], [3, 2]]).reshape(2,2) + array([[1. , 2.5], + [4. , 2.5]]) + >>> rankdata([[0, 2, 2], [3, 2, 5]], axis=1) + array([[1. , 2.5, 2.5], + [2. , 1. , 3. 
]]) + >>> rankdata([0, 2, 3, np.nan, -2, np.nan], nan_policy="propagate") + array([nan, nan, nan, nan, nan, nan]) + >>> rankdata([0, 2, 3, np.nan, -2, np.nan], nan_policy="omit") + array([ 2., 3., 4., nan, 1., nan]) + + """ + methods = ('average', 'min', 'max', 'dense', 'ordinal') + if method not in methods: + raise ValueError(f'unknown method "{method}"') + + x = np.asarray(a) + + if axis is None: + x = x.ravel() + axis = -1 + + if x.size == 0: + dtype = float if method == 'average' else np.dtype("long") + return np.empty(x.shape, dtype=dtype) + + contains_nan, nan_policy = _contains_nan(x, nan_policy) + + x = np.swapaxes(x, axis, -1) + ranks = _rankdata(x, method) + + if contains_nan: + i_nan = (np.isnan(x) if nan_policy == 'omit' + else np.isnan(x).any(axis=-1)) + ranks = ranks.astype(float, copy=False) + ranks[i_nan] = np.nan + + ranks = np.swapaxes(ranks, axis, -1) + return ranks + + +def _order_ranks(ranks, j): + # Reorder ascending order `ranks` according to `j` + ordered_ranks = np.empty(j.shape, dtype=ranks.dtype) + np.put_along_axis(ordered_ranks, j, ranks, axis=-1) + return ordered_ranks + + +def _rankdata(x, method, return_ties=False): + # Rank data `x` by desired `method`; `return_ties` if desired + shape = x.shape + + # Get sort order + kind = 'mergesort' if method == 'ordinal' else 'quicksort' + j = np.argsort(x, axis=-1, kind=kind) + ordinal_ranks = np.broadcast_to(np.arange(1, shape[-1]+1, dtype=int), shape) + + # Ordinal ranks is very easy because ties don't matter. We're done. + if method == 'ordinal': + return _order_ranks(ordinal_ranks, j) # never return ties + + # Sort array + y = np.take_along_axis(x, j, axis=-1) + # Logical indices of unique elements + i = np.concatenate([np.ones(shape[:-1] + (1,), dtype=np.bool_), + y[..., :-1] != y[..., 1:]], axis=-1) + + # Integer indices of unique elements + indices = np.arange(y.size)[i.ravel()] + # Counts of unique elements + counts = np.diff(indices, append=y.size) + + # Compute `'min'`, `'max'`, and `'mid'` ranks of unique elements + if method == 'min': + ranks = ordinal_ranks[i] + elif method == 'max': + ranks = ordinal_ranks[i] + counts - 1 + elif method == 'average': + ranks = ordinal_ranks[i] + (counts - 1)/2 + elif method == 'dense': + ranks = np.cumsum(i, axis=-1)[i] + + ranks = np.repeat(ranks, counts).reshape(shape) + ranks = _order_ranks(ranks, j) + + if return_ties: + # Tie information is returned in a format that is useful to functions that + # rely on this (private) function. Example: + # >>> x = np.asarray([3, 2, 1, 2, 2, 2, 1]) + # >>> _, t = _rankdata(x, 'average', return_ties=True) + # >>> t # array([2., 0., 4., 0., 0., 0., 1.]) # two 1s, four 2s, and one 3 + # Unlike ranks, tie counts are *not* reordered to correspond with the order of + # the input; e.g. the number of appearances of the lowest rank element comes + # first. This is a useful format because: + # - The shape of the result is the shape of the input. Different slices can + # have different numbers of tied elements but not result in a ragged array. + # - Functions that use `t` usually don't need to know which element of the + # original array is associated with each tie count; they perform a reduction + # over the tie counts only. The tie counts are naturally computed in a + # sorted order, so this does not unnecessarily reorder them. + # - One exception is `wilcoxon`, which needs the number of zeros. Zeros always + # have the lowest rank, so it is easy to find them at the zeroth index.
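+ # Scatter the tie counts back into an array with the input's shape: in + # sorted order, the first occurrence of each unique value receives that + # value's multiplicity and every other position stays zero.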
+ t = np.zeros(shape, dtype=float) + t[i] = counts + return ranks, t + return ranks + + +def expectile(a, alpha=0.5, *, weights=None): + r"""Compute the expectile at the specified level. + + Expectiles are a generalization of the expectation in the same way as + quantiles are a generalization of the median. The expectile at level + `alpha = 0.5` is the mean (average). See Notes for more details. + + Parameters + ---------- + a : array_like + Array containing numbers whose expectile is desired. + alpha : float, default: 0.5 + The level of the expectile; e.g., `alpha=0.5` gives the mean. + weights : array_like, optional + An array of weights associated with the values in `a`. + The `weights` must be broadcastable to the same shape as `a`. + Default is None, which gives each value a weight of 1.0. + An integer valued weight element acts like repeating the corresponding + observation in `a` that many times. See Notes for more details. + + Returns + ------- + expectile : ndarray + The empirical expectile at level `alpha`. + + See Also + -------- + numpy.mean : Arithmetic average + numpy.quantile : Quantile + + Notes + ----- + In general, the expectile at level :math:`\alpha` of a random variable + :math:`X` with cumulative distribution function (CDF) :math:`F` is given + by the unique solution :math:`t` of: + + .. math:: + + \alpha E((X - t)_+) = (1 - \alpha) E((t - X)_+) \,. + + Here, :math:`(x)_+ = \max(0, x)` is the positive part of :math:`x`. + This equation can be equivalently written as: + + .. math:: + + \alpha \int_t^\infty (x - t)\mathrm{d}F(x) + = (1 - \alpha) \int_{-\infty}^t (t - x)\mathrm{d}F(x) \,. + + The empirical expectile at level :math:`\alpha` (`alpha`) of a sample + :math:`a_i` (the array `a`) is defined by plugging in the empirical CDF of + `a`. Given sample or case weights :math:`w` (the array `weights`), it + reads :math:`F_a(x) = \frac{1}{\sum_i w_i} \sum_i w_i 1_{a_i \leq x}` + with indicator function :math:`1_{A}`. This leads to the definition of the + empirical expectile at level `alpha` as the unique solution :math:`t` of: + + .. math:: + + \alpha \sum_{i=1}^n w_i (a_i - t)_+ = + (1 - \alpha) \sum_{i=1}^n w_i (t - a_i)_+ \,. + + For :math:`\alpha=0.5`, this simplifies to the weighted average. + Furthermore, the larger :math:`\alpha`, the larger the value of the + expectile. + + As a final remark, the expectile at level :math:`\alpha` can also be + written as a minimization problem. One often used choice is + + .. math:: + + \operatorname{argmin}_t + E(\lvert 1_{t\geq X} - \alpha\rvert(t - X)^2) \,. + + References + ---------- + .. [1] W. K. Newey and J. L. Powell (1987), "Asymmetric Least Squares + Estimation and Testing," Econometrica, 55, 819-847. + .. [2] T. Gneiting (2009). "Making and Evaluating Point Forecasts," + Journal of the American Statistical Association, 106, 746 - 762. + :doi:`10.48550/arXiv.0912.0902` + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import expectile + >>> a = [1, 4, 2, -1] + >>> expectile(a, alpha=0.5) == np.mean(a) + True + >>> expectile(a, alpha=0.2) + 0.42857142857142855 + >>> expectile(a, alpha=0.8) + 2.5714285714285716 + >>> weights = [1, 3, 1, 1] + + """ + if alpha < 0 or alpha > 1: + raise ValueError( + "The expectile level alpha must be in the range [0, 1]." + ) + a = np.asarray(a) + + if weights is not None: + weights = np.broadcast_to(weights, a.shape) + + # This is the empirical equivalent of Eq. 
(13) with identification + # function from Table 9 (omitting a factor of 2) in [2] (their y is our + # data a, their x is our t) + def first_order(t): + return np.average(np.abs((a <= t) - alpha) * (t - a), weights=weights) + + if alpha >= 0.5: + x0 = np.average(a, weights=weights) + x1 = np.amax(a) + else: + x1 = np.average(a, weights=weights) + x0 = np.amin(a) + + if x0 == x1: + # a has a single unique element + return x0 + + # Note that the expectile is the unique solution, so no worries about + # finding a wrong root. + res = root_scalar(first_order, x0=x0, x1=x1) + return res.root diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_stats_pythran.cpython-310-x86_64-linux-gnu.so b/venv/lib/python3.10/site-packages/scipy/stats/_stats_pythran.cpython-310-x86_64-linux-gnu.so new file mode 100644 index 0000000000000000000000000000000000000000..b3506e9ecd8d9d21acdda64cc6b3f7cc1513e56b Binary files /dev/null and b/venv/lib/python3.10/site-packages/scipy/stats/_stats_pythran.cpython-310-x86_64-linux-gnu.so differ diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_survival.py b/venv/lib/python3.10/site-packages/scipy/stats/_survival.py new file mode 100644 index 0000000000000000000000000000000000000000..82aad05ab8e052e65f58059338761ae047279b7e --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_survival.py @@ -0,0 +1,686 @@ +from __future__ import annotations + +from dataclasses import dataclass, field +from typing import TYPE_CHECKING +import warnings + +import numpy as np +from scipy import special, interpolate, stats +from scipy.stats._censored_data import CensoredData +from scipy.stats._common import ConfidenceInterval +from scipy.stats import norm # type: ignore[attr-defined] + +if TYPE_CHECKING: + from typing import Literal + import numpy.typing as npt + + +__all__ = ['ecdf', 'logrank'] + + +@dataclass +class EmpiricalDistributionFunction: + """An empirical distribution function produced by `scipy.stats.ecdf` + + Attributes + ---------- + quantiles : ndarray + The unique values of the sample from which the + `EmpiricalDistributionFunction` was estimated. + probabilities : ndarray + The point estimates of the cumulative distribution function (CDF) or + its complement, the survival function (SF), corresponding with + `quantiles`. + """ + quantiles: np.ndarray + probabilities: np.ndarray + # Exclude these from __str__ + _n: np.ndarray = field(repr=False) # number "at risk" + _d: np.ndarray = field(repr=False) # number of "deaths" + _sf: np.ndarray = field(repr=False) # survival function for var estimate + _kind: str = field(repr=False) # type of function: "cdf" or "sf" + + def __init__(self, q, p, n, d, kind): + self.probabilities = p + self.quantiles = q + self._n = n + self._d = d + self._sf = p if kind == 'sf' else 1 - p + self._kind = kind + + f0 = 1 if kind == 'sf' else 0 # leftmost function value + f1 = 1 - f0 + # fill_value can't handle edge cases at infinity + x = np.insert(q, [0, len(q)], [-np.inf, np.inf]) + y = np.insert(p, [0, len(p)], [f0, f1]) + # `or` conditions handle the case of empty x, points + self._f = interpolate.interp1d(x, y, kind='previous', + assume_sorted=True) + + def evaluate(self, x): + """Evaluate the empirical CDF/SF function at the input. 
+ + Parameters + ---------- + x : ndarray + Argument to the CDF/SF + + Returns + ------- + y : ndarray + The CDF/SF evaluated at the input + """ + return self._f(x) + + def plot(self, ax=None, **matplotlib_kwargs): + """Plot the empirical distribution function + + Available only if ``matplotlib`` is installed. + + Parameters + ---------- + ax : matplotlib.axes.Axes + Axes object to draw the plot onto, otherwise uses the current Axes. + + **matplotlib_kwargs : dict, optional + Keyword arguments passed directly to `matplotlib.axes.Axes.step`. + Unless overridden, ``where='post'``. + + Returns + ------- + lines : list of `matplotlib.lines.Line2D` + Objects representing the plotted data + """ + try: + import matplotlib # noqa: F401 + except ModuleNotFoundError as exc: + message = "matplotlib must be installed to use method `plot`." + raise ModuleNotFoundError(message) from exc + + if ax is None: + import matplotlib.pyplot as plt + ax = plt.gca() + + kwargs = {'where': 'post'} + kwargs.update(matplotlib_kwargs) + + delta = np.ptp(self.quantiles)*0.05 # how far past sample edge to plot + q = self.quantiles + q = [q[0] - delta] + list(q) + [q[-1] + delta] + + return ax.step(q, self.evaluate(q), **kwargs) + + def confidence_interval(self, confidence_level=0.95, *, method='linear'): + """Compute a confidence interval around the CDF/SF point estimate + + Parameters + ---------- + confidence_level : float, default: 0.95 + Confidence level for the computed confidence interval + + method : str, {"linear", "log-log"} + Method used to compute the confidence interval. Options are + "linear" for the conventional Greenwood confidence interval + (default) and "log-log" for the "exponential Greenwood", + log-negative-log-transformed confidence interval. + + Returns + ------- + ci : ``ConfidenceInterval`` + An object with attributes ``low`` and ``high``, instances of + `~scipy.stats._result_classes.EmpiricalDistributionFunction` that + represent the lower and upper bounds (respectively) of the + confidence interval. + + Notes + ----- + Confidence intervals are computed according to the Greenwood formula + (``method='linear'``) or the more recent "exponential Greenwood" + formula (``method='log-log'``) as described in [1]_. The conventional + Greenwood formula can result in lower confidence limits less than 0 + and upper confidence limits greater than 1; these are clipped to the + unit interval. NaNs may be produced by either method; these are + features of the formulas. + + References + ---------- + .. [1] Sawyer, Stanley. "The Greenwood and Exponential Greenwood + Confidence Intervals in Survival Analysis." + https://www.math.wustl.edu/~sawyer/handouts/greenwood.pdf + + """ + message = ("Confidence interval bounds do not implement a " + "`confidence_interval` method.") + if self._n is None: + raise NotImplementedError(message) + + methods = {'linear': self._linear_ci, + 'log-log': self._loglog_ci} + + message = f"`method` must be one of {set(methods)}." + if method.lower() not in methods: + raise ValueError(message) + + message = "`confidence_level` must be a scalar between 0 and 1." + confidence_level = np.asarray(confidence_level)[()] + if confidence_level.shape or not (0 <= confidence_level <= 1): + raise ValueError(message) + + method_fun = methods[method.lower()] + low, high = method_fun(confidence_level) + + message = ("The confidence interval is undefined at some observations." 
+ " This is a feature of the mathematical formula used, not" + " an error in its implementation.") + if np.any(np.isnan(low) | np.isnan(high)): + warnings.warn(message, RuntimeWarning, stacklevel=2) + + low, high = np.clip(low, 0, 1), np.clip(high, 0, 1) + low = EmpiricalDistributionFunction(self.quantiles, low, None, None, + self._kind) + high = EmpiricalDistributionFunction(self.quantiles, high, None, None, + self._kind) + return ConfidenceInterval(low, high) + + def _linear_ci(self, confidence_level): + sf, d, n = self._sf, self._d, self._n + # When n == d, Greenwood's formula divides by zero. + # When s != 0, this can be ignored: var == inf, and CI is [0, 1] + # When s == 0, this results in NaNs. Produce an informative warning. + with np.errstate(divide='ignore', invalid='ignore'): + var = sf ** 2 * np.cumsum(d / (n * (n - d))) + + se = np.sqrt(var) + z = special.ndtri(1 / 2 + confidence_level / 2) + + z_se = z * se + low = self.probabilities - z_se + high = self.probabilities + z_se + + return low, high + + def _loglog_ci(self, confidence_level): + sf, d, n = self._sf, self._d, self._n + + with np.errstate(divide='ignore', invalid='ignore'): + var = 1 / np.log(sf) ** 2 * np.cumsum(d / (n * (n - d))) + + se = np.sqrt(var) + z = special.ndtri(1 / 2 + confidence_level / 2) + + with np.errstate(divide='ignore'): + lnl_points = np.log(-np.log(sf)) + + z_se = z * se + low = np.exp(-np.exp(lnl_points + z_se)) + high = np.exp(-np.exp(lnl_points - z_se)) + if self._kind == "cdf": + low, high = 1-high, 1-low + + return low, high + + +@dataclass +class ECDFResult: + """ Result object returned by `scipy.stats.ecdf` + + Attributes + ---------- + cdf : `~scipy.stats._result_classes.EmpiricalDistributionFunction` + An object representing the empirical cumulative distribution function. + sf : `~scipy.stats._result_classes.EmpiricalDistributionFunction` + An object representing the complement of the empirical cumulative + distribution function. + """ + cdf: EmpiricalDistributionFunction + sf: EmpiricalDistributionFunction + + def __init__(self, q, cdf, sf, n, d): + self.cdf = EmpiricalDistributionFunction(q, cdf, n, d, "cdf") + self.sf = EmpiricalDistributionFunction(q, sf, n, d, "sf") + + +def _iv_CensoredData( + sample: npt.ArrayLike | CensoredData, param_name: str = 'sample' +) -> CensoredData: + """Attempt to convert `sample` to `CensoredData`.""" + if not isinstance(sample, CensoredData): + try: # takes care of input standardization/validation + sample = CensoredData(uncensored=sample) + except ValueError as e: + message = str(e).replace('uncensored', param_name) + raise type(e)(message) from e + return sample + + +def ecdf(sample: npt.ArrayLike | CensoredData) -> ECDFResult: + """Empirical cumulative distribution function of a sample. + + The empirical cumulative distribution function (ECDF) is a step function + estimate of the CDF of the distribution underlying a sample. This function + returns objects representing both the empirical distribution function and + its complement, the empirical survival function. + + Parameters + ---------- + sample : 1D array_like or `scipy.stats.CensoredData` + Besides array_like, instances of `scipy.stats.CensoredData` containing + uncensored and right-censored observations are supported. Currently, + other instances of `scipy.stats.CensoredData` will result in a + ``NotImplementedError``. + + Returns + ------- + res : `~scipy.stats._result_classes.ECDFResult` + An object with the following attributes. 
+ + cdf : `~scipy.stats._result_classes.EmpiricalDistributionFunction` + An object representing the empirical cumulative distribution + function. + sf : `~scipy.stats._result_classes.EmpiricalDistributionFunction` + An object representing the empirical survival function. + + The `cdf` and `sf` attributes themselves have the following attributes. + + quantiles : ndarray + The unique values in the sample that define the empirical CDF/SF. + probabilities : ndarray + The point estimates of the probabilities corresponding with + `quantiles`. + + And the following methods: + + evaluate(x) : + Evaluate the CDF/SF at the argument. + + plot(ax) : + Plot the CDF/SF on the provided axes. + + confidence_interval(confidence_level=0.95) : + Compute the confidence interval around the CDF/SF at the values in + `quantiles`. + + Notes + ----- + When each observation of the sample is a precise measurement, the ECDF + steps up by ``1/len(sample)`` at each of the observations [1]_. + + When observations are lower bounds, upper bounds, or both upper and lower + bounds, the data is said to be "censored", and `sample` may be provided as + an instance of `scipy.stats.CensoredData`. + + For right-censored data, the ECDF is given by the Kaplan-Meier estimator + [2]_; other forms of censoring are not supported at this time. + + Confidence intervals are computed according to the Greenwood formula or the + more recent "Exponential Greenwood" formula as described in [4]_. + + References + ---------- + .. [1] Conover, William Jay. Practical nonparametric statistics. Vol. 350. + John Wiley & Sons, 1999. + + .. [2] Kaplan, Edward L., and Paul Meier. "Nonparametric estimation from + incomplete observations." Journal of the American statistical + association 53.282 (1958): 457-481. + + .. [3] Goel, Manish Kumar, Pardeep Khanna, and Jugal Kishore. + "Understanding survival analysis: Kaplan-Meier estimate." + International journal of Ayurveda research 1.4 (2010): 274. + + .. [4] Sawyer, Stanley. "The Greenwood and Exponential Greenwood Confidence + Intervals in Survival Analysis." + https://www.math.wustl.edu/~sawyer/handouts/greenwood.pdf + + Examples + -------- + **Uncensored Data** + + As in the example from [1]_ page 79, five boys were selected at random from + those in a single high school. Their one-mile run times were recorded as + follows. + + >>> sample = [6.23, 5.58, 7.06, 6.42, 5.20] # one-mile run times (minutes) + + The empirical distribution function, which approximates the distribution + function of one-mile run times of the population from which the boys were + sampled, is calculated as follows. + + >>> from scipy import stats + >>> res = stats.ecdf(sample) + >>> res.cdf.quantiles + array([5.2 , 5.58, 6.23, 6.42, 7.06]) + >>> res.cdf.probabilities + array([0.2, 0.4, 0.6, 0.8, 1. ]) + + To plot the result as a step function: + + >>> import matplotlib.pyplot as plt + >>> ax = plt.subplot() + >>> res.cdf.plot(ax) + >>> ax.set_xlabel('One-Mile Run Time (minutes)') + >>> ax.set_ylabel('Empirical CDF') + >>> plt.show() + + **Right-censored Data** + + As in the example from [1]_ page 91, the lives of ten car fanbelts were + tested. Five tests concluded because the fanbelt being tested broke, but + the remaining tests concluded for other reasons (e.g. the study ran out of + funding, but the fanbelt was still functional). The mileage driven + with the fanbelts was recorded as follows.
+ + >>> broken = [77, 47, 81, 56, 80] # in thousands of miles driven + >>> unbroken = [62, 60, 43, 71, 37] + + Precise survival times of the fanbelts that were still functional at the + end of the tests are unknown, but they are known to exceed the values + recorded in ``unbroken``. Therefore, these observations are said to be + "right-censored", and the data is represented using + `scipy.stats.CensoredData`. + + >>> sample = stats.CensoredData(uncensored=broken, right=unbroken) + + The empirical survival function is calculated as follows. + + >>> res = stats.ecdf(sample) + >>> res.sf.quantiles + array([37., 43., 47., 56., 60., 62., 71., 77., 80., 81.]) + >>> res.sf.probabilities + array([1. , 1. , 0.875, 0.75 , 0.75 , 0.75 , 0.75 , 0.5 , 0.25 , 0. ]) + + To plot the result as a step function: + + >>> ax = plt.subplot() + >>> res.cdf.plot(ax) + >>> ax.set_xlabel('Fanbelt Survival Time (thousands of miles)') + >>> ax.set_ylabel('Empirical SF') + >>> plt.show() + + """ + sample = _iv_CensoredData(sample) + + if sample.num_censored() == 0: + res = _ecdf_uncensored(sample._uncensor()) + elif sample.num_censored() == sample._right.size: + res = _ecdf_right_censored(sample) + else: + # Support additional censoring options in follow-up PRs + message = ("Currently, only uncensored and right-censored data is " + "supported.") + raise NotImplementedError(message) + + t, cdf, sf, n, d = res + return ECDFResult(t, cdf, sf, n, d) + + +def _ecdf_uncensored(sample): + sample = np.sort(sample) + x, counts = np.unique(sample, return_counts=True) + + # [1].81 "the fraction of [observations] that are less than or equal to x + events = np.cumsum(counts) + n = sample.size + cdf = events / n + + # [1].89 "the relative frequency of the sample that exceeds x in value" + sf = 1 - cdf + + at_risk = np.concatenate(([n], n - events[:-1])) + return x, cdf, sf, at_risk, counts + + +def _ecdf_right_censored(sample): + # It is conventional to discuss right-censored data in terms of + # "survival time", "death", and "loss" (e.g. [2]). We'll use that + # terminology here. + # This implementation was influenced by the references cited and also + # https://www.youtube.com/watch?v=lxoWsVco_iM + # https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator + # In retrospect it is probably most easily compared against [3]. + # Ultimately, the data needs to be sorted, so this implementation is + # written to avoid a separate call to `unique` after sorting. In hope of + # better performance on large datasets, it also computes survival + # probabilities at unique times only rather than at each observation. 
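+ # Sketch of the estimator computed below: with d_j "deaths" among n_j + # subjects at risk at the j-th unique event time, the Kaplan-Meier + # survival estimate is S(t) = prod_{j : t_j <= t} (1 - d_j / n_j).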
+ tod = sample._uncensored # time of "death" + tol = sample._right # time of "loss" + times = np.concatenate((tod, tol)) + died = np.asarray([1]*tod.size + [0]*tol.size) + + # sort by times + i = np.argsort(times) + times = times[i] + died = died[i] + at_risk = np.arange(times.size, 0, -1) + + # logical indices of unique times + j = np.diff(times, prepend=-np.inf, append=np.inf) > 0 + j_l = j[:-1] # first instances of unique times + j_r = j[1:] # last instances of unique times + + # get number at risk and deaths at each unique time + t = times[j_l] # unique times + n = at_risk[j_l] # number at risk at each unique time + cd = np.cumsum(died)[j_r] # cumulative deaths up to/including unique times + d = np.diff(cd, prepend=0) # deaths at each unique time + + # compute survival function + sf = np.cumprod((n - d) / n) + cdf = 1 - sf + return t, cdf, sf, n, d + + +@dataclass +class LogRankResult: + """Result object returned by `scipy.stats.logrank`. + + Attributes + ---------- + statistic : float ndarray + The computed statistic (defined below). Its magnitude is the + square root of the magnitude returned by most other logrank test + implementations. + pvalue : float ndarray + The computed p-value of the test. + """ + statistic: np.ndarray + pvalue: np.ndarray + + +def logrank( + x: npt.ArrayLike | CensoredData, + y: npt.ArrayLike | CensoredData, + alternative: Literal['two-sided', 'less', 'greater'] = "two-sided" +) -> LogRankResult: + r"""Compare the survival distributions of two samples via the logrank test. + + Parameters + ---------- + x, y : array_like or CensoredData + Samples to compare based on their empirical survival functions. + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + + The null hypothesis is that the survival distributions of the two + groups, say *X* and *Y*, are identical. + + The following alternative hypotheses [4]_ are available (default is + 'two-sided'): + + * 'two-sided': the survival distributions of the two groups are not + identical. + * 'less': survival of group *X* is favored: the group *X* failure rate + function is less than the group *Y* failure rate function at some + times. + * 'greater': survival of group *Y* is favored: the group *X* failure + rate function is greater than the group *Y* failure rate function at + some times. + + Returns + ------- + res : `~scipy.stats._result_classes.LogRankResult` + An object containing attributes: + + statistic : float ndarray + The computed statistic (defined below). Its magnitude is the + square root of the magnitude returned by most other logrank test + implementations. + pvalue : float ndarray + The computed p-value of the test. + + See Also + -------- + scipy.stats.ecdf + + Notes + ----- + The logrank test [1]_ compares the observed number of events to + the expected number of events under the null hypothesis that the two + samples were drawn from the same distribution. The statistic is + + .. math:: + + Z_i = \frac{\sum_{j=1}^J(O_{i,j}-E_{i,j})}{\sqrt{\sum_{j=1}^J V_{i,j}}} + \rightarrow \mathcal{N}(0,1) + + where + + .. math:: + + E_{i,j} = O_j \frac{N_{i,j}}{N_j}, + \qquad + V_{i,j} = E_{i,j} \left(\frac{N_j-O_j}{N_j}\right) + \left(\frac{N_j-N_{i,j}}{N_j-1}\right), + + :math:`i` denotes the group (i.e. 
it may assume values :math:`x` or + :math:`y`, or it may be omitted to refer to the combined sample) + :math:`j` denotes the time (at which an event occurred), + :math:`N` is the number of subjects at risk just before an event occurred, + and :math:`O` is the observed number of events at that time. + + The ``statistic`` :math:`Z_x` returned by `logrank` is the (signed) square + root of the statistic returned by many other implementations. Under the + null hypothesis, :math:`Z_x**2` is asymptotically distributed according to + the chi-squared distribution with one degree of freedom. Consequently, + :math:`Z_x` is asymptotically distributed according to the standard normal + distribution. The advantage of using :math:`Z_x` is that the sign + information (i.e. whether the observed number of events tends to be less + than or greater than the number expected under the null hypothesis) is + preserved, allowing `scipy.stats.logrank` to offer one-sided alternative + hypotheses. + + References + ---------- + .. [1] Mantel N. "Evaluation of survival data and two new rank order + statistics arising in its consideration." + Cancer Chemotherapy Reports, 50(3):163-170, PMID: 5910392, 1966 + .. [2] Bland, Altman, "The logrank test", BMJ, 328:1073, + :doi:`10.1136/bmj.328.7447.1073`, 2004 + .. [3] "Logrank test", Wikipedia, + https://en.wikipedia.org/wiki/Logrank_test + .. [4] Brown, Mark. "On the choice of variance for the log rank test." + Biometrika 71.1 (1984): 65-74. + .. [5] Klein, John P., and Melvin L. Moeschberger. Survival analysis: + techniques for censored and truncated data. Vol. 1230. New York: + Springer, 2003. + + Examples + -------- + Reference [2]_ compared the survival times of patients with two different + types of recurrent malignant gliomas. The samples below record the time + (number of weeks) for which each patient participated in the study. The + `scipy.stats.CensoredData` class is used because the data is + right-censored: the uncensored observations correspond with observed deaths + whereas the censored observations correspond with the patient leaving the + study for another reason. + + >>> from scipy import stats + >>> x = stats.CensoredData( + ... uncensored=[6, 13, 21, 30, 37, 38, 49, 50, + ... 63, 79, 86, 98, 202, 219], + ... right=[31, 47, 80, 82, 82, 149] + ... ) + >>> y = stats.CensoredData( + ... uncensored=[10, 10, 12, 13, 14, 15, 16, 17, 18, 20, 24, 24, + ... 25, 28,30, 33, 35, 37, 40, 40, 46, 48, 76, 81, + ... 82, 91, 112, 181], + ... right=[34, 40, 70] + ... ) + + We can calculate and visualize the empirical survival functions + of both groups as follows. + + >>> import numpy as np + >>> import matplotlib.pyplot as plt + >>> ax = plt.subplot() + >>> ecdf_x = stats.ecdf(x) + >>> ecdf_x.sf.plot(ax, label='Astrocytoma') + >>> ecdf_y = stats.ecdf(y) + >>> ecdf_y.sf.plot(ax, label='Glioblastoma') + >>> ax.set_xlabel('Time to death (weeks)') + >>> ax.set_ylabel('Empirical SF') + >>> plt.legend() + >>> plt.show() + + Visual inspection of the empirical survival functions suggests that the + survival times tend to be different between the two groups. To formally + assess whether the difference is significant at the 1% level, we use the + logrank test. + + >>> res = stats.logrank(x=x, y=y) + >>> res.statistic + -2.73799... + >>> res.pvalue + 0.00618... + + The p-value is less than 1%, so we can consider the data to be evidence + against the null hypothesis in favor of the alternative that there is a + difference between the two survival functions. + + """ + # Input validation. 
`alternative` IV handled in `_get_pvalue` below. + x = _iv_CensoredData(sample=x, param_name='x') + y = _iv_CensoredData(sample=y, param_name='y') + + # Combined sample. (Under H0, the two groups are identical.) + xy = CensoredData( + uncensored=np.concatenate((x._uncensored, y._uncensored)), + right=np.concatenate((x._right, y._right)) + ) + + # Extract data from the combined sample + res = ecdf(xy) + idx = res.sf._d.astype(bool) # indices of observed events + times_xy = res.sf.quantiles[idx] # unique times of observed events + at_risk_xy = res.sf._n[idx] # combined number of subjects at risk + deaths_xy = res.sf._d[idx] # combined number of events + + # Get the number at risk within each sample. + # First compute the number at risk in group X at each of the `times_xy`. + # Could use `interpolate_1d`, but this is more compact. + res_x = ecdf(x) + i = np.searchsorted(res_x.sf.quantiles, times_xy) + at_risk_x = np.append(res_x.sf._n, 0)[i] # 0 at risk after last time + # Subtract from the combined number at risk to get number at risk in Y + at_risk_y = at_risk_xy - at_risk_x + + # Compute the variance. + num = at_risk_x * at_risk_y * deaths_xy * (at_risk_xy - deaths_xy) + den = at_risk_xy**2 * (at_risk_xy - 1) + # Note: when `at_risk_xy == 1`, we would have `at_risk_xy - 1 == 0` in the + # numerator and denominator. Simplifying the fraction symbolically, we + # would always find the overall quotient to be zero, so don't compute it. + i = at_risk_xy > 1 + sum_var = np.sum(num[i]/den[i]) + + # Get the observed and expected number of deaths in group X + n_died_x = x._uncensored.size + sum_exp_deaths_x = np.sum(at_risk_x * (deaths_xy/at_risk_xy)) + + # Compute the statistic. This is the square root of that in references. + statistic = (n_died_x - sum_exp_deaths_x)/np.sqrt(sum_var) + + # Equivalent to chi2(df=1).sf(statistic**2) when alternative='two-sided' + pvalue = stats._stats_py._get_pvalue(statistic, norm, alternative) + + return LogRankResult(statistic=statistic[()], pvalue=pvalue[()]) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_variation.py b/venv/lib/python3.10/site-packages/scipy/stats/_variation.py new file mode 100644 index 0000000000000000000000000000000000000000..b51cb856e213af829eeee03da7ace7e41c23b5c0 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_variation.py @@ -0,0 +1,121 @@ +import numpy as np +from scipy._lib._util import _get_nan +from ._axis_nan_policy import _axis_nan_policy_factory + + +@_axis_nan_policy_factory( + lambda x: x, n_outputs=1, result_to_tuple=lambda x: (x,) +) +def variation(a, axis=0, nan_policy='propagate', ddof=0, *, keepdims=False): + """ + Compute the coefficient of variation. + + The coefficient of variation is the standard deviation divided by the + mean. This function is equivalent to:: + + np.std(x, axis=axis, ddof=ddof) / np.mean(x) + + The default for ``ddof`` is 0, but many definitions of the coefficient + of variation use the square root of the unbiased sample variance + for the sample standard deviation, which corresponds to ``ddof=1``. + + The function does not take the absolute value of the mean of the data, + so the return value is negative if the mean is negative. + + Parameters + ---------- + a : array_like + Input array. + axis : int or None, optional + Axis along which to calculate the coefficient of variation. + Default is 0. If None, compute over the whole array `a`. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains ``nan``. 
+ The following options are available: + + * 'propagate': return ``nan`` + * 'raise': raise an exception + * 'omit': perform the calculation with ``nan`` values omitted + + The default is 'propagate'. + ddof : int, optional + Gives the "Delta Degrees Of Freedom" used when computing the + standard deviation. The divisor used in the calculation of the + standard deviation is ``N - ddof``, where ``N`` is the number of + elements. `ddof` must be less than ``N``; if it isn't, the result + will be ``nan`` or ``inf``, depending on ``N`` and the values in + the array. By default `ddof` is zero for backwards compatibility, + but it is recommended to use ``ddof=1`` to ensure that the sample + standard deviation is computed as the square root of the unbiased + sample variance. + + Returns + ------- + variation : ndarray + The calculated variation along the requested axis. + + Notes + ----- + There are several edge cases that are handled without generating a + warning: + + * If both the mean and the standard deviation are zero, ``nan`` + is returned. + * If the mean is zero and the standard deviation is nonzero, ``inf`` + is returned. + * If the input has length zero (either because the array has zero + length, or all the input values are ``nan`` and ``nan_policy`` is + ``'omit'``), ``nan`` is returned. + * If the input contains ``inf``, ``nan`` is returned. + + References + ---------- + .. [1] Zwillinger, D. and Kokoska, S. (2000). CRC Standard + Probability and Statistics Tables and Formulae. Chapman & Hall: New + York. 2000. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats import variation + >>> variation([1, 2, 3, 4, 5], ddof=1) + 0.5270462766947299 + + Compute the variation along a given dimension of an array that contains + a few ``nan`` values: + + >>> x = np.array([[ 10.0, np.nan, 11.0, 19.0, 23.0, 29.0, 98.0], + ... [ 29.0, 30.0, 32.0, 33.0, 35.0, 56.0, 57.0], + ... [np.nan, np.nan, 12.0, 13.0, 16.0, 16.0, 17.0]]) + >>> variation(x, axis=1, ddof=1, nan_policy='omit') + array([1.05109361, 0.31428986, 0.146483 ]) + + """ + # `nan_policy` and `keepdims` are handled by `_axis_nan_policy` + n = a.shape[axis] + NaN = _get_nan(a) + + if a.size == 0 or ddof > n: + # Handle as a special case to avoid spurious warnings. + # The return values, if any, are all nan. + shp = np.asarray(a.shape) + shp = np.delete(shp, axis) + result = np.full(shp, fill_value=NaN) + return result[()] + + mean_a = a.mean(axis) + + if ddof == n: + # Another special case. Result is either inf or nan. 
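+ # With ddof == n, the divisor N - ddof is zero: the result is +/-inf + # (taking the sign of the mean) wherever the data has nonzero spread and + # nan wherever it is constant; std with ddof=0 is computed only to detect + # which positions have nonzero spread.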
+ std_a = a.std(axis=axis, ddof=0) + result = np.full_like(std_a, fill_value=NaN) + i = std_a > 0 + result[i] = np.inf + result[i] = np.copysign(result[i], mean_a[i]) + return result[()] + + with np.errstate(divide='ignore', invalid='ignore'): + std_a = a.std(axis, ddof=ddof) + result = std_a / mean_a + + return result[()] diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_warnings_errors.py b/venv/lib/python3.10/site-packages/scipy/stats/_warnings_errors.py new file mode 100644 index 0000000000000000000000000000000000000000..38385b862c9d642b41af8d74279f98c6a427208a --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_warnings_errors.py @@ -0,0 +1,38 @@ +# Warnings + + +class DegenerateDataWarning(RuntimeWarning): + """Warns when data is degenerate and results may not be reliable.""" + def __init__(self, msg=None): + if msg is None: + msg = ("Degenerate data encountered; results may not be reliable.") + self.args = (msg,) + + +class ConstantInputWarning(DegenerateDataWarning): + """Warns when all values in data are exactly equal.""" + def __init__(self, msg=None): + if msg is None: + msg = ("All values in data are exactly equal; " + "results may not be reliable.") + self.args = (msg,) + + +class NearConstantInputWarning(DegenerateDataWarning): + """Warns when all values in data are nearly equal.""" + def __init__(self, msg=None): + if msg is None: + msg = ("All values in data are nearly equal; " + "results may not be reliable.") + self.args = (msg,) + + +# Errors + + +class FitError(RuntimeError): + """Represents an error condition when fitting a distribution to data.""" + def __init__(self, msg=None): + if msg is None: + msg = ("An error occurred when fitting a distribution to data.") + self.args = (msg,) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/_wilcoxon.py b/venv/lib/python3.10/site-packages/scipy/stats/_wilcoxon.py new file mode 100644 index 0000000000000000000000000000000000000000..555496461c1c63fb8b5039cbdfc9670c1b96b9a7 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/_wilcoxon.py @@ -0,0 +1,237 @@ +import warnings +import numpy as np + +from scipy import stats +from ._stats_py import _get_pvalue, _rankdata +from . import _morestats +from ._axis_nan_policy import _broadcast_arrays +from ._hypotests import _get_wilcoxon_distr +from scipy._lib._util import _lazywhere, _get_nan + + +class WilcoxonDistribution: + + def __init__(self, n): + n = np.asarray(n).astype(int, copy=False) + self.n = n + self._dists = {ni: _get_wilcoxon_distr(ni) for ni in np.unique(n)} + + def _cdf1(self, k, n): + pmfs = self._dists[n] + return pmfs[:k + 1].sum() + + def _cdf(self, k, n): + return np.vectorize(self._cdf1, otypes=[float])(k, n) + + def _sf1(self, k, n): + pmfs = self._dists[n] + return pmfs[k:].sum() + + def _sf(self, k, n): + return np.vectorize(self._sf1, otypes=[float])(k, n) + + def mean(self): + return self.n * (self.n + 1) / 4 + + def _prep(self, k): + k = np.asarray(k).astype(int, copy=False) + mn = self.mean() + out = np.empty(k.shape, dtype=np.float64) + return k, mn, out + + def cdf(self, k): + k, mn, out = self._prep(k) + return _lazywhere(k <= mn, (k, self.n), self._cdf, + f2=lambda k, n: 1 - self._sf(k+1, n))[()] + + def sf(self, k): + k, mn, out = self._prep(k) + return _lazywhere(k <= mn, (k, self.n), self._sf, + f2=lambda k, n: 1 - self._cdf(k-1, n))[()] + + +def _wilcoxon_iv(x, y, zero_method, correction, alternative, method, axis): + + axis = np.asarray(axis)[()] + message = "`axis` must be an integer." 
+ if not np.issubdtype(axis.dtype, np.integer) or axis.ndim != 0: + raise ValueError(message) + + message = '`axis` must be compatible with the shape(s) of `x` (and `y`)' + try: + if y is None: + x = np.asarray(x) + d = x + else: + x, y = _broadcast_arrays((x, y), axis=axis) + d = x - y + d = np.moveaxis(d, axis, -1) + except np.AxisError as e: + raise ValueError(message) from e + + message = "`x` and `y` must have the same length along `axis`." + if y is not None and x.shape[axis] != y.shape[axis]: + raise ValueError(message) + + message = "`x` (and `y`, if provided) must be an array of real numbers." + if np.issubdtype(d.dtype, np.integer): + d = d.astype(np.float64) + if not np.issubdtype(d.dtype, np.floating): + raise ValueError(message) + + zero_method = str(zero_method).lower() + zero_methods = {"wilcox", "pratt", "zsplit"} + message = f"`zero_method` must be one of {zero_methods}." + if zero_method not in zero_methods: + raise ValueError(message) + + corrections = {True, False} + message = f"`correction` must be one of {corrections}." + if correction not in corrections: + raise ValueError(message) + + alternative = str(alternative).lower() + alternatives = {"two-sided", "less", "greater"} + message = f"`alternative` must be one of {alternatives}." + if alternative not in alternatives: + raise ValueError(message) + + if not isinstance(method, stats.PermutationMethod): + methods = {"auto", "approx", "exact"} + message = (f"`method` must be one of {methods} or " + "an instance of `stats.PermutationMethod`.") + if method not in methods: + raise ValueError(message) + + # logic unchanged here for backward compatibility + n_zero = np.sum(d == 0, axis=-1) + has_zeros = np.any(n_zero > 0) + if method == "auto": + if d.shape[-1] <= 50 and not has_zeros: + method = "exact" + else: + method = "approx" + + n_zero = np.sum(d == 0) + if n_zero > 0 and method == "exact": + method = "approx" + warnings.warn("Exact p-value calculation does not work if there are " + "zeros. Switching to normal approximation.", + stacklevel=2) + + if (method == "approx" and zero_method in ["wilcox", "pratt"] + and n_zero == d.size and d.size > 0 and d.ndim == 1): + raise ValueError("zero_method 'wilcox' and 'pratt' do not " + "work if x - y is zero for all elements.") + + if 0 < d.shape[-1] < 10 and method == "approx": + warnings.warn("Sample size too small for normal approximation.", stacklevel=2) + + return d, zero_method, correction, alternative, method, axis + + +def _wilcoxon_statistic(d, zero_method='wilcox'): + + i_zeros = (d == 0) + + if zero_method == 'wilcox': + # Wilcoxon's method for treating zeros was to remove them from + # the calculation. We do this by replacing 0s with NaNs, which + # are ignored anyway. + if not d.flags['WRITEABLE']: + d = d.copy() + d[i_zeros] = np.nan + + i_nan = np.isnan(d) + n_nan = np.sum(i_nan, axis=-1) + count = d.shape[-1] - n_nan + + r, t = _rankdata(abs(d), 'average', return_ties=True) + + r_plus = np.sum((d > 0) * r, axis=-1) + r_minus = np.sum((d < 0) * r, axis=-1) + + if zero_method == "zsplit": + # The "zero-split" method for treating zeros is to add half their contribution + # to r_plus and half to r_minus. + # See gh-2263 for the origin of this method. + r_zero_2 = np.sum(i_zeros * r, axis=-1) / 2 + r_plus += r_zero_2 + r_minus += r_zero_2 + + mn = count * (count + 1.) * 0.25 + se = count * (count + 1.) * (2. * count + 1.) + + if zero_method == "pratt": + # Pratt's method for treating zeros was just to modify the z-statistic. 
+ + # normal approximation needs to be adjusted, see Cureton (1967) + n_zero = i_zeros.sum(axis=-1) + mn -= n_zero * (n_zero + 1.) * 0.25 + se -= n_zero * (n_zero + 1.) * (2. * n_zero + 1.) + + # zeros are not to be included in tie-correction. + # any tie counts corresponding with zeros are in the 0th column + t[i_zeros.any(axis=-1), 0] = 0 + + tie_correct = (t**3 - t).sum(axis=-1) + se -= tie_correct/2 + se = np.sqrt(se / 24) + + z = (r_plus - mn) / se + + return r_plus, r_minus, se, z, count + + +def _correction_sign(z, alternative): + if alternative == 'greater': + return 1 + elif alternative == 'less': + return -1 + else: + return np.sign(z) + + +def _wilcoxon_nd(x, y=None, zero_method='wilcox', correction=True, + alternative='two-sided', method='auto', axis=0): + + temp = _wilcoxon_iv(x, y, zero_method, correction, alternative, method, axis) + d, zero_method, correction, alternative, method, axis = temp + + if d.size == 0: + NaN = _get_nan(d) + res = _morestats.WilcoxonResult(statistic=NaN, pvalue=NaN) + if method == 'approx': + res.zstatistic = NaN + return res + + r_plus, r_minus, se, z, count = _wilcoxon_statistic(d, zero_method) + + if method == 'approx': + if correction: + sign = _correction_sign(z, alternative) + z -= sign * 0.5 / se + p = _get_pvalue(z, stats.norm, alternative) + elif method == 'exact': + dist = WilcoxonDistribution(count) + if alternative == 'less': + p = dist.cdf(r_plus) + elif alternative == 'greater': + p = dist.sf(r_plus) + else: + p = 2 * np.minimum(dist.sf(r_plus), dist.cdf(r_plus)) + p = np.clip(p, 0, 1) + else: # `PermutationMethod` instance (already validated) + p = stats.permutation_test( + (d,), lambda d: _wilcoxon_statistic(d, zero_method)[0], + permutation_type='samples', **method._asdict(), + alternative=alternative, axis=-1).pvalue + + # for backward compatibility... + statistic = np.minimum(r_plus, r_minus) if alternative=='two-sided' else r_plus + z = -np.abs(z) if (alternative == 'two-sided' and method == 'approx') else z + + res = _morestats.WilcoxonResult(statistic=statistic, pvalue=p[()]) + if method == 'approx': + res.zstatistic = z[()] + return res diff --git a/venv/lib/python3.10/site-packages/scipy/stats/biasedurn.py b/venv/lib/python3.10/site-packages/scipy/stats/biasedurn.py new file mode 100644 index 0000000000000000000000000000000000000000..f5e1cd5c84897ed9e65db1cf20d3281479d07a1f --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/biasedurn.py @@ -0,0 +1,20 @@ +# This file is not meant for public use and will be removed in SciPy v2.0.0. + +from scipy._lib.deprecation import _sub_module_deprecation + + +__all__ = [ # noqa: F822 + '_PyFishersNCHypergeometric', + '_PyWalleniusNCHypergeometric', + '_PyStochasticLib3' +] + + +def __dir__(): + return __all__ + + +def __getattr__(name): + return _sub_module_deprecation(sub_package="stats", module="biasedurn", + private_modules=["_biasedurn"], all=__all__, + attribute=name) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/contingency.py b/venv/lib/python3.10/site-packages/scipy/stats/contingency.py new file mode 100644 index 0000000000000000000000000000000000000000..399322475b08953a26a23135d0244d87890467bc --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/contingency.py @@ -0,0 +1,468 @@ +""" +Contingency table functions (:mod:`scipy.stats.contingency`) +============================================================ + +Functions for creating and analyzing contingency tables. + +.. currentmodule:: scipy.stats.contingency + +.. 
autosummary:: + :toctree: generated/ + + chi2_contingency + relative_risk + odds_ratio + crosstab + association + + expected_freq + margins + +""" + + +from functools import reduce +import math +import numpy as np +from ._stats_py import power_divergence +from ._relative_risk import relative_risk +from ._crosstab import crosstab +from ._odds_ratio import odds_ratio +from scipy._lib._bunch import _make_tuple_bunch + + +__all__ = ['margins', 'expected_freq', 'chi2_contingency', 'crosstab', + 'association', 'relative_risk', 'odds_ratio'] + + +def margins(a): + """Return a list of the marginal sums of the array `a`. + + Parameters + ---------- + a : ndarray + The array for which to compute the marginal sums. + + Returns + ------- + margsums : list of ndarrays + A list of length `a.ndim`. `margsums[k]` is the result + of summing `a` over all axes except `k`; it has the same + number of dimensions as `a`, but the length of each axis + except axis `k` will be 1. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats.contingency import margins + + >>> a = np.arange(12).reshape(2, 6) + >>> a + array([[ 0, 1, 2, 3, 4, 5], + [ 6, 7, 8, 9, 10, 11]]) + >>> m0, m1 = margins(a) + >>> m0 + array([[15], + [51]]) + >>> m1 + array([[ 6, 8, 10, 12, 14, 16]]) + + >>> b = np.arange(24).reshape(2,3,4) + >>> m0, m1, m2 = margins(b) + >>> m0 + array([[[ 66]], + [[210]]]) + >>> m1 + array([[[ 60], + [ 92], + [124]]]) + >>> m2 + array([[[60, 66, 72, 78]]]) + """ + margsums = [] + ranged = list(range(a.ndim)) + for k in ranged: + marg = np.apply_over_axes(np.sum, a, [j for j in ranged if j != k]) + margsums.append(marg) + return margsums + + +def expected_freq(observed): + """ + Compute the expected frequencies from a contingency table. + + Given an n-dimensional contingency table of observed frequencies, + compute the expected frequencies for the table based on the marginal + sums under the assumption that the groups associated with each + dimension are independent. + + Parameters + ---------- + observed : array_like + The table of observed frequencies. (While this function can handle + a 1-D array, that case is trivial. Generally `observed` is at + least 2-D.) + + Returns + ------- + expected : ndarray of float64 + The expected frequencies, based on the marginal sums of the table. + Same shape as `observed`. + + Examples + -------- + >>> import numpy as np + >>> from scipy.stats.contingency import expected_freq + >>> observed = np.array([[10, 10, 20],[20, 20, 20]]) + >>> expected_freq(observed) + array([[ 12., 12., 16.], + [ 18., 18., 24.]]) + + """ + # Typically `observed` is an integer array. If `observed` has a large + # number of dimensions or holds large values, some of the following + # computations may overflow, so we first switch to floating point. + observed = np.asarray(observed, dtype=np.float64) + + # Create a list of the marginal sums. + margsums = margins(observed) + + # Create the array of expected frequencies. The shapes of the + # marginal sums returned by apply_over_axes() are just what we + # need for broadcasting in the following product. + d = observed.ndim + expected = reduce(np.multiply, margsums) / observed.sum() ** (d - 1) + return expected + + +Chi2ContingencyResult = _make_tuple_bunch( + 'Chi2ContingencyResult', + ['statistic', 'pvalue', 'dof', 'expected_freq'], [] +) + + +def chi2_contingency(observed, correction=True, lambda_=None): + """Chi-square test of independence of variables in a contingency table. 
+ + This function computes the chi-square statistic and p-value for the + hypothesis test of independence of the observed frequencies in the + contingency table [1]_ `observed`. The expected frequencies are computed + based on the marginal sums under the assumption of independence; see + `scipy.stats.contingency.expected_freq`. The number of degrees of + freedom is (expressed using numpy functions and attributes):: + + dof = observed.size - sum(observed.shape) + observed.ndim - 1 + + + Parameters + ---------- + observed : array_like + The contingency table. The table contains the observed frequencies + (i.e. number of occurrences) in each category. In the two-dimensional + case, the table is often described as an "R x C table". + correction : bool, optional + If True, *and* the degrees of freedom is 1, apply Yates' correction + for continuity. The effect of the correction is to adjust each + observed value by 0.5 towards the corresponding expected value. + lambda_ : float or str, optional + By default, the statistic computed in this test is Pearson's + chi-squared statistic [2]_. `lambda_` allows a statistic from the + Cressie-Read power divergence family [3]_ to be used instead. See + `scipy.stats.power_divergence` for details. + + Returns + ------- + res : Chi2ContingencyResult + An object containing attributes: + + statistic : float + The test statistic. + pvalue : float + The p-value of the test. + dof : int + The degrees of freedom. + expected_freq : ndarray, same shape as `observed` + The expected frequencies, based on the marginal sums of the table. + + See Also + -------- + scipy.stats.contingency.expected_freq + scipy.stats.fisher_exact + scipy.stats.chisquare + scipy.stats.power_divergence + scipy.stats.barnard_exact + scipy.stats.boschloo_exact + + Notes + ----- + An often quoted guideline for the validity of this calculation is that + the test should be used only if the observed and expected frequencies + in each cell are at least 5. + + This is a test for the independence of different categories of a + population. The test is only meaningful when the dimension of + `observed` is two or more. Applying the test to a one-dimensional + table will always result in `expected` equal to `observed` and a + chi-square statistic equal to 0. + + This function does not handle masked arrays, because the calculation + does not make sense with missing values. + + Like `scipy.stats.chisquare`, this function computes a chi-square + statistic; the convenience this function provides is to figure out the + expected frequencies and degrees of freedom from the given contingency + table. If these were already known, and if the Yates' correction was not + required, one could use `scipy.stats.chisquare`. That is, if one calls:: + + res = chi2_contingency(obs, correction=False) + + then the following is true:: + + (res.statistic, res.pvalue) == stats.chisquare(obs.ravel(), + f_exp=ex.ravel(), + ddof=obs.size - 1 - dof) + + The `lambda_` argument was added in version 0.13.0 of scipy. + + References + ---------- + .. [1] "Contingency table", + https://en.wikipedia.org/wiki/Contingency_table + .. [2] "Pearson's chi-squared test", + https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test + .. [3] Cressie, N. and Read, T. R. C., "Multinomial Goodness-of-Fit + Tests", J. Royal Stat. Soc. Series B, Vol. 46, No. 3 (1984), + pp. 440-464. + .. [4] Berger, Jeffrey S. et al. 
"Aspirin for the Primary Prevention of + Cardiovascular Events in Women and Men: A Sex-Specific + Meta-analysis of Randomized Controlled Trials." + JAMA, 295(3):306-313, :doi:`10.1001/jama.295.3.306`, 2006. + + Examples + -------- + In [4]_, the use of aspirin to prevent cardiovascular events in women + and men was investigated. The study notably concluded: + + ...aspirin therapy reduced the risk of a composite of + cardiovascular events due to its effect on reducing the risk of + ischemic stroke in women [...] + + The article lists studies of various cardiovascular events. Let's + focus on the ischemic stoke in women. + + The following table summarizes the results of the experiment in which + participants took aspirin or a placebo on a regular basis for several + years. Cases of ischemic stroke were recorded:: + + Aspirin Control/Placebo + Ischemic stroke 176 230 + No stroke 21035 21018 + + Is there evidence that the aspirin reduces the risk of ischemic stroke? + We begin by formulating a null hypothesis :math:`H_0`: + + The effect of aspirin is equivalent to that of placebo. + + Let's assess the plausibility of this hypothesis with + a chi-square test. + + >>> import numpy as np + >>> from scipy.stats import chi2_contingency + >>> table = np.array([[176, 230], [21035, 21018]]) + >>> res = chi2_contingency(table) + >>> res.statistic + 6.892569132546561 + >>> res.pvalue + 0.008655478161175739 + + Using a significance level of 5%, we would reject the null hypothesis in + favor of the alternative hypothesis: "the effect of aspirin + is not equivalent to the effect of placebo". + Because `scipy.stats.contingency.chi2_contingency` performs a two-sided + test, the alternative hypothesis does not indicate the direction of the + effect. We can use `stats.contingency.odds_ratio` to support the + conclusion that aspirin *reduces* the risk of ischemic stroke. + + Below are further examples showing how larger contingency tables can be + tested. + + A two-way example (2 x 3): + + >>> obs = np.array([[10, 10, 20], [20, 20, 20]]) + >>> res = chi2_contingency(obs) + >>> res.statistic + 2.7777777777777777 + >>> res.pvalue + 0.24935220877729619 + >>> res.dof + 2 + >>> res.expected_freq + array([[ 12., 12., 16.], + [ 18., 18., 24.]]) + + Perform the test using the log-likelihood ratio (i.e. the "G-test") + instead of Pearson's chi-squared statistic. + + >>> res = chi2_contingency(obs, lambda_="log-likelihood") + >>> res.statistic + 2.7688587616781319 + >>> res.pvalue + 0.25046668010954165 + + A four-way example (2 x 2 x 2 x 2): + + >>> obs = np.array( + ... [[[[12, 17], + ... [11, 16]], + ... [[11, 12], + ... [15, 16]]], + ... [[[23, 15], + ... [30, 22]], + ... [[14, 17], + ... [15, 16]]]]) + >>> res = chi2_contingency(obs) + >>> res.statistic + 8.7584514426741897 + >>> res.pvalue + 0.64417725029295503 + """ + observed = np.asarray(observed) + if np.any(observed < 0): + raise ValueError("All values in `observed` must be nonnegative.") + if observed.size == 0: + raise ValueError("No data; `observed` has size 0.") + + expected = expected_freq(observed) + if np.any(expected == 0): + # Include one of the positions where expected is zero in + # the exception message. 
+ zeropos = list(zip(*np.nonzero(expected == 0)))[0] + raise ValueError("The internally computed table of expected " + f"frequencies has a zero element at {zeropos}.") + + # The degrees of freedom + dof = expected.size - sum(expected.shape) + expected.ndim - 1 + + if dof == 0: + # Degenerate case; this occurs when `observed` is 1D (or, more + # generally, when it has only one nontrivial dimension). In this + # case, we also have observed == expected, so chi2 is 0. + chi2 = 0.0 + p = 1.0 + else: + if dof == 1 and correction: + # Adjust `observed` according to Yates' correction for continuity. + # Magnitude of correction no bigger than difference; see gh-13875 + diff = expected - observed + direction = np.sign(diff) + magnitude = np.minimum(0.5, np.abs(diff)) + observed = observed + magnitude * direction + + chi2, p = power_divergence(observed, expected, + ddof=observed.size - 1 - dof, axis=None, + lambda_=lambda_) + + return Chi2ContingencyResult(chi2, p, dof, expected) + + +def association(observed, method="cramer", correction=False, lambda_=None): + """Calculates degree of association between two nominal variables. + + The function provides the option for computing one of three measures of + association between two nominal variables from the data given in a 2d + contingency table: Tschuprow's T, Pearson's Contingency Coefficient + and Cramer's V. + + Parameters + ---------- + observed : array-like + The array of observed values + method : {"cramer", "tschuprow", "pearson"} (default = "cramer") + The association test statistic. + correction : bool, optional + Inherited from `scipy.stats.contingency.chi2_contingency()` + lambda_ : float or str, optional + Inherited from `scipy.stats.contingency.chi2_contingency()` + + Returns + ------- + statistic : float + Value of the test statistic + + Notes + ----- + Cramer's V, Tschuprow's T and Pearson's Contingency Coefficient, all + measure the degree to which two nominal or ordinal variables are related, + or the level of their association. This differs from correlation, although + many often mistakenly consider them equivalent. Correlation measures in + what way two variables are related, whereas, association measures how + related the variables are. As such, association does not subsume + independent variables, and is rather a test of independence. A value of + 1.0 indicates perfect association, and 0.0 means the variables have no + association. + + Both the Cramer's V and Tschuprow's T are extensions of the phi + coefficient. Moreover, due to the close relationship between the + Cramer's V and Tschuprow's T the returned values can often be similar + or even equivalent. They are likely to diverge more as the array shape + diverges from a 2x2. + + References + ---------- + .. [1] "Tschuprow's T", + https://en.wikipedia.org/wiki/Tschuprow's_T + .. [2] Tschuprow, A. A. (1939) + Principles of the Mathematical Theory of Correlation; + translated by M. Kantorowitsch. W. Hodge & Co. + .. [3] "Cramer's V", https://en.wikipedia.org/wiki/Cramer's_V + .. [4] "Nominal Association: Phi and Cramer's V", + http://www.people.vcu.edu/~pdattalo/702SuppRead/MeasAssoc/NominalAssoc.html + .. 
[5] Gingrich, Paul, "Association Between Variables", + http://uregina.ca/~gingrich/ch11a.pdf + + Examples + -------- + An example with a 4x2 contingency table: + + >>> import numpy as np + >>> from scipy.stats.contingency import association + >>> obs4x2 = np.array([[100, 150], [203, 322], [420, 700], [320, 210]]) + + Pearson's contingency coefficient + + >>> association(obs4x2, method="pearson") + 0.18303298140595667 + + Cramer's V + + >>> association(obs4x2, method="cramer") + 0.18617813077483678 + + Tschuprow's T + + >>> association(obs4x2, method="tschuprow") + 0.14146478765062995 + """ + arr = np.asarray(observed) + if not np.issubdtype(arr.dtype, np.integer): + raise ValueError("`observed` must be an integer array.") + + if len(arr.shape) != 2: + raise ValueError("method only accepts 2d arrays") + + chi2_stat = chi2_contingency(arr, correction=correction, + lambda_=lambda_) + + phi2 = chi2_stat.statistic / arr.sum() + n_rows, n_cols = arr.shape + if method == "cramer": + value = phi2 / min(n_cols - 1, n_rows - 1) + elif method == "tschuprow": + value = phi2 / math.sqrt((n_rows - 1) * (n_cols - 1)) + elif method == 'pearson': + value = phi2 / (1 + phi2) + else: + raise ValueError("Invalid argument value: 'method' argument must " + "be 'cramer', 'tschuprow', or 'pearson'") + + return math.sqrt(value) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/distributions.py b/venv/lib/python3.10/site-packages/scipy/stats/distributions.py new file mode 100644 index 0000000000000000000000000000000000000000..ac9c37aa98c9545b2616c8d32e8f676d8d49289e --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/distributions.py @@ -0,0 +1,24 @@ +# +# Author: Travis Oliphant 2002-2011 with contributions from +# SciPy Developers 2004-2011 +# +# NOTE: To look at history using `git blame`, use `git blame -M -C -C` +# instead of `git blame -Lxxx,+x`. +# +from ._distn_infrastructure import (rv_discrete, rv_continuous, rv_frozen) # noqa: F401 + +from . import _continuous_distns +from . import _discrete_distns + +from ._continuous_distns import * # noqa: F403 +from ._levy_stable import levy_stable +from ._discrete_distns import * # noqa: F403 +from ._entropy import entropy + +# For backwards compatibility e.g. pymc expects distributions.__all__. +__all__ = ['rv_discrete', 'rv_continuous', 'rv_histogram', 'entropy'] # noqa: F405 + +# Add only the distribution names, not the *_gen names. +__all__ += _continuous_distns._distn_names +__all__ += ['levy_stable'] +__all__ += _discrete_distns._distn_names diff --git a/venv/lib/python3.10/site-packages/scipy/stats/kde.py b/venv/lib/python3.10/site-packages/scipy/stats/kde.py new file mode 100644 index 0000000000000000000000000000000000000000..08e299b5137c44d70c4db841a0b53060d552d50d --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/kde.py @@ -0,0 +1,23 @@ +# This file is not meant for public use and will be removed in SciPy v2.0.0. +# Use the `scipy.stats` namespace for importing the functions +# included below. 
+ +from scipy._lib.deprecation import _sub_module_deprecation + + +__all__ = [ # noqa: F822 + 'gaussian_kde', 'linalg', 'logsumexp', 'check_random_state', + 'atleast_2d', 'reshape', 'newaxis', 'exp', 'ravel', 'power', + 'atleast_1d', 'squeeze', 'sum', 'transpose', 'cov', + 'gaussian_kernel_estimate' +] + + +def __dir__(): + return __all__ + + +def __getattr__(name): + return _sub_module_deprecation(sub_package="stats", module="kde", + private_modules=["_kde"], all=__all__, + attribute=name) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/mstats_extras.py b/venv/lib/python3.10/site-packages/scipy/stats/mstats_extras.py new file mode 100644 index 0000000000000000000000000000000000000000..01a19f22b257d537762838f86dcfa10146c9a4a5 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/mstats_extras.py @@ -0,0 +1,26 @@ +# This file is not meant for public use and will be removed in SciPy v2.0.0. +# Use the `scipy.stats` namespace for importing the functions +# included below. + +from scipy._lib.deprecation import _sub_module_deprecation + + +__all__ = [ # noqa: F822 + 'compare_medians_ms', + 'hdquantiles', 'hdmedian', 'hdquantiles_sd', + 'idealfourths', + 'median_cihs','mjci','mquantiles_cimj', + 'rsh', + 'trimmed_mean_ci', 'ma', 'MaskedArray', 'mstats', + 'norm', 'beta', 't', 'binom' +] + + +def __dir__(): + return __all__ + + +def __getattr__(name): + return _sub_module_deprecation(sub_package="stats", module="mstats_extras", + private_modules=["_mstats_extras"], all=__all__, + attribute=name, correct_module="mstats") diff --git a/venv/lib/python3.10/site-packages/scipy/stats/mvn.py b/venv/lib/python3.10/site-packages/scipy/stats/mvn.py new file mode 100644 index 0000000000000000000000000000000000000000..832993d4702946c19cd7e2a35326a074dac64625 --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/mvn.py @@ -0,0 +1,23 @@ +# This file is not meant for public use and will be removed in SciPy v2.0.0. +# Use the `scipy.stats` namespace for importing the functions +# included below. + +from scipy._lib.deprecation import _sub_module_deprecation + + +__all__ = [ # noqa: F822 + 'mvnun', + 'mvnun_weighted', + 'mvndst', + 'dkblck' +] + + +def __dir__(): + return __all__ + + +def __getattr__(name): + return _sub_module_deprecation(sub_package="stats", module="mvn", + private_modules=["_mvn"], all=__all__, + attribute=name) diff --git a/venv/lib/python3.10/site-packages/scipy/stats/qmc.py b/venv/lib/python3.10/site-packages/scipy/stats/qmc.py new file mode 100644 index 0000000000000000000000000000000000000000..4f3c8182857bee5e25433e154ec72c9f05c0fc5d --- /dev/null +++ b/venv/lib/python3.10/site-packages/scipy/stats/qmc.py @@ -0,0 +1,235 @@ +r""" +==================================================== +Quasi-Monte Carlo submodule (:mod:`scipy.stats.qmc`) +==================================================== + +.. currentmodule:: scipy.stats.qmc + +This module provides Quasi-Monte Carlo generators and associated helper +functions. + + +Quasi-Monte Carlo +================= + +Engines +------- + +.. autosummary:: + :toctree: generated/ + + QMCEngine + Sobol + Halton + LatinHypercube + PoissonDisk + MultinomialQMC + MultivariateNormalQMC + +Helpers +------- + +.. autosummary:: + :toctree: generated/ + + discrepancy + geometric_discrepancy + update_discrepancy + scale + + +Introduction to Quasi-Monte Carlo +================================= + +Quasi-Monte Carlo (QMC) methods [1]_, [2]_, [3]_ provide an +:math:`n \times d` array of numbers in :math:`[0,1]`. 
They can be used in + place of :math:`n` points from the :math:`U[0,1]^{d}` distribution. Compared to + random points, QMC points are designed to have fewer gaps and clumps. This is + quantified by discrepancy measures [4]_. From the Koksma-Hlawka + inequality [5]_ we know that low discrepancy reduces a bound on + integration error. Averaging a function :math:`f` over :math:`n` QMC points + can achieve an integration error close to :math:`O(n^{-1})` for well + behaved functions [2]_. + + Most QMC constructions are designed for special values of :math:`n` + such as powers of 2 or large primes. Changing the sample + size by even one can degrade their performance, even their + rate of convergence [6]_. For instance, :math:`n=100` points may give less + accuracy than :math:`n=64` if the method was designed for :math:`n=2^m`. + + Some QMC constructions are extensible in :math:`n`: we can find + another special sample size :math:`n' > n` and often an infinite + sequence of increasing special sample sizes. Some QMC + constructions are extensible in :math:`d`: we can increase the dimension, + possibly to some upper bound, and typically without requiring + special values of :math:`d`. Some QMC methods are extensible in + both :math:`n` and :math:`d`. + + QMC points are deterministic. That makes it hard to estimate the accuracy of + integrals estimated by averages over QMC points. Randomized QMC (RQMC) [7]_ + points are constructed so that each point is individually :math:`U[0,1]^{d}` + while collectively the :math:`n` points retain their low discrepancy. + One can make :math:`R` independent replications of RQMC points to + see how stable a computation is. From :math:`R` independent values, + a t-test (or bootstrap t-test [8]_) then gives approximate confidence + intervals on the mean value. Some RQMC methods produce a + root mean squared error that is actually :math:`o(1/n)` and smaller than + the rate seen in unrandomized QMC. An intuitive explanation is + that the error is a sum of many small ones and random errors + cancel in a way that deterministic ones do not. RQMC also + has advantages on integrands that are singular or, for other + reasons, fail to be Riemann integrable. + + (R)QMC cannot beat Bakhvalov's curse of dimension (see [9]_). For + any random or deterministic method, there are worst case functions + that will give it poor performance in high dimensions. A worst + case function for QMC might be 0 at all n points but very + large elsewhere. Worst case analyses get very pessimistic + in high dimensions. (R)QMC can bring a great improvement over + MC when the functions on which it is used are not worst case. + For instance, (R)QMC can be especially effective on integrands + that are well approximated by sums of functions of + some small number of their input variables at a time [10]_, [11]_. + That property is often a surprising finding about those functions. + + Also, to see an improvement over IID MC, (R)QMC requires a bit of smoothness of + the integrand; roughly, the mixed first order derivative in each direction, + :math:`\partial^d f/\partial x_1 \cdots \partial x_d`, must be integrable. + For instance, a function that is 1 inside the hypersphere and 0 outside of it + has infinite variation in the sense of Hardy and Krause for any dimension + :math:`d \geq 2`. + + Scrambled nets are a kind of RQMC that have some valuable robustness + properties [12]_. If the integrand is square integrable, they give variance + :math:`var_{SNET} = o(1/n)`.
There is a finite upper bound on + :math:`var_{SNET} / var_{MC}` that holds simultaneously for every square + integrable integrand. Scrambled nets satisfy a strong law of large numbers + for :math:`f` in :math:`L^p` when :math:`p>1`. In some + special cases there is a central limit theorem [13]_. For smooth enough + integrands they can achieve RMSE nearly :math:`O(n^{-3})`. See [12]_ + for references about these properties. + + The main kinds of QMC methods are lattice rules [14]_ and digital + nets and sequences [2]_, [15]_. The theories meet up in polynomial + lattice rules [16]_ which can produce digital nets. Lattice rules + require some form of search for good constructions. For digital + nets there are widely used default constructions. + + The most widely used QMC methods are Sobol' sequences [17]_. + These are digital nets. They are extensible in both :math:`n` and :math:`d`. + They can be scrambled. The special sample sizes are powers + of 2. Halton sequences [18]_ are another popular method. + The constructions resemble those of digital nets. The earlier + dimensions have much better equidistribution properties than + later ones. There are essentially no special sample sizes. + They are not thought to be as accurate as Sobol' sequences. + They can be scrambled. The nets of Faure [19]_ are also widely + used. All dimensions are equally good, but the special sample + sizes grow rapidly with dimension :math:`d`. They can be scrambled. + The nets of Niederreiter and Xing [20]_ have the best asymptotic + properties but have not shown good empirical performance [21]_. + + Higher order digital nets are formed by a digit interleaving process + in the digits of the constructed points. They can achieve higher + levels of asymptotic accuracy given higher smoothness conditions on :math:`f` + and they can be scrambled [22]_. There is little or no empirical work + showing the improved rate to be attained. + + Using QMC is like using the entire period of a small random + number generator. The constructions are similar and so + are the computational costs [23]_. + + (R)QMC is sometimes improved by passing the points through + a baker's transformation (tent function) prior to using them. + That function has the form :math:`1-2|x-1/2|`. As :math:`x` goes from 0 to + 1, this function goes from 0 to 1 and then back. It is very + useful to produce a periodic function for lattice rules [14]_, + and sometimes it improves the convergence rate [24]_. + + It is not straightforward to apply QMC methods to Markov + chain Monte Carlo (MCMC). We can think of MCMC as using + :math:`n=1` point in :math:`[0,1]^{d}` for very large :math:`d`, with + ergodic results corresponding to :math:`d \to \infty`. One proposal is + in [25]_ and under strong conditions an improved rate of convergence + has been shown [26]_. + + Returning to Sobol' points: there are many versions depending + on what are called direction numbers. Those are the result of + searches and are tabulated. A very widely used set of direction + numbers comes from [27]_. It is extensible in dimension up to + :math:`d=21201`. + + References + ---------- + .. [1] Owen, Art B. "Monte Carlo Book: the Quasi-Monte Carlo parts." 2019. + .. [2] Niederreiter, Harald. "Random number generation and quasi-Monte Carlo + methods." Society for Industrial and Applied Mathematics, 1992. + .. [3] Dick, Josef, Frances Y. Kuo, and Ian H. Sloan. "High-dimensional + integration: the quasi-Monte Carlo way." Acta Numerica no. 22: 133, 2013. + .. [4] Aho, A. V., C. Aistleitner, T. Anderson, K. Appel, V. Arnol'd, N.
+ Aronszajn, D. Asotsky et al. "W. Chen et al. (eds.), "A Panorama of + Discrepancy Theory", Springer International Publishing, + Switzerland: 679, 2014. + .. [5] Hickernell, Fred J. "Koksma-Hlawka Inequality." Wiley StatsRef: + Statistics Reference Online, 2014. + .. [6] Owen, Art B. "On dropping the first Sobol' point." :arxiv:`2008.08051`, + 2020. + .. [7] L'Ecuyer, Pierre, and Christiane Lemieux. "Recent advances in randomized + quasi-Monte Carlo methods." In Modeling uncertainty, pp. 419-474. Springer, + New York, NY, 2002. + .. [8] DiCiccio, Thomas J., and Bradley Efron. "Bootstrap confidence + intervals." Statistical science: 189-212, 1996. + .. [9] Dimov, Ivan T. "Monte Carlo methods for applied scientists." World + Scientific, 2008. + .. [10] Caflisch, Russel E., William J. Morokoff, and Art B. Owen. "Valuation + of mortgage backed securities using Brownian bridges to reduce effective + dimension." Journal of Computational Finance 1, no. 1: 27-46, 1997. + .. [11] Sloan, Ian H., and Henryk Wozniakowski. "When are quasi-Monte Carlo + algorithms efficient for high dimensional integrals?" Journal of Complexity + 14, no. 1 (1998): 1-33. + .. [12] Owen, Art B., and Daniel Rudolf. "A strong law of large numbers for + scrambled net integration." SIAM Review, to appear. + .. [13] Loh, Wei-Liem. "On the asymptotic distribution of scrambled net + quadrature." The Annals of Statistics 31, no. 4: 1282-1324, 2003. + .. [14] Sloan, Ian H. and S. Joe. "Lattice methods for multiple integration." + Oxford University Press, 1994. + .. [15] Dick, Josef, and Friedrich Pillichshammer. "Digital nets and sequences: + discrepancy theory and quasi-Monte Carlo integration." Cambridge University + Press, 2010. + .. [16] Dick, Josef, F. Kuo, Friedrich Pillichshammer, and I. Sloan. + "Construction algorithms for polynomial lattice rules for multivariate + integration." Mathematics of computation 74, no. 252: 1895-1921, 2005. + .. [17] Sobol', Il'ya Meerovich. "On the distribution of points in a cube and + the approximate evaluation of integrals." Zhurnal Vychislitel'noi Matematiki + i Matematicheskoi Fiziki 7, no. 4: 784-802, 1967. + .. [18] Halton, John H. "On the efficiency of certain quasi-random sequences of + points in evaluating multi-dimensional integrals." Numerische Mathematik 2, + no. 1: 84-90, 1960. + .. [19] Faure, Henri. "Discrepance de suites associees a un systeme de + numeration (en dimension s)." Acta arithmetica 41, no. 4: 337-351, 1982. + .. [20] Niederreiter, Harald, and Chaoping Xing. "Low-discrepancy sequences and + global function fields with many rational places." Finite Fields and their + applications 2, no. 3: 241-273, 1996. + .. [21] Hong, Hee Sun, and Fred J. Hickernell. "Algorithm 823: Implementing + scrambled digital sequences." ACM Transactions on Mathematical Software + (TOMS) 29, no. 2: 95-109, 2003. + .. [22] Dick, Josef. "Higher order scrambled digital nets achieve the optimal + rate of the root mean square error for smooth integrands." The Annals of + Statistics 39, no. 3: 1372-1398, 2011. + .. [23] Niederreiter, Harald. "Multidimensional numerical integration using + pseudorandom numbers." In Stochastic Programming 84 Part I, pp. 17-38. + Springer, Berlin, Heidelberg, 1986. + .. [24] Hickernell, Fred J. "Obtaining O(N^(-2+epsilon)) Convergence for Lattice + Quadrature Rules." In Monte Carlo and Quasi-Monte Carlo Methods 2000, + pp. 274-289. Springer, Berlin, Heidelberg, 2002. + .. [25] Owen, Art B., and Seth D. Tribble. "A quasi-Monte Carlo Metropolis + algorithm."
Proceedings of the National Academy of Sciences 102, + no. 25: 8844-8849, 2005. +.. [26] Chen, Su. "Consistency and convergence rate of Markov chain quasi Monte + Carlo with examples." PhD diss., Stanford University, 2011. +.. [27] Joe, Stephen, and Frances Y. Kuo. "Constructing Sobol sequences with + better two-dimensional projections." SIAM Journal on Scientific Computing + 30, no. 5: 2635-2654, 2008. + +""" +from ._qmc import * # noqa: F403
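The replication idea described in the docstring above (independent randomizations of a scrambled Sobol' sequence combined with a t-based confidence interval) can be exercised with nothing beyond the public `scipy.stats.qmc` API. The sketch below is illustrative only and not part of SciPy; the integrand (chosen so its exact integral is 1), the dimension, and the replicate count are arbitrary assumptions made for the example.

import numpy as np
from scipy import stats
from scipy.stats import qmc


def integrand(x):
    # Smooth test function on [0, 1]^d; each factor integrates to 1 over
    # [0, 1], so the exact value of the integral is 1.
    d = x.shape[1]
    return np.prod(1.0 + (x - 0.5) / (2.0 * np.arange(1, d + 1)), axis=1)


d, m, n_rep = 5, 10, 16  # dimension, 2**m points per replicate, replicates
estimates = []
for seed in range(n_rep):
    # Each replicate uses an independently scrambled Sobol' sequence (RQMC).
    sampler = qmc.Sobol(d=d, scramble=True, seed=seed)
    x = sampler.random_base2(m=m)  # 2**m points: a "special" sample size
    estimates.append(integrand(x).mean())

estimates = np.asarray(estimates)
mean = estimates.mean()
sem = estimates.std(ddof=1) / np.sqrt(n_rep)
low, high = stats.t.interval(0.95, n_rep - 1, loc=mean, scale=sem)
print(f"RQMC estimate {mean:.6f}, 95% CI [{low:.6f}, {high:.6f}] (exact value: 1)")

The same sample arrays can be passed to the helpers exposed by this module, for example `qmc.discrepancy(x)` to quantify uniformity or `qmc.scale(x, l_bounds, u_bounds)` to map the points onto another hypercube.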