# Source: Alt-Shivam/colour @ 22dd196592e6db03113fe19c4d5810b6d96fa094
# File: colour/notation/munsell.py
[ "\"\"\"\nMunsell Renotation System\n=========================\n\nDefines various objects for *Munsell Renotation System* computations:\n\n- :func:`colour.notation.munsell_value_Priest1920`: *Munsell* value :math:`V`\n computation of given *luminance* :math:`Y` using\n *Priest, Gibson and MacNicholas (1920)* method.\n- :func:`colour.notation.munsell_value_Munsell1933`: *Munsell* value\n :math:`V` computation of given *luminance* :math:`Y` using\n *Munsell, Sloan and Godlove (1933)* method.\n- :func:`colour.notation.munsell_value_Moon1943`: *Munsell* value :math:`V`\n computation of given *luminance* :math:`Y` using\n *Moon and Spencer (1943)* method.\n- :func:`colour.notation.munsell_value_Saunderson1944`: *Munsell* value\n :math:`V` computation of given *luminance* :math:`Y` using\n *Saunderson and Milner (1944)* method.\n- :func:`colour.notation.munsell_value_Ladd1955`: *Munsell* value :math:`V`\n computation of given *luminance* :math:`Y` using *Ladd and Pinney (1955)*\n method.\n- :func:`colour.notation.munsell_value_McCamy1987`: *Munsell* value :math:`V`\n computation of given *luminance* :math:`Y` using *McCamy (1987)* method.\n- :func:`colour.notation.munsell_value_ASTMD1535`: *Munsell* value\n :math:`V` computation of given *luminance* :math:`Y` using\n *ASTM D1535-08e1* method.\n- :attr:`colour.MUNSELL_VALUE_METHODS`: Supported *Munsell* value\n computation methods.\n- :func:`colour.munsell_value`: *Munsell* value :math:`V` computation of\n given *luminance* :math:`Y` using given method.\n- :func:`colour.munsell_colour_to_xyY`\n- :func:`colour.xyY_to_munsell_colour`\n\nNotes\n-----\n- The Munsell Renotation data commonly available within the *all.dat*,\n *experimental.dat* and *real.dat* files features *CIE xyY* colourspace values\n that are scaled by a :math:`1 / 0.975 \\\\simeq 1.02568` factor. If you are\n performing conversions using *Munsell* *Colorlab* specification,\n e.g. *2.5R 9/2*, according to *ASTM D1535-08e1* method, you should not\n scale the output :math:`Y` Luminance. However, if you use directly the\n *CIE xyY* colourspace values from the Munsell Renotation data data, you\n should scale the :math:`Y` Luminance before conversions by a :math:`0.975`\n factor.\n\n *ASTM D1535-08e1* states that::\n\n The coefficients of this equation are obtained from the 1943 equation\n by multiplying each coefficient by 0.975, the reflectance factor of\n magnesium oxide with respect to the perfect reflecting diffuser, and\n rounding to ve digits of precision.\n\nReferences\n----------\n- :cite:`ASTMInternational1989a` : ASTM International. (1989). ASTM D1535-89\n - Standard Practice for Specifying Color by the Munsell System (pp. 1-29).\n Retrieved September 25, 2014, from\n http://www.astm.org/DATABASE.CART/HISTORICAL/D1535-89.htm\n- :cite:`Centore2012a` : Centore, P. (2012). An open-source inversion\n algorithm for the Munsell renotation. Color Research & Application, 37(6),\n 455-464. doi:10.1002/col.20715\n- :cite:`Centore2014k` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellRenotationRoutines/MunsellHueToASTMHue.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014l` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellSystemRoutines/LinearVsRadialInterpOnRenotationOvoid.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014m` : Centore, P. 
(2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellRenotationRoutines/MunsellToxyY.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014n` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellRenotationRoutines/FindHueOnRenotationOvoid.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014o` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellSystemRoutines/BoundingRenotationHues.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014p` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellRenotationRoutines/xyYtoMunsell.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014q` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellRenotationRoutines/MunsellToxyForIntegerMunsellValue.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014r` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellRenotationRoutines/MaxChromaForExtrapolatedRenotation.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014s` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellRenotationRoutines/MunsellHueToChromDiagHueAngle.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014t` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n MunsellRenotationRoutines/ChromDiagHueAngleToMunsellHue.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centore2014u` : Centore, P. (2014).\n MunsellAndKubelkaMunkToolboxApr2014 -\n GeneralRoutines/CIELABtoApproxMunsellSpec.m.\n https://github.com/colour-science/MunsellAndKubelkaMunkToolbox\n- :cite:`Centorea` : Centore, P. (n.d.). The Munsell and Kubelka-Munk\n Toolbox. Retrieved January 23, 2018, from\n http://www.munsellcolourscienceforpainters.com/\\\nMunsellAndKubelkaMunkToolbox/MunsellAndKubelkaMunkToolbox.html\n- :cite:`Wikipedia2007c` : Nayatani, Y., Sobagaki, H., & Yano, K. H. T.\n (1995). Lightness dependency of chroma scales of a nonlinear\n color-appearance model and its latest formulation. Color Research &\n Application, 20(3), 156-167. 
doi:10.1002/col.5080200305\n\"\"\"\n\nfrom __future__ import annotations\n\nimport numpy as np\nimport re\n\nfrom colour.algebra import (\n Extrapolator,\n LinearInterpolator,\n cartesian_to_cylindrical,\n euclidean_distance,\n polar_to_cartesian,\n spow,\n)\nfrom colour.colorimetry import CCS_ILLUMINANTS, luminance_ASTMD1535\nfrom colour.constants import (\n INTEGER_THRESHOLD,\n FLOATING_POINT_NUMBER_PATTERN,\n)\nfrom colour.hints import (\n ArrayLike,\n Boolean,\n Dict,\n Floating,\n FloatingOrArrayLike,\n FloatingOrNDArray,\n Integer,\n Literal,\n NDArray,\n Optional,\n StrOrArrayLike,\n StrOrNDArray,\n Tuple,\n Union,\n)\nfrom colour.models import Lab_to_LCHab, XYZ_to_Lab, XYZ_to_xy, xyY_to_XYZ\nfrom colour.volume import is_within_macadam_limits\nfrom colour.notation import MUNSELL_COLOURS_ALL\nfrom colour.utilities import (\n CACHE_REGISTRY,\n CaseInsensitiveMapping,\n Lookup,\n as_float,\n as_float_array,\n as_float_scalar,\n as_int_scalar,\n attest,\n domain_range_scale,\n from_range_1,\n from_range_10,\n get_domain_range_scale,\n to_domain_1,\n to_domain_10,\n to_domain_100,\n is_integer,\n is_numeric,\n tsplit,\n tstack,\n usage_warning,\n validate_method,\n)\n\n__author__ = \"Colour Developers, Paul Centore\"\n__copyright__ = \"Copyright 2013 Colour Developers\"\n__copyright__ += \", \"\n__copyright__ += (\n \"The Munsell and Kubelka-Munk Toolbox: Copyright 2010-2018 Paul Centore \"\n \"(Gales Ferry, CT 06335, USA); used by permission.\"\n)\n__license__ = \"New BSD License - https://opensource.org/licenses/BSD-3-Clause\"\n__maintainer__ = \"Colour Developers\"\n__email__ = \"[email protected]\"\n__status__ = \"Production\"\n\n__all__ = [\n \"MUNSELL_GRAY_PATTERN\",\n \"MUNSELL_COLOUR_PATTERN\",\n \"MUNSELL_GRAY_FORMAT\",\n \"MUNSELL_COLOUR_FORMAT\",\n \"MUNSELL_GRAY_EXTENDED_FORMAT\",\n \"MUNSELL_COLOUR_EXTENDED_FORMAT\",\n \"MUNSELL_HUE_LETTER_CODES\",\n \"ILLUMINANT_NAME_MUNSELL\",\n \"CCS_ILLUMINANT_MUNSELL\",\n \"munsell_value_Priest1920\",\n \"munsell_value_Munsell1933\",\n \"munsell_value_Moon1943\",\n \"munsell_value_Saunderson1944\",\n \"munsell_value_Ladd1955\",\n \"munsell_value_McCamy1987\",\n \"munsell_value_ASTMD1535\",\n \"MUNSELL_VALUE_METHODS\",\n \"munsell_value\",\n \"munsell_specification_to_xyY\",\n \"munsell_colour_to_xyY\",\n \"xyY_to_munsell_specification\",\n \"xyY_to_munsell_colour\",\n \"parse_munsell_colour\",\n \"is_grey_munsell_colour\",\n \"normalise_munsell_specification\",\n \"munsell_colour_to_munsell_specification\",\n \"munsell_specification_to_munsell_colour\",\n \"xyY_from_renotation\",\n \"is_specification_in_renotation\",\n \"bounding_hues_from_renotation\",\n \"hue_to_hue_angle\",\n \"hue_angle_to_hue\",\n \"hue_to_ASTM_hue\",\n \"interpolation_method_from_renotation_ovoid\",\n \"xy_from_renotation_ovoid\",\n \"LCHab_to_munsell_specification\",\n \"maximum_chroma_from_renotation\",\n \"munsell_specification_to_xy\",\n]\n\nMUNSELL_GRAY_PATTERN: str = f\"N(?P<value>{FLOATING_POINT_NUMBER_PATTERN})\"\nMUNSELL_COLOUR_PATTERN: str = (\n f\"(?P<hue>{FLOATING_POINT_NUMBER_PATTERN})\\\\s*\"\n f\"(?P<letter>BG|GY|YR|RP|PB|B|G|Y|R|P)\\\\s*\"\n f\"(?P<value>{FLOATING_POINT_NUMBER_PATTERN})\\\\s*\\\\/\\\\s*\"\n f\"(?P<chroma>[-+]?{FLOATING_POINT_NUMBER_PATTERN})\"\n)\n\nMUNSELL_GRAY_FORMAT: str = \"N{0}\"\nMUNSELL_COLOUR_FORMAT: str = \"{0} {1}/{2}\"\nMUNSELL_GRAY_EXTENDED_FORMAT: str = \"N{0:.{1}f}\"\nMUNSELL_COLOUR_EXTENDED_FORMAT: str = \"{0:.{1}f}{2} {3:.{4}f}/{5:.{6}f}\"\n\nMUNSELL_HUE_LETTER_CODES: Lookup = Lookup(\n {\n \"BG\": 2,\n \"GY\": 4,\n 
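
# For instance, with one decimal per component, the extended colour format
# above renders a full specification as follows:
#
#     MUNSELL_COLOUR_EXTENDED_FORMAT.format(3.2, 1, "GY", 4.0, 1, 6.0, 1)
#     # '3.2GY 4.0/6.0'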
\"YR\": 6,\n \"RP\": 8,\n \"PB\": 10,\n \"B\": 1,\n \"G\": 3,\n \"Y\": 5,\n \"R\": 7,\n \"P\": 9,\n }\n)\n\nILLUMINANT_NAME_MUNSELL: str = \"C\"\nCCS_ILLUMINANT_MUNSELL: NDArray = CCS_ILLUMINANTS[\n \"CIE 1931 2 Degree Standard Observer\"\n][ILLUMINANT_NAME_MUNSELL]\n\n_MUNSELL_SPECIFICATIONS_CACHE: Dict = CACHE_REGISTRY.register_cache(\n f\"{__name__}._MUNSELL_SPECIFICATIONS_CACHE\"\n)\n_MUNSELL_VALUE_ASTM_D1535_08_INTERPOLATOR_CACHE: Dict = (\n CACHE_REGISTRY.register_cache(\n f\"{__name__}._MUNSELL_VALUE_ASTM_D1535_08_INTERPOLATOR_CACHE\"\n )\n)\n_MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE: Dict = (\n CACHE_REGISTRY.register_cache(\n f\"{__name__}._MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE\"\n )\n)\n\n\ndef _munsell_specifications() -> NDArray:\n \"\"\"\n Return the *Munsell Renotation System* specifications and caches them if\n not existing.\n\n The *Munsell Renotation System* data is stored in\n :attr:`colour.notation.MUNSELL_COLOURS` attribute in a 2 columns form::\n\n (\n (('2.5GY', 0.2, 2.0), (0.713, 1.414, 0.237)),\n (('5GY', 0.2, 2.0), (0.449, 1.145, 0.237)),\n ...,\n (('7.5GY', 0.2, 2.0), (0.262, 0.837, 0.237)),\n )\n\n The first column is converted from *Munsell* colour to specification using\n :func:`colour.notation.munsell.munsell_colour_to_munsell_specification`\n definition:\n\n ('2.5GY', 0.2, 2.0) --> (2.5, 0.2, 2.0, 4)\n\n Returns\n -------\n :class:`numpy.ndarray`\n *Munsell Renotation System* specifications.\n \"\"\"\n\n global _MUNSELL_SPECIFICATIONS_CACHE\n\n if \"All\" in _MUNSELL_SPECIFICATIONS_CACHE:\n return _MUNSELL_SPECIFICATIONS_CACHE[\"All\"]\n\n munsell_specifications = np.array(\n [\n munsell_colour_to_munsell_specification(\n MUNSELL_COLOUR_FORMAT.format(*colour[0])\n )\n for colour in MUNSELL_COLOURS_ALL\n ]\n )\n\n _MUNSELL_SPECIFICATIONS_CACHE[\"All\"] = munsell_specifications\n\n return munsell_specifications\n\n\ndef _munsell_value_ASTMD1535_interpolator() -> Extrapolator:\n \"\"\"\n Return the *Munsell* value interpolator for *ASTM D1535-08e1* method and\n caches it if not existing.\n\n Returns\n -------\n :class:`colour.Extrapolator`\n *Munsell* value interpolator for *ASTM D1535-08e1* method.\n \"\"\"\n\n global _MUNSELL_VALUE_ASTM_D1535_08_INTERPOLATOR_CACHE\n\n if \"ASTM D1535-08 Interpolator\" in (\n _MUNSELL_VALUE_ASTM_D1535_08_INTERPOLATOR_CACHE\n ):\n return _MUNSELL_VALUE_ASTM_D1535_08_INTERPOLATOR_CACHE[\n \"ASTM D1535-08 Interpolator\"\n ]\n\n munsell_values = np.arange(0, 10, 0.001)\n interpolator = LinearInterpolator(\n luminance_ASTMD1535(munsell_values), munsell_values\n )\n extrapolator = Extrapolator(interpolator)\n\n _MUNSELL_VALUE_ASTM_D1535_08_INTERPOLATOR_CACHE[\n \"ASTM D1535-08 Interpolator\"\n ] = extrapolator\n\n return extrapolator\n\n\ndef _munsell_maximum_chromas_from_renotation() -> Tuple[\n Tuple[Tuple[Floating, Floating, Floating], Floating], ...\n]:\n \"\"\"\n Return the maximum *Munsell* chromas from *Munsell Renotation System* data\n and caches them if not existing.\n\n Returns\n -------\n :class:`tuple`\n Maximum *Munsell* chromas.\n \"\"\"\n\n global _MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE\n\n if \"Maximum Chromas From Renotation\" in (\n _MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE\n ):\n return _MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE[\n \"Maximum Chromas From Renotation\"\n ]\n\n chromas: Dict[Tuple[Floating, Floating, Floating], Floating] = {}\n for munsell_colour in MUNSELL_COLOURS_ALL:\n hue, value, chroma, code = tsplit(\n munsell_colour_to_munsell_specification(\n 
def _munsell_maximum_chromas_from_renotation() -> Tuple[
    Tuple[Tuple[Floating, Floating, Floating], Floating], ...
]:
    """
    Return the maximum *Munsell* chromas from *Munsell Renotation System*
    data, caching them on first use.

    Returns
    -------
    :class:`tuple`
        Maximum *Munsell* chromas.
    """

    global _MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE

    if "Maximum Chromas From Renotation" in (
        _MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE
    ):
        return _MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE[
            "Maximum Chromas From Renotation"
        ]

    chromas: Dict[Tuple[Floating, Floating, Floating], Floating] = {}
    for munsell_colour in MUNSELL_COLOURS_ALL:
        hue, value, chroma, code = tsplit(
            munsell_colour_to_munsell_specification(
                MUNSELL_COLOUR_FORMAT.format(*munsell_colour[0])
            )
        )
        index = (hue, value, code)
        if index in chromas:
            chroma = max([chromas[index], chroma])

        chromas[index] = chroma

    maximum_chromas_from_renotation = tuple(
        zip(chromas.keys(), chromas.values())
    )

    _MUNSELL_MAXIMUM_CHROMAS_FROM_RENOTATION_CACHE[
        "Maximum Chromas From Renotation"
    ] = maximum_chromas_from_renotation

    return maximum_chromas_from_renotation


def munsell_value_Priest1920(Y: FloatingOrArrayLike) -> FloatingOrNDArray:
    """
    Return the *Munsell* value :math:`V` of given *luminance* :math:`Y` using
    *Priest et al. (1920)* method.

    Parameters
    ----------
    Y
        *luminance* :math:`Y`.

    Returns
    -------
    :class:`np.floating` or :class:`numpy.ndarray`
        *Munsell* value :math:`V`.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``Y``      | [0, 100]              | [0, 1]        |
    +------------+-----------------------+---------------+

    +------------+-----------------------+---------------+
    | **Range**  | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``V``      | [0, 10]               | [0, 1]        |
    +------------+-----------------------+---------------+

    References
    ----------
    :cite:`Wikipedia2007c`

    Examples
    --------
    >>> munsell_value_Priest1920(12.23634268)  # doctest: +ELLIPSIS
    3.4980484...
    """

    Y = to_domain_100(Y)

    V = 10 * np.sqrt(Y / 100)

    return as_float(from_range_10(V))


def munsell_value_Munsell1933(Y: FloatingOrArrayLike) -> FloatingOrNDArray:
    """
    Return the *Munsell* value :math:`V` of given *luminance* :math:`Y` using
    *Munsell et al. (1933)* method.

    Parameters
    ----------
    Y
        *luminance* :math:`Y`.

    Returns
    -------
    :class:`np.floating` or :class:`numpy.ndarray`
        *Munsell* value :math:`V`.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``Y``      | [0, 100]              | [0, 1]        |
    +------------+-----------------------+---------------+

    +------------+-----------------------+---------------+
    | **Range**  | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``V``      | [0, 10]               | [0, 1]        |
    +------------+-----------------------+---------------+

    References
    ----------
    :cite:`Wikipedia2007c`

    Examples
    --------
    >>> munsell_value_Munsell1933(12.23634268)  # doctest: +ELLIPSIS
    4.1627702...
    """

    Y = to_domain_100(Y)

    V = np.sqrt(1.4742 * Y - 0.004743 * (Y * Y))

    return as_float(from_range_10(V))


def munsell_value_Moon1943(Y: FloatingOrArrayLike) -> FloatingOrNDArray:
    """
    Return the *Munsell* value :math:`V` of given *luminance* :math:`Y` using
    *Moon and Spencer (1943)* method.

    Parameters
    ----------
    Y
        *luminance* :math:`Y`.

    Returns
    -------
    :class:`np.floating` or :class:`numpy.ndarray`
        *Munsell* value :math:`V`.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``Y``      | [0, 100]              | [0, 1]        |
    +------------+-----------------------+---------------+

    +------------+-----------------------+---------------+
    | **Range**  | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``V``      | [0, 10]               | [0, 1]        |
    +------------+-----------------------+---------------+

    References
    ----------
    :cite:`Wikipedia2007c`

    Examples
    --------
    >>> munsell_value_Moon1943(12.23634268)  # doctest: +ELLIPSIS
    4.0688120...
    """

    Y = to_domain_100(Y)

    V = 1.4 * spow(Y, 0.426)

    return as_float(from_range_10(V))


def munsell_value_Saunderson1944(Y: FloatingOrArrayLike) -> FloatingOrNDArray:
    """
    Return the *Munsell* value :math:`V` of given *luminance* :math:`Y` using
    *Saunderson and Milner (1944)* method.

    Parameters
    ----------
    Y
        *luminance* :math:`Y`.

    Returns
    -------
    :class:`np.floating` or :class:`numpy.ndarray`
        *Munsell* value :math:`V`.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``Y``      | [0, 100]              | [0, 1]        |
    +------------+-----------------------+---------------+

    +------------+-----------------------+---------------+
    | **Range**  | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``V``      | [0, 10]               | [0, 1]        |
    +------------+-----------------------+---------------+

    References
    ----------
    :cite:`Wikipedia2007c`

    Examples
    --------
    >>> munsell_value_Saunderson1944(12.23634268)  # doctest: +ELLIPSIS
    4.0444736...
    """

    Y = to_domain_100(Y)

    V = 2.357 * spow(Y, 0.343) - 1.52

    return as_float(from_range_10(V))


def munsell_value_Ladd1955(Y: FloatingOrArrayLike) -> FloatingOrNDArray:
    """
    Return the *Munsell* value :math:`V` of given *luminance* :math:`Y` using
    *Ladd and Pinney (1955)* method.

    Parameters
    ----------
    Y
        *luminance* :math:`Y`.

    Returns
    -------
    :class:`np.floating` or :class:`numpy.ndarray`
        *Munsell* value :math:`V`.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``Y``      | [0, 100]              | [0, 1]        |
    +------------+-----------------------+---------------+

    +------------+-----------------------+---------------+
    | **Range**  | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``V``      | [0, 10]               | [0, 1]        |
    +------------+-----------------------+---------------+

    References
    ----------
    :cite:`Wikipedia2007c`

    Examples
    --------
    >>> munsell_value_Ladd1955(12.23634268)  # doctest: +ELLIPSIS
    4.0511633...
    """

    Y = to_domain_100(Y)

    V = 2.468 * spow(Y, 1 / 3) - 1.636

    return as_float(from_range_10(V))


def munsell_value_McCamy1987(Y: FloatingOrArrayLike) -> FloatingOrNDArray:
    """
    Return the *Munsell* value :math:`V` of given *luminance* :math:`Y` using
    *McCamy (1987)* method.

    Parameters
    ----------
    Y
        *luminance* :math:`Y`.

    Returns
    -------
    :class:`np.floating` or :class:`numpy.ndarray`
        *Munsell* value :math:`V`.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``Y``      | [0, 100]              | [0, 1]        |
    +------------+-----------------------+---------------+

    +------------+-----------------------+---------------+
    | **Range**  | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``V``      | [0, 10]               | [0, 1]        |
    +------------+-----------------------+---------------+

    References
    ----------
    :cite:`ASTMInternational1989a`

    Examples
    --------
    >>> munsell_value_McCamy1987(12.23634268)  # doctest: +ELLIPSIS
    4.0814348...
    """

    Y = to_domain_100(Y)

    V = np.where(
        Y <= 0.9,
        0.87445 * spow(Y, 0.9967),
        2.49268 * spow(Y, 1 / 3)
        - 1.5614
        - (0.985 / (((0.1073 * Y - 3.084) ** 2) + 7.54))
        + (0.0133 / spow(Y, 2.3))
        + 0.0084 * np.sin(4.1 * spow(Y, 1 / 3) + 1)
        + (0.0221 / Y) * np.sin(0.39 * (Y - 2))
        - (0.0037 / (0.44 * Y)) * np.sin(1.28 * (Y - 0.53)),
    )

    return as_float(from_range_10(V))


def munsell_value_ASTMD1535(Y: FloatingOrArrayLike) -> FloatingOrNDArray:
    """
    Return the *Munsell* value :math:`V` of given *luminance* :math:`Y` using
    an inverse lookup table from *ASTM D1535-08e1* method.

    Parameters
    ----------
    Y
        *luminance* :math:`Y`.

    Returns
    -------
    :class:`np.floating` or :class:`numpy.ndarray`
        *Munsell* value :math:`V`.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``Y``      | [0, 100]              | [0, 1]        |
    +------------+-----------------------+---------------+

    +------------+-----------------------+---------------+
    | **Range**  | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``V``      | [0, 10]               | [0, 1]        |
    +------------+-----------------------+---------------+

    -   The *Munsell* value computation with *ASTM D1535-08e1* method is only
        defined for domain [0, 100].

    References
    ----------
    :cite:`ASTMInternational1989a`

    Examples
    --------
    >>> munsell_value_ASTMD1535(12.23634268)  # doctest: +ELLIPSIS
    4.0824437...
    """

    Y = to_domain_100(Y)

    V = _munsell_value_ASTMD1535_interpolator()(Y)

    return as_float(from_range_10(V))


MUNSELL_VALUE_METHODS: CaseInsensitiveMapping = CaseInsensitiveMapping(
    {
        "Priest 1920": munsell_value_Priest1920,
        "Munsell 1933": munsell_value_Munsell1933,
        "Moon 1943": munsell_value_Moon1943,
        "Saunderson 1944": munsell_value_Saunderson1944,
        "Ladd 1955": munsell_value_Ladd1955,
        "McCamy 1987": munsell_value_McCamy1987,
        "ASTM D1535": munsell_value_ASTMD1535,
    }
)
MUNSELL_VALUE_METHODS.__doc__ = """
Supported *Munsell* value computation methods.

References
----------
:cite:`ASTMInternational1989a`, :cite:`Wikipedia2007c`

Aliases:

-   'astm2008': 'ASTM D1535'
"""
MUNSELL_VALUE_METHODS["astm2008"] = MUNSELL_VALUE_METHODS["ASTM D1535"]


def munsell_value(
    Y: FloatingOrArrayLike,
    method: Union[
        Literal[
            "ASTM D1535",
            "Ladd 1955",
            "McCamy 1987",
            "Moon 1943",
            "Munsell 1933",
            "Priest 1920",
            "Saunderson 1944",
        ],
        str,
    ] = "ASTM D1535",
) -> FloatingOrNDArray:
    """
    Return the *Munsell* value :math:`V` of given *luminance* :math:`Y` using
    given method.

    Parameters
    ----------
    Y
        *luminance* :math:`Y`.
    method
        Computation method.

    Returns
    -------
    :class:`np.floating` or :class:`numpy.ndarray`
        *Munsell* value :math:`V`.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``Y``      | [0, 100]              | [0, 1]        |
    +------------+-----------------------+---------------+

    +------------+-----------------------+---------------+
    | **Range**  | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``V``      | [0, 10]               | [0, 1]        |
    +------------+-----------------------+---------------+

    References
    ----------
    :cite:`ASTMInternational1989a`, :cite:`Wikipedia2007c`

    Examples
    --------
    >>> munsell_value(12.23634268)  # doctest: +ELLIPSIS
    4.0824437...
    >>> munsell_value(12.23634268, method='Priest 1920')  # doctest: +ELLIPSIS
    3.4980484...
    >>> munsell_value(12.23634268, method='Munsell 1933')  # doctest: +ELLIPSIS
    4.1627702...
    >>> munsell_value(12.23634268, method='Moon 1943')  # doctest: +ELLIPSIS
    4.0688120...
    >>> munsell_value(12.23634268, method='Saunderson 1944')
    ... # doctest: +ELLIPSIS
    4.0444736...
    >>> munsell_value(12.23634268, method='Ladd 1955')  # doctest: +ELLIPSIS
    4.0511633...
    >>> munsell_value(12.23634268, method='McCamy 1987')  # doctest: +ELLIPSIS
    4.0814348...
    """

    method = validate_method(method, MUNSELL_VALUE_METHODS)

    return MUNSELL_VALUE_METHODS[method](Y)


def _munsell_scale_factor() -> NDArray:
    """
    Return the domain-range scale factor for *Munsell Renotation System*.

    Returns
    -------
    :class:`numpy.ndarray`
        Domain-range scale factor for *Munsell Renotation System*.
    """

    return np.array([10, 10, 50 if get_domain_range_scale() == "1" else 2, 10])


def _munsell_specification_to_xyY(specification: ArrayLike) -> NDArray:
    """
    Convert given *Munsell* *Colorlab* specification to *CIE xyY*
    colourspace.

    Parameters
    ----------
    specification
        *Munsell* *Colorlab* specification.

    Returns
    -------
    :class:`numpy.ndarray`
        *CIE xyY* colourspace array.
    """

    specification = normalise_munsell_specification(specification)

    if is_grey_munsell_colour(specification):
        specification = as_float_array(to_domain_10(specification))
        hue, value, chroma, code = specification
    else:
        specification = to_domain_10(specification, _munsell_scale_factor())
        hue, value, chroma, code = specification
        code = as_int_scalar(code)

    attest(
        0 <= hue <= 10,
        f'"{specification}" specification hue must be normalised to '
        f"domain [0, 10]!",
    )

    attest(
        0 <= value <= 10,
        f'"{specification}" specification value must be normalised to '
        f"domain [0, 10]!",
    )

    with domain_range_scale("ignore"):
        Y = luminance_ASTMD1535(value)

    if is_integer(value):
        value_minus = value_plus = round(value)
    else:
        value_minus = np.floor(value)
        value_plus = value_minus + 1

    specification_minus = as_float_array(
        value_minus
        if is_grey_munsell_colour(specification)
        else [hue, value_minus, chroma, code]
    )
    x_minus, y_minus = tsplit(munsell_specification_to_xy(specification_minus))

    specification_plus = as_float_array(
        value_plus
        if (is_grey_munsell_colour(specification) or value_plus == 10)
        else [hue, value_plus, chroma, code]
    )
    x_plus, y_plus = tsplit(munsell_specification_to_xy(specification_plus))

    if value_minus == value_plus:
        x = x_minus
        y = y_minus
    else:
        with domain_range_scale("ignore"):
            Y_minus = as_float_array(luminance_ASTMD1535(value_minus))
            Y_plus = as_float_array(luminance_ASTMD1535(value_plus))

        Y_minus_plus = np.squeeze([Y_minus, Y_plus])
        x_minus_plus = np.squeeze([x_minus, x_plus])
        y_minus_plus = np.squeeze([y_minus, y_plus])

        x = LinearInterpolator(Y_minus_plus, x_minus_plus)(Y)
        y = LinearInterpolator(Y_minus_plus, y_minus_plus)(Y)

    return tstack([x, y, from_range_1(Y / 100)])
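
# The public conversion definitions below vectorise the private scalar
# definitions by flattening the trailing axis: e.g. a (2, 3, 4) array of
# *Munsell* *Colorlab* specifications is reshaped to (6, 4), converted
# element-wise, then reshaped back into a (2, 3, 3) array of *CIE xyY*
# values.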
def munsell_specification_to_xyY(specification: ArrayLike) -> NDArray:
    """
    Convert given *Munsell* *Colorlab* specification to *CIE xyY*
    colourspace.

    Parameters
    ----------
    specification
        *Munsell* *Colorlab* specification.

    Returns
    -------
    :class:`numpy.ndarray`
        *CIE xyY* colourspace array.

    Notes
    -----
    +-------------------+-----------------------+---------------+
    | **Domain**        | **Scale - Reference** | **Scale - 1** |
    +===================+=======================+===============+
    | ``specification`` | ``hue``    : [0, 10]  | [0, 1]        |
    |                   |                       |               |
    |                   | ``value``  : [0, 10]  | [0, 1]        |
    |                   |                       |               |
    |                   | ``chroma`` : [0, 50]  | [0, 1]        |
    |                   |                       |               |
    |                   | ``code``   : [0, 10]  | [0, 1]        |
    +-------------------+-----------------------+---------------+

    +-------------------+-----------------------+---------------+
    | **Range**         | **Scale - Reference** | **Scale - 1** |
    +===================+=======================+===============+
    | ``xyY``           | [0, 1]                | [0, 1]        |
    +-------------------+-----------------------+---------------+

    References
    ----------
    :cite:`Centore2014m`

    Examples
    --------
    >>> munsell_specification_to_xyY(np.array([2.1, 8.0, 17.9, 4]))
    ... # doctest: +ELLIPSIS
    array([ 0.4400632...,  0.5522428...,  0.5761962...])
    >>> munsell_specification_to_xyY(np.array([np.nan, 8.9, np.nan, np.nan]))
    ... # doctest: +ELLIPSIS
    array([ 0.31006  ,  0.31616  ,  0.7461345...])
    """

    specification = as_float_array(specification)
    shape = list(specification.shape)

    xyY = [
        _munsell_specification_to_xyY(a)
        for a in specification.reshape([-1, 4])
    ]

    shape[-1] = 3

    return np.reshape(as_float_array(xyY), shape)


def munsell_colour_to_xyY(munsell_colour: StrOrArrayLike) -> NDArray:
    """
    Convert given *Munsell* colour to *CIE xyY* colourspace.

    Parameters
    ----------
    munsell_colour
        *Munsell* colour.

    Returns
    -------
    :class:`numpy.ndarray`
        *CIE xyY* colourspace array.

    Notes
    -----
    +-----------+-----------------------+---------------+
    | **Range** | **Scale - Reference** | **Scale - 1** |
    +===========+=======================+===============+
    | ``xyY``   | [0, 1]                | [0, 1]        |
    +-----------+-----------------------+---------------+

    References
    ----------
    :cite:`Centorea`, :cite:`Centore2012a`

    Examples
    --------
    >>> munsell_colour_to_xyY('4.2YR 8.1/5.3')  # doctest: +ELLIPSIS
    array([ 0.3873694...,  0.3575165...,  0.59362   ])
    >>> munsell_colour_to_xyY('N8.9')  # doctest: +ELLIPSIS
    array([ 0.31006  ,  0.31616  ,  0.7461345...])
    """

    munsell_colour = np.array(munsell_colour)
    shape = list(munsell_colour.shape)

    specification = np.array(
        [
            munsell_colour_to_munsell_specification(a)
            for a in np.ravel(munsell_colour)
        ]
    )

    return munsell_specification_to_xyY(
        from_range_10(
            specification.reshape(shape + [4]), _munsell_scale_factor()
        )
    )


def _xyY_to_munsell_specification(xyY: ArrayLike) -> NDArray:
    """
    Convert from *CIE xyY* colourspace to *Munsell* *Colorlab* specification.

    Parameters
    ----------
    xyY
        *CIE xyY* colourspace array.

    Returns
    -------
    :class:`numpy.ndarray`
        *Munsell* *Colorlab* specification.

    Raises
    ------
    ValueError
        If the given *CIE xyY* colourspace array is not within MacAdam
        limits.
    RuntimeError
        If the maximum iterations count has been reached without converging
        to a result.
    """

    x, y, Y = tsplit(xyY)
    Y = to_domain_1(Y)

    if not is_within_macadam_limits(xyY, ILLUMINANT_NAME_MUNSELL):
        usage_warning(
            f'"{xyY!r}" is not within "MacAdam" limits for illuminant '
            f'"{ILLUMINANT_NAME_MUNSELL}"!'
        )

    with domain_range_scale("ignore"):
        value = munsell_value_ASTMD1535(Y * 100)

    if is_integer(value):
        value = np.around(value)

    with domain_range_scale("ignore"):
        x_center, y_center, Y_center = tsplit(
            _munsell_specification_to_xyY(value)
        )

    rho_input, phi_input, _z_input = tsplit(
        cartesian_to_cylindrical([x - x_center, y - y_center, Y_center])
    )
    phi_input = np.degrees(phi_input)

    grey_threshold = 1e-7
    if rho_input < grey_threshold:
        return from_range_10(normalise_munsell_specification(value))

    X, Y, Z = xyY_to_XYZ([x, y, Y])
    xi, yi = CCS_ILLUMINANT_MUNSELL
    Xr, Yr, Zr = xyY_to_XYZ([xi, yi, Y])

    XYZ = np.array([X, Y, Z])
    XYZr = np.array([(1 / Yr) * Xr, 1, (1 / Yr) * Zr])

    Lab = XYZ_to_Lab(XYZ, XYZ_to_xy(XYZr))
    LCHab = Lab_to_LCHab(Lab)
    hue_initial, _value_initial, chroma_initial, code_initial = tsplit(
        LCHab_to_munsell_specification(LCHab)
    )
    specification_current = [
        hue_initial,
        value,
        (5 / 5.5) * chroma_initial,
        code_initial,
    ]

    convergence_threshold = 1e-7
    iterations_maximum = 64
    iterations = 0

    while iterations <= iterations_maximum:
        iterations += 1

        (
            hue_current,
            _value_current,
            chroma_current,
            code_current,
        ) = specification_current
        hue_angle_current = hue_to_hue_angle([hue_current, code_current])

        chroma_maximum = maximum_chroma_from_renotation(
            [hue_current, value, code_current]
        )
        if chroma_current > chroma_maximum:
            chroma_current = specification_current[2] = chroma_maximum

        with domain_range_scale("ignore"):
            x_current, y_current, _Y_current = tsplit(
                _munsell_specification_to_xyY(specification_current)
            )

        rho_current, phi_current, _z_current = tsplit(
            cartesian_to_cylindrical(
                [x_current - x_center, y_current - y_center, Y_center]
            )
        )
        phi_current = np.degrees(phi_current)
        phi_current_difference = (360 - phi_input + phi_current) % 360
        if phi_current_difference > 180:
            phi_current_difference -= 360

        phi_differences_data = [phi_current_difference]
        hue_angles_differences_data = [0]
        hue_angles = [hue_angle_current]

        iterations_maximum_inner = 16
        iterations_inner = 0
        extrapolate = False

        while (
            np.sign(np.min(phi_differences_data))
            == np.sign(np.max(phi_differences_data))
            and extrapolate is False
        ):
            iterations_inner += 1

            if iterations_inner > iterations_maximum_inner:
                # NOTE: This exception is likely never raised in practice:
                # 300K iterations with random numbers never reached this code
                # path, it is kept for consistency with the reference
                # implementation.
                raise RuntimeError(  # pragma: no cover
                    "Maximum inner iterations count reached without "
                    "convergence!"
                )

            hue_angle_inner = (
                hue_angle_current
                + iterations_inner * (phi_input - phi_current)
            ) % 360
            hue_angle_difference_inner = (
                iterations_inner * (phi_input - phi_current) % 360
            )
            if hue_angle_difference_inner > 180:
                hue_angle_difference_inner -= 360

            hue_inner, code_inner = hue_angle_to_hue(hue_angle_inner)

            with domain_range_scale("ignore"):
                x_inner, y_inner, _Y_inner = _munsell_specification_to_xyY(
                    [hue_inner, value, chroma_current, code_inner]
                )

            if len(phi_differences_data) >= 2:
                extrapolate = True

            if extrapolate is False:
                rho_inner, phi_inner, _z_inner = cartesian_to_cylindrical(
                    [x_inner - x_center, y_inner - y_center, Y_center]
                )
                phi_inner = np.degrees(phi_inner)
                phi_inner_difference = (360 - phi_input + phi_inner) % 360
                if phi_inner_difference > 180:
                    phi_inner_difference -= 360

                phi_differences_data.append(phi_inner_difference)
                hue_angles.append(hue_angle_inner)
                hue_angles_differences_data.append(hue_angle_difference_inner)

        phi_differences = np.array(phi_differences_data)
        hue_angles_differences = np.array(hue_angles_differences_data)

        phi_differences_indexes = phi_differences.argsort()

        phi_differences = phi_differences[phi_differences_indexes]
        hue_angles_differences = hue_angles_differences[
            phi_differences_indexes
        ]

        hue_angle_difference_new = (
            Extrapolator(
                LinearInterpolator(phi_differences, hue_angles_differences)
            )(0)
            % 360
        )
        hue_angle_new = (hue_angle_current + hue_angle_difference_new) % 360

        hue_new, code_new = hue_angle_to_hue(as_float_scalar(hue_angle_new))
        specification_current = [hue_new, value, chroma_current, code_new]

        with domain_range_scale("ignore"):
            x_current, y_current, _Y_current = _munsell_specification_to_xyY(
                specification_current
            )

        chroma_scale = 50 if get_domain_range_scale() == "1" else 2

        difference = euclidean_distance([x, y], [x_current, y_current])
        if difference < convergence_threshold:
            return from_range_10(
                np.array(specification_current),
                np.array([10, 10, chroma_scale, 10]),
            )

        # TODO: Consider refactoring implementation.
        (
            hue_current,
            _value_current,
            chroma_current,
            code_current,
        ) = specification_current
        chroma_maximum = maximum_chroma_from_renotation(
            [hue_current, value, code_current]
        )

        # NOTE: This condition is likely never "True" while producing a valid
        # "Munsell Specification" in practice: 100K iterations with random
        # numbers never reached this code path while producing a valid
        # "Munsell Specification".
        if chroma_current > chroma_maximum:
            chroma_current = specification_current[2] = chroma_maximum

        with domain_range_scale("ignore"):
            x_current, y_current, _Y_current = _munsell_specification_to_xyY(
                specification_current
            )

        rho_current, phi_current, _z_current = cartesian_to_cylindrical(
            [x_current - x_center, y_current - y_center, Y_center]
        )

        rho_bounds_data = [rho_current]
        chroma_bounds_data = [chroma_current]

        iterations_maximum_inner = 16
        iterations_inner = 0
        while not (
            np.min(rho_bounds_data) < rho_input < np.max(rho_bounds_data)
        ):
            iterations_inner += 1

            if iterations_inner > iterations_maximum_inner:
                raise RuntimeError(
                    "Maximum inner iterations count reached "
                    "without convergence!"
                )

            chroma_inner = (
                (rho_input / rho_current) ** iterations_inner
            ) * chroma_current
            if chroma_inner > chroma_maximum:
                chroma_inner = specification_current[2] = chroma_maximum

            specification_inner = (
                hue_current,
                value,
                chroma_inner,
                code_current,
            )

            with domain_range_scale("ignore"):
                x_inner, y_inner, _Y_inner = _munsell_specification_to_xyY(
                    specification_inner
                )

            rho_inner, phi_inner, _z_inner = cartesian_to_cylindrical(
                [x_inner - x_center, y_inner - y_center, Y_center]
            )

            rho_bounds_data.append(rho_inner)
            chroma_bounds_data.append(chroma_inner)

        rho_bounds = np.array(rho_bounds_data)
        chroma_bounds = np.array(chroma_bounds_data)

        rhos_bounds_indexes = rho_bounds.argsort()

        rho_bounds = rho_bounds[rhos_bounds_indexes]
        chroma_bounds = chroma_bounds[rhos_bounds_indexes]
        chroma_new = LinearInterpolator(rho_bounds, chroma_bounds)(rho_input)

        specification_current = [hue_current, value, chroma_new, code_current]

        with domain_range_scale("ignore"):
            x_current, y_current, _Y_current = _munsell_specification_to_xyY(
                specification_current
            )

        difference = euclidean_distance([x, y], [x_current, y_current])
        if difference < convergence_threshold:
            return from_range_10(
                np.array(specification_current),
                np.array([10, 10, chroma_scale, 10]),
            )

    # NOTE: This exception is likely never raised in practice: 300K iterations
    # with random numbers never reached this code path, it is kept for
    # consistency with the reference implementation.
    raise RuntimeError(  # pragma: no cover
        "Maximum outside iterations count reached without convergence!"
    )
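
# A descriptive note on the inversion scheme above: each outer iteration
# performs two one-dimensional refinements about the achromatic point,
# first rotating the hue angle until the polar angle "phi" of the candidate
# specification matches that of the input chromaticity, then scaling the
# chroma until the polar radius "rho" matches, each step driven by linear
# interpolation, or extrapolation, over the bracketing samples.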
def xyY_to_munsell_specification(xyY: ArrayLike) -> NDArray:
    """
    Convert from *CIE xyY* colourspace to *Munsell* *Colorlab* specification.

    Parameters
    ----------
    xyY
        *CIE xyY* colourspace array.

    Returns
    -------
    :class:`numpy.ndarray`
        *Munsell* *Colorlab* specification.

    Raises
    ------
    ValueError
        If the given *CIE xyY* colourspace array is not within MacAdam
        limits.
    RuntimeError
        If the maximum iterations count has been reached without converging
        to a result.

    Notes
    -----
    +-------------------+-----------------------+---------------+
    | **Domain**        | **Scale - Reference** | **Scale - 1** |
    +===================+=======================+===============+
    | ``xyY``           | [0, 1]                | [0, 1]        |
    +-------------------+-----------------------+---------------+

    +-------------------+-----------------------+---------------+
    | **Range**         | **Scale - Reference** | **Scale - 1** |
    +===================+=======================+===============+
    | ``specification`` | ``hue``    : [0, 10]  | [0, 1]        |
    |                   |                       |               |
    |                   | ``value``  : [0, 10]  | [0, 1]        |
    |                   |                       |               |
    |                   | ``chroma`` : [0, 50]  | [0, 1]        |
    |                   |                       |               |
    |                   | ``code``   : [0, 10]  | [0, 1]        |
    +-------------------+-----------------------+---------------+

    References
    ----------
    :cite:`Centore2014p`

    Examples
    --------
    >>> xyY = np.array([0.38736945, 0.35751656, 0.59362000])
    >>> xyY_to_munsell_specification(xyY)  # doctest: +ELLIPSIS
    array([ 4.2000019...,  8.0999999...,  5.2999996...,  6.        ])
    """

    xyY = as_float_array(xyY)
    shape = list(xyY.shape)

    specification = [
        _xyY_to_munsell_specification(a) for a in xyY.reshape([-1, 3])
    ]

    shape[-1] = 4

    return np.reshape(as_float_array(specification), shape)


def xyY_to_munsell_colour(
    xyY: ArrayLike,
    hue_decimals: Integer = 1,
    value_decimals: Integer = 1,
    chroma_decimals: Integer = 1,
) -> StrOrNDArray:
    """
    Convert from *CIE xyY* colourspace to *Munsell* colour.

    Parameters
    ----------
    xyY
        *CIE xyY* colourspace array.
    hue_decimals
        Hue formatting decimals.
    value_decimals
        Value formatting decimals.
    chroma_decimals
        Chroma formatting decimals.

    Returns
    -------
    :class:`str` or :class:`numpy.ndarray`
        *Munsell* colour.

    Notes
    -----
    +------------+-----------------------+---------------+
    | **Domain** | **Scale - Reference** | **Scale - 1** |
    +============+=======================+===============+
    | ``xyY``    | [0, 1]                | [0, 1]        |
    +------------+-----------------------+---------------+

    References
    ----------
    :cite:`Centorea`, :cite:`Centore2012a`

    Examples
    --------
    >>> xyY = np.array([0.38736945, 0.35751656, 0.59362000])
    >>> xyY_to_munsell_colour(xyY)
    '4.2YR 8.1/5.3'
    """

    specification = to_domain_10(
        xyY_to_munsell_specification(xyY), _munsell_scale_factor()
    )
    shape = list(specification.shape)
    decimals = (hue_decimals, value_decimals, chroma_decimals)

    munsell_colour = np.reshape(
        np.array(
            [
                munsell_specification_to_munsell_colour(a, *decimals)
                for a in specification.reshape([-1, 4])
            ]
        ),
        shape[:-1],
    )

    return str(munsell_colour) if shape == [4] else munsell_colour


def parse_munsell_colour(munsell_colour: str) -> NDArray:
    """
    Parse given *Munsell* colour and return an intermediate *Munsell*
    *Colorlab* specification.

    Parameters
    ----------
    munsell_colour
        *Munsell* colour.

    Returns
    -------
    :class:`numpy.ndarray`
        Intermediate *Munsell* *Colorlab* specification.

    Raises
    ------
    ValueError
        If the given specification is not a valid *Munsell Renotation System*
        colour specification.

    Examples
    --------
    >>> parse_munsell_colour('N5.2')
    array([ nan,  5.2,  nan,  nan])
    >>> parse_munsell_colour('0YR 2.0/4.0')
    array([ 0.,  2.,  4.,  6.])
    """

    match = re.match(MUNSELL_GRAY_PATTERN, munsell_colour, flags=re.IGNORECASE)
    if match:
        return tstack(
            [
                np.nan,
                match.group("value"),
                np.nan,
                np.nan,
            ]
        )

    match = re.match(
        MUNSELL_COLOUR_PATTERN, munsell_colour, flags=re.IGNORECASE
    )
    if match:
        return tstack(
            [
                match.group("hue"),
                match.group("value"),
                match.group("chroma"),
                MUNSELL_HUE_LETTER_CODES[match.group("letter").upper()],
            ]
        )

    raise ValueError(
        f'"{munsell_colour}" is not a valid "Munsell Renotation System" '
        f"colour specification!"
    )


def is_grey_munsell_colour(specification: ArrayLike) -> Boolean:
    """
    Return whether given *Munsell* *Colorlab* specification is a grey colour.

    Parameters
    ----------
    specification
        *Munsell* *Colorlab* specification.

    Returns
    -------
    :class:`bool`
        Whether specification is a grey colour.

    Examples
    --------
    >>> is_grey_munsell_colour(np.array([0.0, 2.0, 4.0, 6]))
    False
    >>> is_grey_munsell_colour(np.array([np.nan, 0.5, np.nan, np.nan]))
    True
    """

    specification = as_float_array(specification)

    specification = specification[~np.isnan(specification)]

    return is_numeric(as_float(specification))


def normalise_munsell_specification(specification: ArrayLike) -> NDArray:
    """
    Normalise given *Munsell* *Colorlab* specification.

    Parameters
    ----------
    specification
        *Munsell* *Colorlab* specification.

    Returns
    -------
    :class:`numpy.ndarray`
        Normalised *Munsell* *Colorlab* specification.

    Examples
    --------
    >>> normalise_munsell_specification(
    ...     np.array([0.0, 2.0, 4.0, 6]))
    array([ 10.,   2.,   4.,   7.])
    >>> normalise_munsell_specification(
    ...     np.array([np.nan, 0.5, np.nan, np.nan]))
    array([ nan,  0.5,  nan,  nan])
    """

    specification = as_float_array(specification)

    if is_grey_munsell_colour(specification):
        return specification * np.array([np.nan, 1, np.nan, np.nan])
    else:
        hue, value, chroma, code = specification

        if hue == 0:
            # 0YR is equivalent to 10R.
            hue, code = 10, (code + 1) % 10

        if chroma == 0:
            return tstack([np.nan, value, np.nan, np.nan])
        else:
            return tstack([hue, value, chroma, code])


def munsell_colour_to_munsell_specification(munsell_colour: str) -> NDArray:
    """
    Retrieve a normalised *Munsell* *Colorlab* specification from given
    *Munsell* colour.

    Parameters
    ----------
    munsell_colour
        *Munsell* colour.

    Returns
    -------
    :class:`numpy.ndarray`
        Normalised *Munsell* *Colorlab* specification.

    Examples
    --------
    >>> munsell_colour_to_munsell_specification('N5.2')
    array([ nan,  5.2,  nan,  nan])
    >>> munsell_colour_to_munsell_specification('0YR 2.0/4.0')
    array([ 10.,   2.,   4.,   7.])
    """

    return normalise_munsell_specification(
        parse_munsell_colour(munsell_colour)
    )


def munsell_specification_to_munsell_colour(
    specification: ArrayLike,
    hue_decimals: Integer = 1,
    value_decimals: Integer = 1,
    chroma_decimals: Integer = 1,
) -> str:
    """
    Convert from *Munsell* *Colorlab* specification to given *Munsell*
    colour.

    Parameters
    ----------
    specification
        *Munsell* *Colorlab* specification.
    hue_decimals
        Hue formatting decimals.
    value_decimals
        Value formatting decimals.
    chroma_decimals
        Chroma formatting decimals.

    Returns
    -------
    :class:`str`
        *Munsell* colour.

    Examples
    --------
    >>> munsell_specification_to_munsell_colour(
    ...     np.array([np.nan, 5.2, np.nan, np.nan]))
    'N5.2'
    >>> munsell_specification_to_munsell_colour(
    ...     np.array([10, 2.0, 4.0, 7]))
    '10.0R 2.0/4.0'
    """

    hue, value, chroma, code = tsplit(
        normalise_munsell_specification(specification)
    )

    if is_grey_munsell_colour(specification):
        return MUNSELL_GRAY_EXTENDED_FORMAT.format(value, value_decimals)
    else:
        hue = round(hue, hue_decimals)
        attest(
            0 <= hue <= 10,
            f'"{specification!r}" specification hue must be normalised to '
            f"domain [0, 10]!",
        )

        value = round(value, value_decimals)
        attest(
            0 <= value <= 10,
            f'"{specification!r}" specification value must be normalised to '
            f"domain [0, 10]!",
        )

        chroma = round(chroma, chroma_decimals)
        attest(
            2 <= chroma <= 50,
            f'"{specification!r}" specification chroma must be normalised to '
            f"domain [2, 50]!",
        )

        code_values = MUNSELL_HUE_LETTER_CODES.values()
        code = round(code, 1)
        attest(
            code in code_values,
            f'"{specification!r}" specification code must be one of '
            f'"{code_values}"!',
        )

        if value == 0:
            return MUNSELL_GRAY_EXTENDED_FORMAT.format(value, value_decimals)
        else:
            hue_letter = MUNSELL_HUE_LETTER_CODES.first_key_from_value(code)

            return MUNSELL_COLOUR_EXTENDED_FORMAT.format(
                hue,
                hue_decimals,
                hue_letter,
                value,
                value_decimals,
                chroma,
                chroma_decimals,
            )


def xyY_from_renotation(specification: ArrayLike) -> NDArray:
    """
    Return given existing *Munsell* *Colorlab* specification *CIE xyY*
    colourspace vector from *Munsell Renotation System* data.

    Parameters
    ----------
    specification
        *Munsell* *Colorlab* specification.

    Returns
    -------
    :class:`numpy.ndarray`
        *CIE xyY* colourspace vector.

    Raises
    ------
    ValueError
        If the given specification doesn't exist in *Munsell Renotation
        System* data.

    Examples
    --------
    >>> xyY_from_renotation(np.array([2.5, 0.2, 2.0, 4]))  # doctest: +ELLIPSIS
    array([ 0.71...,  1.41...,  0.23...])
    """

    specification = normalise_munsell_specification(specification)

    try:
        index = np.where(
            (_munsell_specifications() == specification).all(axis=-1)
        )

        return MUNSELL_COLOURS_ALL[int(index[0])][1]
    except Exception:
        raise ValueError(
            f'"{specification}" specification does not exist in '
            '"Munsell Renotation System" data!'
        )


def is_specification_in_renotation(specification: ArrayLike) -> Boolean:
    """
    Return whether given *Munsell* *Colorlab* specification is in
    *Munsell Renotation System* data.

    Parameters
    ----------
    specification
        *Munsell* *Colorlab* specification.

    Returns
    -------
    :class:`bool`
        Whether specification is in *Munsell Renotation System* data.

    Examples
    --------
    >>> is_specification_in_renotation(np.array([2.5, 0.2, 2.0, 4]))
    True
    >>> is_specification_in_renotation(np.array([64, 0.2, 2.0, 4]))
    False
    """

    try:
        xyY_from_renotation(specification)

        return True
    except ValueError:
        return False


def bounding_hues_from_renotation(hue_and_code: ArrayLike) -> NDArray:
    """
    Return for a given *Munsell* *Colorlab* specification hue and *Munsell*
    *Colorlab* specification code the two bounding hues from
    *Munsell Renotation System* data.

    Parameters
    ----------
    hue_and_code
        *Munsell* *Colorlab* specification hue and *Munsell* *Colorlab*
        specification code.

    Returns
    -------
    :class:`numpy.ndarray`
        Bounding hues.

    References
    ----------
    :cite:`Centore2014o`

    Examples
    --------
    >>> bounding_hues_from_renotation([3.2, 4])
    array([[ 2.5,  4. ],
           [ 5. ,  4. ]])

    # Coverage Doctests

    >>> bounding_hues_from_renotation([0.0, 1])
    array([[ 10.,   2.],
           [ 10.,   2.]])
    """

    hue, code = as_float_array(hue_and_code)

    hue_cw: Floating
    code_cw: Floating
    hue_ccw: Floating
    code_ccw: Floating

    if hue % 2.5 == 0:
        if hue == 0:
            hue_cw = 10
            code_cw = (code + 1) % 10
        else:
            hue_cw = hue
            code_cw = code
        hue_ccw = hue_cw
        code_ccw = code_cw
    else:
        hue_cw = 2.5 * np.floor(hue / 2.5)
        hue_ccw = (hue_cw + 2.5) % 10
        if hue_ccw == 0:
            hue_ccw = 10

        if hue_cw == 0:
            hue_cw = 10
            code_cw = (code + 1) % 10
            if code_cw == 0:
                code_cw = 10
        else:
            code_cw = code
        code_ccw = code

    return as_float_array([(hue_cw, code_cw), (hue_ccw, code_ccw)])


def hue_to_hue_angle(hue_and_code: ArrayLike) -> Floating:
    """
    Convert from the *Munsell* *Colorlab* specification hue and *Munsell*
    *Colorlab* specification code to hue angle in degrees.

    Parameters
    ----------
    hue_and_code
        *Munsell* *Colorlab* specification hue and *Munsell* *Colorlab*
        specification code.

    Returns
    -------
    :class:`numpy.floating`
        Hue angle in degrees.

    References
    ----------
    :cite:`Centore2014s`

    Examples
    --------
    >>> hue_to_hue_angle([3.2, 4])
    65.5
    """

    hue, code = as_float_array(hue_and_code)

    single_hue = ((17 - code) % 10 + (hue / 10) - 0.5) % 10

    hue_angle = LinearInterpolator(
        [0, 2, 3, 4, 5, 6, 8, 9, 10], [0, 45, 70, 135, 160, 225, 255, 315, 360]
    )(single_hue)

    return as_float_scalar(hue_angle)
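
# Worked example for the conversion above (arithmetic only): for hue 3.2
# with code 4 (GY),
#
#     single_hue = ((17 - 4) % 10 + 3.2 / 10 - 0.5) % 10 = 2.82,
#
# which falls between the breakpoints (2, 45) and (3, 70) of the piecewise
# linear mapping, so
#
#     hue_angle = 45 + 0.82 * (70 - 45) = 65.5,
#
# matching the doctest of "hue_to_hue_angle".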
def hue_angle_to_hue(hue_angle: Floating) -> NDArray:
    """
    Convert from hue angle in degrees to the *Munsell* *Colorlab*
    specification hue and code.

    Parameters
    ----------
    hue_angle
        Hue angle in degrees.

    Returns
    -------
    :class:`numpy.ndarray`
        (*Munsell* *Colorlab* specification hue, *Munsell* *Colorlab*
        specification code).

    References
    ----------
    :cite:`Centore2014t`

    Examples
    --------
    >>> hue_angle_to_hue(65.54)  # doctest: +ELLIPSIS
    array([ 3.216,  4.   ])
    """

    single_hue = LinearInterpolator(
        [0, 45, 70, 135, 160, 225, 255, 315, 360], [0, 2, 3, 4, 5, 6, 8, 9, 10]
    )(hue_angle)

    if single_hue <= 0.5:
        code = 7
    elif single_hue <= 1.5:
        code = 6
    elif single_hue <= 2.5:
        code = 5
    elif single_hue <= 3.5:
        code = 4
    elif single_hue <= 4.5:
        code = 3
    elif single_hue <= 5.5:
        code = 2
    elif single_hue <= 6.5:
        code = 1
    elif single_hue <= 7.5:
        code = 10
    elif single_hue <= 8.5:
        code = 9
    elif single_hue <= 9.5:
        code = 8
    else:
        code = 7

    hue = (10 * (single_hue % 1) + 5) % 10
    if hue == 0:
        hue = 10

    return tstack([hue, code])


def hue_to_ASTM_hue(hue_and_code: ArrayLike) -> Floating:
    """
    Convert from the *Munsell* *Colorlab* specification hue and *Munsell*
    *Colorlab* specification code to *ASTM* hue number.

    Parameters
    ----------
    hue_and_code
        *Munsell* *Colorlab* specification hue and *Munsell* *Colorlab*
        specification code.

    Returns
    -------
    :class:`numpy.floating`
        *ASTM* hue number.

    References
    ----------
    :cite:`Centore2014k`

    Examples
    --------
    >>> hue_to_ASTM_hue([3.2, 4])  # doctest: +ELLIPSIS
    33.2...
    """

    hue, code = as_float_array(hue_and_code)

    ASTM_hue = 10 * ((7 - code) % 10) + hue

    return 100 if ASTM_hue == 0 else ASTM_hue
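
# Worked example for the conversion above (arithmetic only): for hue 3.2
# with code 4 (GY),
#
#     ASTM_hue = 10 * ((7 - 4) % 10) + 3.2 = 33.2,
#
# matching the doctest of "hue_to_ASTM_hue".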
def interpolation_method_from_renotation_ovoid(
    specification: ArrayLike,
) -> Optional[Literal["Linear", "Radial"]]:
    """
    Return whether to use linear or radial interpolation when drawing ovoids
    through data points in the *Munsell Renotation System* data from given
    specification.

    Parameters
    ----------
    specification
        *Munsell* *Colorlab* specification.

    Returns
    -------
    :py:data:`None` or :class:`str`
        Interpolation method.

    References
    ----------
    :cite:`Centore2014l`

    Examples
    --------
    >>> interpolation_method_from_renotation_ovoid([2.5, 5.0, 12.0, 4])
    'Radial'
    """

    specification = normalise_munsell_specification(specification)

    interpolation_methods: Dict[
        Integer, Optional[Literal["Linear", "Radial"]]
    ] = {
        0: None,
        1: "Linear",
        2: "Radial",
    }

    if is_grey_munsell_colour(specification):
        # No interpolation needed for grey colours.
        interpolation_method = 0
    else:
        hue, value, chroma, code = specification

        attest(
            0 <= value <= 10,
            f'"{specification}" specification value must be normalised to '
            f"domain [0, 10]!",
        )

        attest(
            is_integer(value),
            f'"{specification}" specification value must be an integer!',
        )

        value = round(value)

        attest(
            2 <= chroma <= 50,
            f'"{specification}" specification chroma must be normalised to '
            f"domain [2, 50]!",
        )

        attest(
            abs(2 * (chroma / 2 - round(chroma / 2))) <= INTEGER_THRESHOLD,
            f'"{specification}" specification chroma must be an integer and '
            f"multiple of 2!",
        )

        chroma = 2 * round(chroma / 2)

        interpolation_method = 0

        # Standard Munsell Renotation System hue, no interpolation needed.
        if hue % 2.5 == 0:
            interpolation_method = 0

        ASTM_hue = hue_to_ASTM_hue([hue, code])

        if value == 1:
            if chroma == 2:
                if 15 < ASTM_hue < 30 or 60 < ASTM_hue < 85:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 4:
                if 12.5 < ASTM_hue < 27.5 or 57.5 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 6:
                if 55 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 8:
                if 67.5 < ASTM_hue < 77.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 10:
                # NOTE: This condition is likely never "True" while producing
                # a valid "Munsell Specification" in practice: 1M iterations
                # with random numbers never reached this code path while
                # producing a valid "Munsell Specification".
                if 72.5 < ASTM_hue < 77.5:  # pragma: no cover
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 2:
            if chroma == 2:
                if 15 < ASTM_hue < 27.5 or 77.5 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 4:
                if 12.5 < ASTM_hue < 30 or 62.5 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 6:
                if 7.5 < ASTM_hue < 22.5 or 62.5 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 8:
                if 7.5 < ASTM_hue < 15 or 60 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 10:
                if 65 < ASTM_hue < 77.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 3:
            if chroma == 2:
                if 10 < ASTM_hue < 37.5 or 65 < ASTM_hue < 85:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 4:
                if 5 < ASTM_hue < 37.5 or 55 < ASTM_hue < 72.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma in (6, 8, 10):
                if 7.5 < ASTM_hue < 37.5 or 57.5 < ASTM_hue < 82.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 12:
                if 7.5 < ASTM_hue < 42.5 or 57.5 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 4:
            if chroma in (2, 4):
                if 7.5 < ASTM_hue < 42.5 or 57.5 < ASTM_hue < 85:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma in (6, 8):
                if 7.5 < ASTM_hue < 40 or 57.5 < ASTM_hue < 82.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 10:
                if 7.5 < ASTM_hue < 40 or 57.5 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 5:
            if chroma == 2:
                if 5 < ASTM_hue < 37.5 or 55 < ASTM_hue < 85:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma in (4, 6, 8):
                if 2.5 < ASTM_hue < 42.5 or 55 < ASTM_hue < 85:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 10:
                if 2.5 < ASTM_hue < 42.5 or 55 < ASTM_hue < 82.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 6:
            if chroma in (2, 4):
                if 5 < ASTM_hue < 37.5 or 55 < ASTM_hue < 87.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 6:
                if 5 < ASTM_hue < 42.5 or 57.5 < ASTM_hue < 87.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma in (8, 10):
                if 5 < ASTM_hue < 42.5 or 60 < ASTM_hue < 85:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma in (12, 14):
                if 5 < ASTM_hue < 42.5 or 60 < ASTM_hue < 82.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 16:
                if 5 < ASTM_hue < 42.5 or 60 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 7:
            if chroma in (2, 4, 6):
                if 5 < ASTM_hue < 42.5 or 60 < ASTM_hue < 85:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 8:
                if 5 < ASTM_hue < 42.5 or 60 < ASTM_hue < 82.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 10:
                if (
                    30 < ASTM_hue < 42.5
                    or 5 < ASTM_hue < 25
                    or 60 < ASTM_hue < 82.5
                ):
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma == 12:
                if (
                    30 < ASTM_hue < 42.5
                    or 7.5 < ASTM_hue < 27.5
                    or 80 < ASTM_hue < 82.5
                ):
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 14:
                if (
                    32.5 < ASTM_hue < 40
                    or 7.5 < ASTM_hue < 15
                    or 80 < ASTM_hue < 82.5
                ):
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 8:
            if chroma in (2, 4, 6, 8, 10, 12):
                if 5 < ASTM_hue < 40 or 60 < ASTM_hue < 85:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 14:
                if (
                    32.5 < ASTM_hue < 40
                    or 5 < ASTM_hue < 15
                    or 60 < ASTM_hue < 85
                ):
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 9:
            if chroma in (2, 4):
                if 5 < ASTM_hue < 40 or 55 < ASTM_hue < 80:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma in (6, 8, 10, 12, 14):
                if 5 < ASTM_hue < 42.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            elif chroma >= 16:
                if 35 < ASTM_hue < 42.5:
                    interpolation_method = 2
                else:
                    interpolation_method = 1
            else:  # pragma: no cover
                interpolation_method = 1
        elif value == 10:
            # Ideal white, no interpolation needed.
            interpolation_method = 0

    return interpolation_methods[interpolation_method]
# doctest: +ELLIPSIS\n array([ 0.31006..., 0.31616...])\n \"\"\"\n\n specification = normalise_munsell_specification(specification)\n\n if is_grey_munsell_colour(specification):\n return CCS_ILLUMINANT_MUNSELL\n else:\n hue, value, chroma, code = specification\n\n attest(\n 1 <= value <= 9,\n f'\"{specification}\" specification value must be normalised to '\n f\"domain [1, 9]!\",\n )\n\n attest(\n is_integer(value),\n f'\"{specification}\" specification value must be an integer!',\n )\n\n value = round(value)\n\n attest(\n 2 <= chroma <= 50,\n f'\"{specification}\" specification chroma must be normalised to '\n f\"domain [2, 50]!\",\n )\n\n attest(\n abs(2 * (chroma / 2 - round(chroma / 2))) <= INTEGER_THRESHOLD,\n f'\"{specification}\" specification chroma must be an integer and '\n f\"multiple of 2!\",\n )\n\n chroma = 2 * round(chroma / 2)\n\n # Checking if renotation data is available without interpolation using\n # given threshold.\n threshold = 1e-7\n if (\n abs(hue) < threshold\n or abs(hue - 2.5) < threshold\n or abs(hue - 5) < threshold\n or abs(hue - 7.5) < threshold\n or abs(hue - 10) < threshold\n ):\n hue = 2.5 * round(hue / 2.5)\n\n x, y, _Y = xyY_from_renotation([hue, value, chroma, code])\n\n return tstack([x, y])\n\n hue_cw, hue_ccw = bounding_hues_from_renotation([hue, code])\n hue_minus, code_minus = hue_cw\n hue_plus, code_plus = hue_ccw\n\n x_grey, y_grey = CCS_ILLUMINANT_MUNSELL\n\n specification_minus = (hue_minus, value, chroma, code_minus)\n x_minus, y_minus, Y_minus = xyY_from_renotation(specification_minus)\n rho_minus, phi_minus, _z_minus = cartesian_to_cylindrical(\n [x_minus - x_grey, y_minus - y_grey, Y_minus]\n )\n phi_minus = np.degrees(phi_minus)\n\n specification_plus = (hue_plus, value, chroma, code_plus)\n x_plus, y_plus, Y_plus = xyY_from_renotation(specification_plus)\n rho_plus, phi_plus, _z_plus = cartesian_to_cylindrical(\n [x_plus - x_grey, y_plus - y_grey, Y_plus]\n )\n phi_plus = np.degrees(phi_plus)\n\n hue_angle_lower = hue_to_hue_angle([hue_minus, code_minus])\n hue_angle = hue_to_hue_angle([hue, code])\n hue_angle_upper = hue_to_hue_angle([hue_plus, code_plus])\n\n if phi_minus - phi_plus > 180:\n phi_plus += 360\n\n if hue_angle_lower == 0:\n hue_angle_lower = 360\n\n if hue_angle_lower > hue_angle_upper:\n if hue_angle_lower > hue_angle:\n hue_angle_lower -= 360\n else:\n hue_angle_lower -= 360\n hue_angle -= 360\n\n interpolation_method = interpolation_method_from_renotation_ovoid(\n specification\n )\n\n attest(\n interpolation_method is not None,\n f\"Interpolation method must be one of: \"\n f\"\\\"{', '.join(['Linear', 'Radial'])}\\\"\",\n )\n\n hue_angle_lower_upper = np.squeeze(\n as_float_array([hue_angle_lower, hue_angle_upper])\n )\n\n if interpolation_method == \"Linear\":\n x_minus_plus = np.squeeze([x_minus, x_plus])\n y_minus_plus = np.squeeze([y_minus, y_plus])\n\n x = LinearInterpolator(hue_angle_lower_upper, x_minus_plus)(\n hue_angle\n )\n y = LinearInterpolator(hue_angle_lower_upper, y_minus_plus)(\n hue_angle\n )\n elif interpolation_method == \"Radial\":\n rho_minus_plus = np.squeeze([rho_minus, rho_plus])\n phi_minus_plus = np.squeeze([phi_minus, phi_plus])\n\n rho = as_float_array(\n LinearInterpolator(hue_angle_lower_upper, rho_minus_plus)(\n hue_angle\n )\n )\n phi = as_float_array(\n LinearInterpolator(hue_angle_lower_upper, phi_minus_plus)(\n hue_angle\n )\n )\n\n rho_phi = np.squeeze([rho, np.radians(phi)])\n x, y = tsplit(\n polar_to_cartesian(rho_phi) + tstack([x_grey, y_grey])\n )\n\n return tstack([x, 
y])\n\n\ndef LCHab_to_munsell_specification(LCHab: ArrayLike) -> NDArray:\n \"\"\"\n Convert from *CIE L\\\\*C\\\\*Hab* colourspace to approximate *Munsell*\n *Colorlab* specification.\n\n Parameters\n ----------\n LCHab\n *CIE L\\\\*C\\\\*Hab* colourspace array.\n\n Returns\n -------\n :class:`numpy.ndarray`\n *Munsell* *Colorlab* specification.\n\n References\n ----------\n :cite:`Centore2014u`\n\n Examples\n --------\n >>> LCHab = np.array([100, 17.50664796, 244.93046842])\n >>> LCHab_to_munsell_specification(LCHab) # doctest: +ELLIPSIS\n array([ 8.0362412..., 10. , 3.5013295..., 1. ])\n \"\"\"\n\n L, C, Hab = tsplit(LCHab)\n\n if Hab == 0:\n code = 8\n elif Hab <= 36:\n code = 7\n elif Hab <= 72:\n code = 6\n elif Hab <= 108:\n code = 5\n elif Hab <= 144:\n code = 4\n elif Hab <= 180:\n code = 3\n elif Hab <= 216:\n code = 2\n elif Hab <= 252:\n code = 1\n elif Hab <= 288:\n code = 10\n elif Hab <= 324:\n code = 9\n else:\n code = 8\n\n hue = LinearInterpolator([0, 36], [0, 10])(Hab % 36)\n if hue == 0:\n hue = 10\n\n value = L / 10\n chroma = C / 5\n\n return tstack([hue, value, chroma, code])\n\n\ndef maximum_chroma_from_renotation(\n hue_and_value_and_code: ArrayLike,\n) -> Floating:\n \"\"\"\n Return the maximum *Munsell* chroma from *Munsell Renotation System* data\n using given *Munsell* *Colorlab* specification hue, *Munsell* *Colorlab*\n specification value and *Munsell* *Colorlab* specification code.\n\n Parameters\n ----------\n hue_and_value_and_code\n *Munsell* *Colorlab* specification hue, *Munsell* *Colorlab*\n specification value and *Munsell* *Colorlab* specification code.\n\n Returns\n -------\n :class:`numpy.floating`\n Maximum chroma.\n\n References\n ----------\n :cite:`Centore2014r`\n\n Examples\n --------\n >>> maximum_chroma_from_renotation([2.5, 5, 5])\n 14.0\n \"\"\"\n\n hue, value, code = as_float_array(hue_and_value_and_code)\n\n # Ideal white, no chroma.\n if value >= 9.99:\n return 0\n\n attest(\n 1 <= value <= 10,\n f'\"{value}\" value must be normalised to domain [1, 10]!',\n )\n\n if value % 1 == 0:\n value_minus = value\n value_plus = value\n else:\n value_minus = np.floor(value)\n value_plus = value_minus + 1\n\n hue_cw, hue_ccw = bounding_hues_from_renotation([hue, code])\n hue_cw, code_cw = hue_cw\n hue_ccw, code_ccw = hue_ccw\n\n maximum_chromas = _munsell_maximum_chromas_from_renotation()\n specification_for_indexes = [chroma[0] for chroma in maximum_chromas]\n\n ma_limit_mcw = maximum_chromas[\n specification_for_indexes.index((hue_cw, value_minus, code_cw))\n ][1]\n ma_limit_mccw = maximum_chromas[\n specification_for_indexes.index((hue_ccw, value_minus, code_ccw))\n ][1]\n\n if value_plus <= 9:\n ma_limit_pcw = maximum_chromas[\n specification_for_indexes.index((hue_cw, value_plus, code_cw))\n ][1]\n ma_limit_pccw = maximum_chromas[\n specification_for_indexes.index((hue_ccw, value_plus, code_ccw))\n ][1]\n max_chroma = min(\n [ma_limit_mcw, ma_limit_mccw, ma_limit_pcw, ma_limit_pccw]\n )\n else:\n L = as_float_scalar(luminance_ASTMD1535(value))\n L9 = as_float_scalar(luminance_ASTMD1535(9))\n L10 = as_float_scalar(luminance_ASTMD1535(10))\n\n max_chroma = min(\n [\n as_float_scalar(\n LinearInterpolator([L9, L10], [ma_limit_mcw, 0])(L)\n ),\n as_float_scalar(\n LinearInterpolator([L9, L10], [ma_limit_mccw, 0])(L)\n ),\n ]\n )\n\n return max_chroma\n\n\ndef munsell_specification_to_xy(specification: ArrayLike) -> NDArray:\n \"\"\"\n Convert given *Munsell* *Colorlab* specification to *CIE xy* chromaticity\n coordinates by interpolating over 
*Munsell Renotation System* data.\n\n Parameters\n ----------\n specification\n *Munsell* *Colorlab* specification.\n\n Returns\n -------\n :class:`numpy.ndarray`\n *CIE xy* chromaticity coordinates.\n\n References\n ----------\n :cite:`Centore2014q`\n\n Examples\n --------\n >>> munsell_specification_to_xy([2.1, 8.0, 17.9, 4])\n ... # doctest: +ELLIPSIS\n array([ 0.4400632..., 0.5522428...])\n >>> munsell_specification_to_xy([np.nan, 8, np.nan, np.nan])\n ... # doctest: +ELLIPSIS\n array([ 0.31006..., 0.31616...])\n \"\"\"\n\n specification = normalise_munsell_specification(specification)\n\n if is_grey_munsell_colour(specification):\n return CCS_ILLUMINANT_MUNSELL\n else:\n hue, value, chroma, code = specification\n\n attest(\n 0 <= value <= 10,\n f'\"{specification}\" specification value must be normalised to '\n f\"domain [0, 10]!\",\n )\n\n attest(\n is_integer(value),\n f'\"{specification}\" specification value must be an integer!',\n )\n\n value = round(value)\n\n if chroma % 2 == 0:\n chroma_minus = chroma_plus = chroma\n else:\n chroma_minus = 2 * np.floor(chroma / 2)\n chroma_plus = chroma_minus + 2\n\n if chroma_minus == 0:\n # Smallest chroma ovoid collapses to illuminant chromaticity\n # coordinates.\n x_minus, y_minus = CCS_ILLUMINANT_MUNSELL\n else:\n x_minus, y_minus = xy_from_renotation_ovoid(\n [hue, value, chroma_minus, code]\n )\n\n x_plus, y_plus = xy_from_renotation_ovoid(\n [hue, value, chroma_plus, code]\n )\n\n if chroma_minus == chroma_plus:\n x = x_minus\n y = y_minus\n else:\n chroma_minus_plus = np.squeeze([chroma_minus, chroma_plus])\n x_minus_plus = np.squeeze([x_minus, x_plus])\n y_minus_plus = np.squeeze([y_minus, y_plus])\n\n x = LinearInterpolator(chroma_minus_plus, x_minus_plus)(chroma)\n y = LinearInterpolator(chroma_minus_plus, y_minus_plus)(chroma)\n\n return tstack([x, y])\n" ]
[ [ "numpy.max", "numpy.array", "numpy.sin", "numpy.isnan", "numpy.min", "numpy.degrees", "numpy.radians", "numpy.arange", "numpy.ravel", "numpy.sqrt", "numpy.around", "numpy.squeeze", "numpy.floor" ] ]
tylercroberts/PartyParrot
[ "89894ea054a466f2ec1c5a620fd312cdc2008a77" ]
[ "party_parrot/parrot_engine.py" ]
[ "\"\"\"\n\n Engine\n ~~~~~~\n\n\"\"\"\nimport pyperclip\nfrom sklearn.utils import shuffle\nfrom string import ascii_lowercase as chars\nfrom party_parrot._slack_parrots import parrots\nfrom party_parrot._utils import dict_str_replace\n\n\nclass ParrotLang(object):\n def __init__(self):\n self._default_key = 1\n # `_get_dict` creates and caches `self._dicts`; do not overwrite the\n # cache with the single mapping it returns.\n self._get_dict(self._default_key)\n\n def _get_dict(self, encryption_key):\n if not hasattr(self, '_dicts') or encryption_key not in self._dicts:\n self._dicts = dict() # reset to constrain memory load.\n parrots_shuf = shuffle(parrots, random_state=encryption_key)\n parrot_dict = {char: par for char, par in zip(chars, parrots_shuf)}\n self._dicts[encryption_key] = parrot_dict\n return self._dicts[encryption_key]\n\n def _mapper(self, string, direction, key, copy):\n if key is None:\n key = self._default_key\n elif not isinstance(key, int):\n raise TypeError(\"`key` must be an integer.\")\n\n string = string.lower()\n d = self._get_dict(encryption_key=key)\n if direction == \"forward\":\n out = \"\".join(map(lambda ch: d.get(ch, ch), string))\n else:\n out = dict_str_replace(string, d=d, swap_key_value=True)\n\n if copy:\n pyperclip.copy(out)\n return out\n\n def to_parrot(self, string, key=None, copy=False):\n return self._mapper(string, direction=\"forward\", key=key, copy=copy)\n\n def from_parrot(self, string, key=None, copy=False):\n return self._mapper(string, direction=\"reverse\", key=key, copy=copy)\n" ]
[ [ "sklearn.utils.shuffle" ] ]
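A hypothetical usage sketch for the ParrotLang entry above: it assumes the party_parrot package and its bundled parrot glyph list are installed, and that dict_str_replace inverts the forward mapping exactly; the sample string is arbitrary.

# Hypothetical round trip through ParrotLang; assumes party_parrot is
# installed and that dict_str_replace inverts the forward mapping.
from party_parrot.parrot_engine import ParrotLang

lang = ParrotLang()
encoded = lang.to_parrot("party time", key=7)  # letters -> parrot glyphs
decoded = lang.from_parrot(encoded, key=7)     # glyphs -> letters
assert decoded == "party time"                 # non-letters pass through unchanged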
ectucker1/info-diffusion
[ "36a78861d393b7f7fb2a9d5e8dff8ccdfe0baf33" ]
[ "indiff/data/combine_partials.py" ]
[ "from pathlib import Path\nimport click\nimport pandas as pd\n\n\n@click.command()\ndef main():\n partial_folder = Path('data/raw')\n dataset_files = partial_folder.glob('*.h5')\n\n partials = []\n\n for file in dataset_files:\n print('Reading file', file)\n partials.append(pd.read_hdf(file))\n\n print('Combining...')\n combined = pd.concat(partials)\n\n combined.to_hdf('data/combined.h5', 'dataset')\n\n\nif __name__ == '__main__':\n main()\n" ]
[ [ "pandas.read_hdf", "pandas.concat" ] ]
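The click command above reduces to a short pandas pipeline; this condensed sketch assumes the same data/raw/*.h5 layout the script reads.

# Condensed equivalent of combine_partials.py; assumes data/raw/*.h5 exist.
from pathlib import Path

import pandas as pd

partials = [pd.read_hdf(path) for path in Path('data/raw').glob('*.h5')]
pd.concat(partials).to_hdf('data/combined.h5', 'dataset')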
jayadeepsasikumartud/Mask_RCNN
[ "3d2fa2a701b328547cd6a2ed51c5d43ad41085d0" ]
[ "mrcnn/visualize.py" ]
[ "\"\"\"\nMask R-CNN\nDisplay and Visualization Functions.\n\nCopyright (c) 2017 Matterport, Inc.\nLicensed under the MIT License (see LICENSE for details)\nWritten by Waleed Abdulla\n\"\"\"\n\nimport os\nimport sys\nimport random\nimport itertools\nimport colorsys\n\nimport cv2\nimport numpy as np\nfrom skimage.measure import find_contours\nimport matplotlib.pyplot as plt\nfrom matplotlib import patches, lines\nfrom matplotlib.patches import Polygon\nimport IPython.display\n\n# Root directory of the project\nROOT_DIR = os.path.abspath(\"../\")\n\n# Import Mask RCNN\nsys.path.append(ROOT_DIR) # To find local version of the library\nfrom mrcnn import utils\n\n\n############################################################\n# Visualization\n############################################################\n\ndef display_images(images, titles=None, cols=4, cmap=None, norm=None,\n interpolation=None):\n \"\"\"Display the given set of images, optionally with titles.\n images: list or array of image tensors in HWC format.\n titles: optional. A list of titles to display with each image.\n cols: number of images per row\n cmap: Optional. Color map to use. For example, \"Blues\".\n norm: Optional. A Normalize instance to map values to colors.\n interpolation: Optional. Image interpolation to use for display.\n \"\"\"\n titles = titles if titles is not None else [\"\"] * len(images)\n rows = len(images) // cols + 1\n plt.figure(figsize=(14, 14 * rows // cols))\n i = 1\n for image, title in zip(images, titles):\n plt.subplot(rows, cols, i)\n plt.title(title, fontsize=9)\n plt.axis('off')\n plt.imshow(image.astype(np.uint8), cmap=cmap,\n norm=norm, interpolation=interpolation)\n i += 1\n plt.show()\n\n\ndef random_colors(N, bright=True):\n \"\"\"\n Generate random colors.\n To get visually distinct colors, generate them in HSV space then\n convert to RGB.\n \"\"\"\n brightness = 1.0 if bright else 0.7\n hsv = [(i / N, 1, brightness) for i in range(N)]\n colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))\n random.shuffle(colors)\n return colors\n\n\ndef apply_mask(image, mask, color, alpha=0.5):\n \"\"\"Apply the given mask to the image.\n \"\"\"\n for c in range(3):\n image[:, :, c] = np.where(mask == 1,\n image[:, :, c] *\n (1 - alpha) + alpha * color[c] * 255,\n image[:, :, c])\n return image\n\n\ndef display_instances(image, boxes, masks, class_ids, class_names,\n scores=None, title=\"\",\n figsize=(16, 16), ax=None,\n show_mask=True, show_bbox=True,\n colors=None, captions=None):\n \"\"\"\n boxes: [num_instance, (y1, x1, y2, x2, class_id)] in image coordinates.\n masks: [height, width, num_instances]\n class_ids: [num_instances]\n class_names: list of class names of the dataset\n scores: (optional) confidence scores for each box\n title: (optional) Figure title\n show_mask, show_bbox: To show masks and bounding boxes or not\n figsize: (optional) the size of the image\n colors: (optional) An array or colors to use with each object\n captions: (optional) A list of strings to use as captions for each object\n \"\"\"\n # Number of instances\n N = boxes.shape[0]\n if not N:\n print(\"\\n*** No instances to display *** \\n\")\n else:\n assert boxes.shape[0] == masks.shape[-1] == class_ids.shape[0]\n\n # If no axis is passed, create one and automatically call show()\n auto_show = False\n if not ax:\n _, ax = plt.subplots(1, figsize=figsize)\n auto_show = True\n\n # Generate random colors\n colors = colors or random_colors(N)\n\n # Show area outside image boundaries.\n height, width = image.shape[:2]\n 
ax.set_ylim(height + 10, -10)\n ax.set_xlim(-10, width + 10)\n ax.axis('off')\n ax.set_title(title)\n\n masked_image = image.astype(np.uint32).copy()\n for i in range(N):\n color = colors[i]\n\n # Bounding box\n if not np.any(boxes[i]):\n # Skip this instance. Has no bbox. Likely lost in image cropping.\n continue\n y1, x1, y2, x2 = boxes[i]\n if show_bbox:\n p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2,\n alpha=0.7, linestyle=\"dashed\",\n edgecolor=color, facecolor='none')\n ax.add_patch(p)\n\n # Label\n if not captions:\n class_id = class_ids[i]\n score = scores[i] if scores is not None else None\n label = class_names[class_id]\n caption = \"{} {:.3f}\".format(label, score) if score else label\n else:\n caption = captions[i]\n ax.text(x1, y1 + 8, caption,\n color='w', size=11, backgroundcolor=\"none\")\n\n # Mask\n mask = masks[:, :, i]\n if show_mask:\n masked_image = apply_mask(masked_image, mask, color)\n\n # Mask Polygon\n # Pad to ensure proper polygons for masks that touch image edges.\n padded_mask = np.zeros(\n (mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8)\n padded_mask[1:-1, 1:-1] = mask\n contours = find_contours(padded_mask, 0.5)\n for verts in contours:\n # Subtract the padding and flip (y, x) to (x, y)\n verts = np.fliplr(verts) - 1\n p = Polygon(verts, facecolor=\"none\", edgecolor=color)\n ax.add_patch(p)\n ax.imshow(masked_image.astype(np.uint8))\n if auto_show:\n plt.show()\n\ndef write_image(image, boxes, masks, class_ids, class_names,\n scores=None, title=\"\",\n figsize=(16, 16), ax=None,\n show_mask=True, show_bbox=True,\n colors=None, captions=None, target_file_name=\"\"):\n \"\"\"\n boxes: [num_instance, (y1, x1, y2, x2, class_id)] in image coordinates.\n masks: [height, width, num_instances]\n class_ids: [num_instances]\n class_names: list of class names of the dataset\n scores: (optional) confidence scores for each box\n title: (optional) Figure title\n show_mask, show_bbox: To show masks and bounding boxes or not\n figsize: (optional) the size of the image\n colors: (optional) An array or colors to use with each object\n captions: (optional) A list of strings to use as captions for each object\n \"\"\"\n # Number of instances\n N = boxes.shape[0]\n if not N:\n print(\"\\n*** No instances to display *** \\n\")\n else:\n assert boxes.shape[0] == masks.shape[-1] == class_ids.shape[0]\n\n # Generate random colors\n colors = colors or random_colors(N)\n\n # masked_image = image.astype(np.uint32).copy()\n masked_image = image.copy()\n for i in range(N):\n color = colors[i]\n\n # Bounding box\n if not np.any(boxes[i]):\n # Skip this instance. Has no bbox. Likely lost in image cropping.\n continue\n # Mask\n mask = masks[:, :, i]\n if show_mask:\n masked_image = apply_mask(masked_image, mask, color)\n cv2.imwrite(target_file_name, masked_image)\n\n\ndef display_differences(image,\n gt_box, gt_class_id, gt_mask,\n pred_box, pred_class_id, pred_score, pred_mask,\n class_names, title=\"\", ax=None,\n show_mask=True, show_box=True,\n iou_threshold=0.5, score_threshold=0.5):\n \"\"\"Display ground truth and prediction instances on the same image.\"\"\"\n # Match predictions to ground truth\n gt_match, pred_match, overlaps = utils.compute_matches(\n gt_box, gt_class_id, gt_mask,\n pred_box, pred_class_id, pred_score, pred_mask,\n iou_threshold=iou_threshold, score_threshold=score_threshold)\n # Ground truth = green. 
Predictions = red\n colors = [(0, 1, 0, .8)] * len(gt_match)\\\n + [(1, 0, 0, 1)] * len(pred_match)\n # Concatenate GT and predictions\n class_ids = np.concatenate([gt_class_id, pred_class_id])\n scores = np.concatenate([np.zeros([len(gt_match)]), pred_score])\n boxes = np.concatenate([gt_box, pred_box])\n masks = np.concatenate([gt_mask, pred_mask], axis=-1)\n # Captions per instance show score/IoU\n captions = [\"\" for m in gt_match] + [\"{:.2f} / {:.2f}\".format(\n pred_score[i],\n (overlaps[i, int(pred_match[i])]\n if pred_match[i] > -1 else overlaps[i].max()))\n for i in range(len(pred_match))]\n # Set title if not provided\n title = title or \"Ground Truth and Detections\\n GT=green, pred=red, captions: score/IoU\"\n # Display\n display_instances(\n image,\n boxes, masks, class_ids,\n class_names, scores, ax=ax,\n show_bbox=show_box, show_mask=show_mask,\n colors=colors, captions=captions,\n title=title)\n\n\ndef draw_rois(image, rois, refined_rois, mask, class_ids, class_names, limit=10):\n \"\"\"\n anchors: [n, (y1, x1, y2, x2)] list of anchors in image coordinates.\n proposals: [n, 4] the same anchors but refined to fit objects better.\n \"\"\"\n masked_image = image.copy()\n\n # Pick random anchors in case there are too many.\n ids = np.arange(rois.shape[0], dtype=np.int32)\n ids = np.random.choice(\n ids, limit, replace=False) if ids.shape[0] > limit else ids\n\n fig, ax = plt.subplots(1, figsize=(12, 12))\n if rois.shape[0] > limit:\n plt.title(\"Showing {} random ROIs out of {}\".format(\n len(ids), rois.shape[0]))\n else:\n plt.title(\"{} ROIs\".format(len(ids)))\n\n # Show area outside image boundaries.\n ax.set_ylim(image.shape[0] + 20, -20)\n ax.set_xlim(-50, image.shape[1] + 20)\n ax.axis('off')\n\n for i, id in enumerate(ids):\n color = np.random.rand(3)\n class_id = class_ids[id]\n # ROI\n y1, x1, y2, x2 = rois[id]\n p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2,\n edgecolor=color if class_id else \"gray\",\n facecolor='none', linestyle=\"dashed\")\n ax.add_patch(p)\n # Refined ROI\n if class_id:\n ry1, rx1, ry2, rx2 = refined_rois[id]\n p = patches.Rectangle((rx1, ry1), rx2 - rx1, ry2 - ry1, linewidth=2,\n edgecolor=color, facecolor='none')\n ax.add_patch(p)\n # Connect the top-left corners of the anchor and proposal for easy visualization\n ax.add_line(lines.Line2D([x1, rx1], [y1, ry1], color=color))\n\n # Label\n label = class_names[class_id]\n ax.text(rx1, ry1 + 8, \"{}\".format(label),\n color='w', size=11, backgroundcolor=\"none\")\n\n # Mask\n m = utils.unmold_mask(mask[id], rois[id]\n [:4].astype(np.int32), image.shape)\n masked_image = apply_mask(masked_image, m, color)\n\n ax.imshow(masked_image)\n\n # Print stats\n print(\"Positive ROIs: \", class_ids[class_ids > 0].shape[0])\n print(\"Negative ROIs: \", class_ids[class_ids == 0].shape[0])\n print(\"Positive Ratio: {:.2f}\".format(\n class_ids[class_ids > 0].shape[0] / class_ids.shape[0]))\n\n\n# TODO: Replace with matplotlib equivalent?\ndef draw_box(image, box, color):\n \"\"\"Draw 3-pixel width bounding boxes on the given image array.\n color: list of 3 int values for RGB.\n \"\"\"\n y1, x1, y2, x2 = box\n image[y1:y1 + 2, x1:x2] = color\n image[y2:y2 + 2, x1:x2] = color\n image[y1:y2, x1:x1 + 2] = color\n image[y1:y2, x2:x2 + 2] = color\n return image\n\n\ndef display_top_masks(image, mask, class_ids, class_names, limit=4):\n \"\"\"Display the given image and the top few class masks.\"\"\"\n to_display = []\n titles = []\n to_display.append(image)\n titles.append(\"H x 
W={}x{}\".format(image.shape[0], image.shape[1]))\n # Pick top prominent classes in this image\n unique_class_ids = np.unique(class_ids)\n mask_area = [np.sum(mask[:, :, np.where(class_ids == i)[0]])\n for i in unique_class_ids]\n top_ids = [v[0] for v in sorted(zip(unique_class_ids, mask_area),\n key=lambda r: r[1], reverse=True) if v[1] > 0]\n # Generate images and titles\n for i in range(limit):\n class_id = top_ids[i] if i < len(top_ids) else -1\n # Pull masks of instances belonging to the same class.\n m = mask[:, :, np.where(class_ids == class_id)[0]]\n m = np.sum(m * np.arange(1, m.shape[-1] + 1), -1)\n to_display.append(m)\n titles.append(class_names[class_id] if class_id != -1 else \"-\")\n display_images(to_display, titles=titles, cols=limit + 1, cmap=\"Blues_r\")\n\n\ndef plot_precision_recall(AP, precisions, recalls):\n \"\"\"Draw the precision-recall curve.\n\n AP: Average precision at IoU >= 0.5\n precisions: list of precision values\n recalls: list of recall values\n \"\"\"\n # Plot the Precision-Recall curve\n _, ax = plt.subplots(1)\n ax.set_title(\"Precision-Recall Curve. AP@50 = {:.3f}\".format(AP))\n ax.set_ylim(0, 1.1)\n ax.set_xlim(0, 1.1)\n _ = ax.plot(recalls, precisions)\n\n\ndef plot_overlaps(gt_class_ids, pred_class_ids, pred_scores,\n overlaps, class_names, threshold=0.5):\n \"\"\"Draw a grid showing how ground truth objects are classified.\n gt_class_ids: [N] int. Ground truth class IDs\n pred_class_id: [N] int. Predicted class IDs\n pred_scores: [N] float. The probability scores of predicted classes\n overlaps: [pred_boxes, gt_boxes] IoU overlaps of predictions and GT boxes.\n class_names: list of all class names in the dataset\n threshold: Float. The prediction probability required to predict a class\n \"\"\"\n gt_class_ids = gt_class_ids[gt_class_ids != 0]\n pred_class_ids = pred_class_ids[pred_class_ids != 0]\n\n plt.figure(figsize=(12, 10))\n plt.imshow(overlaps, interpolation='nearest', cmap=plt.cm.Blues)\n plt.yticks(np.arange(len(pred_class_ids)),\n [\"{} ({:.2f})\".format(class_names[int(id)], pred_scores[i])\n for i, id in enumerate(pred_class_ids)])\n plt.xticks(np.arange(len(gt_class_ids)),\n [class_names[int(id)] for id in gt_class_ids], rotation=90)\n\n thresh = overlaps.max() / 2.\n for i, j in itertools.product(range(overlaps.shape[0]),\n range(overlaps.shape[1])):\n text = \"\"\n if overlaps[i, j] > threshold:\n text = \"match\" if gt_class_ids[j] == pred_class_ids[i] else \"wrong\"\n color = (\"white\" if overlaps[i, j] > thresh\n else \"black\" if overlaps[i, j] > 0\n else \"grey\")\n plt.text(j, i, \"{:.3f}\\n{}\".format(overlaps[i, j], text),\n horizontalalignment=\"center\", verticalalignment=\"center\",\n fontsize=9, color=color)\n\n plt.tight_layout()\n plt.xlabel(\"Ground Truth\")\n plt.ylabel(\"Predictions\")\n\n\ndef draw_boxes(image, boxes=None, refined_boxes=None,\n masks=None, captions=None, visibilities=None,\n title=\"\", ax=None):\n \"\"\"Draw bounding boxes and segmentation masks with different\n customizations.\n\n boxes: [N, (y1, x1, y2, x2, class_id)] in image coordinates.\n refined_boxes: Like boxes, but draw with solid lines to show\n that they're the result of refining 'boxes'.\n masks: [N, height, width]\n captions: List of N titles to display on each box\n visibilities: (optional) List of values of 0, 1, or 2. 
Determine how\n prominent each bounding box should be.\n title: An optional title to show over the image\n ax: (optional) Matplotlib axis to draw on.\n \"\"\"\n # Number of boxes\n assert boxes is not None or refined_boxes is not None\n N = boxes.shape[0] if boxes is not None else refined_boxes.shape[0]\n\n # Matplotlib Axis\n if not ax:\n _, ax = plt.subplots(1, figsize=(12, 12))\n\n # Generate random colors\n colors = random_colors(N)\n\n # Show area outside image boundaries.\n margin = image.shape[0] // 10\n ax.set_ylim(image.shape[0] + margin, -margin)\n ax.set_xlim(-margin, image.shape[1] + margin)\n ax.axis('off')\n\n ax.set_title(title)\n\n masked_image = image.astype(np.uint32).copy()\n for i in range(N):\n # Box visibility\n visibility = visibilities[i] if visibilities is not None else 1\n if visibility == 0:\n color = \"gray\"\n style = \"dotted\"\n alpha = 0.5\n elif visibility == 1:\n color = colors[i]\n style = \"dotted\"\n alpha = 1\n elif visibility == 2:\n color = colors[i]\n style = \"solid\"\n alpha = 1\n\n # Boxes\n if boxes is not None:\n if not np.any(boxes[i]):\n # Skip this instance. Has no bbox. Likely lost in cropping.\n continue\n y1, x1, y2, x2 = boxes[i]\n p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2,\n alpha=alpha, linestyle=style,\n edgecolor=color, facecolor='none')\n ax.add_patch(p)\n\n # Refined boxes\n if refined_boxes is not None and visibility > 0:\n ry1, rx1, ry2, rx2 = refined_boxes[i].astype(np.int32)\n p = patches.Rectangle((rx1, ry1), rx2 - rx1, ry2 - ry1, linewidth=2,\n edgecolor=color, facecolor='none')\n ax.add_patch(p)\n # Connect the top-left corners of the anchor and proposal\n if boxes is not None:\n ax.add_line(lines.Line2D([x1, rx1], [y1, ry1], color=color))\n\n # Captions\n if captions is not None:\n caption = captions[i]\n # If there are refined boxes, display captions on them\n if refined_boxes is not None:\n y1, x1, y2, x2 = ry1, rx1, ry2, rx2\n ax.text(x1, y1, caption, size=11, verticalalignment='top',\n color='w', backgroundcolor=\"none\",\n bbox={'facecolor': color, 'alpha': 0.5,\n 'pad': 2, 'edgecolor': 'none'})\n\n # Masks\n if masks is not None:\n mask = masks[:, :, i]\n masked_image = apply_mask(masked_image, mask, color)\n # Mask Polygon\n # Pad to ensure proper polygons for masks that touch image edges.\n padded_mask = np.zeros(\n (mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8)\n padded_mask[1:-1, 1:-1] = mask\n contours = find_contours(padded_mask, 0.5)\n for verts in contours:\n # Subtract the padding and flip (y, x) to (x, y)\n verts = np.fliplr(verts) - 1\n p = Polygon(verts, facecolor=\"none\", edgecolor=color)\n ax.add_patch(p)\n ax.imshow(masked_image.astype(np.uint8))\n\n\ndef display_table(table):\n \"\"\"Display values in a table format.\n table: an iterable of rows, and each row is an iterable of values.\n \"\"\"\n html = \"\"\n for row in table:\n row_html = \"\"\n for col in row:\n row_html += \"<td>{:40}</td>\".format(str(col))\n html += \"<tr>\" + row_html + \"</tr>\"\n html = \"<table>\" + html + \"</table>\"\n IPython.display.display(IPython.display.HTML(html))\n\n\ndef display_weight_stats(model):\n \"\"\"Scans all the weights in the model and returns a list of tuples\n that contain stats about each weight.\n \"\"\"\n layers = model.get_trainable_layers()\n table = [[\"WEIGHT NAME\", \"SHAPE\", \"MIN\", \"MAX\", \"STD\"]]\n for l in layers:\n weight_values = l.get_weights() # list of Numpy arrays\n weight_tensors = l.weights # list of TF tensors\n for i, w in 
enumerate(weight_values):\n weight_name = weight_tensors[i].name\n # Detect problematic layers. Exclude biases of conv layers.\n alert = \"\"\n if w.min() == w.max() and not (l.__class__.__name__ == \"Conv2D\" and i == 1):\n alert += \"<span style='color:red'>*** dead?</span>\"\n if np.abs(w.min()) > 1000 or np.abs(w.max()) > 1000:\n alert += \"<span style='color:red'>*** Overflow?</span>\"\n # Add row\n table.append([\n weight_name + alert,\n str(w.shape),\n \"{:+9.4f}\".format(w.min()),\n \"{:+10.4f}\".format(w.max()),\n \"{:+9.4f}\".format(w.std()),\n ])\n display_table(table)\n" ]
[ [ "numpy.random.choice", "numpy.random.rand", "numpy.where", "matplotlib.patches.Rectangle", "numpy.concatenate", "matplotlib.pyplot.subplots", "numpy.arange", "matplotlib.pyplot.tight_layout", "matplotlib.pyplot.axis", "matplotlib.pyplot.subplot", "matplotlib.lines.Line2D", "numpy.zeros", "matplotlib.pyplot.title", "matplotlib.patches.Polygon", "matplotlib.pyplot.figure", "matplotlib.pyplot.show", "numpy.fliplr", "matplotlib.pyplot.xlabel", "numpy.any", "matplotlib.pyplot.ylabel", "numpy.unique", "matplotlib.pyplot.imshow" ] ]
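The docstring of display_instances in the entry above spells out the expected shapes; the sketch below is a minimal smoke test with synthetic arrays, where only the shapes matter and every value is made up.

# Synthetic smoke test for visualize.display_instances; dummy values
# chosen only to satisfy the documented shapes.
import numpy as np

from mrcnn import visualize

image = np.zeros((128, 128, 3), dtype=np.uint8)
boxes = np.array([[20, 20, 90, 90]])             # [N, (y1, x1, y2, x2)]
masks = np.zeros((128, 128, 1), dtype=np.uint8)  # [height, width, N]
masks[30:80, 30:80, 0] = 1
class_ids = np.array([1])
class_names = ['BG', 'object']
scores = np.array([0.95])

visualize.display_instances(image, boxes, masks, class_ids, class_names, scores)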
zbzhu99/NGSIM_Imitation
[ "0af6ce327e4fc4da32eddb08ba0bba5403dac24e" ]
[ "ILSwiss/run_scripts/ma_adv_irl_exp_script.py" ]
[ "import yaml\r\nimport argparse\r\nimport numpy as np\r\nimport os\r\nimport sys\r\nimport inspect\r\nimport random\r\nimport pickle\r\n\r\ncurrentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\r\nparentdir = os.path.dirname(currentdir)\r\nsys.path.insert(0, parentdir)\r\nprint(sys.path)\r\n\r\nimport gym\r\nfrom rlkit.envs import get_env, get_envs\r\n\r\nimport rlkit.torch.utils.pytorch_util as ptu\r\nfrom rlkit.launchers.launcher_util import setup_logger, set_seed\r\nfrom rlkit.data_management.env_replay_buffer import EnvReplayBuffer, PolicyReplayBuffer\r\nfrom rlkit.torch.common.networks import FlattenMlp\r\nfrom rlkit.torch.common.policies import ReparamTanhMultivariateGaussianPolicy\r\nfrom rlkit.torch.algorithms.sac.sac_alpha import (\r\n SoftActorCritic,\r\n) # SAC Auto alpha version\r\nfrom rlkit.torch.algorithms.adv_irl.disc_models.simple_disc_models import MLPDisc\r\nfrom rlkit.torch.algorithms.ma_adv_irl.ma_adv_irl import MAAdvIRL\r\nfrom rlkit.torch.algorithms.adv_irl.adv_irl import AdvIRL\r\nfrom rlkit.envs.wrappers import ProxyEnv, NormalizedBoxActEnv, ObsScaledEnv, EPS\r\nfrom rlkit.samplers import MultiagentPathSampler\r\nfrom smarts_imitation.utils.env_split import split_vehicles\r\n\r\n\r\ndef experiment(variant):\r\n with open(\"demos_listing.yaml\", \"r\") as f:\r\n listings = yaml.load(f.read(), Loader=yaml.FullLoader)\r\n\r\n demos_path = listings[variant[\"expert_name\"]][\"file_paths\"][variant[\"expert_idx\"]]\r\n \"\"\"\r\n Buffer input format\r\n \"\"\"\r\n # buffer_save_dict = joblib.load(expert_demos_path)\r\n # expert_replay_buffer = buffer_save_dict['train']\r\n # obs_mean, obs_std = buffer_save_dict['obs_mean'], buffer_save_dict['obs_std']\r\n # acts_mean, acts_std = buffer_save_dict['acts_mean'], buffer_save_dict['acts_std']\r\n # obs_min, obs_max = buffer_save_dict['obs_min'], buffer_save_dict['obs_max']\r\n # if 'minmax_env_with_demo_stats' in variant.keys():\r\n # if (variant['minmax_env_with_demo_stats']) and not (variant['scale_env_with_demo_stats']):\r\n # assert 'norm_train' in buffer_save_dict.keys()\r\n # expert_replay_buffer = buffer_save_dict['norm_train']\r\n \"\"\"\r\n PKL input format\r\n \"\"\"\r\n print(\"demos_path\", demos_path)\r\n with open(demos_path, \"rb\") as f:\r\n traj_list = pickle.load(f)\r\n if variant[\"traj_num\"] > 0:\r\n traj_list = random.sample(traj_list, variant[\"traj_num\"])\r\n\r\n train_split_path = listings[variant[\"expert_name\"]][\"train_split\"][0]\r\n with open(train_split_path, \"rb\") as f:\r\n # train_vehicles is an OrderedDict\r\n train_vehicles = pickle.load(f)\r\n\r\n eval_split_path = listings[variant[\"expert_name\"]][\"eval_split\"][0]\r\n with open(eval_split_path, \"rb\") as f:\r\n # eval_vehicles is an OrderedDict\r\n eval_vehicles = pickle.load(f)\r\n\r\n env_specs = variant[\"env_specs\"]\r\n env_specs[\"env_kwargs\"][\"agent_number\"] = 1\r\n s_name = list(eval_vehicles.keys())[0]\r\n t_name = list(eval_vehicles[s_name].keys())[0]\r\n env = get_env(env_specs, scenario_name=s_name, traffic_name=t_name)\r\n env.seed(env_specs[\"eval_env_seed\"])\r\n\r\n print(\"\\n\\nEnv: {}\".format(env_specs[\"env_creator\"]))\r\n print(\"kwargs: {}\".format(env_specs[\"env_kwargs\"]))\r\n print(\"Obs Space: {}\".format(env.observation_space_n))\r\n print(\"Act Space: {}\\n\\n\".format(env.action_space_n))\r\n\r\n expert_replay_buffer = PolicyReplayBuffer(\r\n variant[\"adv_irl_params\"].get(\r\n \"expert_buffer_size\", variant[\"adv_irl_params\"][\"replay_buffer_size\"]\r\n 
),\r\n env,\r\n random_seed=np.random.randint(10000),\r\n )\r\n\r\n replay_buffer = PolicyReplayBuffer(\r\n variant[\"adv_irl_params\"][\"replay_buffer_size\"],\r\n env,\r\n random_seed=np.random.randint(10000),\r\n )\r\n variant[\"adv_irl_params\"][\"replay_buffer\"] = replay_buffer\r\n\r\n variant[\"adv_irl_params\"].pop(\"expert_buffer_size\")\r\n variant[\"adv_irl_params\"].pop(\"replay_buffer_size\")\r\n\r\n obs_space_n = env.observation_space_n\r\n act_space_n = env.action_space_n\r\n\r\n policy_mapping_dict = dict(\r\n zip(env.agent_ids, [\"policy_0\" for _ in range(env.n_agents)])\r\n )\r\n default_policy_name = \"policy_0\"\r\n\r\n policy_trainer_n = {}\r\n policy_n = {}\r\n disc_model_n = {}\r\n\r\n for agent_id in env.agent_ids:\r\n policy_id = policy_mapping_dict.get(agent_id)\r\n if policy_id not in policy_trainer_n:\r\n print(f\"Create {policy_id} for {agent_id} ...\")\r\n obs_space = obs_space_n[agent_id]\r\n act_space = act_space_n[agent_id]\r\n assert isinstance(obs_space, gym.spaces.Box)\r\n assert isinstance(act_space, gym.spaces.Box)\r\n assert len(obs_space.shape) == 1\r\n assert len(act_space.shape) == 1\r\n\r\n obs_dim = obs_space.shape[0]\r\n action_dim = act_space.shape[0]\r\n\r\n # build the policy models\r\n net_size = variant[\"policy_net_size\"]\r\n num_hidden = variant[\"policy_num_hidden_layers\"]\r\n qf1 = FlattenMlp(\r\n hidden_sizes=num_hidden * [net_size],\r\n input_size=obs_dim + action_dim,\r\n output_size=1,\r\n )\r\n qf2 = FlattenMlp(\r\n hidden_sizes=num_hidden * [net_size],\r\n input_size=obs_dim + action_dim,\r\n output_size=1,\r\n )\r\n vf = FlattenMlp(\r\n hidden_sizes=num_hidden * [net_size],\r\n input_size=obs_dim,\r\n output_size=1,\r\n )\r\n policy = ReparamTanhMultivariateGaussianPolicy(\r\n hidden_sizes=num_hidden * [net_size],\r\n obs_dim=obs_dim,\r\n action_dim=action_dim,\r\n )\r\n\r\n # build the discriminator model\r\n disc_model = MLPDisc(\r\n obs_dim + action_dim\r\n if not variant[\"adv_irl_params\"][\"state_only\"]\r\n else 2 * obs_dim,\r\n num_layer_blocks=variant[\"disc_num_blocks\"],\r\n hid_dim=variant[\"disc_hid_dim\"],\r\n hid_act=variant[\"disc_hid_act\"],\r\n use_bn=variant[\"disc_use_bn\"],\r\n clamp_magnitude=variant[\"disc_clamp_magnitude\"],\r\n )\r\n\r\n # set up the algorithm\r\n trainer = SoftActorCritic(\r\n policy=policy,\r\n qf1=qf1,\r\n qf2=qf2,\r\n vf=vf,\r\n action_space=env.action_space_n[agent_id],\r\n **variant[\"sac_params\"],\r\n )\r\n\r\n policy_trainer_n[policy_id] = trainer\r\n policy_n[policy_id] = policy\r\n disc_model_n[policy_id] = disc_model\r\n else:\r\n print(f\"Use existing {policy_id} for {agent_id} ...\")\r\n\r\n env_wrapper = ProxyEnv # Identical wrapper\r\n for act_space in act_space_n.values():\r\n if isinstance(act_space, gym.spaces.Box):\r\n env_wrapper = NormalizedBoxActEnv\r\n break\r\n\r\n if variant[\"scale_env_with_demo_stats\"]:\r\n obs = np.vstack(\r\n [\r\n traj_list[i][k][\"observations\"]\r\n for i in range(len(traj_list))\r\n for k in traj_list[i].keys()\r\n ]\r\n )\r\n obs_mean, obs_std = np.mean(obs, axis=0), np.std(obs, axis=0)\r\n\r\n _env_wrapper = env_wrapper\r\n env_wrapper = lambda *args, **kwargs: ObsScaledEnv(\r\n _env_wrapper(*args, **kwargs),\r\n obs_mean=obs_mean,\r\n obs_std=obs_std,\r\n )\r\n for i in range(len(traj_list)):\r\n for k in traj_list[i].keys():\r\n traj_list[i][k][\"observations\"] = (\r\n traj_list[i][k][\"observations\"] - obs_mean\r\n ) / (obs_std + EPS)\r\n traj_list[i][k][\"next_observations\"] = (\r\n traj_list[i][k][\"next_observations\"] - 
obs_mean\r\n ) / (obs_std + EPS)\r\n\r\n env = env_wrapper(env)\r\n\r\n for i in range(len(traj_list)):\r\n expert_replay_buffer.add_path(traj_list[i], env=env)\r\n\r\n print(\r\n \"Load {} trajectories, {} samples\".format(\r\n len(traj_list), expert_replay_buffer.num_steps_can_sample()\r\n )\r\n )\r\n\r\n train_splitted_vehicles, train_real_env_num = split_vehicles(\r\n train_vehicles, env_specs[\"training_env_specs\"][\"env_num\"]\r\n )\r\n train_env_nums = {\r\n scenario_name: {\r\n traffic_name: len(vehicles) for traffic_name, vehicles in traffics.items()\r\n }\r\n for scenario_name, traffics in train_splitted_vehicles.items()\r\n }\r\n print(\"training env nums: {}\".format(train_env_nums))\r\n env_specs[\"training_env_specs\"][\"env_num\"] = train_real_env_num\r\n env_specs[\"training_env_specs\"][\"wait_num\"] = min(\r\n train_real_env_num, env_specs[\"training_env_specs\"][\"wait_num\"]\r\n )\r\n training_env = get_envs(\r\n env_specs,\r\n env_wrapper,\r\n splitted_vehicles=train_splitted_vehicles,\r\n **env_specs[\"training_env_specs\"],\r\n )\r\n\r\n eval_splitted_vehicles, eval_real_env_num = split_vehicles(\r\n eval_vehicles, env_specs[\"eval_env_specs\"][\"env_num\"]\r\n )\r\n eval_env_nums = {\r\n scenario_name: {\r\n traffic_name: len(vehicles) for traffic_name, vehicles in traffics.items()\r\n }\r\n for scenario_name, traffics in eval_splitted_vehicles.items()\r\n }\r\n print(\"eval env nums: {}\".format(eval_env_nums))\r\n env_specs[\"eval_env_specs\"][\"env_num\"] = eval_real_env_num\r\n env_specs[\"eval_env_specs\"][\"wait_num\"] = min(\r\n eval_real_env_num, env_specs[\"eval_env_specs\"][\"wait_num\"]\r\n )\r\n eval_env = get_envs(\r\n env_specs,\r\n env_wrapper,\r\n splitted_vehicles=eval_splitted_vehicles,\r\n **env_specs[\"eval_env_specs\"],\r\n )\r\n eval_car_num = [\r\n eval_env.sub_envs_info[env_id].vehicle_num\r\n for env_id in range(eval_real_env_num)\r\n ]\r\n\r\n variant[\"adv_irl_params\"][\"eval_sampler_func\"] = MultiagentPathSampler\r\n\r\n algorithm = MAAdvIRL(\r\n env=env,\r\n training_env=training_env,\r\n eval_env=eval_env,\r\n exploration_policy_n=policy_n,\r\n policy_mapping_dict=policy_mapping_dict,\r\n discriminator_n=disc_model_n,\r\n policy_trainer_n=policy_trainer_n,\r\n expert_replay_buffer=expert_replay_buffer,\r\n eval_car_num=eval_car_num,\r\n **variant[\"adv_irl_params\"],\r\n )\r\n\r\n if ptu.gpu_enabled():\r\n algorithm.to(ptu.device)\r\n algorithm.train()\r\n\r\n return 1\r\n\r\n\r\nif __name__ == \"__main__\":\r\n # Arguments\r\n parser = argparse.ArgumentParser()\r\n parser.add_argument(\"-e\", \"--experiment\", help=\"experiment specification file\")\r\n parser.add_argument(\"-g\", \"--gpu\", help=\"gpu id\", type=int, default=0)\r\n args = parser.parse_args()\r\n with open(args.experiment, \"r\") as spec_file:\r\n spec_string = spec_file.read()\r\n exp_specs = yaml.load(spec_string, Loader=yaml.FullLoader)\r\n\r\n # make all seeds the same.\r\n exp_specs[\"env_specs\"][\"eval_env_seed\"] = exp_specs[\"env_specs\"][\r\n \"training_env_seed\"\r\n ] = exp_specs[\"seed\"]\r\n\r\n exp_suffix = \"\"\r\n exp_suffix = \"--gp-{}--rs-{}--trajnum-{}\".format(\r\n exp_specs[\"adv_irl_params\"][\"grad_pen_weight\"],\r\n exp_specs[\"sac_params\"][\"reward_scale\"],\r\n format(exp_specs[\"traj_num\"]),\r\n )\r\n\r\n if not exp_specs[\"adv_irl_params\"][\"no_terminal\"]:\r\n exp_suffix = \"--terminal\" + exp_suffix\r\n\r\n if exp_specs[\"scale_env_with_demo_stats\"]:\r\n exp_suffix = \"--scale\" + exp_suffix\r\n\r\n if exp_specs[\"using_gpus\"] > 
0:\r\n print(\"\\n\\nUSING GPU\\n\\n\")\r\n ptu.set_gpu_mode(True, args.gpu)\r\n\r\n exp_id = exp_specs[\"exp_id\"]\r\n exp_prefix = exp_specs[\"exp_name\"]\r\n seed = exp_specs[\"seed\"]\r\n set_seed(seed)\r\n\r\n exp_prefix = exp_prefix + exp_suffix\r\n setup_logger(exp_prefix=exp_prefix, exp_id=exp_id, variant=exp_specs, seed=seed)\r\n\r\n experiment(exp_specs)\r\n" ]
[ [ "numpy.std", "numpy.random.randint", "numpy.mean" ] ]
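The run script above fixes the layout of demos_listing.yaml through the keys it reads (file_paths, train_split and eval_split under each expert name). A matching listing could be generated as below; the expert name and paths are placeholders, not files shipped with the repository.

# Sketch of the demos_listing.yaml structure the script expects; all
# names and paths here are placeholders.
import yaml

listings = {
    "smarts_expert": {                            # variant["expert_name"]
        "file_paths": ["demos/expert_0.pkl"],     # indexed by expert_idx
        "train_split": ["demos/train_vehicles.pkl"],
        "eval_split": ["demos/eval_vehicles.pkl"],
    }
}
with open("demos_listing.yaml", "w") as f:
    yaml.safe_dump(listings, f)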
C0deZLee/bosch-hackathon
[ "1cc6f5c63ed88d121daed1cf6701ef5e45e8563e" ]
[ "Bosch/message/vec2img.py" ]
[ "# Simplification: assume one of the axes is 0\nimport numpy as np\nfrom PIL import Image\n\nsize = 100\n\n\ndef getRange(pts, ax):\n mx = pts[0][ax]\n mi = pts[0][ax]\n for p in pts:\n if p[ax] < mi:\n mi = p[ax]\n if p[ax] > mx:\n mx = p[ax]\n return mx - mi\n\n\ndef pts2flatten(pts):\n # Drop the axis with the smallest range and keep the other two.\n ret = []\n rg = [getRange(pts, i) for i in range(3)]\n deli = rg.index(min(rg))\n print(\"deli:\", deli)\n rsvi = [i for i in range(3) if i != deli]\n for p in pts:\n ret.append([p[rsvi[0]], p[rsvi[1]]])\n return np.array(ret)\n\n\ndef pts2image(pts):\n pts = np.array(pts)\n x_range = [min(pts[:, 0]), max(pts[:, 0])]\n y_range = [min(pts[:, 1]), max(pts[:, 1])]\n x_w = int((x_range[1] - x_range[0]) / size)\n y_w = int((y_range[1] - y_range[0]) / size)\n\n # uint8 + mode 'L' so PIL can actually serialise the array.\n arr = np.zeros((y_w + 1, x_w + 1), dtype=np.uint8)\n print(arr.shape)\n # print(len(pts))\n for p in pts:\n x = int((p[0] - x_range[0]) / size)\n y = int((p[1] - y_range[0]) / size)\n # print(x,y)\n arr[y][x] = 255\n # for i in range(1,min(len(arr),len(arr[0]))):\n # arr[i][i] = 177\n print(arr.shape)\n img = Image.fromarray(arr, 'L')\n img.save('tmp.png')\n\n\ndef corse(arr, x, y, val):\n # Recursively spread intensity to the four neighbours, fading by 50 per\n # step, so each point leaves a coarse blob instead of a single pixel.\n if val <= 0:\n return arr\n w, h = arr.shape\n if 0 <= x < w and 0 <= y < h:\n arr[x][y] = min(255, arr[x][y] + val)\n corse(arr, x - 1, y, val - 50)\n corse(arr, x, y - 1, val - 50)\n corse(arr, x + 1, y, val - 50)\n corse(arr, x, y + 1, val - 50)\n\n\ndef pos2mnistlike(x, y):\n size = 28\n arr = np.zeros((size, size))\n x_range = [min(x), max(x)]\n y_range = [min(y), max(y)]\n x_w = (x_range[1] - x_range[0]) / (size - 1)\n y_w = (y_range[1] - y_range[0]) / (size - 1)\n\n for i in range(len(x)):\n _x = int((x[i] - x_range[0]) / x_w)\n _y = int((y[i] - y_range[0]) / y_w)\n # arr[_x][_y] = min(arr[_x][_y] + 50, 255)\n corse(arr, _x, _y, 100)\n return arr\n\n\ndef getNumAns(pts):\n # pytesseract is only needed here, so import it lazily.\n import pytesseract\n\n image = Image.open('tmp.jpg')\n code = pytesseract.image_to_string(image)\n return code\n" ]
[ [ "numpy.array", "numpy.zeros" ] ]
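Of the helpers above, pos2mnistlike is the most self-contained: it rasterises a 2-D point cloud onto a 28x28 grid, giving each point a fading footprint via corse. A hedged usage sketch, with the import path assumed from the file location:

# Hypothetical use of pos2mnistlike; assumes vec2img.py is importable as a
# module and that the points span a non-zero range on both axes.
import numpy as np
from PIL import Image

from message.vec2img import pos2mnistlike

xs = np.random.uniform(0.0, 10.0, 200)
ys = np.random.uniform(0.0, 10.0, 200)
arr = pos2mnistlike(xs, ys)  # 28x28 array of intensities in [0, 255]
Image.fromarray(arr.astype(np.uint8), 'L').save('cloud.png')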
satvik007/AI_Lab
[ "cfd95ffabd46c07bfff0a4da46026ba44492fd8e" ]
[ "Week4/1.py" ]
[ "from __future__ import print_function\nimport numpy as np\nimport random\n\n\nclass Environment:\n\n def __init__(self, n):\n self.arr = np.arange(1, n * n + 1).reshape((n, n))\n print(self.arr)\n self.n = n\n self.zero = self.find_zero()\n\n def find_zero(self):\n for i in range(0, self.n):\n for j in range(0, self.n):\n if self.arr[i][j] == self.n * self.n:\n return i, j\n raise EnvironmentError\n\n def d(self, s):\n # Taxicab distance of the blank tile to the bottom-right corner.\n return abs(self.n - s[0] - 1) + abs(self.n - s[1] - 1)\n\n def parity(self):\n res = 0\n for i in range(0, self.n * self.n):\n for j in range(i + 1, self.n * self.n):\n x1 = i // self.n\n y1 = i % self.n\n x2 = j // self.n\n y2 = j % self.n\n\n if self.arr[x2][y2] < self.arr[x1][y1]:\n res += 1\n\n return (res + self.d(self.zero)) % 2\n\n def move_up(self, s):\n v1 = self.zero[0]\n v2 = self.zero[1]\n if v1 == 0:\n return False, s\n else:\n s[v1][v2] = s[v1 - 1][v2]\n s[v1 - 1][v2] = self.n * self.n\n self.zero = v1 - 1, v2\n return True, s\n\n def move_down(self, s):\n v1 = self.zero[0]\n v2 = self.zero[1]\n if v1 == self.n - 1:\n return False, s\n else:\n s[v1][v2] = s[v1 + 1][v2]\n s[v1 + 1][v2] = self.n * self.n\n self.zero = v1 + 1, v2\n return True, s\n\n def move_right(self, s):\n v1 = self.zero[0]\n v2 = self.zero[1]\n if v2 == self.n - 1:\n return False, s\n else:\n s[v1][v2] = s[v1][v2 + 1]\n s[v1][v2 + 1] = self.n * self.n\n self.zero = v1, v2 + 1\n return True, s\n\n def move_left(self, s):\n v1 = self.zero[0]\n v2 = self.zero[1]\n if v2 == 0:\n return False, s\n else:\n s[v1][v2] = s[v1][v2 - 1]\n s[v1][v2 - 1] = self.n * self.n\n self.zero = v1, v2 - 1\n return True, s\n\n def testing(self, steps):\n for i in range(0, steps):\n j = random.randint(0, 3) # randint is inclusive; the four moves are 0-3\n if j == 0:\n print(\"\\nmoving up\")\n _, self.arr = self.move_up(self.arr)\n print(self.arr)\n if j == 1:\n print(\"\\nmoving right\")\n _, self.arr = self.move_right(self.arr)\n print(self.arr)\n if j == 2:\n print(\"\\nmoving down\")\n _, self.arr = self.move_down(self.arr)\n print(self.arr)\n if j == 3:\n print(\"\\nmoving left\")\n _, self.arr = self.move_left(self.arr)\n print(self.arr)\n\n\nenv = Environment(4)\nenv.testing(20)\n" ]
[ [ "numpy.arange" ] ]
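The parity method above implements the classic n-puzzle solvability invariant: the board's inversion count plus the blank tile's taxicab distance to the bottom-right corner is conserved modulo 2 by every legal move, since a slide flips both terms' parity together. A quick check of that invariant, assuming the class as defined above:

# Invariant check: random legal moves never change parity(); the solved
# board has zero inversions and the blank in its corner, hence parity 0.
env = Environment(3)
assert env.parity() == 0  # solved board
env.testing(15)           # shuffle with random legal moves
assert env.parity() == 0  # parity is conserved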
simitii/Bongard-LOGO
[ "45a0ab244c809b32bcba139b7273b8ec5aa0708c" ]
[ "Bongard-LOGO_Baselines/models/meta_baseline.py" ]
[ "# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.\n#\n# This work is licensed under the MIT License.\n# To view a copy of this license, visit https://opensource.org/licenses/MIT\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport models\nimport utils\nfrom .models import register\n\n\n@register('meta-baseline')\nclass MetaBaseline(nn.Module):\n\n def __init__(self, encoder, encoder_args={}, method='cos',\n temp=10., temp_learnable=True):\n super().__init__()\n self.encoder = models.make(encoder, **encoder_args)\n self.method = method\n\n if temp_learnable:\n self.temp = nn.Parameter(torch.tensor(temp))\n else:\n self.temp = temp\n\n def forward(self, x_shot, x_query, **kwargs):\n shot_shape = x_shot.shape[:-3]\n query_shape = x_query.shape[:-3]\n img_shape = x_shot.shape[-3:]\n\n x_shot = x_shot.view(-1, *img_shape)\n x_query = x_query.view(-1, *img_shape)\n x_tot = self.encoder(torch.cat([x_shot, x_query], dim=0))\n x_shot, x_query = x_tot[:len(x_shot)], x_tot[-len(x_query):]\n x_shot = x_shot.view(*shot_shape, -1)\n x_query = x_query.view(*query_shape, -1)\n\n if self.method == 'cos':\n x_shot = x_shot.mean(dim=-2)\n x_shot = F.normalize(x_shot, dim=-1) # [ep_per_batch, way, feature_len]\n x_query = F.normalize(x_query, dim=-1) # [ep_per_batch, way * query, feature_len]\n metric = 'dot'\n elif self.method == 'sqr':\n x_shot = x_shot.mean(dim=-2)\n metric = 'sqr'\n\n logits = utils.compute_logits(\n x_query, x_shot, metric=metric, temp=self.temp) # [ep_per_batch, way * query, way]\n return logits\n\n" ]
[ [ "torch.nn.functional.normalize", "torch.cat", "torch.tensor" ] ]
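The tensor bookkeeping in MetaBaseline.forward is easiest to follow with concrete shapes. In this sketch, 'convnet4' is an assumed registered encoder name and 3x84x84 an assumed image size; the episode layout follows the shape comments in forward (4 episodes of 5-way 1-shot with 15 queries per class):

# Hedged shape walk-through for MetaBaseline; encoder name and image size
# are assumptions, only the episode layout is taken from the code above.
import torch

model = MetaBaseline(encoder='convnet4')
x_shot = torch.randn(4, 5, 1, 3, 84, 84)  # [ep_per_batch, way, shot, C, H, W]
x_query = torch.randn(4, 75, 3, 84, 84)   # [ep_per_batch, way * query, C, H, W]
logits = model(x_shot, x_query)           # -> [ep_per_batch, way * query, way] = [4, 75, 5]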
lxuechen/fast-dpsgd
[ "bcd920f81cb8501d16d4e953133bedba86029f56", "bcd920f81cb8501d16d4e953133bedba86029f56" ]
[ "owkindp.py", "fast_torch_dp.py" ]
[ "'''\nCode for Grad-CNN implementations\n'''\n\nimport time\n\nimport torch\nimport torch.nn.functional as F\nfrom gradcnn import crb, make_optimizer\nfrom torch import nn, optim\n\nimport data\nimport utils\nfrom pytorch import get_data\n\n\nclass MNISTNet(crb.Module):\n def __init__(self, **_):\n super().__init__()\n self.conv1 = crb.Conv2d(1, 16, 8, 2, padding=3)\n self.conv2 = crb.Conv2d(16, 32, 4, 2)\n self.fc1 = crb.Linear(32 * 4 * 4, 32)\n self.fc2 = crb.Linear(32, 10)\n\n def forward(self, x):\n # x of shape [B, 1, 28, 28]\n x = F.relu(self.conv1(x)) # -> [B, 16, 14, 14]\n x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13]\n x = F.relu(self.conv2(x)) # -> [B, 32, 5, 5]\n x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4]\n x = x.view(-1, 32 * 4 * 4) # -> [B, 512]\n x = F.relu(self.fc1(x)) # -> [B, 32]\n x = self.fc2(x) # -> [B, 10]\n return x\n\n\nclass FFNN(crb.Module):\n def __init__(self, **_):\n super().__init__()\n self.fc1 = crb.Linear(104, 50)\n self.fc2 = crb.Linear(50, 2)\n\n def forward(self, x):\n out = self.fc1(x)\n out = F.relu(out)\n out = self.fc2(out)\n return out\n\n\nclass Logistic(crb.Module):\n def __init__(self, **_):\n super().__init__()\n self.fc1 = crb.Linear(104, 1)\n\n def forward(self, x):\n out = self.fc1(x)\n out = F.sigmoid(out)\n return out\n\n\nclass CIFAR10Model(crb.Module):\n def __init__(self, **_):\n super().__init__()\n self.layer_list = crb.ModuleList([\n crb.Sequential(crb.Conv2d(3, 32, (3, 3), padding=1, stride=(1, 1)), nn.ReLU()),\n crb.Sequential(crb.Conv2d(32, 32, (3, 3), padding=1, stride=(1, 1)), nn.ReLU()),\n nn.AvgPool2d(2, stride=2),\n crb.Sequential(crb.Conv2d(32, 64, (3, 3), padding=1, stride=(1, 1)), nn.ReLU()),\n crb.Sequential(crb.Conv2d(64, 64, (3, 3), padding=1, stride=(1, 1)), nn.ReLU()),\n nn.AvgPool2d(2, stride=2),\n crb.Sequential(crb.Conv2d(64, 128, (3, 3), padding=1, stride=(1, 1)), nn.ReLU()),\n crb.Sequential(crb.Conv2d(128, 128, (3, 3), padding=1, stride=(1, 1)), nn.ReLU()),\n nn.AvgPool2d(2, stride=2),\n crb.Sequential(crb.Conv2d(128, 256, (3, 3), padding=1, stride=(1, 1)), nn.ReLU()),\n crb.Conv2d(256, 10, (3, 3), padding=1, stride=(1, 1)),\n ])\n\n def forward(self, x):\n for layer in self.layer_list:\n x = layer(x)\n # print(x.shape)\n return torch.mean(x, dim=(2, 3))\n\n\nmodel_dict = {\n 'mnist': MNISTNet,\n 'ffnn': FFNN,\n 'logreg': Logistic,\n 'cifar10': CIFAR10Model,\n}\n\n\ndef main(args):\n print(args)\n assert args.dpsgd\n torch.backends.cudnn.benchmark = True\n\n train_data, train_labels = get_data(args)\n model = model_dict[args.experiment](vocab_size=args.max_features).cuda()\n model.get_detail(True)\n\n optimizer = make_optimizer(\n cls=optim.SGD,\n noise_multiplier=args.noise_multiplier,\n l2_norm_clip=args.l2_norm_clip,\n )(model.parameters(), lr=args.learning_rate)\n\n loss_function = nn.CrossEntropyLoss() if args.experiment != 'logreg' else nn.BCELoss()\n\n timings = []\n for epoch in range(1, args.epochs + 1):\n start = time.perf_counter()\n dataloader = data.dataloader(train_data, train_labels, args.batch_size)\n for batch_idx, (x, y) in enumerate(dataloader):\n x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)\n model.zero_grad()\n outputs = model(x)\n loss = loss_function(outputs, y)\n loss.backward()\n optimizer.step()\n torch.cuda.synchronize()\n duration = time.perf_counter() - start\n print(\"Time Taken for Epoch: \", duration)\n timings.append(duration)\n\n if not args.no_save:\n utils.save_runtimes(__file__.split('.')[0], args, timings)\n else:\n print('Not saving!')\n 
print('Done!')\n\n\nif __name__ == '__main__':\n parser = utils.get_parser(model_dict.keys())\n parser.add_argument(\n \"--sigma\",\n type=float,\n default=1.0,\n help=\"Noise multiplier (default 1.0)\",\n )\n parser.add_argument(\n \"-c\",\n \"--max-per-sample-grad_norm\",\n type=float,\n default=1.0,\n help=\"Clip per-sample gradients to this norm (default 1.0)\",\n )\n parser.add_argument(\n \"--delta\",\n type=float,\n default=1e-5,\n help=\"Target delta (default: 1e-5)\",\n )\n args = parser.parse_args()\n main(args)\n", "'''\nOpacus experiments for all the models\n'''\nimport time\n\nimport torch\nfrom torch import nn, optim\n\nimport data\nfrom experimental.privacy_utils import autograd_grad_sample\nfrom experimental.privacy_utils.privacy_engine import EfficientPrivacyEngine\nfrom pytorch import get_data, model_dict\nimport utils\n\n\ndef main(args):\n print(args)\n assert args.dpsgd\n torch.backends.cudnn.benchmark = True\n\n mdict = model_dict.copy()\n train_data, train_labels = get_data(args)\n model = mdict[args.experiment](vocab_size=args.max_features, batch_size=args.batch_size).cuda()\n optimizer = optim.SGD(model.parameters(), lr=args.learning_rate, momentum=0)\n loss_function = nn.CrossEntropyLoss(reduction=\"none\") if args.experiment != 'logreg' else nn.BCELoss(\n reduction=\"none\")\n\n privacy_engine = EfficientPrivacyEngine(\n model,\n batch_size=args.batch_size,\n sample_size=len(train_data),\n alphas=[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64)),\n noise_multiplier=args.sigma,\n max_grad_norm=args.max_per_sample_grad_norm,\n )\n privacy_engine.attach(optimizer)\n\n timings = []\n for epoch in range(1, args.epochs + 1):\n start = time.perf_counter()\n dataloader = data.dataloader(train_data, train_labels, args.batch_size)\n for batch_idx, (x, y) in enumerate(dataloader):\n x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)\n outputs = model(x)\n loss = loss_function(outputs, y)\n\n autograd_grad_sample.set_hooks_mode(mode=\"norm\")\n first_loss = loss.mean(dim=0)\n first_loss.backward(retain_graph=True)\n\n autograd_grad_sample.set_hooks_mode(mode=\"grad\")\n coef_sample = optimizer.privacy_engine.get_coef_sample()\n second_loss = (coef_sample * loss).sum(dim=0)\n second_loss.backward()\n\n optimizer.step()\n optimizer.zero_grad()\n torch.cuda.synchronize()\n duration = time.perf_counter() - start\n print(\"Time Taken for Epoch: \", duration)\n timings.append(duration)\n\n if args.dpsgd:\n epsilon, best_alpha = optimizer.privacy_engine.get_privacy_spent(args.delta)\n print(f\"Train Epoch: {epoch} \\t\"\n f\"(ε = {epsilon}, δ = {args.delta}) for α = {best_alpha}\")\n else:\n print(f\"Train Epoch: {epoch}\")\n\n if not args.no_save:\n utils.save_runtimes(__file__.split('.')[0], args, timings)\n else:\n print('Not saving!')\n print('Done!')\n\n\nif __name__ == '__main__':\n # python fast_torch_dp.py ffnn --dpsgd --batch_size 100000 --dummy_data --epochs 100000\n parser = utils.get_parser(model_dict.keys())\n parser.add_argument(\n \"--sigma\",\n type=float,\n default=1.0,\n help=\"Noise multiplier (default 1.0)\",\n )\n parser.add_argument(\n \"-c\",\n \"--max-per-sample-grad_norm\",\n type=float,\n default=1.0,\n help=\"Clip per-sample gradients to this norm (default 1.0)\",\n )\n parser.add_argument(\n \"--delta\",\n type=float,\n default=1e-5,\n help=\"Target delta (default: 1e-5)\",\n )\n args = parser.parse_args()\n main(args)\n" ]
[ [ "torch.nn.functional.sigmoid", "torch.cuda.synchronize", "torch.nn.AvgPool2d", "torch.nn.ReLU", "torch.mean", "torch.nn.BCELoss", "torch.nn.functional.relu", "torch.nn.functional.max_pool2d", "torch.nn.CrossEntropyLoss" ], [ "torch.cuda.synchronize", "torch.nn.CrossEntropyLoss", "torch.nn.BCELoss" ] ]
bdvllrs/marl-patroling-agents
[ "f2924d88a412acd27c02e3889aedc648a6d7400e" ]
[ "sim/env.py" ]
[ "from typing import List\n\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection\nimport mpl_toolkits.mplot3d.art3d as art3d\nimport numpy as np\nfrom sim.rewards import reward_full\nfrom sim.agents.agents import Agent\nimport random\n\n\nclass Env:\n agents: List[Agent]\n\n def __init__(self, env_config, config):\n self.reward_type = env_config.reward_type\n self.noise = env_config.noise\n self.board_size = env_config.board_size\n\n self.plot_radius = env_config.plot_radius\n\n self.possible_location_values = [float(k) / float(self.board_size) for k in range(self.board_size)]\n\n self.current_iteration = 0\n self.max_iterations = env_config.max_iterations\n\n self.infinite_world = env_config.infinite_world\n self.config = config\n\n self.obstacles = env_config.obstacles\n\n self.magic_switch = None\n self.initial_types = []\n\n self.obstacle_positions = []\n for (x, y) in self.obstacles:\n self.obstacle_positions.append(self.possible_location_values[x])\n self.obstacle_positions.append(self.possible_location_values[y])\n\n self.agents = []\n self.initial_positions = []\n\n def add_agent(self, agent: Agent, position=None):\n \"\"\"\n Args:\n agent:\n position: If None, random position at each new episode.\n \"\"\"\n assert position is None or (0 <= position[0] < 1 and 0 <= position[1] < 1), \"Initial position is incorrect.\"\n if self.config.env.world_3D:\n assert position is None or len(position) == 3, \"Please provide 3D positions if use 3D world.\"\n if position is not None:\n assert position not in self.obstacles, \"Initial position in an obstacle\"\n if position is not None:\n x, y = self.possible_location_values[-1], self.possible_location_values[-1]\n z = position[2] if len(position) == 3 else 0\n for k in range(self.board_size):\n if position[0] <= self.possible_location_values[k]:\n x = self.possible_location_values[k]\n if position[1] <= self.possible_location_values[k]:\n y = self.possible_location_values[k]\n if len(position) == 2 and position[2] <= self.possible_location_values[k]:\n z = self.possible_location_values[k]\n position = x, y, z\n self.agents.append(agent)\n self.initial_types.append(agent.type)\n self.initial_positions.append(position)\n\n def _get_random_position(self):\n possible_values = [(x, y) for x in range(self.board_size)\n for y in range(self.board_size) if (x, y) not in self.obstacles]\n x, y = random.sample(possible_values, 1)[0]\n x, y = self.possible_location_values[x], self.possible_location_values[y]\n z = self.possible_location_values[0]\n if self.config.env.world_3D:\n z = random.sample(self.possible_location_values, 1)[0]\n return x, y, z\n\n def _get_position_from_action(self, current_position, action):\n \"\"\"\n From an action number, returns the new position.\n If position is not correct, then the position stays the same.\n Args:\n current_position: (x_cur, y_cur)\n action: in {0, 1, 2, 3, 4}\n Returns: (x_new, y_new)\n \"\"\"\n index_x = self.possible_location_values.index(current_position[0])\n index_y = self.possible_location_values.index(current_position[1])\n index_z = 0\n if self.config.env.world_3D:\n index_z = self.possible_location_values.index(current_position[2])\n\n if action == 1: # Front\n position = index_x, index_y + 1, index_z\n elif action == 2: # Left\n position = index_x - 1, index_y, index_z\n elif action == 3: # Back\n position = index_x, index_y - 1, index_z\n elif action == 4: # Right\n position = index_x + 1, index_y, 
index_z\n elif self.config.env.world_3D and action == 5: # Top\n position = index_x, index_y, index_z + 1\n elif self.config.env.world_3D and action == 6: # Bottom\n position = index_x, index_y, index_z - 1\n else: # None (=0)\n position = index_x, index_y, index_z\n if not self.infinite_world:\n if position[0] < 0 or position[0] >= len(self.possible_location_values):\n position = index_x, position[1], position[2]\n if position[1] < 0 or position[1] >= len(self.possible_location_values):\n position = position[0], index_y, position[2]\n if position[2] < 0 or position[2] >= len(self.possible_location_values):\n position = position[0], position[1], index_z\n else:\n # If infinite world, goes back to the other side\n position = (position[0] % len(self.possible_location_values),\n position[1] % len(self.possible_location_values),\n position[2] % len(self.possible_location_values))\n if [position[0], position[1]] in self.obstacles:\n position = index_x, index_y, index_z\n\n position = (self.possible_location_values[position[0]], self.possible_location_values[position[1]],\n self.possible_location_values[position[2]])\n\n if self.config.env.magic_switch and position[0] == self.magic_switch[0] and position[1] == self.magic_switch[1]:\n for agent in self.agents:\n if agent.type == \"predator\":\n agent.type = \"prey\"\n else:\n agent.type = \"predator\"\n return position\n\n def _get_state_from_positions(self, positions):\n # return positions\n states = []\n for k in range(len(self.agents)):\n # relative_positions = []\n # x = positions[3 * k] # Position of the current agent\n # y = positions[3 * k + 1] # Position of the current agent\n # z = positions[3 * k + 2] # Position of the current agent\n # # Compute relative positions\n # for i in range(len(self.agents)):\n # x_other = positions[3 * i] # Position of the current agent\n # y_other = positions[3 * i + 1] # Position of the current agent\n # z_other = positions[3 * i + 2] # Position of the current agent\n # relative_positions.append(x - x_other)\n # relative_positions.append(y - y_other)\n # relative_positions.append(z - z_other)\n state = positions[:]\n # state.extend(relative_positions)\n state.extend(self.obstacle_positions)\n if self.config.env.magic_switch:\n state.extend([self.magic_switch[0], self.magic_switch[1]])\n types = [int(agent.type == \"predator\") for agent in self.agents]\n state.extend(types)\n states.append(state)\n return states\n\n def _get_possible_positions(self, current_position):\n \"\"\"\n Return possible positions from the given one\n Args:\n current_position:\n Returns: x_index, y_index of the possible new positions\n \"\"\"\n index_x = self.possible_location_values.index(current_position[0])\n index_y = self.possible_location_values.index(current_position[1])\n index_z = self.possible_location_values.index(current_position[2])\n max_len = len(self.possible_location_values)\n indexes = [(index_x, index_y, index_z)]\n if (self.infinite_world or index_x > 0) and ((index_x - 1) % max_len, index_y) not in self.obstacles:\n indexes.append(((index_x - 1) % max_len, index_y, index_z)) # Left\n if (self.infinite_world or index_x < len(self.possible_location_values) - 1) and (\n ((index_x + 1) % max_len, index_y) not in self.obstacles): # Right\n indexes.append(((index_x + 1) % max_len, index_y, index_z))\n if (self.infinite_world or index_y > 0) and (\n (index_x, (index_y - 1) % max_len) not in self.obstacles): # Back\n indexes.append((index_x, (index_y - 1) % max_len, index_z))\n if (self.infinite_world or index_y < 
len(self.possible_location_values) - 1) and (\n (index_x, (index_y + 1) % max_len) not in self.obstacles): # Front\n indexes.append((index_x, (index_y + 1) % max_len, index_z))\n if self.config.env.world_3D:\n if (self.infinite_world or index_z < len(self.possible_location_values) - 1) and (\n (index_x, index_y) not in self.obstacles): # Top\n indexes.append((index_x, index_y, (index_z + 1) % max_len))\n if (self.infinite_world or index_z > 0) and (\n (index_x, index_y) not in self.obstacles): # Bottom\n indexes.append((index_x, index_y, (index_z - 1) % max_len))\n return indexes\n\n def _get_collisions(self, positions):\n n_collisions = 0\n for i, agent in enumerate(self.agents):\n x, y, z = positions[3 * i], positions[3 * i + 1], positions[3 * i + 2]\n for j, other_agent in enumerate(self.agents):\n x_2, y_2, z_2 = positions[3 * j], positions[3 * j + 1], positions[3 * j + 2]\n distance = np.linalg.norm([x_2 - x, y_2 - y, z_2 - z])\n if agent.type != other_agent.type and distance < self.possible_location_values[1]:\n n_collisions += 1\n return n_collisions // 2\n\n def reset(self, test=False):\n \"\"\"\n Returns: State for each agent. Size is (number_agents, 4 * number_agents)\n for each agent, the state is the positions [x_1, y_1, x_2, y_2, ..., x_n, y_n]\n concatenated with the relative position with each other agent:\n [x_i-x_1, y_i - y_1, ..., x_i - x_n, y_i - y_n].\n \"\"\"\n self.current_iteration = 0\n # Get all positions\n absolute_positions = []\n for k in range(len(self.initial_positions)):\n position = self.initial_positions[k]\n if position is None: # If random position\n position = self._get_random_position()\n # Absolute positions for the state. List of size num_agents * 2.\n absolute_positions.append(position[0])\n absolute_positions.append(position[1])\n absolute_positions.append(position[2])\n # Define the initial states\n types = [agent.type for agent in self.agents]\n if self.config.env.magic_switch:\n self.magic_switch = self._get_random_position()[:2]\n for k in range(len(self.agents)):\n if not test:\n self.agents[k].type = \"predator\" if self.agents[k].type == \"prey\" else \"prey\"\n else:\n self.agents[k].type = self.initial_types[k]\n types[k] = self.agents[k].type\n return self._get_state_from_positions(absolute_positions), types\n\n def step(self, prev_states, actions):\n \"\"\"\n Args:\n prev_states: states for each agent. 
Size (num_agents, 4 * num_agents)\n actions: actions for each agent\n \"\"\"\n positions = []\n for k in range(len(self.agents)):\n # Retrieve absolute positions\n position = prev_states[0][3 * k], prev_states[0][3 * k + 1], prev_states[0][3 * k + 2]\n new_position = self._get_position_from_action(position, actions[k])\n if random.random() < self.noise:\n index_x, index_y, index_z = random.sample(self._get_possible_positions(position), 1)[0]\n new_position = (self.possible_location_values[index_x], self.possible_location_values[index_y],\n self.possible_location_values[index_z])\n positions.append(new_position[0])\n positions.append(new_position[1])\n positions.append(new_position[2])\n n_colisions = self._get_collisions(positions)\n next_state = self._get_state_from_positions(positions)\n # Determine rewards\n border_positions = [self.possible_location_values[0], self.possible_location_values[-1]]\n rewards = reward_full(positions, self.agents, border_positions, self.obstacles, self.current_iteration)\n types = [agent.type for agent in self.agents]\n self.current_iteration += 1\n terminal = False\n if self.current_iteration == self.max_iterations:\n terminal = True\n return next_state, rewards, terminal, n_colisions, types\n\n def plot(self, state, types, rewards, ax):\n # Add obstacles\n tick_labels = np.arange(0, self.board_size)\n ax.set_xticks(self.possible_location_values)\n ax.set_yticks(self.possible_location_values)\n ax.set_xticklabels(tick_labels)\n ax.set_yticklabels(tick_labels)\n ax.set_xlim(0, self.possible_location_values[-1])\n ax.set_ylim(0, self.possible_location_values[-1])\n if self.config.env.world_3D:\n ax.set_zticks(self.possible_location_values)\n ax.set_zticklabels(tick_labels)\n ax.set_zlim(0, self.possible_location_values[-1])\n ax.grid(which=\"major\", alpha=0.5)\n for x, y in self.obstacles:\n x, y = self.possible_location_values[x], self.possible_location_values[y]\n side = self.possible_location_values[1]\n if self.config.env.world_3D:\n top = self.possible_location_values[-1]\n points = [(x - side / 2, y - side / 2, 0), (x - side / 2, y + side / 2, 0),\n (x + side / 2, y + side / 2, 0),\n (x + side / 2, y - side / 2, 0),\n (x - side / 2, y - side / 2, top), (x - side / 2, y + side / 2, top),\n (x + side / 2, y + side / 2, top), (x + side / 2, y - side / 2, top)]\n edges = [[points[0], points[1], points[2], points[3]],\n [points[4], points[5], points[6], points[7]],\n [points[0], points[1], points[5], points[4]],\n [points[2], points[3], points[6], points[7]],\n [points[0], points[4], points[7], points[3]],\n [points[1], points[5], points[6], points[2]]]\n block = Poly3DCollection(edges, linewidth=0)\n block.set_facecolor((0, 0, 0, 0.1))\n ax.add_collection3d(block)\n else:\n block = plt.Rectangle((x - side / 2, y - side / 2), width=side, height=side, linewidth=0, color=\"black\")\n ax.add_patch(block)\n if self.config.env.magic_switch:\n x, y = self.magic_switch\n side = self.possible_location_values[1]\n block = plt.Rectangle((x - side / 2, y - side / 2), width=side, height=side, linewidth=0, color=\"purple\")\n ax.add_patch(block)\n\n for k in range(len(self.agents)):\n if self.config.env.world_3D:\n position = state[0][3 * k], state[0][3 * k + 1], state[0][3 * k + 2]\n else:\n position = state[0][3 * k], state[0][3 * k + 1]\n radius = self.config.env.plot_radius_3D if self.config.env.world_3D else self.plot_radius\n self.agents[k].plot(position, types[k], rewards[k], radius, ax)\n" ]
[ [ "matplotlib.pyplot.Rectangle", "numpy.linalg.norm", "numpy.arange" ] ]
short-greg/takonet
[ "22ca288feb36fb435b7c120f76a6902d0d5529f4" ]
[ "tako/test_build.py" ]
[ "from multiprocessing.sharedctypes import Value\nfrom re import X\nimport typing\nimport torch\nfrom torch import nn\nfrom torch.nn.modules.container import Sequential\nfrom ._networks import In, Multitap, Node, NodePort, OpNode, Out, Port\nfrom ._build import (\n ChainFactory, CounterNamer, Kwargs, ModFactory, NetBuilder, OpFactory, OpMod, ParamMod, argv,\n ScalarInFactory, TensorFactory, TensorDefFactory, TensorInFactory, TensorMod, argf, diverge, \n SequenceFactory, sz, arg, factory, arg_, chain, LambdaMod, opself, NullFactory, concat\n)\nfrom ._modules import Null, OpAction\nimport pytest\n\n\nclass TestArg:\n\n def test_to_arg_to_arg(self):\n v = argv(\"x\")\n res = v.to(x=argv('y'))\n assert isinstance(res, arg)\n\n def test_to_arg_to_value(self):\n v = argv(\"x\")\n res = v.to(x=1)\n assert res == 1\n \n def test_to_var_to_val_not_contained(self):\n v = argv('x')\n res = v.to(y=2)\n assert res.name == \"x\"\n \n def test_arg__creates_an_arg(self):\n\n v = arg_.x\n assert v.name == 'x'\n\n\nclass TestSz:\n\n def test_to_sz_with_valid_sz(self):\n\n val = sz[1].process([torch.Size([1, 2])])\n assert val == 2\n\n def test_to_sz_with_invalid_sz(self):\n\n with pytest.raises(ValueError):\n val = sz[2].process([torch.Size([1, 2])])\n\n def test_to_sz_with_valid_sz_and_port(self):\n\n val = sz[1, 0].process([torch.Size([1, 2]), torch.Size([1, 2])])\n assert val == 2\n\n def test_to_sz_with_valid_sz_and_invalid_port(self):\n\n with pytest.raises(ValueError):\n sz[1, 2].process([torch.Size([1, 2]), torch.Size([1, 2])])\n\n def test_to_sz_with_valid_sz_only_port(self):\n \n target = torch.Size([1, 3])\n result = sz(None, 1).process([torch.Size([1, 2]), target])\n assert result == target\n\n def test_to_sz_with_all(self):\n \n target1 = torch.Size([1, 3])\n target2 = torch.Size([1, 4])\n result = sz().process([target1, target2])\n assert target1 == result[0]\n assert target2 == result[1]\n\n\nclass TestArgf:\n\n def test_argf_process_with_no_special_args(self):\n v = argf(lambda x, y: x * y, [3, 2])\n res = v.process([torch.Size([1, 2])])\n assert res == 6\n\n def test_argf_with_two_args(self):\n v = argf(lambda x, y: x * y, [arg_.x, arg_.y])\n res = v.process([torch.Size([1, 2])], dict(x=3, y=2))\n assert res == 6\n\n def test_argf_to_with_two_args(self):\n v = argf(lambda x, y: x * y, [arg_.x, arg_.y])\n v = v.to(x=3, y=2)\n res = v.process([torch.Size([1, 2])])\n assert res == 6\n\n def test_argf_to_with_one_arg_and_size(self):\n v = argf(lambda x, y: x * y, [arg_.x, sz[1]])\n v = v.to(x=3)\n res = v.process([torch.Size([1, 2])])\n assert res == 6\n\n def test_argf_procses_without_overriding(self):\n v = argf(lambda x, y: x * y, [arg_.x, arg_.y])\n v = v.to(x=3)\n with pytest.raises(ValueError):\n v.process([torch.Size([1, 2])])\n\n\nclass TestCounterNamer:\n\n def test_name_one_module(self):\n\n namer = CounterNamer()\n name = namer.name('X', nn.Linear(2, 2))\n assert name == 'X'\n\n def test_name_with_same_base_name(self):\n\n namer = CounterNamer()\n namer.name('X', nn.Linear(2, 2))\n name2 = namer.name('X', nn.Linear(2, 2))\n assert name2 == 'X_2'\n\n def test_name_with_two_different_base_names(self):\n\n namer = CounterNamer()\n namer.name('X', nn.Linear(2, 2))\n name = namer.name('Y', nn.Linear(2, 2))\n assert name == 'Y'\n\n def test_name_three_with_same_base_names(self):\n\n namer = CounterNamer()\n namer.name('X', nn.Linear(2, 2))\n namer.name('X', nn.Linear(2, 2))\n name = namer.name('X', nn.Linear(2, 2))\n assert name == 'X_3'\n\n def test_retrieve_third_name(self):\n\n namer = 
CounterNamer()\n namer.name('X', nn.Linear(2, 2))\n namer.name('X', nn.Linear(2, 2))\n namer.name('X', nn.Linear(2, 2))\n assert namer['X'][-1] == 'X_3'\n\n def test_retrieve_first_and_second_name(self):\n\n namer = CounterNamer()\n namer.name('X', nn.Linear(2, 2))\n namer.name('X', nn.Linear(2, 2))\n namer.name('X', nn.Linear(2, 2))\n assert namer['X'][:2] == ['X', 'X_2']\n\n\nclass TestNullFactory:\n\n def test_null_factory_with_multi(self):\n\n factory = NullFactory(True)\n null, out_size = factory.produce([Out([-1, 3]), Out([-1, 3])])\n x = [torch.rand(2, 3), torch.rand(1, 3)]\n y = null(*x)\n assert (y[0] == x[0]).all()\n assert (y[1] == x[1]).all()\n\n def test_null_factory_with_single(self):\n\n factory = NullFactory()\n null, out_size = factory.produce([Out([-1, 3])])\n x = torch.rand(2, 3)\n y = null(x)\n assert (x == y).all()\n\n\nclass TestOpMod:\n\n def test_op_mod_with_nn_linear(self):\n\n opnn = OpMod(nn)\n linear = opnn.Linear(1, 2)\n result, _ = linear.produce([Out(torch.Size([1, 1]))])\n assert isinstance(result, nn.Linear)\n\n def test_update_info(self):\n\n opnn = OpMod(nn)\n target = \"Linear 1\"\n linear: OpFactory = opnn.Linear(1, 2)\n result = linear.info_(name=target)\n assert result.name == target\n\n def test_update_labels(self):\n\n opnn = OpMod(nn)\n target = ['linear']\n linear: OpFactory = opnn.Linear(1, 2)\n result = linear.info_(labels=target)\n assert result.meta.labels.labels == target\n\n def test_op_mod_with_nn_sigmoid(self):\n\n opnn = OpMod(nn)\n sigmoid = opnn.Sigmoid()\n result, _ = sigmoid.produce(Out(torch.Size([1, 1])))\n assert isinstance(result, nn.Sigmoid)\n\n def test_op_mod_with_nn_sigmoid_and_unknown_first_size(self):\n\n opnn = OpMod(nn)\n sigmoid = opnn.Sigmoid()\n result, _ = sigmoid.produce(Out(torch.Size([-1, 1])))\n assert isinstance(result, nn.Sigmoid)\n\n def test_op_mod_with_torch_tensor_and_unknown_first_size(self):\n\n optorch = OpMod(torch, factory=TensorMod)\n zeros = optorch.zeros(2, 3)\n assert isinstance(zeros, TensorInFactory)\n\n def test_op_mod_with_torch_tensor_will_produce_in(self):\n\n optorch = OpMod(torch, factory=TensorMod)\n zeros = optorch.zeros(2, 3).produce()\n assert isinstance(zeros, In)\n\n def test_lambda_mod_with_torch_sigmoid(self):\n\n torch.manual_seed(2)\n x = torch.rand(2, 2)\n optorch = OpMod(torch, LambdaMod)\n sigmoid_mod = optorch.sigmoid()\n sigmoid, _ = sigmoid_mod.produce([Out([-1, 2])])\n assert (sigmoid(x) == torch.sigmoid(x)).all()\n\n def test_selfmethod_mod_with_torch_log(self):\n\n torch.manual_seed(2)\n x = torch.rand(2, 2)\n log_mod = opself.log()\n log_, _ = log_mod.produce([Out([-1, 2])])\n assert (log_(x) == x.log()).all()\n\n def test_op_mod_with_torch_tensor_raises_exception_with_invalid_args(self):\n\n optorch = OpMod(torch, factory=TensorMod)\n with pytest.raises(RuntimeError):\n print(optorch.select(2, 3))\n\n def test_op_mod_with_torch_select_raises_error_on_produce(self):\n\n optorch = OpMod(torch, factory=TensorMod)\n with pytest.raises(RuntimeError):\n t = optorch.select(2, 3)\n t.produce()\n\n def test_op_mod_with_param_mod(self):\n\n optorch = OpMod(torch, factory=ParamMod)\n rand_param = optorch.rand(2, 3)\n assert isinstance(rand_param, TensorInFactory)\n\n def test_op_mod_produce_with_param_mod(self):\n\n optorch = OpMod(torch, factory=ParamMod)\n node = optorch.rand(2, 3).produce()\n assert isinstance(node, In)\n\n\nclass TestMod:\n\n def test_mod_with_sigmoid(self):\n\n m = factory(nn.Sigmoid)\n sigmoid = 
m.produce(Out(torch.Size([-1, 4])))[0]\n assert isinstance(sigmoid, nn.Sigmoid)\n\n def test_mod_with_nn_linear(self):\n\n m = factory(nn.Linear, sz[1], 4)\n linear = m.produce(Out(torch.Size([-1, 4])))[0]\n assert isinstance(linear, nn.Linear)\n\n def test_mod_with_nn_linear_and_arg(self):\n\n m = factory(nn.Linear, sz[1], argv('x'))\n linear = m.produce(Out(torch.Size([-1, 4])), x=3)[0]\n assert isinstance(linear, nn.Linear)\n\n def test_op_with_nn_linear_and_arg(self):\n\n m = factory(nn.Sigmoid)\n sigmoid, out_size = m.produce(Out(torch.Size([-1, 4])), x=3)\n\n assert isinstance(sigmoid, nn.Sigmoid)\n\n def test_mod_raises_error_when_arg_not_defined(self):\n\n m = factory(nn.Linear, sz[1], argv('x'))\n with pytest.raises(RuntimeError):\n m.produce(Out(torch.Size([-1, 4])))\n\n\nclass TestSequence:\n\n def test_sequence_from_two_ops(self):\n\n sequence = (\n factory(nn.Linear, 2, 4) <<\n factory(nn.Sigmoid) <<\n factory(nn.Linear, 4, 3)\n )\n assert isinstance(sequence, SequenceFactory)\n\n def test_sequence_produce_from_two_ops(self):\n\n sequence, _ = (\n factory(nn.Linear, 2, 4) <<\n factory(nn.Sigmoid) <<\n factory(nn.Linear, 4, 3)\n ).produce([Out(torch.Size([1, 2]))])\n\n assert isinstance(sequence, Sequential)\n\n def test_sequence_produce_nodes_from_three_ops(self):\n port = NodePort('mod', torch.Size([1, 2]))\n nodes = list((\n factory(nn.Linear, 2, 4) <<\n factory(nn.Sigmoid) <<\n factory(nn.Linear, 4, 3)\n ).produce_nodes(port))\n assert len(nodes) == 3\n\n def test_sequence_produce_nodes_from_three_ops_and_args(self):\n port = NodePort(\"mod\", torch.Size([1, 2]))\n nodes = list((\n factory(nn.Linear, 2, 4) <<\n factory('activation') <<\n factory(nn.Linear, 4, 3) <<\n factory('activation')\n ).to(activation=nn.ReLU).produce_nodes(port))\n\n assert isinstance(nodes[1].op, nn.ReLU) and isinstance(nodes[3].op, nn.ReLU)\n\n def test_update_info_for_sequential(self):\n\n target = 'Sequence'\n sequence = (\n factory(nn.Linear, 2, 4) <<\n factory('activation') <<\n factory(nn.Linear, 4, 3) <<\n factory('activation')\n ).info_(name=target)\n assert sequence.name == target\n\n def test_sequence_produce_from_three_ops_and_args(self):\n\n size = torch.Size([1, 2])\n\n linear = (\n factory(nn.Linear, sz[1], argv('out'), bias=False) <<\n factory('activation')\n )\n\n sequence, _ = (\n linear.to(out=argv('x')) <<\n linear.to(out=argv('y'), activation=nn.Sigmoid)\n ).produce(Out(size), activation=nn.ReLU, x=4, y=3)\n assert isinstance(sequence[1], nn.ReLU) and isinstance(sequence[3], nn.Sigmoid)\n assert sequence(torch.rand(1, 2)).size() == torch.Size([1, 3])\n\n def test_sequence_produce_from_three_ops_and_alias(self):\n\n size = torch.Size([1, 2])\n layer = (\n factory(nn.Linear, sz[1], argv('out'), bias=False) <<\n factory('activation')\n )\n\n sequence, sizes = (\n layer.alias(out='x') <<\n layer.to(out=argv('y'), activation=nn.Sigmoid)\n ).produce(Out(size), activation=nn.ReLU, x=4, y=3)\n assert isinstance(sequence[1], nn.ReLU) and isinstance(sequence[3], nn.Sigmoid)\n assert sequence(torch.rand(1, 2)).size() == torch.Size([1, 3])\n assert sizes[0].size == torch.Size([-1, 3])\n\n\nclass TestDiverge:\n\n def test_produce_with_diverge(self):\n\n div = diverge([\n factory(nn.Linear, 2, 3),\n factory(nn.Linear, 3, 4)\n ])\n layer, _ = div.produce([Out(torch.Size([-1, 2])), 
Out(torch.Size([-1, 3]))])\n results = layer(torch.rand(2, 2), torch.ones(2, 3))\n assert results[0].size() == torch.Size([2, 3])\n assert results[1].size() == torch.Size([2, 4])\n\n def test_produce_nodes_with_diverge(self):\n\n ports = [NodePort('x', torch.Size([-1, 2])), NodePort('y', torch.Size([-1, 3]))]\n div = diverge ([\n factory(nn.Linear, 2, 3),\n factory(nn.Linear, 3, 4)\n ])\n nodes: typing.List[OpNode] = []\n for node in div.produce_nodes(Multitap(ports)):\n nodes.append(node)\n \n p1, = nodes[0].inputs\n p2, = nodes[1].inputs\n \n assert p1.node == 'x'\n assert p2.node == 'y'\n\n def test_update_info_for_diverge(self):\n\n div = diverge ([\n factory(nn.Linear, 2, 3),\n factory(nn.Linear, 3, 4)\n ]).info_('Diverge')\n assert div.name == 'Diverge'\n\n\nclass TestChain:\n\n def test_chained_linear(self):\n\n op = OpFactory(ModFactory(nn.Linear, 2, 2))\n chain_ = ChainFactory(op, [Kwargs(), Kwargs()])\n sequence, _ = chain_.produce([Out(torch.Size([-1, 2]))])\n\n assert isinstance(sequence[0], nn.Linear)\n\n def test_chained_linear_with_arg(self):\n\n op = OpFactory(ModFactory(nn.Linear, sz[1], argv('x')))\n chain_ = ChainFactory(op, [Kwargs(x=4), Kwargs(x=5)])\n sequence, _ = chain_.produce([Out(torch.Size([-1, 2]))])\n\n assert isinstance(sequence[0], nn.Linear)\n\n def test_chained_linear_size_is_correct(self):\n\n op = OpFactory(ModFactory(nn.Linear, sz[1], argv('x')))\n chain_ = chain(op, [Kwargs(x=4), Kwargs(x=5)])\n sequence, out = chain_.produce([Out(torch.Size([-1, 2]))])\n\n assert out[0].size == torch.Size([-1, 5])\n\n def test_chained_linear_size_raises_error_with_undefined_argument(self):\n\n op = OpFactory(ModFactory(nn.Linear, sz[1], argv('x')))\n chain_ = chain(op, [Kwargs(y=4), Kwargs(x=5)])\n with pytest.raises(RuntimeError):\n _, out = chain_.produce([Out(torch.Size([-1, 2]))])\n\n def test_chained_produce_nodes(self):\n\n op = OpFactory(ModFactory(nn.Linear, sz[1], argv('x')))\n chain_ = chain(op, [Kwargs(x=4), Kwargs(x=5)])\n nodes: typing.List[Node] = []\n for node in chain_.produce_nodes(Multitap([NodePort('x', torch.Size([-1, 2]))])):\n nodes.append(node)\n\n assert nodes[-1].ports[0].size == torch.Size([-1, 5])\n\n def test_chain_to_produce_nodes(self):\n\n op = OpFactory(ModFactory(nn.Linear, sz[1], argv('x')))\n chain_ = chain(op, [Kwargs(x=4), Kwargs(x=5)])\n chain_ = chain_.to(x=argv('y'))\n nodes: typing.List[Node] = []\n for node in chain_.produce_nodes(NodePort(\"x\", torch.Size([-1, 2]))):\n nodes.append(node)\n\n assert nodes[-1].ports[0].size == torch.Size([-1, 5])\n\n def test_chain_to_produce_nodes_raises_error(self):\n\n op = OpFactory(ModFactory(nn.Linear, sz[1], argv('x')))\n chain_ = chain(op, [Kwargs(y=4), Kwargs(x=5)])\n chain_ = chain_.to(x=argv('y'))\n with pytest.raises(RuntimeError):\n for _ in chain_.produce_nodes(NodePort(\"x\", torch.Size([-1, 2]))):\n pass\n\n def test_chain_(self):\n\n op = OpFactory(ModFactory(nn.Linear, sz[1], argv('x')))\n chain_ = chain(op, [Kwargs(y=4), Kwargs(x=5)]).info_(name='Hi')\n assert chain_.name == 'Hi'\n\n\nclass TestTensorInFactory:\n\n def test_produce_tensor_input_with_call_default(self):\n\n factory = TensorFactory(torch.zeros,torch.Size([1, 2]), Kwargs())\n op = TensorInFactory(\n factory\n )\n in_ = op.produce()\n assert isinstance(in_, In)\n\n def test_produce_tensor_in_with_no_default(self):\n\n op = TensorDefFactory(-1, 5, dtype=torch.float)\n in_ = op.produce()\n assert isinstance(in_, In)\n\n def test_produce_tensor_in_with_no_default_and_device(self):\n\n op = TensorDefFactory(-1, 5, 
dtype=torch.float)\n in_ = op.produce()\n assert isinstance(in_, In)\n\n def test_produce_tensor_input(self):\n\n default = [[1, 2], [3, 4]]\n op = TensorDefFactory(2, 2)\n in_ = op.produce()\n assert isinstance(in_, In)\n\n\nclass TestScalarInFactory:\n\n def test_produce_tensor_input(self):\n\n op = ScalarInFactory(\n int, 2, False\n )\n in_ = op.produce()\n\n assert isinstance(in_, In)\n\n def test_produce_tensor_input_with_call_default(self):\n\n op = ScalarInFactory(\n type(dict()), dict, True\n )\n in_ = op.produce()\n\n assert isinstance(in_, In)\n\n\nclass TestParameterFactory:\n\n def test_produce_tensor_input_with_call_default(self):\n\n factory = TensorFactory(torch.zeros, [1, 4], Kwargs(), requires_grad=True)\n op = TensorInFactory(\n factory, name='Hi'\n )\n in_ = op.produce()\n assert isinstance(in_, In)\n\n def test_produce_tensor_input_with_reset(self):\n\n factory = TensorFactory(torch.zeros, [1, 4], Kwargs(), requires_grad=True)\n op = TensorInFactory(\n factory, name='Hi'\n )\n parameter = op.produce()\n assert parameter.default.size() == torch.Size([1, 4])\n\n\nclass Activation(nn.Module):\n\n def forward(self, x):\n return x\n def reverse(self, x):\n return x * 2\n\n\nclass TestNetBuilder:\n\n def test_produce_network_with_in(self):\n\n x = TensorInFactory(\n TensorFactory(torch.randn, [1, 2]), name='x'\n )\n op2 = OpFactory(\n ModFactory(nn.Linear, 2, 3)\n )\n op3 = OpFactory(\n ModFactory(nn.Linear, 3, 4)\n )\n builder = NetBuilder()\n x_, = builder.add_in(x) \n builder.append(\n x_, op2 << op3\n )\n assert builder['x'][0].size == torch.Size([1, 2])\n\n def test_produce_network_with_one_op(self):\n\n x = TensorInFactory(\n TensorFactory(torch.randn, [1, 2]), name='x'\n )\n op2 = OpFactory(\n ModFactory(nn.Linear, 2, 3)\n )\n builder = NetBuilder()\n x, = builder.add_in(x=x)\n builder.append(\n x, op2\n )\n assert builder['Linear'][0].size == torch.Size([-1, 3])\n\n def test_produce_network_with_two_ops_same_name(self):\n\n x = TensorInFactory(\n TensorFactory(torch.randn, [1, 2]), name='x'\n )\n op2 = OpFactory(\n ModFactory(nn.Linear, 2, 3)\n )\n op3 = OpFactory(\n ModFactory(nn.Linear, 3, 4)\n )\n builder = NetBuilder()\n x_, = builder.add_in(x)\n builder.append(x_, op2 << op3)\n assert builder.net['Linear_2'] != builder.net['Linear']\n\n def test_produce_network_with_sequence(self):\n\n sequence = OpFactory(\n ModFactory(nn.Linear, 2, 3)\n ) << OpFactory(\n ModFactory(nn.Linear, 3, 4)\n )\n\n x = TensorInFactory(\n TensorFactory(torch.randn, [1, 2]), name='x'\n )\n \n builder = NetBuilder()\n x_ = builder.add_in(x)\n builder.append(x_, sequence)\n assert builder.net['Linear_2'] != builder.net['Linear']\n\n def test_produce_network_with_tensor_in(self):\n\n sequence = OpFactory(\n ModFactory(nn.Linear, 2, 3)\n ) << OpFactory(\n ModFactory(nn.Linear, 3, 4)\n ) \n \n x = TensorInFactory(\n TensorFactory(torch.randn, [1, 2]), name='x'\n )\n \n builder = NetBuilder()\n multitap = builder.add_in(x)\n builder.append(multitap, sequence)\n \n assert builder.net['Linear_2'] != builder.net['Linear']\n\n def test_with_multiple_net_builders(self):\n\n factory1 = OpFactory(\n ModFactory(nn.Linear, 2, 3)\n ) \n \n factory2 = OpFactory(\n ModFactory(nn.Linear, 3, 4)\n )\n \n x = TensorDefFactory('x', 1, 2)\n \n builder = NetBuilder()\n x_, = builder.add_in(x)\n port1, = builder.append(x_, factory1)\n port2, = builder.append(port1, factory2)\n \n net = builder.net\n y = port2.node\n y0 = port1.node\n x = x_.node\n z = net.probe([y, y0, x], by={x: torch.randn(1, 2)})\n assert 
z[0].size(1) == 4\n\n def test_output_with_chained_factories(self):\n\n factory1 = OpFactory(ModFactory(nn.Linear, 2, 3))\n factory2 = OpFactory(ModFactory(nn.Linear, 3, 4))\n factory3 = OpFactory(ModFactory(nn.Linear, 4, 2))\n\n x = TensorDefFactory('x', 1, 2)\n\n builder = NetBuilder()\n x_, = builder.add_in(x)\n port1, = builder.append(x_, factory1)\n port2, = builder.append(port1, factory2)\n port3, = builder.append(port2, factory3)\n\n net = builder.net\n y0 = port3.node\n x = x_.node\n z = net.probe([y0, x], by={x: torch.randn(1, 2)})\n assert z[0].size(1) == 2\n\n def test_output_with_chain_factory(self):\n\n factory1 = OpFactory(ModFactory(nn.Linear, sz[1], arg_.out_features))\n x = TensorDefFactory('x', 1, 2)\n\n builder = NetBuilder()\n x_, = builder.add_in(x)\n port1, = builder.append(\n x_,\n chain(factory1, [{'out_features': 3}, {'out_features': 4}, {'out_features': 2}])\n )\n\n net = builder.net\n y0 = port1.node\n x = x_.node\n z = net.probe([y0, x], by={x: torch.randn(1, 2)})\n assert z[0].size(1) == 2\n\n def test_produce_action_from_op(self):\n\n sequence = OpFactory(\n ModFactory(nn.Linear, 2, 3)\n ) << OpFactory(\n ModFactory(Activation)\n )\n\n x = TensorInFactory(\n TensorFactory(torch.randn, [1, 2]), name='x'\n )\n\n builder = NetBuilder()\n multitap = builder.add_in(x)\n y, = builder.append(multitap, sequence)\n action = builder.action(y.node, 'reverse')\n res = action.produce([Out((2, 3))])\n\n assert isinstance(res, OpAction)\n\n def test_produce_action_from_op_gives_correct_results(self):\n\n torch.manual_seed(2)\n sequence = OpFactory(\n ModFactory(nn.Linear, 2, 3)\n ) << OpFactory(\n ModFactory(Activation)\n )\n\n x = TensorInFactory(\n TensorFactory(torch.randn, [1, 2]), name='x'\n )\n\n builder = NetBuilder()\n multitap = builder.add_in(x)\n y, = builder.append(multitap, sequence)\n action = builder.action(y.node, 'reverse')\n res = action.produce([Out((2, 3))])\n\n x = torch.rand(2, 2)\n assert (res.forward(x) == 2 * x).all()\n\n\nclass TestConcat:\n\n def test_concat_with_none(self):\n\n f = concat([])\n assert isinstance(f, NullFactory)\n\n def test_concat_with_one(self):\n\n f = factory(nn.Linear, sz[1], arg_.x)\n f = concat([f])\n linear, _ = f.produce([Out([-1, 2])], x=4)\n y = linear.forward(torch.randn(2, 2))\n assert y.size() == torch.Size([2, 4])\n\n def test_concat_with_two(self):\n\n f = factory(nn.Linear, sz[1], arg_.x)\n f2 = factory(nn.Linear, sz[1], arg_.y)\n f = concat([f, f2])\n linear, _ = f.produce([Out([-1, 2])], x=4, y=5)\n y = linear.forward(torch.randn(2, 2))\n assert y.size() == torch.Size([2, 5])\n" ]
[ [ "torch.Size", "torch.rand", "torch.nn.Linear", "torch.sigmoid", "torch.ones", "torch.manual_seed", "torch.randn" ] ]
WayneDW/Bayesian-Sparse-Deep-Learning
[ "94036065c4a249e8c87ebef84e686f885528c23f" ]
[ "trainer.py" ]
[ "''' Trainer file\nAn Adaptive Empirical Bayesian Method for Sparse Deep Learning (NeurIPS 2019)\n(c) Wei Deng, Xiao Zhang, Faming Liang, Guang Lin\n'''\nimport math\nimport copy\nimport sys\nimport os\nimport timeit\nimport csv\nimport dill\nimport argparse\nimport random\nfrom random import shuffle\n\nfrom tqdm import tqdm ## better progressbar\nfrom math import exp\nfrom sys import getsizeof\nimport numpy as np\n\n## import pytorch modules\nimport torch\nfrom torch.autograd import Variable\nimport numpy as np\nimport torch.nn as nn\nfrom torchvision import datasets, transforms\nfrom torchvision.datasets import ImageFolder\nfrom torchvision.transforms import ToTensor\nfrom torch.utils.data import DataLoader\nimport torch.utils.data as data\nimport torchvision.datasets as datasets\n\n## Import helper functions\nfrom tools import model_eval, BayesEval\nfrom sgmcmc import Sampler\n\nCUDA_EXISTS = torch.cuda.is_available()\n\n\ndef sgmcmc(net, train_loader, test_loader, pars):\n '''\n Perform SG-MCMC, which we sample the weights and optimize the hidden variables at the same time\n '''\n net.invT = pars.invT # reset the tempreture, useful when start from pretrained model\n \n start = timeit.default_timer()\n if pars.model == 'resnet20':\n sampler = Sampler(net, pars)\n net.set_hidden(pars)\n net.sparse_rate = 0\n best_acc = 0\n counter = 1.\n for epoch in range(1, pars.sn + 1):\n # switch to train mode\n net.train()\n for i, (images, labels) in enumerate(train_loader):\n images = Variable(images).cuda() if CUDA_EXISTS else Variable(images)\n labels = Variable(labels).cuda() if CUDA_EXISTS else Variable(labels)\n loss = sampler.step(images, labels)\n if pars.prune > 0:\n net.update_hidden(prune=True, adaptive_sparse=True)\n \"\"\" Anneal learning rate \"\"\"\n if pars.model == 'resnet20' and epoch in [700, 900]:\n sampler.eta *= 0.1\n \"\"\" Anneal temperature \"\"\"\n if pars.model == 'resnet20':\n sampler.invT *= pars.anneal\n \n acc = model_eval(net, test_loader, if_print=0)\n if net.adaptive_sparse >= net.target_sparse - 0.005:\n best_acc = max(best_acc, acc)\n print('\\nEpoch {} Sparse Rate: {:.2f}% Acc: {:0.2f} Best Acc: {:0.2f} InvT: {:.1E} Loss: {:0.1f}'.format(\\\n epoch, net.sparse_rate, acc, best_acc, sampler.invT, loss))\n if acc < 15 and epoch > 10:\n exit('Sampling lr may be too large')\n\n end = timeit.default_timer()\n print(\"Sampling Time used: {:0.1f}\".format(end - start))\n if pars.sn > 0:\n model_eval(net, test_loader)\n\n" ]
[ [ "torch.autograd.Variable", "torch.cuda.is_available" ] ]
TizioMaurizio/ArduinoWorkshop
[ "d38ede91c6b7a925eafb0272a5fa9f44885ae017" ]
[ "Projects/OpenCVwebCamera/AnimeColor.py" ]
[ "import cv2\nimport numpy as np\nimport serial\nimport time\nfrom typing import Dict\n\nimport mss\nfrom PIL import Image\nfrom screeninfo import get_monitors\n\n\ndef nothing(x):\n pass\n \ncv2.namedWindow(\"Trackbars\")\n \ncv2.createTrackbar(\"B\", \"Trackbars\", 0, 255, nothing)\ncv2.createTrackbar(\"G\", \"Trackbars\", 0, 255, nothing)\ncv2.createTrackbar(\"R\", \"Trackbars\", 0, 255, nothing)\n\ncap = cv2.VideoCapture('http://192.168.1.9:8080/video')\n\nwhile(True):\n ret, frame = cap.read()\n scale_percent = 60 # percent of original size\n width = int(frame.shape[1] * scale_percent / 100)\n height = int(frame.shape[0] * scale_percent / 100)\n dim = (width, height)\n # half size:\n image = cv2.resize(frame, dim, interpolation = cv2.INTER_LINEAR)\n hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)\n\n B = cv2.getTrackbarPos(\"B\", \"Trackbars\")\n G = cv2.getTrackbarPos(\"G\", \"Trackbars\")\n R = cv2.getTrackbarPos(\"R\", \"Trackbars\")\n\n green = np.uint8([[[B, G, R]]])\n hsvGreen = cv2.cvtColor(green,cv2.COLOR_BGR2HSV)\n lowerLimit = np.uint8([hsvGreen[0][0][0]-10,100,100])\n upperLimit = np.uint8([hsvGreen[0][0][0]+10,255,255])\n\n mask = cv2.inRange(hsv, lowerLimit, upperLimit)\n\n result = cv2.bitwise_and(image\t, image\t, mask=mask)\n\n cv2.imshow(\"frame\", image)\n cv2.imshow(\"mask\", mask)\n cv2.imshow(\"result\", result)\n if cv2.waitKey(1) & 0xFF == ord('q'):\n cv2.destroyAllWindows()\n break\n\ncap.release()\ncv2.destroyAllWindows()" ]
[ [ "numpy.uint8" ] ]
nshelch/NCams
[ "a2027a739337df8b620b2454cf83bb2516db8a00" ]
[ "ncams/camera_pose.py" ]
[ "#!python3\n# -*- coding: utf-8 -*-\n\"\"\"\nNCams Toolbox\nCopyright 2019-2020 Charles M Greenspon, Anton Sobinov\nhttps://github.com/CMGreenspon/NCams\n\nFunctions related to estimation of relative positions and orientations of the cameras.\n\nFor more details on the camera data structures and dicts, see help(ncams.camera_tools).\n\"\"\"\n\nimport os\nimport tkinter\nfrom tkinter.filedialog import askopenfilename\n\nimport numpy as np\nimport cv2\n\nimport matplotlib\nimport matplotlib.pyplot as mpl_pp\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection\n\nfrom . import utils\nfrom . import camera_io\nfrom . import camera_tools\n\n\n#################### Board detectors\ndef charuco_board_detector(camera_config):\n '''Detects charuco board in all cameras.\n\n (Should be run after cameras have been calibrated.)\n A general function for bulk identifying all charuco corners across cameras and storing them in\n usable arrays for subsequent pose estimation.\n\n Arguments:\n camera_config {dict} -- see help(ncams.camera_tools). Should have following keys:\n serials {list of numbers} -- list of camera serials.\n dicts {dict of 'camera_dict's} -- keys are serials, values are 'camera_dict'.\n pose_estimation_path {string} -- relative path to where pose estimation information is\n stored from 'setup_path'.\n\n Output:\n cam_image_points {list} -- x,y coordinates of identified points\n cam_charuco_ids {list} -- ids of the points\n '''\n # Unpack the dict\n serials = camera_config['serials']\n names = [camera_config['dicts'][serial]['name'] for serial in serials]\n pose_estimation_path = os.path.join(camera_config['setup_path'],\n camera_config['pose_estimation_path'])\n\n # Get number of cameras\n num_cameras = len(serials)\n charuco_dict, charuco_board, _ = camera_tools.create_board(camera_config)\n\n # Get list of images for each camera\n cam_image_list = []\n num_images = np.zeros((1, num_cameras), dtype=int)\n for icam, name in enumerate(names):\n path_check = os.path.isdir(os.path.join(pose_estimation_path, name))\n if path_check is False:\n full_image_list = utils.get_image_list(path=os.path.join(pose_estimation_path))\n image_list = [fn for fn in full_image_list if name in fn]\n else:\n image_list = utils.get_image_list(path=os.path.join(pose_estimation_path, name))\n\n num_images[0, icam] = len(image_list)\n cam_image_list.append(image_list)\n\n # Crucial: each camera must have the same number of images so that we can assume the order is\n # maintained and that they are synced\n if not np.ma.allequal(num_images, np.mean(num_images)):\n raise Exception('Image lists are of unequal size and may not be synced.')\n\n num_images = num_images[0, 0]\n cam_image_points = []\n cam_charuco_ids = []\n # Look at one synced image across cameras and find the points\n for image in range(num_images):\n im_ids, image_points = [], [] # reset for each image\n for icam, name in enumerate(names):\n # Load the image\n img = cv2.imread(os.path.join(pose_estimation_path, name,\n cam_image_list[icam][image]))\n # Detect the aruco markers and get IDs\n corners, ids, _ = cv2.aruco.detectMarkers(img, charuco_dict)\n if ids is not None:\n # Find the corners and IDs\n _, charuco_corners, charuco_ids = cv2.aruco.interpolateCornersCharuco(\n corners, ids, img, charuco_board)\n if isinstance(charuco_corners, np.ndarray): # If present then append\n image_points.append(charuco_corners)\n im_ids.append(charuco_ids)\n else: # For formatting/indexing\n 
image_points.append([])\n im_ids.append([])\n else:\n image_points.append([])\n im_ids.append([])\n # Concatenate them to get super list which can be parsed later\n cam_image_points.append(image_points)\n cam_charuco_ids.append(im_ids)\n\n return cam_image_points, cam_charuco_ids\n\n\ndef checkerboard_detector(camera_config):\n '''Get all image points and determine which calibration mode is better. ???\n AS: description seems wrong\n\n Should be run after cameras have been calibrated.\n\n Arguments:\n camera_config {dict} -- see help(ncams.camera_tools). Should have following keys:\n serials {list of numbers} -- list of camera serials.\n board_dim: list with the number of checks [height, width]\n dicts {dict of 'camera_dict's} -- keys are serials, values are 'camera_dict'.\n pose_estimation_path {string} -- relative path to where pose estimation information is\n stored from 'setup_path'.\n\n Output:\n cam_board_logit {list} -- if checkerboard: logical array (num_cameras, num_images)\n indicating in which images each camera detected a checkerboard.\n cam_image_points {list} -- if checkerboard: array of image points (num_cameras, image,\n (x, y))\n pose_strategy {string} -- string indicating which pose estimation strategy is ideal.\n '''\n # Unpack the dict\n serials = camera_config['serials']\n num_cameras = len(serials) # How many cameras are there\n names = [camera_config['dicts'][serial]['name'] for serial in serials]\n pose_estimation_path = os.path.join(camera_config['setup_path'],\n camera_config['pose_estimation_path'])\n board_dim = camera_config['board_dim']\n\n # Begin the checkerboard detection for each camera\n criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001) # Default criteria\n cam_board_logit = []\n cam_image_points = []\n\n print('Beginning checkerboard detection.')\n for icam, cam_name in enumerate(names):\n print('- Camera {} of {}.'.format(icam+1, num_cameras))\n cam_image_list = utils.get_image_list(path=os.path.join(pose_estimation_path, cam_name))\n\n # Analyze the images to get checkerboard corners\n image_points = [] # x,y image points\n board_logit = np.zeros((1, len(cam_image_list)), dtype=bool)\n\n for iimage, image_name in enumerate(cam_image_list):\n img = cv2.imread(image_name, 0) # Load as grayscale\n board_logit[0, iimage], corners = cv2.findChessboardCorners(\n img, (board_dim[0]-1, board_dim[1]-1), None)\n\n # If a checkerboard was found then append the image points variable for calibration\n if board_logit[0, iimage]:\n corners_refined = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)\n image_points.append(corners_refined)\n else:\n image_points.append([]) # To keep consistent with the board_logit list\n\n # Add exports to list structure\n cam_board_logit.append(board_logit)\n cam_image_points.append(image_points)\n\n print('* Checkerboard detection complete.')\n return cam_board_logit, cam_image_points\n\n\n#################### Automated calibration\ndef multi_camera_pose_estimation(camera_config, show_poses=True):\n '''[Short description]\n\n [Long description]\n\n Arguments:\n []\n Keyword Arguments:\n []\n Output:\n []\n '''\n raise NotImplementedError\n num_cameras = len(camera_config['camera_names'])\n if num_cameras == 1:\n raise Exception('Only one camera present, pose cannot be calculated with this function.')\n return []\n\n\ndef get_optimal_pose_method(input_array, board_type, num_corners):\n '''[Short description]\n\n [Long description]\n\n Arguments:\n []\n Keyword Arguments:\n []\n Output:\n []\n '''\n 
raise NotImplementedError\n if board_type == 'charuco':\n num_images = len(input_array)\n num_cameras = len(input_array[0])\n shared_points_counter = np.zeros((len(input_array),1), dtype = int)\n for image in range(num_images):\n # Empty array with a spot for each corner and ID\n point_logit = np.zeros((num_corners, num_cameras), dtype = bool)\n for cam in range(num_cameras):\n # Get the array specific to the camera and image\n temp = input_array[image][cam]\n if isinstance(temp, np.ndarray):\n for corner in temp:\n point_logit[int(corner),cam] = True\n\n sum_point_logit = np.sum(point_logit.astype(int), 1)\n common_points = sum_point_logit == num_cameras\n shared_points_counter[image,0] = np.sum(common_points.astype(int))\n num_common_points = np.sum(shared_points_counter)\n\n if num_common_points >= 250:\n optimal_method = 'common'\n else:\n optimal_method = 'sequential-stereo'\n\n return optimal_method\n\n\n#################### Pose estimation methods\ndef get_world_pose(image, image_size, charuco_dict, charuco_board, world_points, camera_matrix,\n cam_distortion_coefficients):\n (w,h) = image_size\n # Get the image points\n # Detect the aruco markers and IDs\n corners, ids, _ = cv2.aruco.detectMarkers(image, charuco_dict)\n _, charuco_corners, charuco_ids = cv2.aruco.interpolateCornersCharuco(\n corners, ids, image, charuco_board)\n # Match to world points\n filtered_world_points = []\n for cid in charuco_ids:\n filtered_world_points.append(world_points[cid, :, :])\n filtered_world_points = np.vstack(filtered_world_points)\n\n # PnP\n _, cam_orientation, camera_location = cv2.solvePnP(\n filtered_world_points, charuco_corners,\n camera_matrix, cam_distortion_coefficients)\n\n return camera_location, cam_orientation\n\n\ndef one_shot_multi_PnP(camera_config, calibration_config, export_full=True, show_poses=False):\n '''Position estimation based on a single frame from each camera.\n\n Assumes that a single synchronized image was taken where all cameras can see the calibration\n board. Will then use the world points to compute the position of each camera independently.\n This method utilizes the fewest images and points to compute the positions and orientations but\n is also the simplest to implement.\n\n Arguments:\n camera_config {dict} -- see help(ncams.camera_tools). Should have following keys:\n serials {list of numbers} -- list of camera serials.\n image_size {(height, width)} -- size of the images captured by the cameras.\n board_dim: list with the number of checks [height, width]\n dicts {dict of 'camera_dict's} -- keys are serials, values are 'camera_dict'.\n pose_estimation_path {string} -- directory where pose estimation information is stored.\n calibration_config {dict} -- see help(ncams.camera_tools). Should have following keys:\n distortion_coefficients {list of np.arrays} -- distortion coefficients for each camera\n camera_matrices {list of np.arrays} -- the essential camera matrix for each camera.\n dicts {dict of 'camera_calib_dict's} -- keys are serials, values are\n 'camera_calib_dict', see below.\n Keyword Arguments:\n export_full {bool} -- save the pose estimation to a dedicated file. (default: {True})\n Output:\n pose_estimation_config {dict} -- information on estimation of relative position of all\n cameras and the results of said pose estimation. For more info, see\n help(ncams.camera_tools). 
Should have following keys:\n serials {list of numbers} -- list of camera serials.\n world_locations {list of np.arrays} -- world locations of each camera.\n world_orientations {list of np.arrays} -- world orientation of each camera.\n path {string} -- directory where pose estimation information is stored. Should be same\n as information in camera_config.\n filename {string} -- name of the YAML file to store the config in/load from.\n dicts {dict of 'camera_pe_dict's} -- keys are serials, values are 'camera_pe_dict',\n see below.\n\n camera_pe_dict {dict} -- info on pose estimation of a single camera. Sould have following\n keys:\n serial {number} -- UID of the camera.\n world_location {np.array} -- world location of the camera.\n world_orientation {np.array} -- world orientation of the camera.\n '''\n names = [camera_config['dicts'][serial]['name'] for serial in camera_config['serials']]\n pose_estimation_path = os.path.join(camera_config['setup_path'],\n camera_config['pose_estimation_path'])\n camera_matrices = calibration_config['camera_matrices']\n distortion_coefficients = calibration_config['distortion_coefficients']\n\n charuco_dict, charuco_board, _ = camera_tools.create_board(camera_config)\n world_points = camera_tools.create_world_points(camera_config)\n h, w = camera_config['image_size']\n im_list = utils.get_image_list(path=pose_estimation_path)\n\n world_locations = []\n world_orientations = []\n for icam, name in enumerate(names):\n # Find the correct image\n im_name = [i for i in im_list if name in i]\n # If more than one image contains the camera name ask user to select\n if len(im_name) > 1:\n print('--> Multiple images contain the camera name. Select the correct file for'\n ' \"{}\".'.format(name))\n # Select the file\n root = tkinter.Tk()\n root.update()\n im_path = askopenfilename(initialdir=pose_estimation_path,\n title='select the image for \"{}\".'.format(name))\n root.destroy()\n\n else:\n im_path = os.path.join(pose_estimation_path, im_name[0])\n\n world_image = cv2.imread(im_path)\n\n cam_location, cam_orientation = get_world_pose(world_image, (w, h), charuco_dict,\n charuco_board, world_points,\n camera_matrices[icam],\n distortion_coefficients[icam])\n\n world_locations.append(cam_location)\n world_orientations.append(cam_orientation)\n\n # Make the output structure\n dicts = {}\n for icam, serial in enumerate(camera_config['serials']):\n dicts[serial] = {\n 'serial': serial,\n 'world_location': world_locations[icam],\n 'world_orientation': world_orientations[icam]\n }\n\n pose_estimation_config = {\n 'serials': camera_config['serials'],\n 'world_locations': world_locations,\n 'world_orientations': world_orientations,\n 'path': pose_estimation_path,\n 'filename': camera_config['pose_estimation_filename'],\n 'dicts': dicts\n }\n\n if export_full:\n camera_io.export_pose_estimation(pose_estimation_config)\n\n if show_poses:\n plot_poses(pose_estimation_config, scale_factor=1)\n\n return pose_estimation_config\n\n\ndef common_pose_estimation(camera_config, calibration_config, cam_image_points, detection_logit,\n export_full=True):\n '''Position estimation based on frames from multiple cameras simultaneously.\n\n If there are sufficient shared world points across all cameras then camera pose can\n be estimated from all of them simultaneously. This allows for more points to be used\n than with the one_shot method.\n\n Arguments:\n camera_config {dict} -- see help(ncams.camera_tools). 
Should have following keys:\n serials {list of numbers} -- list of camera serials.\n reference_camera_serial {number} -- serial number of the reference camera.\n image_size {(height, width)} -- size of the images captured by the cameras.\n board_dim: list with the number of checks [height, width]\n dicts {dict of 'camera_dict's} -- keys are serials, values are 'camera_dict'.\n pose_estimation_path {string} -- relative path to where pose estimation information is\n stored from 'setup_path'.\n pose_estimation_filename {string} -- name of the pickle file to store the pose\n estimation config in/load from.\n calibration_config {dict} -- see help(ncams.camera_tools). Should have following keys:\n distortion_coefficients {list of np.arrays} -- distortion coefficients for each camera\n camera_matrices {list of np.arrays} -- the essential camera matrix for each camera.\n dicts {dict of 'camera_calib_dict's} -- keys are serials, values are\n 'camera_calib_dict', see below.\n cam_image_points {[type]} -- [description]\n detection_logit {[type]} -- [description]\n Keyword Arguments:\n export_full {bool} -- save the pose estimation to a dedicated file. (default: {True})\n Output:\n pose_estimation_config {dict} -- information on estimation of relative position of all\n cameras and the results of said pose estimation. For more info, see\n help(ncams.camera_tools). Should have following keys:\n serials {list of numbers} -- list of camera serials.\n world_locations {list of np.arrays} -- world locations of each camera.\n world_orientations {list of np.arrays} -- world orientation of each camera.\n path {string} -- directory where pose estimation information is stored. Should be same\n as information in camera_config.\n filename {string} -- name of the YAML file to store the config in/load from.\n dicts {dict of 'camera_pe_dict's} -- keys are serials, values are 'camera_pe_dict',\n see below.\n\n camera_pe_dict {dict} -- info on pose estimation of a single camera. 
Sould have following\n keys:\n serial {number} -- UID of the camera.\n world_location {np.array} -- world location of the camera.\n world_orientation {np.array} -- world orientation of the camera.\n '''\n num_cameras = len(camera_config['serials'])\n camera_matrices = calibration_config['camera_matrices']\n distortion_coefficients = calibration_config['distortion_coefficients']\n\n # Determine the reference camera\n ireference_cam = camera_config['serials'].index(camera_config['reference_camera_serial'])\n\n h, w = camera_config['image_size']\n\n cam_idx = np.arange(0, num_cameras)\n secondary_idx = cam_idx[cam_idx != ireference_cam] # Get the indices of the non-primary cameras\n # Get the world points\n world_points = camera_tools.create_world_points(camera_config)\n corner_idx = np.arange(0, len(world_points))\n\n if camera_config['board_type'] == 'charuco':\n # Get all the points shared across cameras\n filtered_object_points = []\n filtered_image_points = [[] for icam in range(num_cameras)]\n for cip, detl in zip(cam_image_points, detection_logit):\n # Empty array with a spot for each corner and ID\n point_logit = np.zeros((len(world_points), num_cameras), dtype=bool)\n for icam in range(num_cameras):\n temp = detl[icam] # Get the array specific to the camera and im\n if isinstance(temp, np.ndarray):\n for corner in temp: # For each detected corner\n point_logit[int(corner), icam] = True\n sum_point_logit = np.sum(point_logit.astype(int), 1)\n\n # Find which points are shared across all cameras\n common_points = sum_point_logit == num_cameras\n if np.sum(common_points) >= 6:\n # Append only those points\n filtered_object_points.append(world_points[common_points, :].astype('float32'))\n for icam in range(num_cameras):\n temp_corners = np.zeros((np.sum(common_points), 2), dtype=float)\n temp_ids = detl[icam]\n temp_points = cip[icam]\n running_idx = 0\n for corner in corner_idx[common_points]: # Only append\n idx = int(np.where(temp_ids == corner)[0])\n temp_corners[running_idx, :] = temp_points[idx, :, :]\n running_idx += 1\n filtered_image_points[icam].append(temp_corners.astype('float32'))\n\n elif camera_config['board_type'] == 'checkerboard':\n raise NotImplementedError\n\n # Get the optimal matrices and undistorted points\n optimal_matrices, undistorted_points = [], []\n for icam in range(num_cameras):\n temp_optim, _ = cv2.getOptimalNewCameraMatrix(\n camera_matrices[icam], distortion_coefficients[icam], (w, h), 1, (w, h))\n optimal_matrices.append(temp_optim)\n undistorted_points.append(cv2.undistortPoints(\n np.vstack(filtered_image_points[icam]), camera_matrices[icam],\n distortion_coefficients[icam], P=optimal_matrices[icam]))\n\n # Perform the initial stereo calibration\n secondary_cam = secondary_idx[0]\n stereo_calib_flags = 0\n stereo_calib_flags |= cv2.CALIB_FIX_INTRINSIC\n criteria = (cv2.TERM_CRITERIA_EPS, 30, 0.001)\n\n reprojection_error, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(\n filtered_object_points, filtered_image_points[ireference_cam],\n filtered_image_points[secondary_cam],\n camera_matrices[ireference_cam], distortion_coefficients[ireference_cam],\n camera_matrices[secondary_cam], distortion_coefficients[secondary_cam],\n (w, h), None, None, None, criteria, stereo_calib_flags)\n\n if reprojection_error > 1:\n print('Poor initial stereo-calibration. 
Subsequent pose estimates may be inaccurate.')\n\n # Make projection matrices\n # This keeps the reference frame to that of the primary camera\n projection_matrix_primary = np.dot(\n camera_matrices[ireference_cam], np.hstack((np.identity(3), np.zeros((3, 1)))))\n\n # Make the secondary projection matrix from calculated rotation & translation matrix\n projection_matrix_secondary = np.dot(camera_matrices[secondary_cam], np.hstack((R, T)))\n\n # Triangulate those same points\n triangulated_points_norm = cv2.triangulatePoints(\n projection_matrix_primary, projection_matrix_secondary,\n undistorted_points[ireference_cam], undistorted_points[secondary_cam])\n # Normalize:\n triangulated_points = triangulated_points_norm[:3, :] / np.transpose(\n np.repeat(triangulated_points_norm[3, :], 3).reshape((-1, 3)))\n\n world_orientations = []\n world_locations = []\n # Get pose from the triangulated points for all cameras\n for icam in range(num_cameras):\n _, rvec, tvec = cv2.solvePnP(\n np.transpose(triangulated_points), np.vstack(filtered_image_points[icam]),\n camera_matrices[icam], distortion_coefficients[icam])\n world_orientations.append(rvec)\n world_locations.append(tvec)\n\n # Make the output structure\n dicts = {}\n for icam, serial in enumerate(camera_config['serials']):\n dicts[serial] = {\n 'serial': serial,\n 'world_location': world_locations[icam],\n 'world_orientation': world_orientations[icam]\n }\n\n pose_estimation_config = {\n 'serials': camera_config['serials'],\n 'world_locations': world_locations,\n 'world_orientations': world_orientations,\n 'path': camera_config['pose_estimation_path'],\n 'filename': camera_config['pose_estimation_filename'],\n 'dicts': dicts\n }\n\n if export_full:\n camera_io.export_pose_estimation(pose_estimation_config)\n\n return pose_estimation_config\n\n\ndef sequential_pose_estimation(cam_board_logit, cam_image_points, reference_camera,\n camera_matrices, distortion_coefficients):\n '''If there are insufficient shared points, we can instead use the reference pair of cameras and\n iteratively calibrate all other cameras.\n\n [Long description]\n\n Arguments:\n []\n Keyword Arguments:\n []\n Output:\n pose_estimation_config {dict} -- information on estimation of relative position of all\n cameras and the results of said pose estimation. For more info, see\n help(ncams.camera_tools). Should have following keys:\n serials {list of numbers} -- list of camera serials.\n world_locations {list of np.arrays} -- world locations of each camera.\n world_orientations {list of np.arrays} -- world orientation of each camera.\n path {string} -- directory where pose estimation information is stored. Should be same\n as information in camera_config.\n filename {string} -- name of the YAML file to store the config in/load from.\n dicts {dict of 'camera_pe_dict's} -- keys are serials, values are 'camera_pe_dict',\n see below.\n\n camera_pe_dict {dict} -- info on pose estimation of a single camera. 
Should have following\n keys:\n serial {number} -- UID of the camera.\n world_location {np.array} -- world location of the camera.\n world_orientation {np.array} -- world orientation of the camera.\n '''\n raise NotImplementedError\n\n\ndef adjust_calibration_origin(world_rotation_vector, world_translation_vector,\n relative_rotations, relative_translations):\n '''Adjusts orientations and locations based on world rotation and translation.\n\n If the camera setup is such that the desired world origin cannot be observed by all cameras\n but you wish to have the coordinate frame be relative to the world origin (or any other origin)\n then the values can be updated with this function. This is particularly useful for sequential\n pose estimates or any generic stereo-calibration.\n\n Arguments:\n world_rotation_vector {np.array} -- The rotation vector for the reference camera\n world_translation_vector {np.array} -- The translation vector for the reference camera\n relative_rotations {list of 'np.array's} -- List of rotations in the original coordinate frame\n relative_translations {list of 'np.array's} -- List of translations in the original coordinate frame\n\n Output:\n adjusted_rotation_vectors {list of np.array} -- rotations in space of the world\n adjusted_translation_vectors {list of np.array} -- locations in space of the world\n '''\n adjusted_rotation_vectors = []\n adjusted_translation_vectors = []\n\n # Format rotation for composeRT\n if world_rotation_vector.shape == (3, 3):\n world_rotation_vector = cv2.Rodrigues(world_rotation_vector)[0]\n\n for rel_rot, rel_trans in zip(relative_rotations, relative_translations):\n sec_r_vec = rel_rot\n # Format rotation for composeRT\n if sec_r_vec.shape == (3, 3):\n sec_r_vec = cv2.Rodrigues(sec_r_vec)[0]\n\n adjusted_orientation, adjusted_location = cv2.composeRT(\n world_rotation_vector, world_translation_vector, sec_r_vec, rel_trans)[:2]\n\n adjusted_rotation_vectors.append(adjusted_orientation)\n adjusted_translation_vectors.append(adjusted_location)\n\n return adjusted_rotation_vectors, adjusted_translation_vectors\n\n\n#################### Pose assessment functions\ndef inspect_pose_estimation():\n '''\n\n [Long description]\n\n Arguments:\n []\n Keyword Arguments:\n []\n Output:\n []\n '''\n raise NotImplementedError\n image_index = 10\n cam_indices = [0, 1]\n\n # Load the images\n im_list1 = utils.get_image_list(os.path.join(camera_config['folder_path'],\n 'pose_estimation', cam_names[cam_indices[0]]))\n im1 = matplotlib.image.imread(os.path.join(camera_config['folder_path'], 'pose_estimation',\n cam_names[cam_indices[0]], im_list1[image_index]))\n\n im_list2 = utils.get_image_list(os.path.join(camera_config['folder_path'],\n 'pose_estimation', cam_names[cam_indices[1]]))\n im2 = matplotlib.image.imread(os.path.join(camera_config['folder_path'], 'pose_estimation',\n cam_names[cam_indices[1]], im_list2[image_index]))\n\n # Detect the markers\n corners1, ids1, _ = cv2.aruco.detectMarkers(im1, charuco_dict)\n corners2, ids2, _ = cv2.aruco.detectMarkers(im2, charuco_dict)\n # Get the chessboard\n _, charuco_corners1, charuco_ids1 = cv2.aruco.interpolateCornersCharuco(corners1, ids1, im1, charuco_board)\n _, charuco_corners2, charuco_ids2 = cv2.aruco.interpolateCornersCharuco(corners2, ids2, im2, charuco_board)\n\n # Just get overlapping ones\n shared_ids, shared_world_points = [], []\n for id in charuco_ids1:\n if any(charuco_ids2 == id):\n shared_ids.append(id)\n shared_world_points.append(world_points[id,0,:])\n shared_ids = 
np.vstack(shared_ids)\n shared_world_points = np.vstack(shared_world_points).reshape(len(shared_ids), 1, 3)\n\n shared_corners1, shared_corners2 = [], []\n for id in shared_ids:\n idx1, _ = np.where(charuco_ids1 == id)\n shared_corners1.append(charuco_corners1[idx1,0,:])\n idx2, _ = np.where(charuco_ids2 == id)\n shared_corners2.append(charuco_corners2[idx2,0,:])\n\n shared_corners1 = np.vstack(shared_corners1).reshape(len(shared_ids), 1, 2)\n shared_corners2 = np.vstack(shared_corners2).reshape(len(shared_ids), 1, 2)\n\n cam_image_points, cam_charuco_ids = charuco_board_detector(camera_config)\n R, T = WORKING_common_pose_estimation(camera_config, cam_image_points, cam_charuco_ids, camera_matrices, distortion_coefficients)\n\n projection_primary = np.matmul(camera_matrices[0],np.hstack((np.identity(3), np.zeros((3,1)))))\n projection_secondary = np.matmul(camera_matrices[1],np.hstack((R, T)))\n\n # Now let's try to triangulate these shared points\n new_cam_mat1, _ = cv2.getOptimalNewCameraMatrix(camera_matrices[cam_indices[0]], distortion_coefficients[cam_indices[0]], (w,h), 1, (w,h))\n undistorted_points1 = cv2.undistortPoints(np.vstack(shared_corners1), camera_matrices[cam_indices[0]], distortion_coefficients[cam_indices[0]], P = new_cam_mat1)\n\n new_cam_mat2, _ = cv2.getOptimalNewCameraMatrix(camera_matrices[cam_indices[1]], distortion_coefficients[cam_indices[1]], (w,h), 1, (w,h))\n undistorted_points2 = cv2.undistortPoints(np.vstack(shared_corners2), camera_matrices[cam_indices[1]], distortion_coefficients[cam_indices[1]], P = new_cam_mat2)\n\n # Triangulate the points\n triangulated_points_norm = cv2.triangulatePoints(projection_primary, projection_secondary, undistorted_points1, undistorted_points2)\n triangulated_points = triangulated_points_norm[:3,:]/np.transpose(np.repeat(triangulated_points_norm[3,:], 3).reshape((-1,3)))\n\n # Reproject the points to each camera and verify\n reprojected_corners1,_ = cv2.projectPoints(triangulated_points, np.identity(3), np.zeros((3,1)),\n new_cam_mat1, distortion_coefficients[cam_indices[0]])\n reprojected_corners2,_ = cv2.projectPoints(triangulated_points, R, T,\n new_cam_mat2, distortion_coefficients[cam_indices[1]])\n\n fig, axs = matplotlib.pyplot.subplots(1,2, squeeze=False)\n axs[0,0].imshow(im1)\n axs[0,1].imshow(im2)\n for corner in range(len(shared_corners1)):\n axs[0,0].scatter(shared_corners1[corner, 0, 0], shared_corners1[corner, 0, 1], facecolors='none', edgecolors='b')\n axs[0,0].scatter(reprojected_corners1[corner, 0, 0], reprojected_corners1[corner, 0, 1], facecolors='none', edgecolors='r')\n\n axs[0,1].scatter(shared_corners2[corner, 0, 0], shared_corners2[corner, 0, 1], facecolors='none', edgecolors='b')\n axs[0,1].scatter(reprojected_corners2[corner, 0, 0], reprojected_corners2[corner, 0, 1], facecolors='none', edgecolors='r')\n\n\ndef plot_poses(pose_estimation_config, scale_factor=1):\n '''Creates a plot showing the location and orientation of all cameras.\n\n Creates a plot showing the location and orientation of all cameras based on their translation\n and rotation vectors. If your cameras are very close together or far apart you can change the\n scaling factor as necessary.\n\n Arguments:\n pose_estimation_config {dict} -- see help(ncams.camera_tools). 
Should have following keys:\n serials {list of numbers} -- list of camera serials.\n world_locations {list of np.arrays} -- world locations of each camera.\n world_orientations {list of np.arrays} -- world orientation of each camera.\n '''\n world_locations = pose_estimation_config['world_locations']\n world_orientations = pose_estimation_config['world_orientations']\n serials = pose_estimation_config['serials']\n num_cameras = len(serials)\n\n # Only accepts list format so check if this is true only when a single camera is present\n if num_cameras == 1: # AS: Not sure if needed anymore\n if isinstance(world_locations, np.ndarray):\n world_locations = [world_locations]\n if isinstance(world_orientations, np.ndarray):\n world_orientations = [world_orientations]\n\n # Create a figure with axes\n fig = mpl_pp.figure()\n ax = fig.gca(projection='3d')\n\n # Keep the verts for setting the axes later\n cam_verts = [[] for _ in range(num_cameras)]\n for icam in range(num_cameras):\n # Get the vertices to plot appropriate to the translation and rotation\n cam_verts[icam], cam_center = create_camera(\n scale_factor=scale_factor,\n rotation_vector=world_orientations[icam],\n translation_vector=world_locations[icam])\n\n # Plot it and change the color according to its number\n ax.add_collection3d(Poly3DCollection(\n cam_verts[icam], facecolors='C'+str(icam), linewidths=1, edgecolors='k', alpha=1))\n\n # Give each camera a label\n ax.text(np.asscalar(cam_center[0]), np.asscalar(cam_center[1]), np.asscalar(cam_center[2]),\n 'Cam ' + str(serials[icam]))\n\n # mpl is weird about maintaining aspect ratios so this has to be done\n ax_min = np.min(np.hstack(cam_verts))\n ax_max = np.max(np.hstack(cam_verts))\n\n # Set the axes and viewing angle\n # Note that this is reversed so that the cameras are looking towards us\n ax.set_xlim([ax_max, ax_min])\n ax.set_ylim([ax_min, ax_max])\n ax.set_zlim([ax_min, ax_max])\n ax.view_init(elev=105, azim=-90)\n\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_zlabel('z')\n\n\n#################### Camera plotting helper functions\ndef create_camera(scale_factor=1, rotation_vector=None, translation_vector=None):\n '''Create a typical camera shape.\n\n [description]\n\n Keyword Arguments:\n scale_factor {number} -- scale of the camera model (default: {1})\n rotation_vector {np.array} -- rotation to apply to the camera (default: {None})\n translation_vector {np.array} -- translation to apply to the camera (default: {None})\n Output:\n camera_vertices {np.array} -- vertices of the camera model faces\n cam_center {np.array} -- center of the camera model\n '''\n # Lines:\n # Back of camera body\n # Front of camera body/back of lens\n # Front of lens\n cam_points = np.array([\n [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],\n [0.2, 0.2, 0.5], [0.8, 0.2, 0.5], [0.8, 0.8, 0.5], [0.2, 0.8, 0.5],\n [0.2, 0.2, 1], [0.8, 0.2, 1], [0.8, 0.8, 1], [0.2, 0.8, 1]])\n\n # Set the origin as the back of the lens\n centering_vector = [0.5, 0.5, 0.5]\n cam_points = cam_points - centering_vector\n\n # Scale the points\n cam_points = cam_points * scale_factor\n\n # Move the camera\n cam_points = move_camera(cam_points, rotation_vector, translation_vector)\n\n # Get the vertices & center\n camera_vertices = get_camera_vertices(cam_points)\n cam_center = np.mean(cam_points[4:8, :], 0)\n cam_center[1] = cam_center[1] + scale_factor\n\n return camera_vertices, cam_center\n\n\ndef move_camera(cam_points, rotation_vector=None, translation_vector=None):\n '''Applies the appropriate rotation and translation to the camera points.\n\n [description]\n\n Arguments:\n cam_points {np.array} -- 
points of the camera model to transform.\n\n Keyword Arguments:\n rotation_vector {np.array} -- rotation to apply (default: {None})\n translation_vector {np.array} -- translation to apply (default: {None})\n '''\n # Check rotation vector format\n if rotation_vector is None:\n rotation_vector = np.identity(3) # Assume it's not rotating\n elif rotation_vector.shape == (3, 1) or rotation_vector.shape == (1, 3):\n # Make matrix if necessary\n rotation_vector = cv2.Rodrigues(rotation_vector)[0] # Convert to matrix\n\n if translation_vector is None:\n translation_vector = np.zeros((3, 1)) # Assume there is no translation\n elif translation_vector.shape == (1, 3):\n translation_vector = np.transpose(translation_vector) # Format\n\n # Create the translation vector\n translation_vector = np.matmul(-np.transpose(rotation_vector), translation_vector)\n\n # Rotate and then translate\n cam_points = np.transpose(np.matmul(np.transpose(rotation_vector), np.transpose(cam_points)))\n cam_points = cam_points - np.transpose(translation_vector)\n\n return cam_points\n\n\ndef get_camera_vertices(cam_points):\n '''Manual mapping of the camera points defined in create_camera.\n\n [description]\n\n Arguments:\n cam_points {list} -- 12-element array.\n Output:\n cam_verts {list 9x4} -- vertex quadruples for each face of the camera model.\n '''\n cam_verts = [\n [cam_points[0], cam_points[4], cam_points[5], cam_points[1]],\n [cam_points[1], cam_points[5], cam_points[6], cam_points[2]],\n [cam_points[2], cam_points[6], cam_points[7], cam_points[3]],\n [cam_points[3], cam_points[7], cam_points[4], cam_points[0]], # Sides of body\n [cam_points[4], cam_points[8], cam_points[9], cam_points[5]],\n [cam_points[5], cam_points[9], cam_points[10], cam_points[6]],\n [cam_points[6], cam_points[10], cam_points[11], cam_points[7]],\n [cam_points[7], cam_points[11], cam_points[8], cam_points[4]], # Sides of lens\n [cam_points[8], cam_points[9], cam_points[10], cam_points[11]]] # Front of lens\n\n return cam_verts\n" ]
[ [ "numpy.array", "numpy.zeros", "numpy.sum", "matplotlib.pyplot.subplots", "matplotlib.pyplot.figure", "numpy.mean", "numpy.where", "numpy.identity", "numpy.arange", "numpy.transpose", "numpy.asscalar", "numpy.repeat", "numpy.hstack", "numpy.vstack" ] ]
segurac/NCRFpp
[ "c922af88f3daa456be15454efe244bcfff42c693" ]
[ "model/seqmodel.py" ]
[ "# -*- coding: utf-8 -*-\n# @Author: Jie Yang\n# @Date: 2017-10-17 16:47:32\n# @Last Modified by: Jie Yang, Contact: [email protected]\n# @Last Modified time: 2018-03-30 16:20:07\n\nimport torch\nimport torch.autograd as autograd\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\nfrom .wordsequence import WordSequence\nfrom .crf import CRF\n\nclass SeqModel(nn.Module):\n def __init__(self, data):\n super(SeqModel, self).__init__()\n self.use_crf = data.use_crf\n print(\"build network...\")\n print(\"use_char: \", data.use_char) \n if data.use_char:\n print(\"char feature extractor: \", data.char_feature_extractor)\n print(\"word feature extractor: \", data.word_feature_extractor)\n print(\"use crf: \", self.use_crf)\n\n self.gpu = data.HP_gpu\n self.average_batch = data.average_batch_loss\n ## add two more label for downlayer lstm, use original label size for CRF\n label_size = data.label_alphabet_size\n data.label_alphabet_size += 2\n self.word_hidden = WordSequence(data) \n if self.use_crf:\n self.crf = CRF(label_size, self.gpu)\n\n\n def neg_log_likelihood_loss(self, word_inputs, feature_inputs, word_seq_lengths, char_inputs, char_seq_lengths, char_seq_recover, batch_label, mask):\n outs = self.word_hidden(word_inputs,feature_inputs, word_seq_lengths, char_inputs, char_seq_lengths, char_seq_recover)\n batch_size = word_inputs.size(0)\n seq_len = word_inputs.size(1)\n if self.use_crf:\n total_loss = self.crf.neg_log_likelihood_loss(outs, mask, batch_label)\n scores, tag_seq = self.crf._viterbi_decode(outs, mask)\n else:\n loss_function = nn.NLLLoss(ignore_index=0, size_average=False)\n outs = outs.view(batch_size * seq_len, -1)\n score = F.log_softmax(outs, 1)\n total_loss = loss_function(score, batch_label.view(batch_size * seq_len))\n _, tag_seq = torch.max(score, 1)\n tag_seq = tag_seq.view(batch_size, seq_len)\n if self.average_batch:\n total_loss = total_loss / batch_size\n return total_loss, tag_seq\n\n\n def forward(self, word_inputs, feature_inputs, word_seq_lengths, char_inputs, char_seq_lengths, char_seq_recover, mask):\n outs = self.word_hidden(word_inputs,feature_inputs, word_seq_lengths, char_inputs, char_seq_lengths, char_seq_recover)\n batch_size = word_inputs.size(0)\n seq_len = word_inputs.size(1)\n if self.use_crf:\n scores, tag_seq = self.crf._viterbi_decode(outs, mask)\n else:\n outs = outs.view(batch_size * seq_len, -1)\n _, tag_seq = torch.max(outs, 1)\n tag_seq = tag_seq.view(batch_size, seq_len)\n ## filter padded position with zero\n tag_seq = mask.long() * tag_seq\n return tag_seq\n\n\n # def get_lstm_features(self, word_inputs, word_seq_lengths, char_inputs, char_seq_lengths, char_seq_recover):\n # return self.word_hidden(word_inputs, word_seq_lengths, char_inputs, char_seq_lengths, char_seq_recover)\n\n\n def decode_nbest(self, word_inputs, feature_inputs, word_seq_lengths, char_inputs, char_seq_lengths, char_seq_recover, mask, nbest):\n if not self.use_crf:\n print(\"Nbest output is currently supported only for CRF! Exit...\")\n exit(0)\n outs = self.word_hidden(word_inputs,feature_inputs, word_seq_lengths, char_inputs, char_seq_lengths, char_seq_recover)\n batch_size = word_inputs.size(0)\n seq_len = word_inputs.size(1)\n scores, tag_seq = self.crf._viterbi_decode_nbest(outs, mask, nbest)\n return scores, tag_seq\n\n " ]
[ [ "torch.nn.NLLLoss", "torch.max", "torch.nn.functional.log_softmax" ] ]
andudu/Pose_Net_tracking
[ "63229cbe380985d78de165c41a133998b734d931" ]
[ "tracking.py" ]
[ "import numpy as np\nimport operator\n\nclass tracking(object):\n\n def __init__(self):\n\n self.previous_dict = {}\n self.current_dict = {}\n self.return_dict = {}\n\n def return_most_similar(self,current_keypoints):\n\n # Intializes the prev dictionary with persons and all of their keep points\n if not self.previous_dict:\n for ii, keypoint in current_keypoints.items():\n temp_current = []\n temp_current.append(keypoint[0][5])\n temp_current.append(keypoint[0][6])\n temp_current.append(keypoint[0][11])\n temp_current.append(keypoint[0][12])\n self.previous_dict[ii] = temp_current\n self.return_dict[ii] = [\n\n keypoint[0], #coordinats\n keypoint[1], #keypoint\n keypoint[2], #confidence_score\n ii\n ]\n # Create a dictionary of only relevant keypoints for similarity testing\n else:\n for ii, keypoints in current_keypoints.items():\n temp_current = []\n temp_current.append(keypoints[0][5])\n temp_current.append(keypoints[0][6])\n temp_current.append(keypoints[0][11])\n temp_current.append(keypoints[0][12])\n current_keypoints[ii] = temp_current\n all_sim_indices = []\n for index, keypoints in self.previous_dict.items():\n similarity_scores = {}\n # Calculate the similarity score all individuals compared to currented person\n for current_index, current_keypoints in current_keypoints.items():\n print('looped')\n similarity_scores[current_index] = self.calculate_similarity(keypoints, current_keypoints)\n # The current frame's individual with the most similar keypoint score gets\n # assigned accordingly to master individual list\n print(similarity_scores)\n most_sim_index = min(similarity_scores.items(), key=operator.itemgetter(1))[0]\n self.previous_dict[index] = current_keypoints[most_sim_index]\n all_sim_indices.append(most_sim_index)\n for i, index in enumerate(all_sim_indices):\n self.return_dict[i] = [\n current_keypoints[index][0],\n current_keypoints[index][1],\n current_keypoints[index][2],\n index\n ]\n return self.return_dict\n\n def calculate_similarity(self, previous_keypoints, current_keypoints):\n diff = [current_keypoints[a] - previous_keypoints[a] for a in range(4)]\n return np.linalg.norm(diff)\n" ]
[ [ "numpy.linalg.norm" ] ]
judithabk6/Clonesig_analysis
[ "b473425833a75a6699b3b5b3026d91d00dbb53ba" ]
[ "signature_code/evaluate_dream.py" ]
[ "#!/usr/bin/env python\n# -*- coding:utf-8 -*-\nimport pandas as pd\nimport sys\nfrom collections import Iterable\nimport numpy as np\nimport pickle\nimport scipy as sp\nfrom clonesig.data_loader import SimLoader\nfrom clonesig.evaluate import score1B_base, score1C_base, score2A_base, score2C_base, score_sig_1A_base, score_sig_1B_base, score_sig_1C_base, score_sig_1D_base, score_sig_1E_base\nfrom clonesig.run_clonesig import get_MU\nfrom clonesig.estimator import Estimator, EV_DOF_THRESHOLD\nimport pkg_resources\nfrom pandas.errors import EmptyDataError\nfrom clonesig import mixin_init_parameters\nimport bz2\n\n\nMIXTURE_THRESHOLD = 0.05\n\n\n\"\"\"\nfolder_path = 'salcedo_dream_challenge/T2_8X'\n\"\"\"\n\n\ndef format_truth(folder_path):\n truth_path = 'data/salcedo_dream_challenge/MuTect_truth'\n tumor = folder_path.split('/')[1].split('_')[0]\n depth = folder_path.split('/')[1].split('_')[1]\n filename_1A = '{}/MuTect_{}.T.{}.truth.1A.txt'\\\n .format(truth_path, tumor, depth)\n with open(filename_1A, 'r') as f:\n true_purity = float(f.read())\n filename_1B = '{}/MuTect_{}.T.{}.truth.1B.txt'\\\n .format(truth_path, tumor, depth)\n with open(filename_1B, 'r') as f:\n J_true = int(f.readline().strip())\n filename_1C = '{}/MuTect_{}.T.{}.truth.1C.txt'\\\n .format(truth_path, tumor, depth)\n data_1C = pd.read_csv(filename_1C, sep='\\t', header=None,\n names=['cluster_id', 'nb_mut', 'ccf'])\n valid_1C = data_1C[data_1C.ccf > 0]\n phi_true_values = valid_1C.ccf.values / true_purity\n weights_true = valid_1C.nb_mut.values / sum(valid_1C.nb_mut)\n filename_2A = '{}/MuTect_{}.T.{}.truth.2A.txt'\\\n .format(truth_path, tumor, depth)\n data_2A = pd.read_csv(filename_2A, header=None, names=['cluster_id'])\n unfiltered_df = pd.read_csv('{}/unrestricted_input_mut.csv'.format(folder_path), sep='\\t')\n truth_vcf = pd.read_csv(\n '{}/MuTect_{}.T.{}.truth.scoring_vcf.vcf'.format(truth_path, tumor, depth),\n sep='\\t', comment='#', index_col=False, header=None,\n names=['chromosome', 'position', 'ID', 'REF', 'ALT', 'QUAL', 'FILTER',\n 'INFO', 'FORMAT', 'normal', 'tumor', 'calling'])\n unfiltered_df = unfiltered_df.assign(calling=truth_vcf.calling)\n final_df = unfiltered_df[unfiltered_df.calling]\n final_df = final_df.reset_index()\n final_df = final_df.assign(true_cluster_id=data_2A.cluster_id.astype(int))\n est_clonal_idx = valid_1C.ccf.values.argmax() + 1\n final_df = final_df.assign(\n true_subclonal=(final_df.true_cluster_id != est_clonal_idx).astype(int))\n\n nb_mut = valid_1C.nb_mut.sum()\n # nb_mut à changer plus tard\n sig_prop_filename = 'data/salcedo_dream_challenge/input_trinucleotide_signatures.txt'\n sig_prop_data = pd.read_csv(sig_prop_filename, sep='\\t')\n sig_prop_data = sig_prop_data[sig_prop_data.tumour==tumor]\n sig_matrix_filename = 'data/salcedo_dream_challenge/signatures.txt'\n sig_mat = pd.read_csv(sig_matrix_filename, sep='\\t')\n relevant_sigs = sig_prop_data.signature.values\n sig_profile_1A = np.zeros(96)\n val, c = np.unique(unfiltered_df[unfiltered_df.calling].trinucleotide,\n return_counts=True)\n sig_profile_1A[val.astype(int)] = c\n sig_profile_1A = sig_profile_1A / len(unfiltered_df[unfiltered_df.calling])\n sig_profile_1B = sig_prop_data.frequency.values.reshape(1,-1).dot(\n sig_mat[relevant_sigs].values.T)[0]\n complete_mat_filename = '{}/MU_matrix.csv'.format(folder_path)\n complete_mat = pd.read_csv(complete_mat_filename, sep='\\t')\n true_signatures = np.isin(complete_mat.columns[1:], relevant_sigs).astype(int)\n true_profile_1E = np.repeat([sig_profile_1B], 
nb_mut, axis=0)\n return (J_true, phi_true_values, weights_true,\n final_df[['mutation_id', 'true_cluster_id']],\n final_df[['mutation_id', 'true_subclonal']], sig_profile_1A,\n sig_profile_1B, true_signatures, true_profile_1E)\n\n\ndef format_clonesig(folder_path, setting):\n try:\n raw_res = bz2.BZ2File('{}/{}_clonesig_raw_results.bz2'.format(folder_path, setting), 'rb')\n new_est, ll_ratio, pval, cst_est, fitted_sigs, runtime = pickle.load(raw_res)\n except FileNotFoundError:\n return [None] * 11\n J_pred = new_est.J\n phi_pred_values = new_est.phi\n pre_est_w = np.zeros(new_est.J)\n pre_counts = np.unique(np.argmax(new_est.qun, axis=1),\n return_counts=True)\n pre_est_w[pre_counts[0]] = pre_counts[1]\n weights_pred = pre_est_w/new_est.N\n data_df = pd.read_csv('{}/input_t.tsv'.format(folder_path), sep='\\t')\n pred_cluster_assign = np.argmax(new_est.qun, axis=1)\n data_df = data_df.assign(pred_cluster_id=np.argmax(new_est.qun, axis=1))\n est_att = np.argmax(new_est.qun, axis=1)\n est_clonal_idx = np.argmax(new_est.phi)\n data_df = data_df.assign(\n pred_subclonal=(data_df.pred_cluster_id != est_clonal_idx).astype(int))\n\n pred_profile = new_est.xi.dot(new_est.pi).dot(new_est.mu_matrix)\n complete_mat_filename = '{}/MU_matrix.csv'.format(folder_path)\n complete_mat = pd.read_csv(complete_mat_filename, sep='\\t')\n complete_sigs = complete_mat.columns[1:]\n pred_signatures = np.zeros(len(complete_sigs))\n est_sig = new_est.xi.dot(new_est.pi)\n if setting in ('all', 'all_nuclonal'):\n pred_signatures = est_sig\n elif setting == 'cancertype':\n filename = '{}/sub_MU_matrix.csv'.format(folder_path)\n sub_matrix = pd.read_csv(filename, sep='\\t')\n sub_sigs = sub_matrix.columns[1:]\n idx = [list(complete_sigs).index(s) for s in sub_sigs]\n pred_signatures[np.array(idx)] = est_sig\n elif setting == 'prefit':\n pred_signatures[fitted_sigs] = est_sig\n else:\n raise NameError('unknown setting for CloneSig')\n est_dist = new_est.pi[new_est.qun.argmax(axis=1), :].dot(new_est.mu_matrix)\n return (ll_ratio, pval, J_pred, phi_pred_values, weights_pred,\n data_df[['mutation_id', 'pred_cluster_id']],\n data_df[['mutation_id', 'pred_subclonal']],\n pred_profile, pred_signatures, est_dist, runtime)\n\n\ndef format_pyclone(folder_path, setting):\n try:\n with open('{}/pyclone_timing.txt'.format(folder_path), 'r') as f:\n line = f.read()\n start, end = float(line.split(',')[0]), float(line.split(',')[1])\n runtime = end - start\n except:\n runtime = np.nan\n try:\n pyclone_res = '{}/pyclone/tables'.format(folder_path)\n cluster_table = pd.read_csv('{}/cluster.tsv'.format(pyclone_res),\n sep='\\t')\n loci_table = pd.read_csv('{}/loci.tsv'.format(pyclone_res), sep='\\t')\n except FileNotFoundError:\n return [None] * 10 + [runtime]\n J_pred = len(cluster_table[cluster_table['size'] > 1])\n weights_pred = cluster_table['size'] / cluster_table['size'].sum()\n phi_pred_values = cluster_table['mean']\n data_df = pd.read_csv('{}/input_t.tsv'.format(folder_path), sep='\\t')\n ordered_loci_table = pd.merge(data_df, loci_table, on='mutation_id')\n ordered_loci_table = ordered_loci_table.assign(\n pred_cluster_id=ordered_loci_table.cluster_id)\n est_clonal_idx = cluster_table.sort_values(by='mean').iloc[-1].cluster_id\n ordered_loci_table = ordered_loci_table.assign(\n pred_subclonal=(ordered_loci_table.cluster_id != est_clonal_idx)\n .astype(int))\n runtime = end - start\n return (None, None, J_pred, phi_pred_values, weights_pred,\n ordered_loci_table[['mutation_id', 'pred_cluster_id']],\n 
ordered_loci_table[['mutation_id', 'pred_subclonal']],\n None, None, None, runtime)\n\n\ndef format_sciclone(folder_path, setting):\n with open('{}/sciclone_timing.txt'.format(folder_path), 'r') as f:\n line = f.read()\n start, end = float(line.split(',')[0]), float(line.split(',')[1])\n runtime = end - start\n try:\n loci_table = pd.read_csv('{}/sciclone/clusters1'.format(folder_path),\n sep='\\t')\n except FileNotFoundError:\n return [None] * 10 + [runtime]\n J_pred = loci_table[loci_table.cluster > 0].cluster.nunique()\n weights_pred = loci_table[loci_table.cluster > 0].groupby('cluster')['tumor.vaf'].count()/len(loci_table)\n phi_pred_values = loci_table[loci_table.cluster > 0].groupby('cluster')['tumor.vaf'].mean()/100\n data_df = pd.read_csv('{}/input_t.tsv'.format(folder_path), sep='\\t')\n data_df = data_df.assign(pred_cluster_id=loci_table.cluster)\n est_clonal_idx = (loci_table[loci_table.cluster > 0].groupby('cluster')['tumor.vaf'].mean()).idxmax()\n data_df = data_df.assign(\n pred_subclonal=(data_df.pred_cluster_id != est_clonal_idx).astype(int))\n return (None, None, J_pred, phi_pred_values, weights_pred,\n data_df[['mutation_id', 'pred_cluster_id']],\n data_df[['mutation_id', 'pred_subclonal']],\n None, None, None, runtime)\n\n\ndef format_dpclust(folder_path, setting):\n with open('{}/dpclust_timing.txt'.format(folder_path), 'r') as f:\n line = f.read()\n start, end = float(line.split(',')[0]), float(line.split(',')[1])\n runtime = end - start\n try:\n res_folder = '{}_DPoutput_2000iters_1000burnin_seed123'.format(folder_path.split('/')[1])\n loci_table = pd.read_csv('{}/dpclust/{}/{}_2000iters_1000burnin_bestConsensusAssignments.bed'.format(folder_path, res_folder, folder_path.split('/')[1]), sep='\\t')\n cluster_table = pd.read_csv('{}/dpclust/{}/{}_2000iters_1000burnin_bestClusterInfo.txt'.format(folder_path, res_folder, folder_path.split('/')[1]), sep='\\t')\n except FileNotFoundError:\n return [None] * 10 + [runtime]\n J_pred = len(cluster_table)\n weights_pred = cluster_table['no.of.mutations'] / cluster_table['no.of.mutations'].sum()\n phi_pred_values = cluster_table['location']\n data_df = pd.read_csv('{}/input_t.tsv'.format(folder_path), sep='\\t')\n data_df = data_df.assign(pred_cluster_id=loci_table.cluster)\n est_clonal_idx = cluster_table.sort_values(by='location').iloc[-1]['cluster.no']\n data_df = data_df.assign(\n pred_subclonal=(data_df.pred_cluster_id != est_clonal_idx).astype(int))\n runtime = end - start\n return (None, None, J_pred, phi_pred_values, weights_pred,\n data_df[['mutation_id', 'pred_cluster_id']],\n data_df[['mutation_id', 'pred_subclonal']],\n None, None, None, runtime)\n\n\ndef format_phylogicndt(folder_path, setting):\n with open('{}/phylogicndt_timing.txt'.format(folder_path), 'r') as f:\n line = f.read()\n start, end = float(line.split(',')[0]), float(line.split(',')[1])\n runtime = end - start\n try:\n loci_table = pd.read_csv(\n '{}/phylogicndt/Test_Clust.mut_ccfs.txt'.format(folder_path),\n sep='\\t')\n loci_table = loci_table.assign(chr_num=loci_table.Chromosome.str.replace('chr', '').astype(int))\n loci_table = loci_table.assign(mutation_id_short=loci_table.chr_num.astype(str) + '_' + loci_table.Start_position.astype(str))\n cluster_table = pd.read_csv(\n '{}/phylogicndt/Test_Clust.cluster_ccfs.txt'.format(folder_path),\n sep='\\t')\n cluster_table = pd.merge(cluster_table,\n loci_table.Cluster_Assignment.value_counts().to_frame(),\n left_on='Cluster_ID', right_index=True)\n except FileNotFoundError:\n return [None] * 10 + 
[runtime]\n J_pred = len(cluster_table)\n weights_pred = cluster_table['Cluster_Assignment'] / cluster_table['Cluster_Assignment'].sum()\n phi_pred_values = cluster_table['postDP_ccf_mean']\n data_df = pd.read_csv('{}/input_t.tsv'.format(folder_path), sep='\\t')\n data_df = data_df.assign(mutation_id_short=data_df.chromosome.astype(str) + '_' + data_df.position.astype(str))\n data_df_m = pd.merge(data_df, loci_table[['mutation_id_short', 'Cluster_Assignment']], on=\"mutation_id_short\")\n data_df_m = data_df_m.assign(pred_cluster_id=data_df_m.Cluster_Assignment)\n est_clonal_idx = cluster_table.sort_values(by='postDP_ccf_mean').iloc[-1]['Cluster_ID']\n data_df_m = data_df_m.assign(\n pred_subclonal=(data_df_m.pred_cluster_id != est_clonal_idx).astype(int))\n return (None, None, J_pred, phi_pred_values, weights_pred,\n data_df_m[['mutation_id', 'pred_cluster_id']],\n data_df_m[['mutation_id', 'pred_subclonal']],\n None, None, None, runtime)\n\n\ndef format_ccube(folder_path, setting):\n with open('{}/ccube_timing.txt'.format(folder_path), 'r') as f:\n line = f.read()\n start, end = float(line.split(',')[0]), float(line.split(',')[1])\n runtime = end - start\n try:\n loci_table = pd.read_csv(\n '{}/ccube/ssm_clusters.csv'.format(folder_path), sep='\\t')\n except FileNotFoundError:\n return [None] * 10 + [runtime]\n pre_cluster_table = loci_table.groupby('ccube_ccf_mean').rough_mult.count()\n loci_table = loci_table.assign(\n cluster_id=loci_table.apply(\n lambda x:\n pre_cluster_table.index.tolist().index(x['ccube_ccf_mean']),\n axis=1))\n J_pred = loci_table.ccube_ccf_mean.nunique()\n weights_pred = pre_cluster_table.values/len(loci_table)\n phi_pred_values = pre_cluster_table.index.values\n data_df = pd.read_csv('{}/input_t.tsv'.format(folder_path), sep='\\t')\n data_df = data_df.assign(pred_cluster_id=loci_table.cluster_id)\n est_clonal_idx = np.argmax(pre_cluster_table.index.tolist())\n data_df = data_df.assign(\n pred_subclonal=(data_df.pred_cluster_id != est_clonal_idx).astype(int))\n return (None, None, J_pred, phi_pred_values, weights_pred,\n data_df[['mutation_id', 'pred_cluster_id']],\n data_df[['mutation_id', 'pred_subclonal']],\n None, None, None, runtime)\n\n\ndef format_deconstructsigs(folder_path, setting):\n res_filename = '{}/deconstructsigs/signatures_{}.csv'\\\n .format(folder_path, setting)\n result_file = pd.read_csv(res_filename, sep=' ')\n complete_mat_filename = '{}/MU_matrix.csv'.format(folder_path)\n complete_mat = pd.read_csv(complete_mat_filename, sep='\\t')\n complete_sigs = complete_mat.columns[1:]\n pred_signatures = np.zeros(len(complete_sigs))\n if setting == 'all':\n mu_mat_setting = complete_mat[complete_mat.columns[1:]].values.T\n pred_signatures = result_file.values[0]\n elif setting == 'cancertype':\n filename = '{}/sub_MU_matrix.csv'.format(folder_path)\n sub_matrix = pd.read_csv(filename, sep='\\t')\n mu_mat_setting = sub_matrix[sub_matrix.columns[1:]].values.T\n sub_sigs = sub_matrix.columns[1:]\n idx = [list(complete_sigs).index(s) for s in sub_sigs]\n pred_signatures[np.array(idx)] = result_file.iloc[0].values\n else:\n raise NameError('unknown setting for DeconstructSigs')\n sig_profile = result_file.values.dot(mu_mat_setting)\n input_filename = '{}/deconstructsigs/pattern96.csv'.format(folder_path)\n pattern = pd.read_csv(input_filename, sep='\\t')\n nb_mut = pattern.sum().sum()\n pred_profile_1E = np.repeat([sig_profile], nb_mut, axis=0)\n runtime = pd.read_csv('{}/deconstructsigs/deconstructsig_runtime_{}.csv'\n .format(folder_path, setting),\n 
index_col=0).values[0][0]\n return (None, None, None, None, None, None, None,\n sig_profile, pred_signatures, pred_profile_1E, runtime)\n\n\ndef format_palimpsest(folder_path, setting):\n try:\n mixture_file = pd.read_csv('{}/palimpsest/palimpsest_mixtures_{}.csv'.\n format(folder_path, setting), sep='\\t')\n ccf_file = pd.read_csv('{}/palimpsest/palimpsest_mut_data_{}.csv'\n .format(folder_path, setting), sep='\\t')\n except FileNotFoundError:\n return [None] * 11\n J_pred = 2\n weights_pred = ccf_file.groupby('Clonality').CCF.count().values/len(ccf_file)\n phi_pred_values = ccf_file.groupby('Clonality').CCF.mean().values\n ccf_file = ccf_file.assign(\n clonality_binary=ccf_file.apply(\n lambda x: 1 if x['Clonality'] == 'subclonal' else 0, axis=1))\n ccf_file = ccf_file.reset_index()\n data_df = pd.read_csv('{}/input_t.tsv'.format(folder_path), sep='\\t')\n data_df = data_df.assign(pred_cluster_id=ccf_file.clonality_binary)\n data_df = data_df.assign(pred_subclonal=ccf_file.clonality_binary)\n complete_mat_filename = '{}/MU_matrix.csv'.format(folder_path)\n complete_mat = pd.read_csv(complete_mat_filename, sep='\\t')\n complete_sigs = complete_mat.columns[1:]\n pred_signatures = np.zeros(len(complete_sigs))\n if setting == 'all':\n mu_mat_setting = complete_mat[complete_mat.columns[1:]].values.T\n idx = np.where(pred_signatures==0)[0]\n elif setting == 'cancertype':\n filename = '{}/sub_MU_matrix.csv'.format(folder_path)\n sub_matrix = pd.read_csv(filename, sep='\\t')\n mu_mat_setting = sub_matrix[sub_matrix.columns[1:]].values.T\n sub_sigs = sub_matrix.columns[1:]\n idx = [list(complete_sigs).index(s) for s in sub_sigs]\n elif setting == 'prefit':\n premixture_file = pd.read_csv(\n '{}/palimpsest/palimpsest_premixtures_{}.txt'.\n format(folder_path, setting), sep=' ')\n sig_names = premixture_file.columns.to_list()\n sig_names = [s.replace('.', ' ') for s in sig_names]\n sig_names = [s.replace('PCAWG ', 'PCAWG-') for s in sig_names]\n idx = [list(complete_sigs).index(s) for s in sig_names]\n mu_mat_setting = complete_mat[sig_names].values.T\n else:\n raise NameError('unknown setting for Palimpsest')\n pred_profile = (ccf_file.groupby('clonality_binary').CCF.count() /\n len(ccf_file)).values.reshape(1, -1)\\\n .dot(mixture_file).dot(mu_mat_setting)\n est_sigs = (ccf_file.groupby('clonality_binary').CCF.count() /\n len(ccf_file)).values.reshape(1, -1) \\\n .dot(mixture_file)\n pred_signatures[np.array(idx)] = est_sigs[0]\n est_dist = mixture_file.values[ccf_file.clonality_binary].dot(mu_mat_setting)\n runtime = pd.read_csv('{}/palimpsest/palimpsest_runtime_{}.csv'.format(folder_path, setting),\n index_col=0).values[0][0]\n return (None, None, J_pred, phi_pred_values, weights_pred,\n data_df[['mutation_id', 'pred_cluster_id']],\n data_df[['mutation_id', 'pred_subclonal']], pred_profile,\n pred_signatures, est_dist, runtime)\n\n\ndef format_tracksig(folder_path, setting):\n try:\n mixture_file = pd.read_csv('{}/tracksig/tracksig_mixtures_{}.csv'.\n format(folder_path, setting), sep=',')\n except FileNotFoundError:\n return [None] * 11\n try:\n changepoint_file = pd.read_csv(\n '{}/tracksig/tracksig_changepoints_{}.txt'.\n format(folder_path, setting), header=None, sep=' ')\n changepoints_tracksig_list = changepoint_file.values[0]\n except EmptyDataError:\n changepoints_tracksig_list = np.array(list())\n\n input_df = pd.read_csv('{}/input_t.tsv'.format(folder_path), sep='\\t')\n with open('{}/purity.txt'.format(folder_path), 'r') as f:\n purity = float(f.read())\n input_df = 
input_df.assign(mut_cn=1)\n input_df = input_df.assign(vaf=input_df.var_counts /\n (input_df.ref_counts + input_df.var_counts))\n input_df = input_df.assign(\n total_cn=lambda x: x['minor_cn'] + x['major_cn'])\n input_df = input_df.assign(\n vaf_cn=input_df.vaf * input_df['total_cn'] / input_df['mut_cn'])\n input_df = input_df.assign(\n vaf_purity=input_df.apply(\n lambda x: x['vaf']/purity *\n ((1 - purity) * 2 + purity * x['total_cn']) /\n x['mut_cn'], axis=1))\n input_df.sort_values(by='vaf_purity', inplace=True)\n input_df.reset_index(inplace=True, drop=True)\n\n input_df = input_df.assign(mutation_group=lambda x: x.index//100)\n nbin = len(input_df)//100\n input_df_filter = input_df[input_df.mutation_group <= nbin - 1]\n cluster_id_list = np.zeros(input_df_filter.mutation_group.nunique())\n i = 1\n for chg_point in changepoints_tracksig_list:\n cluster_id_list[(chg_point - 1):] = i\n i += 1\n input_df_filter = input_df_filter.assign(\n pred_cluster_id=input_df_filter.apply(\n lambda x: int(cluster_id_list[x['mutation_group']]), axis=1))\n\n J_pred = len(changepoints_tracksig_list) + 1\n weights_pred = input_df_filter.groupby('pred_cluster_id').vaf_purity.count().values/len(input_df_filter)\n phi_pred_values = input_df_filter.groupby('pred_cluster_id').vaf_purity.mean().values\n est_clonal_idx = input_df_filter.groupby('pred_cluster_id').vaf_purity.mean().idxmax()\n input_df_filter = input_df_filter.assign(\n pred_subclonal=(input_df_filter.pred_cluster_id != est_clonal_idx).astype(int))\n\n\n complete_mat_filename = '{}/MU_matrix.csv'.format(folder_path)\n complete_mat = pd.read_csv(complete_mat_filename, sep='\\t')\n complete_sigs = complete_mat.columns[1:]\n pred_signatures = np.zeros(len(complete_sigs))\n\n if setting == 'all':\n mu_mat_setting = complete_mat[complete_mat.columns[1:]].values.T\n idx = np.where(pred_signatures==0)[0]\n elif setting == 'cancertype':\n filename = '{}/sub_MU_matrix.csv'.format(folder_path)\n sub_matrix = pd.read_csv(filename, sep='\\t')\n mu_mat_setting = sub_matrix[sub_matrix.columns[1:]].values.T\n sub_sigs = sub_matrix.columns[1:]\n idx = [list(complete_sigs).index(s) for s in sub_sigs]\n elif setting == 'prefit':\n sig_names = mixture_file[mixture_file.columns[0]].values\n sig_names = [s.replace('.', ' ').replace('PCAWG ', 'PCAWG-') for s in sig_names]\n idx = [list(complete_sigs).index(s) for s in sig_names]\n mu_mat_setting = complete_mat[sig_names].values.T\n else:\n raise NameError('unknown setting for TrackSig')\n\n est_sigs = mixture_file[mixture_file.columns[1:]].mean(axis=1).values\n pred_signatures[idx] = est_sigs\n\n pred_profile = est_sigs.dot(mu_mat_setting)\n\n est_dist = mixture_file.values[:, 1:].T[input_df_filter.mutation_group.astype(int)].dot(mu_mat_setting)\n runtime = pd.read_csv('{}/tracksig/tracksig_runtime_{}.csv'.format(folder_path, setting),\n index_col=0).values[0][0]\n return (None, None, J_pred, phi_pred_values, weights_pred, \n input_df_filter[['mutation_id', 'pred_cluster_id']],\n input_df_filter[['mutation_id', 'pred_subclonal']], pred_profile,\n pred_signatures, est_dist, runtime)\n\ndef format_tracksigfreq(folder_path, setting):\n try:\n mixture_file = pd.read_csv('{}/tracksigfreq/tracksigfreq_mixtures_{}.csv'.\n format(folder_path, setting), sep=',')\n except FileNotFoundError:\n return [None] * 11\n try:\n changepoint_file = pd.read_csv(\n '{}/tracksigfreq/tracksigfreq_changepoints_{}.txt'.\n format(folder_path, setting), header=None, sep=' ')\n changepoints_tracksig_list = changepoint_file.values[0]\n except 
EmptyDataError:\n changepoints_tracksig_list = np.array(list())\n\n data_df = pd.read_csv('{}/tracksigfreq/vcaf.csv'.\n format(folder_path), sep='\\t')\n\n cluster_id_list = np.zeros(data_df.bin.nunique())\n i = 1\n for chg_point in changepoints_tracksig_list:\n cluster_id_list[(chg_point - 1):] = i\n i += 1\n data_df = data_df.assign(\n pred_cluster_id=data_df.apply(lambda x: int(cluster_id_list[x['bin']-1]),\n axis=1))\n J_pred = len(changepoints_tracksig_list) + 1\n weights_pred = data_df.groupby('pred_cluster_id').phi.count().values/len(data_df)\n phi_pred_values = data_df.groupby('pred_cluster_id').phi.mean().values\n\n est_clonal_idx = data_df.groupby('pred_cluster_id').phi.mean().idxmax()\n data_df = data_df.assign(\n pred_subclonal=(data_df.pred_cluster_id != est_clonal_idx).astype(int))\n\n complete_mat_filename = '{}/MU_matrix.csv'.format(folder_path)\n complete_mat = pd.read_csv(complete_mat_filename, sep='\\t')\n complete_sigs = complete_mat.columns[1:]\n pred_signatures = np.zeros(len(complete_sigs))\n\n if setting == 'all':\n mu_mat_setting = complete_mat[complete_mat.columns[1:]].values.T\n idx = np.where(pred_signatures==0)[0]\n elif setting == 'cancertype':\n filename = '{}/sub_MU_matrix.csv'.format(folder_path)\n sub_matrix = pd.read_csv(filename, sep='\\t')\n mu_mat_setting = sub_matrix[sub_matrix.columns[1:]].values.T\n sub_sigs = sub_matrix.columns[1:]\n idx = [list(complete_sigs).index(s) for s in sub_sigs]\n elif setting == 'prefit':\n sig_names = mixture_file[mixture_file.columns[0]].values\n sig_names = [s.replace('.', ' ').replace('PCAWG ', 'PCAWG-') for s in sig_names]\n idx = [list(complete_sigs).index(s) for s in sig_names]\n mu_mat_setting = complete_mat[sig_names].values.T\n else:\n raise NameError('unknown setting for TrackSigFreq')\n est_sigs = mixture_file[mixture_file.columns[1:]].mean(axis=1).values\n pred_signatures[idx] = est_sigs\n\n pred_profile = est_sigs.dot(mu_mat_setting)\n\n est_dist = mixture_file.values[:, 1:].T \\\n [data_df.bin.astype(int)-1].dot(mu_mat_setting)\n runtime = pd.read_csv('{}/tracksigfreq/tracksigfreq_runtime_{}.csv'.format(folder_path, setting),\n index_col=0).values[0][0]\n return (None, None, J_pred, phi_pred_values, weights_pred, \n data_df[['mutation_id', 'pred_cluster_id']],\n data_df[['mutation_id', 'pred_subclonal']], pred_profile, \n pred_signatures, est_dist, runtime)\n\n\nmethod_setting_list = [('pyclone', None), ('sciclone', None), ('ccube', None),\n ('dpclust', None), ('phylogicndt', None),\n ('clonesig', 'all'), ('clonesig', 'cancertype'),\n ('clonesig', 'prefit'), ('deconstructsigs', 'all'),\n ('deconstructsigs', 'cancertype'),\n ('palimpsest', 'all'), ('palimpsest', 'cancertype'),\n ('palimpsest', 'prefit'), ('tracksig', 'all'),\n ('tracksig', 'cancertype'), ('tracksig', 'prefit'),\n ('tracksigfreq', 'all'), ('tracksigfreq', 'cancertype'),\n ('tracksigfreq', 'prefit')]\nmethod_function_dict = {'pyclone': format_pyclone, 'sciclone': format_sciclone,\n 'ccube': format_ccube, 'dpclust': format_dpclust,\n 'phylogicndt': format_phylogicndt,\n 'clonesig': format_clonesig,\n 'deconstructsigs': format_deconstructsigs,\n 'palimpsest': format_palimpsest,\n 'tracksig': format_tracksig,\n 'tracksigfreq': format_tracksigfreq}\n\nif __name__ == '__main__':\n folder_path = sys.argv[1]\n (J_true, phi_true_values, weights_true, true_cluster_assign, true_subclonal,\n sig_profile_1A, sig_profile_1B, true_signatures, true_profile_1E) = \\\n format_truth(folder_path)\n\n tumor = folder_path.split('/')[1].split('_')[0]\n depth = 
folder_path.split('/')[1].split('_')[1]\n\n f = open('data/salcedo_dream_challenge/MuTect_truth/MuTect_{}.T.{}.truth.1A.txt'.format(tumor, depth), 'r')\n true_purity = float(f.readline().strip())\n perc_dip = 0 # placeholder; recomputed from the CNV table below\n\n cnv_filename = 'data/salcedo_dream_challenge/MuTect_inputs/{}-{}_refit_subclones_noXY.txt'.format(tumor, depth)\n cnv_table = pd.read_csv(cnv_filename, sep='\\t')\n\n def get_major_minor(x):\n if x.frac1_A==1:\n return pd.Series([x.nMaj1_A, x.nMin1_A])\n else:\n if x.frac1_A > x.frac2_A:\n return pd.Series([x.nMaj1_A, x.nMin1_A])\n else:\n return pd.Series([x.nMaj2_A, x.nMin2_A])\n\n new_data = cnv_table.apply(get_major_minor, axis=1)\n new_data.columns = ['major_cn', 'minor_cn']\n cnv_final = pd.concat((cnv_table[['chr', 'startpos', 'endpos']],\n new_data.astype(int)), axis=1)\n cnv_final = cnv_final.assign(weight=cnv_final.endpos - cnv_final.startpos)\n perc_dip = cnv_final[(cnv_final.major_cn==1)&(cnv_final.minor_cn==1)].weight.sum() / cnv_final.weight.sum()\n\n df_list = list()\n df_cols = ['tumor', 'depth', 'nb_mut', 'true_nb_clones', 'true_purity',\n 'perc_dip', 'fitted_nb_clones', 'll_ratio', 'pval', 'score1B',\n 'score1C', 'score2A', 'score2C_auc', 'score2C_accuracy',\n 'score2C_sensitivity', 'score2C_specificity', 'score2C_precision',\n 'score_sig_1A', 'score_sig_1B', 'score_sig_1C_auc',\n 'score_sig_1C_accuracy', 'score_sig_1C_sensitivity',\n 'score_sig_1C_specificity', 'score_sig_1C_precision',\n 'min_diff_distrib_mut', 'max_diff_distrib_mut',\n 'std_diff_distrib_mut', 'median_diff_distrib_mut', 'perc_dist_5',\n 'perc_dist_10', 'runtime', 'method', 'setting']\n for method, setting in method_setting_list:\n print(method, setting)\n row_list = list()\n row_list.append(tumor)\n row_list.append(depth)\n row_list.append(len(true_cluster_assign))\n row_list.append(J_true)\n row_list.append(true_purity)\n row_list.append(perc_dip)\n (ll_ratio, pval, J_pred, phi_pred_values, weights_pred,\n pred_cluster_assign, pred_subclonal, pred_profile, pred_signatures,\n est_dist, runtime) = \\\n method_function_dict[method](folder_path, setting)\n row_list.append(J_pred)\n row_list.append(ll_ratio)\n row_list.append(pval)\n if J_pred is not None:\n row_list.append(score1B_base(J_true, J_pred))\n else:\n row_list.append(np.nan)\n if phi_pred_values is not None:\n row_list.append(score1C_base(phi_true_values, phi_pred_values,\n weights_true, weights_pred))\n else:\n row_list.append(np.nan)\n if pred_cluster_assign is not None:\n ordered_table = pd.merge(pred_cluster_assign, true_cluster_assign,\n on='mutation_id', how='inner')\n if len(true_cluster_assign)<20000:\n row_list.append(score2A_base(ordered_table.true_cluster_id,\n ordered_table.pred_cluster_id))\n else:\n row_list.append(np.nan)\n ordered_table = pd.merge(pred_subclonal, true_subclonal,\n on='mutation_id', how='inner')\n auc, accuracy, sensitivity, specificity, precision = \\\n score2C_base(ordered_table.true_subclonal,\n ordered_table.pred_subclonal)\n for v in (auc, accuracy, sensitivity, specificity, precision):\n row_list.append(v)\n else:\n for i in range(6):\n row_list.append(np.nan)\n if pred_profile is not None:\n row_list.append(score_sig_1A_base(sig_profile_1A, pred_profile))\n row_list.append(score_sig_1B_base(sig_profile_1B, pred_profile))\n auc, accuracy, sensitivity, specificity, precision = \\\n score_sig_1C_base(true_signatures, pred_signatures)\n for v in (auc, accuracy, sensitivity, specificity, precision):\n row_list.append(v)\n if method == 'deconstructsigs':\n nb_rows = min(est_dist.shape[0], 
true_profile_1E.shape[0])\n (min_diff_distrib_mut, max_diff_distrib_mut, std_diff_distrib_mut,\n median_diff_distrib_mut, perc_dist_5, perc_dist_10) = \\\n score_sig_1E_base(true_profile_1E[0:nb_rows, :],\n est_dist[0:nb_rows, :])\n else:\n ok_ids = ordered_table.mutation_id.values\n true_filter = (true_subclonal.mutation_id.isin(ok_ids)).values\n pred_filter = (pred_subclonal.mutation_id.isin(ok_ids)).values\n (min_diff_distrib_mut, max_diff_distrib_mut, std_diff_distrib_mut,\n median_diff_distrib_mut, perc_dist_5, perc_dist_10) = \\\n score_sig_1E_base(true_profile_1E[true_filter, :],\n est_dist[pred_filter, :].astype(float))\n for v in (min_diff_distrib_mut, max_diff_distrib_mut,\n std_diff_distrib_mut, median_diff_distrib_mut,\n perc_dist_5, perc_dist_10):\n row_list.append(v)\n else:\n for i in range(13):\n row_list.append(np.nan)\n row_list.append(runtime)\n row_list.append(method)\n row_list.append(setting)\n df_list.append(row_list)\n res_df = pd.DataFrame(df_list, columns=df_cols)\n res_df.to_csv('{}/result_evaluation_dream_new.csv'.format(folder_path),\n sep='\\t', index=False)\n\n\n\n\n\n" ]
[ [ "numpy.array", "numpy.zeros", "pandas.merge", "pandas.DataFrame", "numpy.where", "numpy.argmax", "pandas.Series", "numpy.repeat", "pandas.read_csv", "numpy.unique", "numpy.isin" ] ]
cendaifeng/keras-face-recognition
[ "cda443e6ffec4c618f24c10ef30208f4f31a4181" ]
[ "version/face_recognize_sleep.py" ]
[ "import cv2\nimport os\nimport time\nimport numpy as np\nfrom net.mtcnn import mtcnn\nimport utils.utils as utils\nfrom net.inception import InceptionResNetV1\nimport RPi.GPIO as GPIO\nimport signal\nimport atexit\n\n\n# 初始化舵机\natexit.register(GPIO.cleanup)\nservopin = 17\nGPIO.setmode(GPIO.BCM)\nGPIO.setup(servopin, GPIO.OUT, initial=False)\npwm = GPIO.PWM(servopin, 50) # 50HZ\npwm.start(0)\n\n\ndef ControlMotor(angle):\n # 舵机的频率为50HZ,占空比为2.5%-12.5%,线性对应舵机转动角度的0-180度\n pwm.ChangeDutyCycle(2.5 + angle / 360 * 20)\n\n\nclass face_rec:\n\n def __init__(self):\n\n # 创建 mtcnn 对象\n # 检测图片中的人脸\n self.mtcnn_model = mtcnn()\n # 门限函数\n self.threshold = [0.5, 0.8, 0.9]\n\n # 载入 facenet\n # 将检测到的人脸转化为128维的向量\n self.facenet_model = InceptionResNetV1()\n # model.summary()\n # 加载模型权重只需要15s 而加载图像文件夹则需30s以上\n model_path = './model_data/facenet_keras.h5'\n self.facenet_model.load_weights(model_path)\n\n time2 = time.time()\n # ----------------------------------------------- #\n # 对数据库中的人脸进行编码\n # known_face_encodings中存储的是编码后的人脸\n # known_face_names为人脸的名字\n # ----------------------------------------------- #\n face_list = os.listdir(\"face_dataset\")\n\n self.known_face_encodings = []\n\n self.known_face_names = []\n\n for face in face_list:\n name = face.split(\".\")[0]\n\n img = cv2.imread(\"./face_dataset/\"+face)\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n # 检测人脸\n rectangles = self.mtcnn_model.detectFace(img, self.threshold)\n\n # 转化成正方形\n rectangles = utils.rect2square(np.array(rectangles))\n # facenet要传入一个160x160的图片\n rectangle = rectangles[0]\n # 记下他们的landmark\n landmark = (np.reshape(rectangle[5:15],(5,2)) - np.array([int(rectangle[0]),int(rectangle[1])]))/(rectangle[3]-rectangle[1])*160\n\n crop_img = img[int(rectangle[1]):int(rectangle[3]), int(rectangle[0]):int(rectangle[2])]\n crop_img = cv2.resize(crop_img,(160,160))\n\n new_img,_ = utils.Alignment_1(crop_img,landmark)\n\n new_img = np.expand_dims(new_img,0)\n # 将检测到的人脸传入到facenet的模型中,实现128维特征向量的提取\n face_encoding = utils.calc_128_vec(self.facenet_model,new_img)\n\n self.known_face_encodings.append(face_encoding)\n self.known_face_names.append(name)\n\n time3 = time.time()\n print(time3-time2)\n\n\n\n def recognize(self, draw):\n # -----------------------------------------------#\n # 人脸识别\n # 先定位,再进行数据库匹配\n # -----------------------------------------------#\n height, width, _ = np.shape(draw)\n draw_rgb = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)\n\n # 检测人脸\n rectangles = self.mtcnn_model.detectFace(draw_rgb, self.threshold)\n\n if len(rectangles) == 0:\n return\n\n # 转化成正方形\n rectangles = utils.rect2square(np.array(rectangles, dtype=np.int32))\n rectangles[:, 0] = np.clip(rectangles[:, 0], 0, width)\n rectangles[:, 1] = np.clip(rectangles[:, 1], 0, height)\n rectangles[:, 2] = np.clip(rectangles[:, 2], 0, width)\n rectangles[:, 3] = np.clip(rectangles[:, 3], 0, height)\n # -----------------------------------------------#\n # 对检测到的人脸进行编码\n # -----------------------------------------------#\n face_encodings = []\n for rectangle in rectangles:\n landmark = (np.reshape(rectangle[5:15], (5, 2)) - np.array([int(rectangle[0]), int(rectangle[1])])) / (\n rectangle[3] - rectangle[1]) * 160\n\n crop_img = draw_rgb[int(rectangle[1]):int(rectangle[3]), int(rectangle[0]):int(rectangle[2])]\n crop_img = cv2.resize(crop_img, (160, 160))\n\n new_img, _ = utils.Alignment_1(crop_img, landmark)\n new_img = np.expand_dims(new_img, 0)\n\n face_encoding = utils.calc_128_vec(self.facenet_model, new_img)\n face_encodings.append(face_encoding)\n\n face_names = []\n 
distances = []\n for face_encoding in face_encodings:\n # Take one face and compare it against every face in the database, computing a score\n matches = utils.compare_faces(self.known_face_encodings, face_encoding, tolerance=0.7)\n name = \"Unknown\"\n # Find the closest known face\n face_distances = utils.face_distance(self.known_face_encodings, face_encoding)\n # Get the score of that closest face\n best_match_index = np.argmin(face_distances)\n dis = face_distances[best_match_index]\n if matches[best_match_index]:\n name = self.known_face_names[best_match_index]\n print(name+\" ---> \"+str(dis))\n\n ControlMotor(180)\n time.sleep(4)\n ControlMotor(0)\n time.sleep(1)\n\n\n face_names.append(name)\n distances.append(dis)\n\n rectangles = rectangles[:, 0:4]\n # -----------------------------------------------#\n # Draw the boxes\n # -----------------------------------------------#\n for (left, top, right, bottom), name, dis in zip(rectangles, face_names, distances):\n cv2.rectangle(draw, (left, top), (right, bottom), (0, 0, 255), 2)\n\n font = cv2.FONT_HERSHEY_SIMPLEX\n cv2.putText(draw, name, (left, bottom - 15), font, 0.75, (255, 255, 255), 2)\n cv2.putText(draw, str(dis), (right, bottom - 15), font, 0.75, (0, 0, 255), 2)\n return draw\n\n\nif __name__ == \"__main__\":\n\n diudiudiu = face_rec()\n video_capture = cv2.VideoCapture(0)\n\n num_frames = 0\n since = time.time()\n while True:\n time.sleep(1)\n ret, draw = video_capture.read()\n diudiudiu.recognize(draw)\n # put the text into video\n num_frames += 1\n cv2.putText(draw, f'FPS{num_frames / (time.time() - since):.2f}', (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1.2,\n (0, 0, 255), 2)\n cv2.imshow('Video', draw)\n if cv2.waitKey(20) & 0xFF == ord('q'):\n break\n\n\n video_capture.release()\n cv2.destroyAllWindows()\n" ]
[ [ "numpy.array", "numpy.reshape", "numpy.argmin", "numpy.shape", "numpy.clip", "numpy.expand_dims" ] ]
dparle/CarND-Capstone
[ "7ad6173ec71cc548c5b807de042608cd386c4862" ]
[ "ros/src/tl_detector/light_classification/tl_classifier.py" ]
[ "from styx_msgs.msg import TrafficLight\nimport rospy\nimport yaml\nimport tensorflow as tf\nimport numpy as np\nimport os\nimport cv2\n\nclass TLClassifier(object):\n def __init__(self):\n self.session = None\n self.model_graph = None\n self.classes = {1: TrafficLight.GREEN, 2: TrafficLight.RED, 3: TrafficLight.YELLOW, 4: TrafficLight.UNKNOWN}\n \n config_string = rospy.get_param(\"/traffic_light_config\")\n self.config = yaml.load(config_string)\n self.get_model(os.path.dirname(os.path.realpath(__file__)) + self.config['classifier_model'])\n \n def get_model(self, model_path):\n config = tf.ConfigProto()\n config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1\n\n self.model_graph = tf.Graph()\n with tf.Session(graph=self.model_graph, config=config) as sess:\n self.session = sess\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(model_path, 'rb') as fid:\n serialized_graph = fid.read()\n od_graph_def.ParseFromString(serialized_graph)\n tf.import_graph_def(od_graph_def, name='')\n \n def get_classification(self, image):\n \"\"\"Determines the color of the traffic light in the image\n\n Args:\n image (cv::Mat): image containing the traffic light\n\n Returns:\n int: ID of traffic light color (specified in styx_msgs/TrafficLight)\n\n \"\"\"\n # Processing pre image\n image = cv2.resize(image, (300, 300))\n image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n \n image_tensor = self.model_graph.get_tensor_by_name('image_tensor:0')\n detection_boxes = self.model_graph.get_tensor_by_name('detection_boxes:0')\n detection_scores = self.model_graph.get_tensor_by_name('detection_scores:0')\n detection_classes = self.model_graph.get_tensor_by_name('detection_classes:0')\n \n\n (boxes, scores, classes) = self.session.run(\n [detection_boxes, detection_scores, detection_classes],\n feed_dict={image_tensor: np.expand_dims(image, axis=0)})\n \n boxes = np.squeeze(boxes)\n scores = np.squeeze(scores)\n classes = np.squeeze(classes)\n\n for element, box in enumerate(boxes):\n if scores[element] > 0.5:\n traffic_light_class = self.classes[classes[element]]\n rospy.logdebug(\"Detected traffic light is: %d\", traffic_light_class)\n return traffic_light_class\n else:\n rospy.logdebug(\"Traffic Light Image Unkown\")\n \n return TrafficLight.UNKNOWN\n" ]
[ [ "tensorflow.Graph", "tensorflow.Session", "tensorflow.GraphDef", "tensorflow.import_graph_def", "tensorflow.gfile.GFile", "tensorflow.ConfigProto", "numpy.squeeze", "numpy.expand_dims" ] ]
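The classifier above returns the first detection whose score clears 0.5 and maps its class id onto a traffic-light state. A standalone sketch of that post-processing on dummy arrays; the 1-4 class mapping mirrors the record, everything else is illustrative:

import numpy as np

CLASS_MAP = {1: "GREEN", 2: "RED", 3: "YELLOW", 4: "UNKNOWN"}

def postprocess(scores, classes, threshold=0.5):
    # TF object-detection outputs come back sorted by score, so the first
    # hit above the threshold is also the most confident detection.
    for score, cls in zip(scores, classes):
        if score > threshold:
            return CLASS_MAP[int(cls)]
    return "UNKNOWN"

print(postprocess(np.array([0.9, 0.4]), np.array([2.0, 1.0])))  # -> RED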
yiskw713/pytorch_utils
[ "ce353202b28d5add9fa4fa6e4f9cdedffc3f8114" ]
[ "tests/opencv_misc/test_interframe_difference.py" ]
[ "import numpy as np\nimport pytest\n\nfrom src.opencv_misc.interframe_difference import InterframeDifferenceProcessor\n\n\nclass TestInterframeDifferenceProcessor:\n @pytest.fixture()\n def processor(self) -> InterframeDifferenceProcessor:\n return InterframeDifferenceProcessor(threshold=30)\n\n def test_update_background(self, processor: InterframeDifferenceProcessor):\n bg = np.zeros((10, 10), np.uint8)\n processor._update_background(bg)\n\n assert np.all(bg == processor.background)\n\n def test_calc_interframe_difference(\n self, processor: InterframeDifferenceProcessor\n ) -> None:\n bg = np.zeros((10, 10), np.uint8)\n processor._update_background(bg)\n\n # target gray image\n target = bg.copy()\n target[1, 1] = 30\n target[2, 2] = 29\n\n # expected mask\n expected = np.zeros((10, 10), np.uint8)\n expected[1, 1] = 255\n\n mask = processor._calc_interframe_difference(target)\n assert np.all(mask == expected)\n" ]
[ [ "numpy.all", "numpy.zeros" ] ]
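The tests above pin down the processor's contract: a pixel becomes 255 when its absolute difference from the stored background is at least the threshold (30 maps to 255, 29 to 0 at threshold=30). A minimal stand-in consistent with those tests — the real class in src.opencv_misc.interframe_difference may be implemented differently (e.g. with cv2.absdiff):

import numpy as np

class InterframeDifferenceSketch:
    def __init__(self, threshold: int = 30) -> None:
        self.threshold = threshold
        self.background = None

    def _update_background(self, frame: np.ndarray) -> None:
        self.background = frame.copy()

    def _calc_interframe_difference(self, gray: np.ndarray) -> np.ndarray:
        # Widen to int16 so the subtraction cannot wrap around in uint8.
        diff = np.abs(gray.astype(np.int16) - self.background.astype(np.int16))
        return np.where(diff >= self.threshold, 255, 0).astype(np.uint8)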
karthiksharma98/mmdetection
[ "295145d41a74598db98a037224f0f82c074f3fff" ]
[ "mmdet/models/dense_heads/yolof_rep_head.py" ]
[ "import torch\nimport torch.nn as nn\nfrom mmcv.cnn import (ConvModule, bias_init_with_prob, constant_init, is_norm,\n normal_init)\nfrom mmcv.runner import force_fp32\n\nfrom mmdet.core import anchor_inside_flags, multi_apply, reduce_mean, unmap\nfrom ..builder import HEADS\nfrom .anchor_head import AnchorHead\nfrom ..backbones.repvgg import RepVGGConvModule\nINF = 1e8\n\n\ndef levels_to_images(mlvl_tensor):\n \"\"\"Concat multi-level feature maps by image.\n [feature_level0, feature_level1...] -> [feature_image0, feature_image1...]\n Convert the shape of each element in mlvl_tensor from (N, C, H, W) to\n (N, H*W , C), then split the element to N elements with shape (H*W, C), and\n concat elements in same image of all level along first dimension.\n\n Args:\n mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from\n corresponding level. Each element is of shape (N, C, H, W)\n\n Returns:\n list[torch.Tensor]: A list that contains N tensors and each tensor is\n of shape (num_elements, C)\n \"\"\"\n batch_size = mlvl_tensor[0].size(0)\n batch_list = [[] for _ in range(batch_size)]\n channels = mlvl_tensor[0].size(1)\n for t in mlvl_tensor:\n t = t.permute(0, 2, 3, 1)\n t = t.view(batch_size, -1, channels).contiguous()\n for img in range(batch_size):\n batch_list[img].append(t[img])\n return [torch.cat(item, 0) for item in batch_list]\n\n\[email protected]_module()\nclass RepYOLOFHead(AnchorHead):\n \"\"\"YOLOFHead Paper link: https://arxiv.org/abs/2103.09460.\n Args:\n num_classes (int): The number of object classes (w/o background)\n in_channels (List[int]): The number of input channels per scale.\n cls_num_convs (int): The number of convolutions of cls branch.\n Default 2.\n reg_num_convs (int): The number of convolutions of reg branch.\n Default 4.\n norm_cfg (dict): Dictionary to construct and config norm layer.\n \"\"\"\n\n def __init__(self,\n num_classes,\n in_channels,\n num_cls_convs=2,\n num_reg_convs=4,\n norm_cfg=dict(type='BN', requires_grad=True),\n deploy=False,\n **kwargs):\n self.num_cls_convs = num_cls_convs\n self.num_reg_convs = num_reg_convs\n self.norm_cfg = norm_cfg\n self.deploy = deploy\n super(RepYOLOFHead, self).__init__(num_classes, in_channels, **kwargs)\n\n def _init_layers(self):\n cls_subnet = []\n bbox_subnet = []\n for i in range(self.num_cls_convs):\n cls_subnet.append(\n # ConvModule(\n # self.in_channels,\n # self.in_channels,\n # kernel_size=3,\n # padding=1,\n # norm_cfg=self.norm_cfg)\n\n RepVGGConvModule(\n in_channels=self.in_channels,\n out_channels=self.in_channels,\n kernel_size=3,\n stride=1,\n padding=1,\n dilation=1,\n groups=1,\n activation='ReLU',\n padding_mode='zeros',\n deploy=self.deploy)\n )\n \n for i in range(self.num_reg_convs):\n bbox_subnet.append(\n # ConvModule(\n # self.in_channels,\n # self.in_channels,\n # kernel_size=3,\n # padding=1,\n # norm_cfg=self.norm_cfg)\n RepVGGConvModule(\n in_channels=self.in_channels,\n out_channels=self.in_channels,\n kernel_size=3,\n stride=1,\n padding=1,\n dilation=1,\n groups=1,\n activation='ReLU',\n padding_mode='zeros',\n deploy=self.deploy)\n\n )\n self.cls_subnet = nn.Sequential(*cls_subnet)\n self.bbox_subnet = nn.Sequential(*bbox_subnet)\n self.cls_score = nn.Conv2d(\n self.in_channels,\n self.num_anchors * self.num_classes,\n kernel_size=3,\n stride=1,\n padding=1)\n\n self.bbox_pred = nn.Conv2d(\n self.in_channels,\n self.num_anchors * 4,\n kernel_size=3,\n stride=1,\n padding=1)\n self.object_pred = nn.Conv2d(\n self.in_channels,\n self.num_anchors,\n kernel_size=3,\n 
stride=1,\n padding=1)\n\n def init_weights(self):\n for m in self.modules():\n if isinstance(m, nn.Conv2d):\n normal_init(m, mean=0, std=0.01)\n if is_norm(m):\n constant_init(m, 1)\n\n # Use prior in model initialization to improve stability\n bias_cls = bias_init_with_prob(0.01)\n torch.nn.init.constant_(self.cls_score.bias, bias_cls)\n\n def forward_single(self, feature):\n cls_score = self.cls_score(self.cls_subnet(feature))\n N, _, H, W = cls_score.shape\n cls_score = cls_score.view(N, -1, self.num_classes, H, W)\n\n reg_feat = self.bbox_subnet(feature)\n bbox_reg = self.bbox_pred(reg_feat)\n objectness = self.object_pred(reg_feat)\n\n # # implicit objectness\n if self.fp16_enabled:\n INF = 6.55e4\n else:\n INF = 1e8\n\n objectness = objectness.view(N, -1, 1, H, W)\n normalized_cls_score = cls_score + objectness - torch.log(\n 1. + torch.clamp(cls_score.exp(), max=INF) +\n torch.clamp(objectness.exp(), max=INF))\n normalized_cls_score = normalized_cls_score.view(N, -1, H, W)\n return normalized_cls_score, bbox_reg\n\n @force_fp32(apply_to=('cls_scores', 'bbox_preds'))\n def loss(self,\n cls_scores,\n bbox_preds,\n gt_bboxes,\n gt_labels,\n img_metas,\n gt_bboxes_ignore=None):\n \"\"\"Compute losses of the head.\n Args:\n cls_scores (list[Tensor]): Box scores for each scale level\n Has shape (batch, num_anchors * num_classes, h, w)\n bbox_preds (list[Tensor]): Box energies / deltas for each scale\n level with shape (batch, num_anchors * 4, h, w)\n gt_bboxes (list[Tensor]): Ground truth bboxes for each image with\n shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.\n gt_labels (list[Tensor]): class indices corresponding to each box\n img_metas (list[dict]): Meta information of each image, e.g.,\n image size, scaling factor, etc.\n gt_bboxes_ignore (None | list[Tensor]): specify which bounding\n boxes can be ignored when computing the loss. 
Default: None\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n assert len(cls_scores) == 1\n assert self.anchor_generator.num_levels == 1\n\n device = cls_scores[0].device\n featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]\n anchor_list, valid_flag_list = self.get_anchors(\n featmap_sizes, img_metas, device=device)\n\n # The output level is always 1\n anchor_list = [anchors[0] for anchors in anchor_list]\n valid_flag_list = [valid_flags[0] for valid_flags in valid_flag_list]\n\n cls_scores_list = levels_to_images(cls_scores)\n bbox_preds_list = levels_to_images(bbox_preds)\n\n label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1\n cls_reg_targets = self.get_targets(\n cls_scores_list,\n bbox_preds_list,\n anchor_list,\n valid_flag_list,\n gt_bboxes,\n img_metas,\n gt_bboxes_ignore_list=gt_bboxes_ignore,\n gt_labels_list=gt_labels,\n label_channels=label_channels)\n if cls_reg_targets is None:\n return None\n (batch_labels, batch_label_weights, num_total_pos, num_total_neg,\n batch_bbox_weights, batch_pos_predicted_boxes,\n batch_target_boxes) = cls_reg_targets\n\n flatten_labels = batch_labels.reshape(-1)\n batch_label_weights = batch_label_weights.reshape(-1)\n cls_score = cls_scores[0].permute(0, 2, 3,\n 1).reshape(-1, self.cls_out_channels)\n\n num_total_samples = (num_total_pos +\n num_total_neg) if self.sampling else num_total_pos\n num_total_samples = reduce_mean(\n cls_score.new_tensor(num_total_samples)).clamp_(1.0).item()\n\n # classification loss\n loss_cls = self.loss_cls(\n cls_score,\n flatten_labels,\n batch_label_weights,\n avg_factor=num_total_samples)\n\n # regression loss\n if batch_pos_predicted_boxes.shape[0] == 0:\n # no pos sample\n loss_bbox = batch_pos_predicted_boxes.sum() * 0\n else:\n loss_bbox = self.loss_bbox(\n batch_pos_predicted_boxes,\n batch_target_boxes,\n batch_bbox_weights.float(),\n avg_factor=num_total_samples)\n\n return dict(loss_cls=loss_cls, loss_bbox=loss_bbox)\n\n def get_targets(self,\n cls_scores_list,\n bbox_preds_list,\n anchor_list,\n valid_flag_list,\n gt_bboxes_list,\n img_metas,\n gt_bboxes_ignore_list=None,\n gt_labels_list=None,\n label_channels=1,\n unmap_outputs=True):\n \"\"\"Compute regression and classification targets for anchors in\n multiple images.\n Args:\n cls_scores_list (list[Tensor]): Classification scores of\n each image. each is a 4D-tensor, the shape is\n (h * w, num_anchors * num_classes).\n bbox_preds_list (list[Tensor]): Bbox preds of each image.\n each is a 4D-tensor, the shape is (h * w, num_anchors * 4).\n anchor_list (list[Tensor]): Anchors of each image. Each element of\n is a tensor of shape (h * w * num_anchors, 4).\n valid_flag_list (list[Tensor]): Valid flags of each image. Each\n element of is a tensor of shape (h * w * num_anchors, )\n gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image.\n img_metas (list[dict]): Meta info of each image.\n gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be\n ignored.\n gt_labels_list (list[Tensor]): Ground truth labels of each box.\n label_channels (int): Channel of label.\n unmap_outputs (bool): Whether to map outputs back to the original\n set of anchors.\n\n Returns:\n tuple: Usually returns a tuple containing learning targets.\n\n - batch_labels (Tensor): Label of all images. 
Each element \\\n of is a tensor of shape (batch, h * w * num_anchors)\n - batch_label_weights (Tensor): Label weights of all images \\\n of is a tensor of shape (batch, h * w * num_anchors)\n - num_total_pos (int): Number of positive samples in all \\\n images.\n - num_total_neg (int): Number of negative samples in all \\\n images.\n additional_returns: This function enables user-defined returns from\n `self._get_targets_single`. These returns are currently refined\n to properties at each feature map (i.e. having HxW dimension).\n The results will be concatenated after the end\n \"\"\"\n num_imgs = len(img_metas)\n assert len(anchor_list) == len(valid_flag_list) == num_imgs\n\n # compute targets for each image\n if gt_bboxes_ignore_list is None:\n gt_bboxes_ignore_list = [None for _ in range(num_imgs)]\n if gt_labels_list is None:\n gt_labels_list = [None for _ in range(num_imgs)]\n results = multi_apply(\n self._get_targets_single,\n bbox_preds_list,\n anchor_list,\n valid_flag_list,\n gt_bboxes_list,\n gt_bboxes_ignore_list,\n gt_labels_list,\n img_metas,\n label_channels=label_channels,\n unmap_outputs=unmap_outputs)\n (all_labels, all_label_weights, pos_inds_list, neg_inds_list,\n sampling_results_list) = results[:5]\n rest_results = list(results[5:]) # user-added return values\n # no valid anchors\n if any([labels is None for labels in all_labels]):\n return None\n # sampled anchors of all images\n num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list])\n num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list])\n\n batch_labels = torch.stack(all_labels, 0)\n batch_label_weights = torch.stack(all_label_weights, 0)\n\n res = (batch_labels, batch_label_weights, num_total_pos, num_total_neg)\n for i, rests in enumerate(rest_results): # user-added return values\n rest_results[i] = torch.cat(rests, 0)\n\n return res + tuple(rest_results)\n\n def _get_targets_single(self,\n bbox_preds,\n flat_anchors,\n valid_flags,\n gt_bboxes,\n gt_bboxes_ignore,\n gt_labels,\n img_meta,\n label_channels=1,\n unmap_outputs=True):\n \"\"\"Compute regression and classification targets for anchors in a\n single image.\n Args:\n bbox_preds (Tensor): Bbox prediction of the image, which\n shape is (h * w ,4)\n flat_anchors (Tensor): Anchors of the image, which shape is\n (h * w * num_anchors ,4)\n valid_flags (Tensor): Valid flags of the image, which shape is\n (h * w * num_anchors,).\n gt_bboxes (Tensor): Ground truth bboxes of the image,\n shape (num_gts, 4).\n gt_bboxes_ignore (Tensor): Ground truth bboxes to be\n ignored, shape (num_ignored_gts, 4).\n img_meta (dict): Meta info of the image.\n gt_labels (Tensor): Ground truth labels of each box,\n shape (num_gts,).\n label_channels (int): Channel of label.\n unmap_outputs (bool): Whether to map outputs back to the original\n set of anchors.\n Returns:\n tuple:\n labels (Tensor): Labels of image, which shape is\n (h * w * num_anchors, ).\n label_weights (Tensor): Label weights of image, which shape is\n (h * w * num_anchors, ).\n pos_inds (Tensor): Pos index of image.\n neg_inds (Tensor): Neg index of image.\n sampling_result (obj:`SamplingResult`): Sampling result.\n pos_bbox_weights (Tensor): The Weight of using to calculate\n the bbox branch loss, which shape is (num, ).\n pos_predicted_boxes (Tensor): boxes predicted value of\n using to calculate the bbox branch loss, which shape is\n (num, 4).\n pos_target_boxes (Tensor): boxes target value of\n using to calculate the bbox branch loss, which shape is\n (num, 4).\n \"\"\"\n 
inside_flags = anchor_inside_flags(flat_anchors, valid_flags,\n img_meta['img_shape'][:2],\n self.train_cfg.allowed_border)\n if not inside_flags.any():\n return (None, ) * 8\n # assign gt and sample anchors\n anchors = flat_anchors[inside_flags, :]\n bbox_preds = bbox_preds.reshape(-1, 4)\n bbox_preds = bbox_preds[inside_flags, :]\n\n # decoded bbox\n decoder_bbox_preds = self.bbox_coder.decode(anchors, bbox_preds)\n assign_result = self.assigner.assign(\n decoder_bbox_preds, anchors, gt_bboxes, gt_bboxes_ignore,\n None if self.sampling else gt_labels)\n\n pos_bbox_weights = assign_result.get_extra_property('pos_idx')\n pos_predicted_boxes = assign_result.get_extra_property(\n 'pos_predicted_boxes')\n pos_target_boxes = assign_result.get_extra_property('target_boxes')\n\n sampling_result = self.sampler.sample(assign_result, anchors,\n gt_bboxes)\n num_valid_anchors = anchors.shape[0]\n labels = anchors.new_full((num_valid_anchors, ),\n self.num_classes,\n dtype=torch.long)\n label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float)\n\n pos_inds = sampling_result.pos_inds\n neg_inds = sampling_result.neg_inds\n if len(pos_inds) > 0:\n if gt_labels is None:\n # Only rpn gives gt_labels as None\n # Foreground is the first class since v2.5.0\n labels[pos_inds] = 0\n else:\n labels[pos_inds] = gt_labels[\n sampling_result.pos_assigned_gt_inds]\n if self.train_cfg.pos_weight <= 0:\n label_weights[pos_inds] = 1.0\n else:\n label_weights[pos_inds] = self.train_cfg.pos_weight\n if len(neg_inds) > 0:\n label_weights[neg_inds] = 1.0\n\n # map up to original set of anchors\n if unmap_outputs:\n num_total_anchors = flat_anchors.size(0)\n labels = unmap(\n labels, num_total_anchors, inside_flags,\n fill=self.num_classes) # fill bg label\n label_weights = unmap(label_weights, num_total_anchors,\n inside_flags)\n\n return (labels, label_weights, pos_inds, neg_inds, sampling_result,\n pos_bbox_weights, pos_predicted_boxes, pos_target_boxes)\n" ]
[ [ "torch.cat", "torch.stack", "torch.nn.init.constant_", "torch.nn.Sequential", "torch.nn.Conv2d" ] ]
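forward_single in the head above fuses classification logits with an implicit objectness term as cls + obj - log(1 + exp(cls) + exp(obj)), clamping the exponentials so fp16 inference cannot overflow. A tiny standalone check of that formula, not tied to mmdetection:

import torch

def fuse_cls_obj(cls_score, objectness, inf=1e8):
    # Clamp exp() exactly as the record does; use inf=6.55e4 under fp16.
    return cls_score + objectness - torch.log(
        1.0 + torch.clamp(cls_score.exp(), max=inf)
        + torch.clamp(objectness.exp(), max=inf)
    )

cls = torch.randn(2, 3)
obj = torch.randn(2, 1)
print(fuse_cls_obj(cls, obj).shape)  # objectness broadcasts over classes: (2, 3)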
canaanchap/roguedeka
[ "aacd3e45966deff21eb2db5e6e6034eaaf9b88b2" ]
[ "venv/lib/python3.7/site-packages/tcod/sdl.py" ]
[ "\"\"\"SDL2 specific functionality.\n\nAdd the line ``import tcod.sdl`` to include this module, as importing this\nmodule is not implied by ``import tcod``.\n\"\"\"\nfrom typing import Any, Tuple\n\nimport numpy as np\n\nfrom tcod.loader import lib, ffi\n\n__all__ = (\"Window\",)\n\n\nclass _TempSurface:\n    \"\"\"Holds a temporary surface derived from a NumPy array.\"\"\"\n\n    def __init__(self, pixels: np.array) -> None:\n        self._array = np.ascontiguousarray(pixels, dtype=np.uint8)\n        if len(self._array.shape) != 3:\n            raise TypeError(\n                \"NumPy shape must be 3D [y, x, ch] (got %r)\"\n                % (self._array.shape,)\n            )\n        if not 3 <= self._array.shape[2] <= 4:\n            raise TypeError(\n                \"NumPy array must have RGB or RGBA channels. (got %r)\"\n                % (self._array.shape,)\n            )\n        self.p = ffi.gc(\n            lib.SDL_CreateRGBSurfaceFrom(\n                ffi.from_buffer(\"void*\", self._array),\n                self._array.shape[1],  # Width.\n                self._array.shape[0],  # Height.\n                self._array.shape[2] * 8,  # Bit depth.\n                self._array.strides[0],  # Pitch (bytes per row).\n                0x000000FF,\n                0x0000FF00,\n                0x00FF0000,\n                0xFF000000 if self._array.shape[2] == 4 else 0,\n            ),\n            lib.SDL_FreeSurface,\n        )\n\n\nclass Window:\n    \"\"\"An SDL2 Window object.\"\"\"\n\n    def __init__(self, sdl_window_p: Any) -> None:\n        if ffi.typeof(sdl_window_p) is not ffi.typeof(\"struct SDL_Window*\"):\n            raise TypeError(\n                \"sdl_window_p must be %r type (was %r).\"\n                % (ffi.typeof(\"struct SDL_Window*\"), ffi.typeof(sdl_window_p))\n            )\n        if not sdl_window_p:\n            raise ValueError(\"sdl_window_p can not be a null pointer.\")\n        self.p = sdl_window_p\n\n    def __eq__(self, other: Any) -> bool:\n        return bool(self.p == other.p)\n\n    def set_icon(self, image: np.array) -> None:\n        \"\"\"Set the window icon from an image.\n\n        `image` is a C memory order RGB or RGBA NumPy array.\n        \"\"\"\n        surface = _TempSurface(image)\n        lib.SDL_SetWindowIcon(self.p, surface.p)\n\n    @property\n    def allow_screen_saver(self) -> bool:\n        \"\"\"If True the operating system is allowed to display a screen saver.\n\n        You can set this attribute to enable or disable the screen saver.\n        \"\"\"\n        # SDL's screen saver functions are global rather than per-window.\n        return bool(lib.SDL_IsScreenSaverEnabled())\n\n    @allow_screen_saver.setter\n    def allow_screen_saver(self, value: bool) -> None:\n        if value:\n            lib.SDL_EnableScreenSaver()\n        else:\n            lib.SDL_DisableScreenSaver()\n\n    @property\n    def position(self) -> Tuple[int, int]:\n        \"\"\"Return the (x, y) position of the window.\n\n        This attribute can be set to move the window.\n        The constants tcod.lib.SDL_WINDOWPOS_CENTERED or\n        tcod.lib.SDL_WINDOWPOS_UNDEFINED can be used.\n        \"\"\"\n        xy = ffi.new(\"int[2]\")\n        lib.SDL_GetWindowPosition(self.p, xy, xy + 1)\n        return xy[0], xy[1]\n\n    @position.setter\n    def position(self, xy: Tuple[int, int]) -> None:\n        x, y = xy\n        lib.SDL_SetWindowPosition(self.p, x, y)\n\n    @property\n    def size(self) -> Tuple[int, int]:\n        \"\"\"Return the pixel (width, height) of the window.\n\n        This attribute can be set to change the size of the window but the\n        given size must be greater than (1, 1) or else an exception will be\n        raised.\n        \"\"\"\n        xy = ffi.new(\"int[2]\")\n        lib.SDL_GetWindowSize(self.p, xy, xy + 1)\n        return xy[0], xy[1]\n\n    @size.setter\n    def size(self, xy: Tuple[int, int]) -> None:\n        if any(i <= 0 for i in xy):\n            raise ValueError(\n                \"Window size must be greater than zero, not %r\" % (xy,)\n            )\n        x, y = xy\n        lib.SDL_SetWindowSize(self.p, x, y)\n\n\ndef get_active_window() -> Window:\n    \"\"\"Return the SDL2 window currently managed by libtcod.\n\n    Will raise an error if libtcod does not currently have a window.\n    \"\"\"\n    sdl_window = 
lib.TCOD_sys_get_window()\n if not sdl_window:\n raise RuntimeError(\"TCOD does not have an active window.\")\n return Window(sdl_window)\n" ]
[ [ "numpy.ascontiguousarray" ] ]
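A hedged usage sketch for the Window wrapper above. It assumes libtcod has already created an SDL window (get_active_window raises otherwise); the icon array follows _TempSurface's [y, x, ch] uint8 contract:

import numpy as np
import tcod.sdl

window = tcod.sdl.get_active_window()  # requires an existing libtcod window
window.position = (100, 100)
window.size = (640, 480)

# RGBA icon: [height, width, channels], C-contiguous uint8.
icon = np.zeros((32, 32, 4), dtype=np.uint8)
icon[..., 0] = 255  # solid red
icon[..., 3] = 255  # fully opaque
window.set_icon(icon)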
zgojcic/3D_multiview_reg
[ "22468adadfccaffe92c6d88a4cce42b5b7edb77e" ]
[ "lib/pairwise/__init__.py" ]
[ "import torch\r\nimport torch.nn as nn\r\nimport MinkowskiEngine as ME\r\nfrom lib.layers import Soft_NN, Sampler\r\nfrom lib.utils import extract_overlaping_pairs, extract_mutuals, construct_filtering_input_data\r\nfrom lib.pairwise import (\r\n    config, training\r\n)\r\n\r\n__all__ = [\r\n    \"config\", \"training\"\r\n]\r\n\r\n\r\nclass PairwiseReg(nn.Module):\r\n    \"\"\"\r\n    Pairwise registration class.\r\n\r\n    It combines a feature descriptor with a filtering network and a differentiable Kabsch algorithm to estimate\r\n    the transformation parameters of two point clouds.\r\n\r\n    Args:\r\n        descriptor_module (nn.Module): feature descriptor network\r\n        filtering_module (nn.Module): filtering (outlier detection) network\r\n        corr_type (string): type of the correspondences to be used (hard, soft, Gumbel-Softmax)\r\n        device (device): torch device\r\n        mutuals_flag (bool): if mutual nearest neighbors should be used\r\n\r\n    Returns:\r\n\r\n\r\n    \"\"\"\r\n    \r\n    def __init__(self, descriptor_module, \r\n                 filtering_module, device, samp_type='fps', \r\n                 corr_type = 'soft', mutuals_flag=False, \r\n                 connectivity_info=None, tgt_num_points=2000, \r\n                 straight_through_gradient=True, train_descriptor=False):\r\n        super().__init__()\r\n\r\n        self.device = device\r\n        self.samp_type = samp_type\r\n        self.corr_type = corr_type\r\n        \r\n        self.mutuals = mutuals_flag\r\n        self.connectivity_info = connectivity_info\r\n        self.train_descriptor = train_descriptor\r\n        \r\n        self.descriptor_module = descriptor_module\r\n\r\n\r\n        # If the descriptor module is not specified, precomputed descriptor data should be used\r\n        if self.descriptor_module: \r\n            self.sampler = Sampler(samp_type=self.samp_type, targeted_num_points=tgt_num_points)\r\n            self.feature_matching = Soft_NN(corr_type=self.corr_type, st=straight_through_gradient)\r\n            self.precomputed_desc = False\r\n        else:\r\n            self.precomputed_desc = True\r\n\r\n        self.filtering_module = filtering_module\r\n        \r\n    def forward(self, data):\r\n\r\n        filtering_input, f_0, f_1 = self.compute_descriptors(input_dict=data)\r\n        \r\n        registration_outputs = self.filter_correspondences(filtering_input)\r\n        \r\n        return filtering_input, f_0, f_1, registration_outputs\r\n\r\n\r\n\r\n\r\n    def compute_descriptors(self, input_dict):\r\n        ''' \r\n        If not precomputed it infers the feature descriptors and returns the established correspondences\r\n        together with the ground truth transformation parameters and inlier labels.\r\n\r\n        Args:\r\n            input_dict (dict): input data\r\n\r\n        '''\r\n\r\n        if not self.precomputed_desc:\r\n\r\n            xyz_down = input_dict['pcd0'].to(self.device)\r\n\r\n            sinput0 = ME.SparseTensor(\r\n                input_dict['sinput0_F'], coords=input_dict['sinput0_C']).to(self.device)\r\n\r\n            F0 = self.descriptor_module(sinput0).F\r\n\r\n            # Sanity check: flag NaNs in the inferred descriptors\r\n            has_nan = torch.any(torch.isnan(F0))\r\n            # If the FCGF descriptor should be trained with the FCGF loss (need also corresponding desc.)\r\n            if self.train_descriptor:\r\n                sinput1 = ME.SparseTensor(\r\n                    input_dict['sinput1_F'], coords=input_dict['sinput1_C']).to(self.device)\r\n\r\n                F1 = self.descriptor_module(sinput1).F \r\n            else:\r\n                F1 = torch.empty(F0.shape[0], 0).to(self.device)\r\n\r\n\r\n            # Sample the points\r\n            xyz_batch, f_batch = self.sampler(xyz_down, F0, input_dict['pts_list'].to(self.device))\r\n\r\n            # Build point cloud pairs for the inference\r\n            xyz_s, xyz_t, f_s, f_t = extract_overlaping_pairs(xyz_batch, f_batch, self.connectivity_info)\r\n\r\n            # Compute nearest neighbors in feature space\r\n            nn_C_s_t = self.feature_matching(f_s, f_t, xyz_t) # NNs of the source points in 
the target point cloud\r\n            nn_C_t_s = self.feature_matching(f_t, f_s, xyz_s) # NNs of the target points in the source point cloud\r\n            \r\n\r\n            if self.mutuals:\r\n                mutuals = extract_mutuals(xyz_s, xyz_t, nn_C_s_t, nn_C_t_s)\r\n            else:\r\n                mutuals = None\r\n\r\n            # Prepare the input for the filtering block\r\n            filtering_input = construct_filtering_input_data(xyz_s, nn_C_s_t, input_dict, self.mutuals)\r\n\r\n        else:\r\n            filtering_input = input_dict\r\n            F0 = None\r\n            F1 = None\r\n\r\n        return filtering_input, F0, F1\r\n\r\n\r\n\r\n    def filter_correspondences(self, input_dict):\r\n        '''\r\n        Returns the inferred weights together with the pairwise rotation matrices and translation vectors.\r\n\r\n        Args:\r\n            input_dict (dict): input data\r\n\r\n        '''\r\n\r\n        registration_outputs = self.filtering_module(input_dict)\r\n\r\n        return registration_outputs\r\n" ]
[ [ "torch.empty", "torch.isnan" ] ]
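Soft_NN is imported from lib.layers but not included in this record. One common formulation of soft nearest-neighbour matching — offered here as an assumption about what the layer computes, not as the repo's implementation — takes a softmax over negative feature distances and returns a convex combination of target coordinates:

import torch

def soft_nn(f_src, f_tgt, xyz_tgt, temperature=0.01):
    # f_src: (B, N, C), f_tgt: (B, M, C), xyz_tgt: (B, M, 3)
    dist = torch.cdist(f_src, f_tgt) ** 2           # pairwise feature distances
    weights = torch.softmax(-dist / temperature, dim=-1)
    return weights @ xyz_tgt                        # soft correspondences (B, N, 3)

f_s, f_t = torch.randn(1, 5, 32), torch.randn(1, 6, 32)
xyz_t = torch.randn(1, 6, 3)
print(soft_nn(f_s, f_t, xyz_t).shape)  # torch.Size([1, 5, 3])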
fbuchert/crypto-trading
[ "9fdf44ae9a0fdc96a24a5645b52c2fcbf4406402" ]
[ "portfolio/portfolio.py" ]
[ "import os\nimport pickle\nimport logging\nimport pandas as pd\n\nfrom typing import List, Tuple\nfrom core.instrument import Instrument\nfrom core.events import TradeExecutedEvent\n\nrootLogger = logging.getLogger()\n\n\nclass Portfolio:\n def __init__(self, instruments: List[Instrument], save_path: str = ''):\n self._save_path: str = save_path\n self._current_position = self._load_position(instruments)\n\n @staticmethod\n def _init_position(instruments: List[Instrument]) -> pd.Series:\n return pd.Series({instrument.name: 0.0 for instrument in instruments})\n\n def _load_position(self, instruments: List[Instrument]) -> pd.Series:\n if self._save_path and os.path.exists(self._save_path):\n rootLogger.info(f'Loading strategy positions from {self._save_path}')\n with open(self._save_path, 'rb') as handle:\n current_position = pd.Series(pickle.load(handle))\n\n if set(current_position.index) != set(instrument.name for instrument in instruments):\n rootLogger.info(f'Instruments of loaded position series does not match {instruments}.'\n f'Initializing position series to 0.')\n current_position = Portfolio._init_position(instruments)\n else:\n rootLogger.info(f'{self._save_path} does not exist. Initializing position series to 0.')\n current_position = Portfolio._init_position(instruments)\n return current_position\n\n def _save_current_position(self) -> None:\n with open(self._save_path, 'wb') as handle:\n pickle.dump(self._current_position.to_dict(), handle, protocol=pickle.HIGHEST_PROTOCOL)\n\n def get_current_position(self) -> pd.Series:\n return self._current_position\n\n def handle_execution(self, event: TradeExecutedEvent):\n order = event.data\n self._current_position.loc[order.instrument.name] += order.size\n if self._save_path:\n self._save_current_position()\n" ]
[ [ "pandas.Series" ] ]
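A usage sketch for the Portfolio class above, assuming it is importable. The real core.instrument.Instrument and core.events.TradeExecutedEvent types are not shown here, so hypothetical SimpleNamespace stand-ins exercise the same interface; an empty save_path skips the pickle round-trip.

from types import SimpleNamespace

btc = SimpleNamespace(name="BTC-PERP")   # stand-in for an Instrument
eth = SimpleNamespace(name="ETH-PERP")

portfolio = Portfolio([btc, eth], save_path="")
event = SimpleNamespace(data=SimpleNamespace(instrument=btc, size=0.5))
portfolio.handle_execution(event)        # stand-in for a TradeExecutedEvent
print(portfolio.get_current_position())  # BTC-PERP 0.5, ETH-PERP 0.0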
Saynah/athos-webapp
[ "df29e08a98f6ae6fe7d817b155e8c736dacc7acc" ]
[ "app/views.py" ]
[ "from flask import render_template\nfrom app import app\nimport matplotlib.pyplot as plt, mpld3\n\[email protected]('/')\[email protected]('/index')\ndef index():\n    return render_template(\"index.html\",\n        title = 'Home', user = { 'nickname': 'Mark' },\n        )\n\n# keep this page for debugging purposes\[email protected](\"/db_fancy\")\ndef cities_page_fancy():\n    # NOTE: assumes a module-level 'db' database connection, which is not\n    # defined or imported in this file.\n    with db:\n        cur = db.cursor()\n        cur.execute(\"SELECT Name, CountryCode, Population FROM City ORDER BY Population LIMIT 15;\")\n\n        query_results = cur.fetchall()\n    cities = []\n    for result in query_results:\n        cities.append(dict(name=result[0], country=result[1], population=result[2]))\n    return render_template('cities.html', cities=cities)\n\n# page for the actual app\[email protected](\"/dashboard\")\ndef my_dashboard():\n\t# text input\n\tname = 'Marruecos'\n\n\t# line plot input\n\tx = [1, 2, 3, 4, 5]\n\ty = [2, 6, 2, 7, 8]\n\n\t# pie chart input\n\tsizes = [15, 30, 45, 10]\n\texplode = (0, 0.1, 0, 0)\n\tlabels = 'Frogs', 'Hogs', 'Dogs', 'Logs'\n\n\t#\n\treturn render_template('dashboard.html',\n\t\tuser = {'nickname': name},\n\t\tmy_plot = html_line_plot( x, y ),\n\t\tmy_pie = html_pie_chart( sizes, explode, labels )\n\t\t)\n\ndef html_line_plot(x,y):\n\t# mpld3 to create the line plot\n\tfig, ax = plt.subplots()\n\tax.plot(x, y, 'k-')\n\treturn mpld3.fig_to_html(fig)\n\ndef html_pie_chart(sizes, explode, labels):\n\t# mpld3 to create the pie chart\n\tcolors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral']\n\n\tfig, ax = plt.subplots()\n\tax.pie(sizes, explode=explode, labels=labels, colors=colors,\n\t    autopct='%1.1f%%', shadow=True, startangle=90)\n\t# Set aspect ratio to be equal so that pie is drawn as a circle.\n\tax.axis('equal')\n\treturn mpld3.fig_to_html(fig)\n" ]
[ [ "matplotlib.pyplot.subplots" ] ]
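The two helper views above hinge on mpld3.fig_to_html, which turns a Matplotlib figure into a self-contained HTML/JS snippet that can be dropped into a template. A minimal standalone demonstration:

import matplotlib
matplotlib.use("Agg")  # headless backend, as in a web-server process
import matplotlib.pyplot as plt
import mpld3

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4, 5], [2, 6, 2, 7, 8], 'k-')
html = mpld3.fig_to_html(fig)  # embeddable <div> + <script> string
with open("plot.html", "w") as fh:
    fh.write(html)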
subramon/qlu
[ "2fb8a2b3636dd11e2dfeae2a6477bd130316da47" ]
[ "ML/DT/python/DTree_sklearn_ramesh_dataset_train_test.py" ]
[ "\n# coding: utf-8\n\n# In[15]:\n\n\nimport sklearn_dt_utils as utils\nfrom sklearn.tree import export_graphviz\nimport pandas as pd\nimport os\n\n\n# In[16]:\n\n\nq_src_dir = os.getenv('Q_SRC_ROOT')\nif not q_src_dir:\n print(\"'Q_SRC_ROOT' is not set\")\n exit(-1)\ntrain_csv_file_path = \"%s/ML/KNN/data/from_ramesh/ds1_11709_13248_train.csv\" % q_src_dir\ntest_csv_file_path = \"%s/ML/KNN/data/from_ramesh/ds1_11709_13248_test.csv\" % q_src_dir\ngraphviz_gini = \"graphviz_gini.txt\"\ngraphviz_entropy = \"graphviz_entropy.txt\"\ngoal_col_name = \"class\"\n\n\n# In[17]:\n\n\nprint(\"Train data shape\")\ntrain_data = utils.import_data(train_csv_file_path)\nprint(\"Test data shape\")\ntest_data = utils.import_data(test_csv_file_path)\n\n\n# In[18]:\n\n\nX, Y, X_train, temp_X_train, y_train, temp_y_train = utils.split_dataset(train_data, goal_col_name, 1)\nX, Y, X_test, temp_X_test, y_test, temp_y_test = utils.split_dataset(test_data, goal_col_name, 1)\n\n\n# In[19]:\n\n\n#print(len(X_train))\n#print(len(X_test))\n\n\n# In[20]:\n\n\n# cross validation\n# cross_validate_dt_new(X, Y)\n\n\n# In[21]:\n\n\n# cross validation\n# cross_validate_dt(X, Y)\n\n\n# In[22]:\n\n#calling gridsearchcv\ngrid = utils.grid_search_cv(X_train, y_train, scoring_method=\"f1_weighted\")\n\n# pickle_path = \"category1_f1_wt.pkl\"\n\n# saving model to pkl file\n# utils.save(grid, pickle_path)\n\n# loading model from pkl file\n# grid = utils.restore(pickle_path)\n\n\n\"\"\"\nprint(grid.cv_results_)\nprint(\"============================\")\nprint(grid.best_estimator_)\nprint(\"============================\")\nprint(grid.best_score_)\nprint(\"============================\")\nprint(grid.best_params_)\nprint(\"============================\")\n\"\"\"\n\n# Prediction using gini\ny_pred_gini = utils.prediction(X_test, grid.best_estimator_)\nprint(\"Results for grid search algo\")\nutils.cal_accuracy(y_test, y_pred_gini)\n\nexport_graphviz(grid.best_estimator_, out_file=\"best_fit_graphviz_ramesh_acr.txt\", filled=True, rounded=True, special_characters=True, feature_names=X_train.columns)\n\n# Train using gini\nclf_gini = utils.train_using_gini(X_train, y_train)\n# print(X_train[1])\nexport_graphviz(clf_gini, out_file=graphviz_gini, filled=True, rounded=True, special_characters=True, feature_names=X_train.columns)\n\n\n# In[23]:\n\n\n# Prediction using gini\ny_pred_gini = utils.prediction(X_test, clf_gini)\nprint(\"Results for gini algo\")\nutils.cal_accuracy(y_test, y_pred_gini)\n\n\n# In[24]:\n\n\n# Train using entropy\nclf_entropy = utils.tarin_using_entropy(X_train, y_train)\n# print(clf_entropy)\nexport_graphviz(clf_entropy, out_file=graphviz_entropy, filled=True, rounded=True, special_characters=True, feature_names=X_train.columns)\n\n\n# In[25]:\n\n\n# Prediction using entropy\ny_pred_entropy = utils.prediction(X_test, clf_entropy)\nprint(\"Results for entropy algo\")\nutils.cal_accuracy(y_test, y_pred_entropy)\n\n" ]
[ [ "sklearn.tree.export_graphviz" ] ]
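sklearn_dt_utils.grid_search_cv is not shown in this record. A plausible minimal reconstruction — an assumption, including the parameter grid — wraps scikit-learn's GridSearchCV around a DecisionTreeClassifier with the requested scoring:

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

def grid_search_cv(X_train, y_train, scoring_method="f1_weighted"):
    # Small, illustrative parameter grid; the repo's grid may differ.
    param_grid = {
        "criterion": ["gini", "entropy"],
        "max_depth": [3, 5, 10, None],
        "min_samples_leaf": [1, 5, 10],
    }
    grid = GridSearchCV(
        DecisionTreeClassifier(random_state=0),
        param_grid,
        scoring=scoring_method,
        cv=5,
    )
    grid.fit(X_train, y_train)
    return grid  # exposes best_estimator_, best_params_, best_score_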
sdll/gccpm-tvm
[ "caf22e629b96a9aa1f1a53e07ed81ec12520a04d" ]
[ "demo.py" ]
[ "import argparse\n\nimport cv2\nimport numpy as np\nimport torch\n\nfrom models.with_mobilenet import PoseEstimationWithMobileNet\nfrom modules.keypoints import extract_keypoints, group_keypoints\nfrom modules.load_state import load_state\nfrom modules.pose import Pose, propagate_ids\nfrom val import normalize, pad_width\n\n\nclass ImageReader(object):\n    def __init__(self, file_names):\n        self.file_names = file_names\n        self.max_idx = len(file_names)\n\n    def __iter__(self):\n        self.idx = 0\n        return self\n\n    def __next__(self):\n        if self.idx == self.max_idx:\n            raise StopIteration\n        img = cv2.imread(self.file_names[self.idx], cv2.IMREAD_COLOR)\n        if img is None:  # cv2.imread returns None when the file cannot be read\n            raise IOError(\"Image {} cannot be read\".format(self.file_names[self.idx]))\n        self.idx = self.idx + 1\n        return img\n\n\nclass VideoReader(object):\n    def __init__(self, file_name):\n        self.file_name = file_name\n        try: # OpenCV needs int to read from webcam\n            self.file_name = int(file_name)\n        except ValueError:\n            pass\n\n    def __iter__(self):\n        self.cap = cv2.VideoCapture(self.file_name)\n        if not self.cap.isOpened():\n            raise IOError(\"Video {} cannot be opened\".format(self.file_name))\n        return self\n\n    def __next__(self):\n        was_read, img = self.cap.read()\n        if not was_read:\n            raise StopIteration\n        return img\n\n\ndef infer_fast(\n    net,\n    img,\n    net_input_height_size,\n    stride,\n    upsample_ratio,\n    cpu,\n    pad_value=(0, 0, 0),\n    img_mean=(128, 128, 128),\n    img_scale=1 / 256,\n):\n    height, width, _ = img.shape\n    scale = net_input_height_size / height\n\n    scaled_img = cv2.resize(\n        img, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC\n    )\n    scaled_img = normalize(scaled_img, img_mean, img_scale)\n    min_dims = [net_input_height_size, max(scaled_img.shape[1], net_input_height_size)]\n    padded_img, pad = pad_width(scaled_img, stride, pad_value, min_dims)\n\n    tensor_img = torch.from_numpy(padded_img).permute(2, 0, 1).unsqueeze(0).float()\n    if not cpu:\n        tensor_img = tensor_img.cuda()\n\n    stages_output = net(tensor_img)\n\n    stage2_heatmaps = stages_output[-2]\n    heatmaps = np.transpose(stage2_heatmaps.squeeze().cpu().data.numpy(), (1, 2, 0))\n    heatmaps = cv2.resize(\n        heatmaps,\n        (0, 0),\n        fx=upsample_ratio,\n        fy=upsample_ratio,\n        interpolation=cv2.INTER_CUBIC,\n    )\n\n    stage2_pafs = stages_output[-1]\n    pafs = np.transpose(stage2_pafs.squeeze().cpu().data.numpy(), (1, 2, 0))\n    pafs = cv2.resize(\n        pafs,\n        (0, 0),\n        fx=upsample_ratio,\n        fy=upsample_ratio,\n        interpolation=cv2.INTER_CUBIC,\n    )\n\n    return heatmaps, pafs, scale, pad\n\n\ndef run_demo(net, image_provider, height_size, cpu, track_ids):\n    net = net.eval()\n    if not cpu:\n        net = net.cuda()\n\n    stride = 8\n    upsample_ratio = 4\n    num_keypoints = Pose.num_kpts\n    previous_poses = []\n    for img in image_provider:\n        orig_img = img.copy()\n        heatmaps, pafs, scale, pad = infer_fast(\n            net, img, height_size, stride, upsample_ratio, cpu\n        )\n\n        total_keypoints_num = 0\n        all_keypoints_by_type = []\n        for kpt_idx in range(num_keypoints): # 19th for bg\n            total_keypoints_num += extract_keypoints(\n                heatmaps[:, :, kpt_idx], all_keypoints_by_type, total_keypoints_num\n            )\n\n        pose_entries, all_keypoints = group_keypoints(\n            all_keypoints_by_type, pafs, demo=True\n        )\n        for kpt_id in range(all_keypoints.shape[0]):\n            all_keypoints[kpt_id, 0] = (\n                all_keypoints[kpt_id, 0] * stride / upsample_ratio - pad[1]\n            ) / scale\n            all_keypoints[kpt_id, 1] = (\n                all_keypoints[kpt_id, 1] * stride / upsample_ratio - pad[0]\n            ) / scale\n        current_poses = []\n        for n in range(len(pose_entries)):\n            if len(pose_entries[n]) 
== 0:\n continue\n pose_keypoints = np.ones((num_keypoints, 2), dtype=np.int32) * -1\n for kpt_id in range(num_keypoints):\n if pose_entries[n][kpt_id] != -1.0: # keypoint was found\n pose_keypoints[kpt_id, 0] = int(\n all_keypoints[int(pose_entries[n][kpt_id]), 0]\n )\n pose_keypoints[kpt_id, 1] = int(\n all_keypoints[int(pose_entries[n][kpt_id]), 1]\n )\n pose = Pose(pose_keypoints, pose_entries[n][18])\n current_poses.append(pose)\n pose.draw(img)\n\n img = cv2.addWeighted(orig_img, 0.6, img, 0.4, 0)\n if track_ids:\n propagate_ids(previous_poses, current_poses)\n previous_poses = current_poses\n for pose in current_poses:\n cv2.rectangle(\n img,\n (pose.bbox[0], pose.bbox[1]),\n (pose.bbox[0] + pose.bbox[2], pose.bbox[1] + pose.bbox[3]),\n (0, 255, 0),\n )\n cv2.putText(\n img,\n \"id: {}\".format(pose.id),\n (pose.bbox[0], pose.bbox[1] - 16),\n cv2.FONT_HERSHEY_COMPLEX,\n 0.5,\n (0, 0, 255),\n )\n cv2.imshow(\"Lightweight Human Pose Estimation Python Demo\", img)\n key = cv2.waitKey(33)\n if key == 27: # esc\n return\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(\n description=\"\"\"Lightweight human pose estimation python demo.\n This is just for quick results preview.\n Please, consider c++ demo for the best performance.\"\"\"\n )\n parser.add_argument(\n \"--checkpoint-path\", type=str, required=True, help=\"path to the checkpoint\"\n )\n parser.add_argument(\n \"--height-size\", type=int, default=256, help=\"network input layer height size\"\n )\n parser.add_argument(\n \"--video\", type=str, default=\"\", help=\"path to video file or camera id\"\n )\n parser.add_argument(\n \"--images\", nargs=\"+\", default=\"\", help=\"path to input image(s)\"\n )\n parser.add_argument(\n \"--cpu\", action=\"store_true\", help=\"run network inference on cpu\"\n )\n parser.add_argument(\"--track-ids\", action=\"store_true\", help=\"track poses ids\")\n args = parser.parse_args()\n\n if args.video == \"\" and args.images == \"\":\n raise ValueError(\"Either --video or --image has to be provided\")\n\n net = PoseEstimationWithMobileNet()\n checkpoint = torch.load(args.checkpoint_path, map_location=\"cpu\")\n load_state(net, checkpoint)\n\n frame_provider = ImageReader(args.images)\n if args.video != \"\":\n frame_provider = VideoReader(args.video)\n\n run_demo(net, frame_provider, args.height_size, args.cpu, args.track_ids)\n" ]
[ [ "numpy.ones", "torch.load", "torch.from_numpy" ] ]
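run_demo above maps heatmap-grid keypoints back to original-image pixels by inverting the resize-then-pad preprocessing: multiply by stride/upsample_ratio, subtract the (top, left) padding, divide by the scale. A small numeric check of that arithmetic with made-up pad/scale values:

stride, upsample_ratio, scale = 8, 4, 0.5
pad = (10, 6)  # (top, left) padding in scaled-image pixels, as from pad_width

def to_original(x_grid, y_grid):
    x = (x_grid * stride / upsample_ratio - pad[1]) / scale
    y = (y_grid * stride / upsample_ratio - pad[0]) / scale
    return x, y

print(to_original(53, 30))  # grid (53, 30) -> original-image (200.0, 100.0)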
PeculiarOvertones/FHDeX
[ "60e285101704196db24afe8b2461283753526fc5" ]
[ "exec/immersed_boundary/src_analysis/data_model.py" ]
[ "# used pretty much everywhere\nimport numpy as np\n# used to enable function overloading\nfrom multimethod import multimeta\n\n\n\nclass SoA(object, metaclass=multimeta):\n _pref = \"particle_\"\n _pos = \"position_\"\n _vel = \"vel\"\n\n _id = \"id\"\n _cpu = \"cpu\"\n _id_0 = \"id_0\"\n _cpu_0 = \"cpu_0\"\n _id_1 = \"id_1\"\n _cpu_1 = \"cpu_1\"\n\n\n def __init__(self, data, copy_v=False, copy_id=False):\n\n # Flag which fields are copied\n self.contains_v = copy_v\n self.contains_id = copy_id\n\n # Copy positions\n str_pos = self._pref + self._pos\n self.px = np.array(data[str_pos + \"x\"])\n self.py = np.array(data[str_pos + \"y\"])\n self.pz = np.array(data[str_pos + \"z\"])\n\n # Copy velocities\n if copy_v:\n str_vel = self._pref + self._vel\n self.vx = np.array(data[str_vel + \"x\"])\n self.vy = np.array(data[str_vel + \"y\"])\n self.vz = np.array(data[str_vel + \"z\"])\n\n # Copy particle ID\n if copy_id:\n self.id = np.array(data[self._pref + self._id], dtype=int)\n self.cpu = np.array(data[self._pref + self._cpu], dtype=int)\n self.id_0 = np.array(data[self._pref + self._id_0], dtype=int)\n self.cpu_0 = np.array(data[self._pref + self._cpu_0], dtype=int)\n self.id_1 = np.array(data[self._pref + self._id_1], dtype=int)\n self.cpu_1 = np.array(data[self._pref + self._cpu_1], dtype=int)\n\n\n\n def __print_pos(self):\n return str(self.px) + \",\" + str(self.py) + \",\" + str(self.pz)\n\n\n def __print_vel(self):\n return str(self.vx) + \",\" + str(self.vy) + \",\" + str(self.vz)\n\n\n def __print_id(self):\n return \"id:\" + str(self.id) + \", cpu:\" + str(self.cpu) + \", \" + \\\n \"id_0:\" + str(self.id_0) + \", cpu_0:\" + str(self.cpu_0) + \", \" + \\\n \"id_1:\" + str(self.id_1) + \", cpu_1:\" + str(self.cpu_1)\n\n\n\n def __str__(self):\n str_rep = \"{pos:\" + self.__print_pos()\n if self.contains_v:\n str_rep += \"; vel:\" + self.__print_vel()\n if self.contains_id:\n str_rep += \"; \" + self.__print_id()\n str_rep += \"}\"\n\n return str_rep\n\n\n def __repr__(self):\n return str(self)\n\n\n def pos(self, index: int):\n return self.px[index], self.py[index], self.pz[index]\n\n def pos(self, start: int, stop: int):\n return self.px[start:stop], self.py[start:stop], self.pz[start:stop]\n\n def pos(self):\n return self.px, self.py, self.pz\n\n\n def vel(self, index: int):\n if self.contains_v:\n return self.vx[index], self.vy[index], self.vz[index]\n else:\n return None\n\n def vel(self, start:int , stop: int):\n if self.contains_v:\n return self.vx[start:stop], self.vy[start:stop], self.vz[start:stop]\n else:\n return None\n\n def vel(self):\n if self.contains_v:\n return self.vx, self.vy, self.vz\n else:\n return None\n\n\n def pid(self, index: int):\n if self.contains_id:\n return self.id[index], self.cpu[index], \\\n self.id_0[index], self.cpu_0[index], \\\n self.id_1[index], self.cpu_1[index]\n else:\n return None\n\n def pid(self, start: int, stop: int):\n if self.contains_id:\n return self.id[start:stop], self.cpu[start:stop], \\\n self.id_0[start:stop], self.cpu_0[start:stop], \\\n self.id_1[start:stop], self.cpu_1[start:stop]\n else:\n return None\n\n\n def pid(self):\n if self.contains_id:\n return self.id, self.cpu, self.id_0, self.cpu_0, self.id_1, self.cpu_1\n else:\n return None\n\n\n\nclass Particle(object):\n def __init__(self, px, py, pz, **kwargs):\n self.pos = np.array([px, py, pz])\n\n self.contains_vel = False\n if \"vel\" in kwargs.keys():\n self.vel = np.array(kwargs[\"vel\"])\n self.contains_vel = True\n\n self.contains_id = False\n if \"id\" in 
kwargs.keys():\n self.id = kwargs[\"id\"][0]\n self.cpu = kwargs[\"id\"][1]\n self.id_0 = kwargs[\"id\"][2]\n self.cpu_0 = kwargs[\"id\"][3]\n self.id_1 = kwargs[\"id\"][4]\n self.cpu_1 = kwargs[\"id\"][5]\n self.contains_id = True\n\n\n def __str__(self):\n str_rep = \"P(\" + str(self.pos)\n if self.contains_vel:\n str_rep += \", \" + str(self.vel)\n if self.contains_id:\n str_rep += \", \" + str(self.id) + \", \" + str(self.cpu) + \\\n \", \" + str(self.id_0) + \", \" + str(self.cpu_0) + \\\n \", \" + str(self.id_1) + \", \" + str(self.cpu_1)\n str_rep += \")\"\n\n return str_rep\n\n\n def __repr__(self):\n return str(self)\n\n\n\nclass AoS(object):\n def __init__(self, amrex_data, copy_v=False, copy_id=False):\n self.particles = list()\n soa = SoA(amrex_data, copy_v=copy_v, copy_id=copy_id);\n\n for i, elt in enumerate(zip(*soa.pos())):\n if soa.contains_v and soa.contains_id:\n self.particles.append(Particle(* elt, vel=soa.vel(i), id=soa.pid(i)))\n elif soa.contains_v:\n self.particles.append(Particle(* elt, vel=soa.vel(i)))\n elif soa.contains_id:\n self.particles.append(Particle(* elt, id=soa.pid(i)))\n else:\n self.particles.append(Particle(* elt))\n" ]
[ [ "numpy.array" ] ]
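SoA above only needs a mapping with "particle_position_*" keys (plus velocity and id fields when requested), so a plain dict of NumPy arrays is enough to exercise it without an AMReX plotfile reader. A small sketch, assuming SoA and AoS from the record are importable; the printed Particle format is whatever numpy's array repr produces:

import numpy as np

# Minimal fake particle data; in practice these fields would come from a
# plotfile reader such as yt.
data = {
    "particle_position_x": np.array([0.0, 1.0]),
    "particle_position_y": np.array([0.5, 1.5]),
    "particle_position_z": np.array([2.0, 3.0]),
}

soa = SoA(data)        # positions only (copy_v=False, copy_id=False)
print(soa.pos(0))      # (0.0, 0.5, 2.0) -- int overload via multimeta
print(soa.pos(0, 2))   # slice overload
aos = AoS(data)        # list of Particle objects
print(aos.particles[1])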
mathause/pandas
[ "72327f32e2328d3e13b6c55617d71036fccdd0e1" ]
[ "pandas/tests/series/indexing/test_getitem.py" ]
[ "\"\"\"\nSeries.__getitem__ test classes are organized by the type of key passed.\n\"\"\"\nfrom datetime import (\n date,\n datetime,\n time,\n)\n\nimport numpy as np\nimport pytest\n\nfrom pandas._libs.tslibs import (\n conversion,\n timezones,\n)\n\nfrom pandas.core.dtypes.common import is_scalar\n\nimport pandas as pd\nfrom pandas import (\n Categorical,\n DataFrame,\n DatetimeIndex,\n Index,\n Series,\n Timestamp,\n date_range,\n period_range,\n timedelta_range,\n)\nimport pandas._testing as tm\nfrom pandas.core.indexing import IndexingError\n\nfrom pandas.tseries.offsets import BDay\n\n\nclass TestSeriesGetitemScalars:\n def test_getitem_float_keys_tuple_values(self):\n # see GH#13509\n\n # unique Index\n ser = Series([(1, 1), (2, 2), (3, 3)], index=[0.0, 0.1, 0.2], name=\"foo\")\n result = ser[0.0]\n assert result == (1, 1)\n\n # non-unique Index\n expected = Series([(1, 1), (2, 2)], index=[0.0, 0.0], name=\"foo\")\n ser = Series([(1, 1), (2, 2), (3, 3)], index=[0.0, 0.0, 0.2], name=\"foo\")\n\n result = ser[0.0]\n tm.assert_series_equal(result, expected)\n\n def test_getitem_unrecognized_scalar(self):\n # GH#32684 a scalar key that is not recognized by lib.is_scalar\n\n # a series that might be produced via `frame.dtypes`\n ser = Series([1, 2], index=[np.dtype(\"O\"), np.dtype(\"i8\")])\n\n key = ser.index[1]\n\n result = ser[key]\n assert result == 2\n\n def test_getitem_negative_out_of_bounds(self):\n ser = Series(tm.rands_array(5, 10), index=tm.rands_array(10, 10))\n\n msg = \"index -11 is out of bounds for axis 0 with size 10\"\n with pytest.raises(IndexError, match=msg):\n ser[-11]\n\n def test_getitem_out_of_bounds_indexerror(self, datetime_series):\n # don't segfault, GH#495\n msg = r\"index \\d+ is out of bounds for axis 0 with size \\d+\"\n with pytest.raises(IndexError, match=msg):\n datetime_series[len(datetime_series)]\n\n def test_getitem_out_of_bounds_empty_rangeindex_keyerror(self):\n # GH#917\n # With a RangeIndex, an int key gives a KeyError\n ser = Series([], dtype=object)\n with pytest.raises(KeyError, match=\"-1\"):\n ser[-1]\n\n def test_getitem_keyerror_with_int64index(self):\n ser = Series(np.random.randn(6), index=[0, 0, 1, 1, 2, 2])\n\n with pytest.raises(KeyError, match=r\"^5$\"):\n ser[5]\n\n with pytest.raises(KeyError, match=r\"^'c'$\"):\n ser[\"c\"]\n\n # not monotonic\n ser = Series(np.random.randn(6), index=[2, 2, 0, 0, 1, 1])\n\n with pytest.raises(KeyError, match=r\"^5$\"):\n ser[5]\n\n with pytest.raises(KeyError, match=r\"^'c'$\"):\n ser[\"c\"]\n\n def test_getitem_int64(self, datetime_series):\n idx = np.int64(5)\n assert datetime_series[idx] == datetime_series[5]\n\n # TODO: better name/GH ref?\n def test_getitem_regression(self):\n ser = Series(range(5), index=list(range(5)))\n result = ser[list(range(5))]\n tm.assert_series_equal(result, ser)\n\n # ------------------------------------------------------------------\n # Series with DatetimeIndex\n\n @pytest.mark.parametrize(\"tzstr\", [\"Europe/Berlin\", \"dateutil/Europe/Berlin\"])\n def test_getitem_pydatetime_tz(self, tzstr):\n tz = timezones.maybe_get_tz(tzstr)\n\n index = date_range(\n start=\"2012-12-24 16:00\", end=\"2012-12-24 18:00\", freq=\"H\", tz=tzstr\n )\n ts = Series(index=index, data=index.hour)\n time_pandas = Timestamp(\"2012-12-24 17:00\", tz=tzstr)\n\n dt = datetime(2012, 12, 24, 17, 0)\n time_datetime = conversion.localize_pydatetime(dt, tz)\n assert ts[time_pandas] == ts[time_datetime]\n\n @pytest.mark.parametrize(\"tz\", [\"US/Eastern\", \"dateutil/US/Eastern\"])\n def 
test_string_index_alias_tz_aware(self, tz):\n rng = date_range(\"1/1/2000\", periods=10, tz=tz)\n ser = Series(np.random.randn(len(rng)), index=rng)\n\n result = ser[\"1/3/2000\"]\n tm.assert_almost_equal(result, ser[2])\n\n def test_getitem_time_object(self):\n rng = date_range(\"1/1/2000\", \"1/5/2000\", freq=\"5min\")\n ts = Series(np.random.randn(len(rng)), index=rng)\n\n mask = (rng.hour == 9) & (rng.minute == 30)\n result = ts[time(9, 30)]\n expected = ts[mask]\n result.index = result.index._with_freq(None)\n tm.assert_series_equal(result, expected)\n\n # ------------------------------------------------------------------\n # Series with CategoricalIndex\n\n def test_getitem_scalar_categorical_index(self):\n cats = Categorical([Timestamp(\"12-31-1999\"), Timestamp(\"12-31-2000\")])\n\n ser = Series([1, 2], index=cats)\n\n expected = ser.iloc[0]\n result = ser[cats[0]]\n assert result == expected\n\n def test_getitem_str_with_timedeltaindex(self):\n rng = timedelta_range(\"1 day 10:11:12\", freq=\"h\", periods=500)\n ser = Series(np.arange(len(rng)), index=rng)\n\n key = \"6 days, 23:11:12\"\n indexer = rng.get_loc(key)\n assert indexer == 133\n\n result = ser[key]\n assert result == ser.iloc[133]\n\n msg = r\"^Timedelta\\('50 days 00:00:00'\\)$\"\n with pytest.raises(KeyError, match=msg):\n rng.get_loc(\"50 days\")\n with pytest.raises(KeyError, match=msg):\n ser[\"50 days\"]\n\n\nclass TestSeriesGetitemSlices:\n def test_getitem_partial_str_slice_with_datetimeindex(self):\n # GH#34860\n arr = date_range(\"1/1/2008\", \"1/1/2009\")\n ser = arr.to_series()\n result = ser[\"2008\"]\n\n rng = date_range(start=\"2008-01-01\", end=\"2008-12-31\")\n expected = Series(rng, index=rng)\n\n tm.assert_series_equal(result, expected)\n\n def test_getitem_slice_strings_with_datetimeindex(self):\n idx = DatetimeIndex(\n [\"1/1/2000\", \"1/2/2000\", \"1/2/2000\", \"1/3/2000\", \"1/4/2000\"]\n )\n\n ts = Series(np.random.randn(len(idx)), index=idx)\n\n result = ts[\"1/2/2000\":]\n expected = ts[1:]\n tm.assert_series_equal(result, expected)\n\n result = ts[\"1/2/2000\":\"1/3/2000\"]\n expected = ts[1:4]\n tm.assert_series_equal(result, expected)\n\n def test_getitem_partial_str_slice_with_timedeltaindex(self):\n rng = timedelta_range(\"1 day 10:11:12\", freq=\"h\", periods=500)\n ser = Series(np.arange(len(rng)), index=rng)\n\n result = ser[\"5 day\":\"6 day\"]\n expected = ser.iloc[86:134]\n tm.assert_series_equal(result, expected)\n\n result = ser[\"5 day\":]\n expected = ser.iloc[86:]\n tm.assert_series_equal(result, expected)\n\n result = ser[:\"6 day\"]\n expected = ser.iloc[:134]\n tm.assert_series_equal(result, expected)\n\n def test_getitem_partial_str_slice_high_reso_with_timedeltaindex(self):\n # higher reso\n rng = timedelta_range(\"1 day 10:11:12\", freq=\"us\", periods=2000)\n ser = Series(np.arange(len(rng)), index=rng)\n\n result = ser[\"1 day 10:11:12\":]\n expected = ser.iloc[0:]\n tm.assert_series_equal(result, expected)\n\n result = ser[\"1 day 10:11:12.001\":]\n expected = ser.iloc[1000:]\n tm.assert_series_equal(result, expected)\n\n result = ser[\"1 days, 10:11:12.001001\"]\n assert result == ser.iloc[1001]\n\n # TODO: redundant with test_getitem_ndim_deprecated?\n def test_getitem_slice_2d(self, datetime_series):\n # GH#30588 multi-dimensional indexing deprecated\n\n with tm.assert_produces_warning(\n FutureWarning, match=\"Support for multi-dimensional indexing\"\n ):\n # GH#30867 Don't want to support this long-term, but\n # for now ensure that the warning from Index\n # 
doesn't comes through via Series.__getitem__.\n result = datetime_series[:, np.newaxis]\n expected = datetime_series.values[:, np.newaxis]\n tm.assert_almost_equal(result, expected)\n\n # FutureWarning from NumPy.\n @pytest.mark.filterwarnings(\"ignore:Using a non-tuple:FutureWarning\")\n def test_getitem_median_slice_bug(self):\n index = date_range(\"20090415\", \"20090519\", freq=\"2B\")\n s = Series(np.random.randn(13), index=index)\n\n indexer = [slice(6, 7, None)]\n with tm.assert_produces_warning(FutureWarning):\n # GH#31299\n result = s[indexer]\n expected = s[indexer[0]]\n tm.assert_series_equal(result, expected)\n\n @pytest.mark.parametrize(\n \"slc, positions\",\n [\n [slice(date(2018, 1, 1), None), [0, 1, 2]],\n [slice(date(2019, 1, 2), None), [2]],\n [slice(date(2020, 1, 1), None), []],\n [slice(None, date(2020, 1, 1)), [0, 1, 2]],\n [slice(None, date(2019, 1, 1)), [0]],\n ],\n )\n def test_getitem_slice_date(self, slc, positions):\n # https://github.com/pandas-dev/pandas/issues/31501\n ser = Series(\n [0, 1, 2],\n DatetimeIndex([\"2019-01-01\", \"2019-01-01T06:00:00\", \"2019-01-02\"]),\n )\n result = ser[slc]\n expected = ser.take(positions)\n tm.assert_series_equal(result, expected)\n\n def test_getitem_slice_float_raises(self, datetime_series):\n msg = (\n \"cannot do slice indexing on DatetimeIndex with these indexers \"\n r\"\\[{key}\\] of type float\"\n )\n with pytest.raises(TypeError, match=msg.format(key=r\"4\\.0\")):\n datetime_series[4.0:10.0]\n\n with pytest.raises(TypeError, match=msg.format(key=r\"4\\.5\")):\n datetime_series[4.5:10.0]\n\n def test_getitem_slice_bug(self):\n ser = Series(range(10), index=list(range(10)))\n result = ser[-12:]\n tm.assert_series_equal(result, ser)\n\n result = ser[-7:]\n tm.assert_series_equal(result, ser[3:])\n\n result = ser[:-12]\n tm.assert_series_equal(result, ser[:0])\n\n def test_getitem_slice_integers(self):\n ser = Series(np.random.randn(8), index=[2, 4, 6, 8, 10, 12, 14, 16])\n\n result = ser[:4]\n expected = Series(ser.values[:4], index=[2, 4, 6, 8])\n tm.assert_series_equal(result, expected)\n\n\nclass TestSeriesGetitemListLike:\n @pytest.mark.parametrize(\"box\", [list, np.array, Index, Series])\n def test_getitem_no_matches(self, box):\n # GH#33462 we expect the same behavior for list/ndarray/Index/Series\n ser = Series([\"A\", \"B\"])\n\n key = Series([\"C\"], dtype=object)\n key = box(key)\n\n msg = r\"None of \\[Index\\(\\['C'\\], dtype='object'\\)\\] are in the \\[index\\]\"\n with pytest.raises(KeyError, match=msg):\n ser[key]\n\n def test_getitem_intlist_intindex_periodvalues(self):\n ser = Series(period_range(\"2000-01-01\", periods=10, freq=\"D\"))\n\n result = ser[[2, 4]]\n exp = Series(\n [pd.Period(\"2000-01-03\", freq=\"D\"), pd.Period(\"2000-01-05\", freq=\"D\")],\n index=[2, 4],\n dtype=\"Period[D]\",\n )\n tm.assert_series_equal(result, exp)\n assert result.dtype == \"Period[D]\"\n\n @pytest.mark.parametrize(\"box\", [list, np.array, Index])\n def test_getitem_intlist_intervalindex_non_int(self, box):\n # GH#33404 fall back to positional since ints are unambiguous\n dti = date_range(\"2000-01-03\", periods=3)._with_freq(None)\n ii = pd.IntervalIndex.from_breaks(dti)\n ser = Series(range(len(ii)), index=ii)\n\n expected = ser.iloc[:1]\n key = box([0])\n result = ser[key]\n tm.assert_series_equal(result, expected)\n\n @pytest.mark.parametrize(\"box\", [list, np.array, Index])\n @pytest.mark.parametrize(\"dtype\", [np.int64, np.float64, np.uint64])\n def test_getitem_intlist_multiindex_numeric_level(self, 
dtype, box):\n # GH#33404 do _not_ fall back to positional since ints are ambiguous\n idx = Index(range(4)).astype(dtype)\n dti = date_range(\"2000-01-03\", periods=3)\n mi = pd.MultiIndex.from_product([idx, dti])\n ser = Series(range(len(mi))[::-1], index=mi)\n\n key = box([5])\n with pytest.raises(KeyError, match=\"5\"):\n ser[key]\n\n def test_getitem_uint_array_key(self, any_unsigned_int_numpy_dtype):\n # GH #37218\n ser = Series([1, 2, 3])\n key = np.array([4], dtype=any_unsigned_int_numpy_dtype)\n\n with pytest.raises(KeyError, match=\"4\"):\n ser[key]\n with pytest.raises(KeyError, match=\"4\"):\n ser.loc[key]\n\n\nclass TestGetitemBooleanMask:\n def test_getitem_boolean(self, string_series):\n ser = string_series\n mask = ser > ser.median()\n\n # passing list is OK\n result = ser[list(mask)]\n expected = ser[mask]\n tm.assert_series_equal(result, expected)\n tm.assert_index_equal(result.index, ser.index[mask])\n\n def test_getitem_boolean_empty(self):\n ser = Series([], dtype=np.int64)\n ser.index.name = \"index_name\"\n ser = ser[ser.isna()]\n assert ser.index.name == \"index_name\"\n assert ser.dtype == np.int64\n\n # GH#5877\n # indexing with empty series\n ser = Series([\"A\", \"B\"])\n expected = Series(dtype=object, index=Index([], dtype=\"int64\"))\n result = ser[Series([], dtype=object)]\n tm.assert_series_equal(result, expected)\n\n # invalid because of the boolean indexer\n # that's empty or not-aligned\n msg = (\n r\"Unalignable boolean Series provided as indexer \\(index of \"\n r\"the boolean Series and of the indexed object do not match\"\n )\n with pytest.raises(IndexingError, match=msg):\n ser[Series([], dtype=bool)]\n\n with pytest.raises(IndexingError, match=msg):\n ser[Series([True], dtype=bool)]\n\n def test_getitem_boolean_object(self, string_series):\n # using column from DataFrame\n\n ser = string_series\n mask = ser > ser.median()\n omask = mask.astype(object)\n\n # getitem\n result = ser[omask]\n expected = ser[mask]\n tm.assert_series_equal(result, expected)\n\n # setitem\n s2 = ser.copy()\n cop = ser.copy()\n cop[omask] = 5\n s2[mask] = 5\n tm.assert_series_equal(cop, s2)\n\n # nans raise exception\n omask[5:10] = np.nan\n msg = \"Cannot mask with non-boolean array containing NA / NaN values\"\n with pytest.raises(ValueError, match=msg):\n ser[omask]\n with pytest.raises(ValueError, match=msg):\n ser[omask] = 5\n\n def test_getitem_boolean_dt64_copies(self):\n # GH#36210\n dti = date_range(\"2016-01-01\", periods=4, tz=\"US/Pacific\")\n key = np.array([True, True, False, False])\n\n ser = Series(dti._data)\n\n res = ser[key]\n assert res._values._data.base is None\n\n # compare with numeric case for reference\n ser2 = Series(range(4))\n res2 = ser2[key]\n assert res2._values.base is None\n\n def test_getitem_boolean_corner(self, datetime_series):\n ts = datetime_series\n mask_shifted = ts.shift(1, freq=BDay()) > ts.median()\n\n msg = (\n r\"Unalignable boolean Series provided as indexer \\(index of \"\n r\"the boolean Series and of the indexed object do not match\"\n )\n with pytest.raises(IndexingError, match=msg):\n ts[mask_shifted]\n\n with pytest.raises(IndexingError, match=msg):\n ts.loc[mask_shifted]\n\n def test_getitem_boolean_different_order(self, string_series):\n ordered = string_series.sort_values()\n\n sel = string_series[ordered > 0]\n exp = string_series[string_series > 0]\n tm.assert_series_equal(sel, exp)\n\n def test_getitem_boolean_contiguous_preserve_freq(self):\n rng = date_range(\"1/1/2000\", \"3/1/2000\", freq=\"B\")\n\n mask = 
np.zeros(len(rng), dtype=bool)\n mask[10:20] = True\n\n masked = rng[mask]\n expected = rng[10:20]\n assert expected.freq == rng.freq\n tm.assert_index_equal(masked, expected)\n\n mask[22] = True\n masked = rng[mask]\n assert masked.freq is None\n\n\nclass TestGetitemCallable:\n def test_getitem_callable(self):\n # GH#12533\n ser = Series(4, index=list(\"ABCD\"))\n result = ser[lambda x: \"A\"]\n assert result == ser.loc[\"A\"]\n\n result = ser[lambda x: [\"A\", \"B\"]]\n expected = ser.loc[[\"A\", \"B\"]]\n tm.assert_series_equal(result, expected)\n\n result = ser[lambda x: [True, False, True, True]]\n expected = ser.iloc[[0, 2, 3]]\n tm.assert_series_equal(result, expected)\n\n\ndef test_getitem_generator(string_series):\n gen = (x > 0 for x in string_series)\n result = string_series[gen]\n result2 = string_series[iter(string_series > 0)]\n expected = string_series[string_series > 0]\n tm.assert_series_equal(result, expected)\n tm.assert_series_equal(result2, expected)\n\n\[email protected](\n \"series\",\n [\n Series([0, 1]),\n Series(date_range(\"2012-01-01\", periods=2)),\n Series(date_range(\"2012-01-01\", periods=2, tz=\"CET\")),\n ],\n)\ndef test_getitem_ndim_deprecated(series):\n with tm.assert_produces_warning(\n FutureWarning,\n match=\"Support for multi-dimensional indexing\",\n ):\n result = series[:, None]\n\n expected = np.asarray(series)[:, None]\n tm.assert_numpy_array_equal(result, expected)\n\n\ndef test_getitem_multilevel_scalar_slice_not_implemented(\n multiindex_year_month_day_dataframe_random_data,\n):\n # not implementing this for now\n df = multiindex_year_month_day_dataframe_random_data\n ser = df[\"A\"]\n\n msg = r\"\\(2000, slice\\(3, 4, None\\)\\)\"\n with pytest.raises(TypeError, match=msg):\n ser[2000, 3:4]\n\n\ndef test_getitem_dataframe_raises():\n rng = list(range(10))\n ser = Series(10, index=rng)\n df = DataFrame(rng, index=rng)\n msg = (\n \"Indexing a Series with DataFrame is not supported, \"\n \"use the appropriate DataFrame column\"\n )\n with pytest.raises(TypeError, match=msg):\n ser[df > 5]\n\n\ndef test_getitem_assignment_series_aligment():\n # https://github.com/pandas-dev/pandas/issues/37427\n # with getitem, when assigning with a Series, it is not first aligned\n ser = Series(range(10))\n idx = np.array([2, 4, 9])\n ser[idx] = Series([10, 11, 12])\n expected = Series([0, 1, 10, 3, 11, 5, 6, 7, 8, 12])\n tm.assert_series_equal(ser, expected)\n\n\ndef test_getitem_duplicate_index_mistyped_key_raises_keyerror():\n # GH#29189 float_index.get_loc(None) should raise KeyError, not TypeError\n ser = Series([2, 5, 6, 8], index=[2.0, 4.0, 4.0, 5.0])\n with pytest.raises(KeyError, match=\"None\"):\n ser[None]\n\n with pytest.raises(KeyError, match=\"None\"):\n ser.index.get_loc(None)\n\n with pytest.raises(KeyError, match=\"None\"):\n ser.index._engine.get_loc(None)\n\n\ndef test_getitem_1tuple_slice_without_multiindex():\n ser = Series(range(5))\n key = (slice(3),)\n\n result = ser[key]\n expected = ser[key[0]]\n tm.assert_series_equal(result, expected)\n\n\ndef test_getitem_preserve_name(datetime_series):\n result = datetime_series[datetime_series > 0]\n assert result.name == datetime_series.name\n\n result = datetime_series[[0, 2, 4]]\n assert result.name == datetime_series.name\n\n result = datetime_series[5:10]\n assert result.name == datetime_series.name\n\n\ndef test_getitem_with_integer_labels():\n # integer indexes, be careful\n ser = Series(np.random.randn(10), index=list(range(0, 20, 2)))\n inds = [0, 2, 5, 7, 8]\n arr_inds = np.array([0, 2, 
5, 7, 8])\n with pytest.raises(KeyError, match=\"not in index\"):\n ser[inds]\n\n with pytest.raises(KeyError, match=\"not in index\"):\n ser[arr_inds]\n\n\ndef test_getitem_missing(datetime_series):\n # missing\n d = datetime_series.index[0] - BDay()\n msg = r\"Timestamp\\('1999-12-31 00:00:00', freq='B'\\)\"\n with pytest.raises(KeyError, match=msg):\n datetime_series[d]\n\n\ndef test_getitem_fancy(string_series, object_series):\n slice1 = string_series[[1, 2, 3]]\n slice2 = object_series[[1, 2, 3]]\n assert string_series.index[2] == slice1.index[1]\n assert object_series.index[2] == slice2.index[1]\n assert string_series[2] == slice1[1]\n assert object_series[2] == slice2[1]\n\n\ndef test_getitem_box_float64(datetime_series):\n value = datetime_series[5]\n assert isinstance(value, np.float64)\n\n\ndef test_getitem_unordered_dup():\n obj = Series(range(5), index=[\"c\", \"a\", \"a\", \"b\", \"b\"])\n assert is_scalar(obj[\"c\"])\n assert obj[\"c\"] == 0\n\n\ndef test_getitem_dups():\n ser = Series(range(5), index=[\"A\", \"A\", \"B\", \"C\", \"C\"], dtype=np.int64)\n expected = Series([3, 4], index=[\"C\", \"C\"], dtype=np.int64)\n result = ser[\"C\"]\n tm.assert_series_equal(result, expected)\n\n\ndef test_getitem_categorical_str():\n # GH#31765\n ser = Series(range(5), index=Categorical([\"a\", \"b\", \"c\", \"a\", \"b\"]))\n result = ser[\"a\"]\n expected = ser.iloc[[0, 3]]\n tm.assert_series_equal(result, expected)\n\n # Check the intermediate steps work as expected\n with tm.assert_produces_warning(FutureWarning):\n result = ser.index.get_value(ser, \"a\")\n tm.assert_series_equal(result, expected)\n\n\ndef test_slice_can_reorder_not_uniquely_indexed():\n ser = Series(1, index=[\"a\", \"a\", \"b\", \"b\", \"c\"])\n ser[::-1] # it works!\n\n\n@pytest.mark.parametrize(\"index_vals\", [\"aabcd\", \"aadcb\"])\ndef test_duplicated_index_getitem_positional_indexer(index_vals):\n # GH 11747\n s = Series(range(5), index=list(index_vals))\n result = s[3]\n assert result == 3\n" ]
[ [ "pandas.DatetimeIndex", "pandas._testing.rands_array", "pandas.Timestamp", "pandas.IntervalIndex.from_breaks", "pandas._testing.assert_series_equal", "pandas.period_range", "numpy.dtype", "pandas.DataFrame", "pandas.timedelta_range", "pandas.Period", "numpy.array", "pandas.core.dtypes.common.is_scalar", "pandas._libs.tslibs.conversion.localize_pydatetime", "numpy.random.randn", "pandas.MultiIndex.from_product", "pandas._testing.assert_index_equal", "pandas._libs.tslibs.timezones.maybe_get_tz", "pandas.Index", "numpy.asarray", "pandas._testing.assert_produces_warning", "pandas.tseries.offsets.BDay", "pandas.date_range", "pandas._testing.assert_almost_equal", "pandas.Categorical", "pandas._testing.assert_numpy_array_equal", "pandas.Series", "numpy.int64" ] ]
carldlaird/idaes-pse
[ "cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f" ]
[ "idaes/surrogate/roundingRegression/RoundingRegression.py" ]
[ "#################################################################################\n# The Institute for the Design of Advanced Energy Systems Integrated Platform\n# Framework (IDAES IP) was produced under the DOE Institute for the\n# Design of Advanced Energy Systems (IDAES), and is copyright (c) 2018-2021\n# by the software owners: The Regents of the University of California, through\n# Lawrence Berkeley National Laboratory, National Technology & Engineering\n# Solutions of Sandia, LLC, Carnegie Mellon University, West Virginia University\n# Research Corporation, et al. All rights reserved.\n#\n# Please see the files COPYRIGHT.md and LICENSE.md for full copyright and\n# license information.\n#################################################################################\nfrom pyomo.environ import *\nfrom pyomo.opt import SolverFactory\nimport numpy as np\nimport scipy as sp\nfrom copy import copy\n\n\nclass RoundingRegression:\n\n def __init__(self, X, Y, complexity_penalty_factor):\n \"\"\"\n A class for creating a Rounding Regression model.\n\n Returns:\n self function containing several attributes -\n\n self.X : Input design matrix\n self.Y : Input response vector \n self.LAP : Pyomo models and object to handle OLS solution for active set\n self.B_ols_sum : Sum of magnitude of coefficients from OLS solution (including all variables)\n self.regressors_probability : Scaled regressor probabilities to remove dependence on big-M chosen\n self.regressors_sorted : Sorted list of regressors by their probabilities\n\n \"\"\"\n # Input/output matrix\n self.X = X\n self.Y = Y\n\n # Construct object to handle OLS solution for active set and build Pyomo models\n self.LAP = LinAlgandPyomo(X, Y, complexity_penalty_factor)\n\n # Find OLS solution (i.e. including all variables) and construct QP relaxation model\n _, B_ols, self.B_ols_sum = self.LAP.evaluate_obj(np.ones(X.shape[1]))\n QP, opt = self.LAP.construct_QP(X, Y, self.B_ols_sum)\n\n # Get rounding probabilities from relaxed binaries of QP relaxation\n regressors_probability, _, _ = self.LAP.optimize(opt, QP)\n\n # Scale and sort regressors\n self.regressors_probability = regressors_probability / (abs(max(regressors_probability, key=abs))) * 0.9\n self.regressors_sorted = np.argsort(regressors_probability)[::-1]\n\n def randomized_rounding(self):\n \"\"\"\n Round randomly by stepping through each regressor\n and rounding with prbability equal to scaled binary from QP relaxation value\n \"\"\"\n\n # Number of iterations of randomized rounding starting from null model\n number_RR = 5\n\n # Number of refinement steps of each randomized rounding iteration\n number_Refinement = 5\n opt_obj = 1e5\n opt_regressors = np.zeros(len(self.regressors_probability))\n\n # Build RR model from null\n for n in range(number_RR):\n # Initialize null model\n regressors = np.zeros(len(self.regressors_probability))\n i, j = 0, 0\n step_obj = 1e5\n step = True\n # Step-through regressors until number_Refinment refinement loops reached\n while step:\n select = np.random.choice([0, 1], p=[1 - self.regressors_probability[self.regressors_sorted[i]],\n self.regressors_probability[self.regressors_sorted[i]]])\n if select == 1 and regressors[self.regressors_sorted[i]] != 1:\n regressors[self.regressors_sorted[i]] = 1\n obj, coeff, _ = self.LAP.evaluate_obj(regressors)\n if obj < step_obj:\n step_obj = copy(obj)\n step_coeffs = copy(coeff)\n step_regressors = copy(regressors)\n else:\n regressors[self.regressors_sorted[i]] = 0\n\n if select == 0 and 
regressors[self.regressors_sorted[i]] != 0 and np.count_nonzero(regressors) != 1:\n regressors[self.regressors_sorted[i]] = 0\n obj, coeff, _ = self.LAP.evaluate_obj(regressors)\n if obj < step_obj:\n step_obj = copy(obj)\n step_coeffs = copy(coeff)\n step_regressors = copy(regressors)\n else:\n regressors[self.regressors_sorted[i]] = 1\n i += 1\n if i == min(self.X.shape[1], self.X.shape[0]):\n if np.count_nonzero(regressors) == 0:\n i = 0\n else:\n i = 0\n j += 1\n if j == number_Refinement:\n step = False\n\n # Keep current model if best found\n if step_obj < opt_obj:\n opt_obj = copy(step_obj)\n opt_coeffs = copy(step_coeffs)\n opt_regressors = copy(step_regressors)\n self.rr_regressors = copy(opt_regressors)\n self.rr_obj = copy(opt_obj)\n self.rr_coeffs = copy(opt_coeffs)\n\n def deterministic_rounding(self):\n \"\"\"\n Round deterministically by stepping through each regressor in order of regressor probability\n \"\"\"\n\n improvement = False\n objective_list_det2 = []\n opt_obj = 1e5\n regressors = copy(self.rr_regressors)\n step_regressors = copy(self.rr_regressors)\n step_coeffs = copy(self.rr_coeffs)\n step_obj = copy(self.rr_obj)\n step = True\n\n i = 0\n j = 1\n # Deterministic rounding loop, loop until no improvement\n while step:\n if regressors[self.regressors_sorted[i]] == 0:\n regressors[self.regressors_sorted[i]] = 1\n obj, coeff, _ = self.LAP.evaluate_obj(regressors)\n if obj < step_obj:\n step_obj = copy(obj)\n step_coeffs = copy(coeff)\n step_regressors = copy(regressors)\n improvement = True\n else:\n regressors[self.regressors_sorted[i]] = 0\n\n else:\n regressors[self.regressors_sorted[i]] = 0\n if np.count_nonzero(regressors) != 0:\n obj, coeff, _ = self.LAP.evaluate_obj(regressors)\n if obj < step_obj:\n step_obj = copy(obj)\n step_coeffs = copy(coeff)\n step_regressors = copy(regressors)\n improvement = True\n else:\n regressors[self.regressors_sorted[i]] = 1\n else:\n regressors[self.regressors_sorted[i]] = 1\n i += 1\n if i == self.X.shape[1]:\n if improvement == False:\n step = False\n else:\n improvement = False\n i = 0\n j += 1\n\n if step_obj < opt_obj:\n self.opt_obj = copy(step_obj)\n self.opt_coeffs = copy(step_coeffs)\n self.opt_regressors = copy(step_regressors)\n\n def build_model(self):\n \"\"\"\n Method to conduct Randomized rounding and Deterministic rounding combo\n \"\"\"\n\n self.randomized_rounding()\n self.deterministic_rounding()\n\n # Format model found and return\n self.opt_regressors = np.nonzero(self.opt_regressors)[0]\n coeffs = np.zeros(self.X.shape[1])\n for (idx, coefficient) in enumerate(self.opt_coeffs):\n coeffs[self.opt_regressors[idx]] = coefficient\n self.opt_coeffs = coeffs\n return self.opt_obj, self.opt_coeffs, self.opt_regressors\n\n\nclass LinAlgandPyomo:\n\n def __init__(self, x, y, complexity_penalty_factor):\n \"\"\"\n Initialize linear algebra and matrix math object that uses pyomo models\n\n Returns:\n self function containing several attributes -\n\n self.x : Input design matrix \n self.y : Input response vector \n self.regressors_old_A : Active set from previous iteration \n self.regressors_old_QR : Active set from previous iteration used for QR \n self.Q : Q matrix from QR decomposition \n self.R : R matrix from QR decomposition \n self.A : Current design matrix (with only the columns of the current active set)\n self.b : Input response vector alias \n self.complexity_penalty : Penalty in objective function for size of active set \n\n \"\"\"\n self.x = x\n self.y = y\n\n self.regressors_old_A = [1 for i in 
range(self.x.shape[1])]\n self.regressors_old_QR = [1 for i in range(self.x.shape[1])]\n self.Q, self.R = sp.linalg.qr(self.x)\n\n self.A = copy(x)\n self.b = copy(y)\n # Complexity penalty is a fraction of the maximum complexity penalty\n self.complexity_penalty = complexity_penalty_factor * np.linalg.norm(x.T @ y, ord=np.inf)\n\n def construct_QP(self, x, y, bigM):\n \"\"\"\n Construct the QP relaxation of the best subset MIQP\n\n Args:\n\n x : Input design matrix \n y : Input response vector \n bigM : Maximum value of coefficient \n\n Returns:\n self.QP : Pyomo optimization ConcreteModel object \n self.opt : Pyomo optimization object \n\n \"\"\"\n regressors = [r for r in range(1, self.x.shape[1] + 1)]\n datapoints = [d for d in range(1, self.x.shape[0] + 1)]\n\n self.QP = ConcreteModel()\n self.QP.Coeff = Var(regressors, domain=Reals)\n self.QP.z = Var(regressors, domain=UnitInterval)\n self.QP.V = Var(datapoints, domain=Reals)\n\n def ub_rule(model, i):\n return model.Coeff[i] <= float(bigM) * model.z[i]\n\n def lb_rule(model, i):\n return model.Coeff[i] >= -float(bigM) * model.z[i]\n\n def obj_rule(model, i):\n return model.V[i] == (\n float(self.y[i - 1]) - sum(model.Coeff[j] * float(self.x[i - 1][j - 1]) for j in regressors))\n\n self.QP.UB = Constraint(regressors, rule=ub_rule)\n self.QP.LB = Constraint(regressors, rule=lb_rule)\n self.QP.Vconst = Constraint(datapoints, rule=obj_rule)\n\n self.M = float(bigM)\n\n self.QP.complexity_penalty = Param(regressors, initialize=self.complexity_penalty, mutable=True)\n self.QP.OBJ = Objective(expr=sum((self.QP.V[i]) ** 2 for i in datapoints) + sum(\n self.QP.complexity_penalty[i] * self.QP.z[i] for i in regressors))\n self.opt = SolverFactory('ipopt')\n\n return self.QP, self.opt\n\n def optimize(self, opt, model):\n \"\"\"\n Solve QP model and return relaxed binaries as probabilities\n\n Arguments:\n opt : Pyomo optimization object \n model : Pyomo optimization ConcreteModel object \n\n Returns:\n regressors : Binary vector indicating regressors which are active \n coefficients : Coefficient vector indicating coefficients for each regressor \n time : The amount of time needed to solve the optimization problem \n \"\"\"\n self.results_opt = opt.solve(model, tee=False, keepfiles=False)\n self.solve_time = self.results_opt.solver.time\n regressors = []\n coefficients = []\n for i in range(1, len(model.z) + 1):\n regressors.append(value(model.z[i]))\n coefficients.append(value(model.Coeff[i]))\n\n return np.array(regressors), np.array(coefficients), self.results_opt.solver.time\n\n def updateA_col(self):\n \"\"\"\n Update the columns of the design matrix A (i.e. 
the active set)\n \"\"\"\n h = 0\n for i in range(self.x.shape[1]):\n if self.regressors_old_A[i] == 0 and self.regressors[i] == 1:\n # New variable inserted, inserts corresponding column into A\n self.A = np.insert(self.A.T, h, self.x.T[i], 0)\n self.A = self.A.T\n h = h + 1\n if self.regressors_old_A[i] == 1 and self.regressors[i] == 1:\n h = h + 1\n if self.regressors_old_A[i] == 1 and self.regressors[\n i] == 0: # Variable removed, deletes corresponding column from A\n self.A = np.delete(self.A.T, h, 0)\n self.A = self.A.T\n\n def updateQR(self):\n \"\"\"\n Update the QR factorization for the new active set\n \"\"\"\n h = 0\n for i in range(self.x.shape[1]):\n if self.regressors_old_QR[i] == 0 and self.regressors[i] == 1:\n # New variable inserted, inserts corresponding column into A\n self.Q, self.R = sp.linalg.qr_insert(self.Q, self.R, self.x.T[i].T, h, 'col')\n h = h + 1\n if self.regressors_old_QR[i] == 1 and self.regressors[i] == 1:\n h = h + 1\n if self.regressors_old_QR[i] == 1 and self.regressors[\n i] == 0: # Variable removed, deletes corresponding column from A\n self.Q, self.R = sp.linalg.qr_delete(self.Q, self.R, h, 1, 'col')\n\n def OLS_soln(self):\n \"\"\"\n Find the OLS solution of the current active set using QR factorization (if the problem is overdetermined)\n or else numpy's inbuilt linalg.lstsq routine for the underdetermined case\n \"\"\"\n self.updateA_col()\n if np.linalg.matrix_rank(self.A) == self.A.shape[1]:\n self.updateQR()\n Rp = self.R[:np.count_nonzero(self.regressors)] # Takes the first 'p' rows of R\n nb = np.dot(self.Q.T, self.b)\n c = nb[:np.count_nonzero(self.regressors)] # Takes the first 'p' rows of nb vector\n d = nb[np.count_nonzero(self.regressors):]\n self.B_ols = sp.linalg.solve_triangular(Rp, c)\n self.SSRols = sum(d[i] ** 2 for i in range(np.shape(self.A)[0] - np.shape(self.A)[1]))\n self.B_ols_sum = sum(abs(self.B_ols[i]) for i in range(np.shape(self.A)[1]))\n\n self.regressors_old_A = copy(self.regressors)\n self.regressors_old_QR = copy(self.regressors)\n else:\n self.B_ols, self.SSRols, rank, s = np.linalg.lstsq(self.A, self.b, rcond=-1)\n self.B_ols_sum = sum(abs(self.B_ols[i]) for i in range(self.A.shape[1]))\n if len(self.SSRols) == 0:\n self.SSRols = 0\n else:\n self.SSRols = self.SSRols[0]\n self.regressors_old_A = copy(self.regressors)\n\n def evaluate_obj(self, regressors):\n \"\"\"\n Evaluate the objective of the MIQP using the OLS solution to calculate the squared error term plus the complexity penalty\n\n Arguments:\n regressors : Binary vector indicating regressors which are active\n\n Returns:\n self.obj : Approximate objective of the MIQP \n self.B_ols : Coefficient vector for OLS coefficients on active set \n self.B_ols_sum : Sum of magnitude of OLS coefficients for active set \n\n \"\"\"\n self.regressors = regressors\n self.OLS_soln()\n self.obj = self.SSRols + self.complexity_penalty * np.count_nonzero(regressors)\n\n return self.obj, self.B_ols, self.B_ols_sum" ]
[ [ "numpy.array", "numpy.linalg.norm", "numpy.linalg.matrix_rank", "numpy.dot", "numpy.zeros", "numpy.random.choice", "numpy.delete", "numpy.count_nonzero", "numpy.ones", "scipy.linalg.qr_insert", "numpy.nonzero", "numpy.shape", "scipy.linalg.qr", "numpy.linalg.lstsq", "numpy.argsort", "scipy.linalg.qr_delete", "scipy.linalg.solve_triangular", "numpy.insert" ] ]
GuoooooJing/nasa-cloudstreet-image-classification
[ "2923609da6a0501101bdf3db322b7e012e683391" ]
[ "ml_model/twolayer.py" ]
[ "import torch\nimport torch.nn as nn\nclass ConvNet(nn.Module):\n def __init__(self, num_classes=10):\n super(ConvNet, self).__init__()\n self.layer1 = nn.Sequential(\n nn.Conv2d(3, 16, kernel_size=5, stride=1, padding=2),\n nn.BatchNorm2d(16),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2, stride=2))\n self.layer2 = nn.Sequential(\n nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),\n nn.BatchNorm2d(32),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2, stride=2))\n self.fc = nn.Linear(200*200*32, num_classes)\n \n def forward(self, x):\n out = self.layer1(x)\n out = self.layer2(out)\n out = out.reshape(out.size(0), -1)\n print(out.size())\n out = self.fc(out)\n return out" ]
[ [ "torch.nn.Linear", "torch.nn.MaxPool2d", "torch.nn.BatchNorm2d", "torch.nn.ReLU", "torch.nn.Conv2d" ] ]
ruohai0925/artemis
[ "deca36ea6f5ccd823f6808d4e9afcaaf94059dd0" ]
[ "Examples/Modules/ionization/analysis_ionization.py" ]
[ "#! /usr/bin/env python\n\n# Copyright 2019-2020 Luca Fedeli, Maxence Thevenet\n#\n# This file is part of WarpX.\n#\n# License: BSD-3-Clause-LBNL\n\n\n\"\"\"\nThis script tests the result of the ionization module in WarpX.\n\nInput files inputs.rt and inputs.bf.rt are used to reproduce the test from\nChen, JCP, 2013, figure 2 (in the lab frame and in a boosted frame,\nrespectively): a plane-wave laser pulse propagates through a\nuniform N2+ neutral plasma and further ionizes the Nitrogen atoms. This test\nchecks that, after the laser went through the plasma, ~32% of Nitrogen\nions are N5+, in agreement with theory from Chen's article.\n\"\"\"\n\nimport sys\nimport yt\nimport numpy as np\nyt.funcs.mylog.setLevel(0)\nsys.path.insert(1, '../../../../warpx/Regression/Checksum/')\nimport checksumAPI\n\n# Open plotfile specified in command line, and get ion's ionization level.\nfilename = sys.argv[1]\nds = yt.load( filename )\nad = ds.all_data()\nilev = ad['ions', 'particle_ionization_level'].v\n\n# Fraction of Nitrogen ions that are N5+.\nN5_fraction = ilev[ilev == 5].size/float(ilev.size)\n\nprint(\"Number of ions: \" + str(ilev.size))\nprint(\"Number of N5+ : \" + str(ilev[ilev == 5].size))\nprint(\"N5_fraction : \" + str(N5_fraction))\n\ndo_plot = False\nif do_plot:\n import matplotlib.pyplot as plt\n all_data_level_0 = ds.covering_grid(level=0,left_edge=ds.domain_left_edge,\n dims=ds.domain_dimensions)\n F = all_data_level_0['boxlib', 'Ex'].v.squeeze()\n extent = [ ds.domain_left_edge[1], ds.domain_right_edge[1],\n ds.domain_left_edge[0], ds.domain_right_edge[0] ]\n ad = ds.all_data()\n\n # Plot ions with ionization levels\n species = 'ions';\n xi = ad[species, 'particle_position_x'].v\n zi = ad[species, 'particle_position_y'].v\n ii = ad[species, 'particle_ionization_level'].v\n plt.figure(figsize=(10,10))\n plt.subplot(211)\n plt.imshow(np.abs(F), extent=extent, aspect='auto',\n cmap='magma', origin='default')\n plt.colorbar()\n for lev in range(int(np.max(ii)+1)):\n select = (ii == lev)\n plt.scatter(zi[select],xi[select],s=.2,\n label='ionization level: ' + str(lev))\n plt.legend()\n plt.title(\"abs(Ex) (V/m) and ions\")\n plt.xlabel(\"z (m)\")\n plt.ylabel(\"x (m)\")\n plt.subplot(212)\n plt.imshow(np.abs(F), extent=extent, aspect='auto',\n cmap='magma', origin='default')\n plt.colorbar()\n\n # Plot electrons\n species = 'electrons';\n if species in [x[0] for x in ds.field_list]:\n xe = ad[species, 'particle_position_x'].v\n ze = ad[species, 'particle_position_y'].v\n plt.scatter(ze,xe,s=.1,c='r',label='electrons')\n plt.title(\"abs(Ex) (V/m) and electrons\")\n plt.xlabel(\"z (m)\")\n plt.ylabel(\"x (m)\")\n plt.savefig(\"image_ionization.pdf\", bbox_inches='tight')\n\nerror_rel = abs(N5_fraction-0.32) / 0.32\ntolerance_rel = 0.07\n\nprint(\"error_rel : \" + str(error_rel))\nprint(\"tolerance_rel: \" + str(tolerance_rel))\n\nassert( error_rel < tolerance_rel )\n\ntest_name = filename[:-9] # Could also be os.path.split(os.getcwd())[1]\nchecksumAPI.evaluate_checksum(test_name, filename)\n" ]
[ [ "numpy.max", "matplotlib.pyplot.colorbar", "matplotlib.pyplot.xlabel", "matplotlib.pyplot.savefig", "matplotlib.pyplot.title", "matplotlib.pyplot.legend", "matplotlib.pyplot.figure", "matplotlib.pyplot.ylabel", "numpy.abs", "matplotlib.pyplot.scatter", "matplotlib.pyplot.subplot" ] ]
lemolatoon/chemistry
[ "f69c5a70a959ebd36265e6de1c317fcc0963c6ac" ]
[ "gly_titration.py" ]
[ "from os import name\r\nimport numpy as np\r\nimport matplotlib.pyplot as plt\r\n\r\ncb = np.array(0.1 * 0.9505)\r\n\r\nva = 50\r\n\r\nkw = 10 ** (-14)\r\n\r\ndef h2so4(ph: np.ndarray, graph: bool = False):\r\n k1 = np.array(10 ** (-2.35))\r\n k2 = np.array(10 ** (-1.99))\r\n\r\n beta1 = 10 ** (9.78)\r\n beta2 = 10 ** (12.13)\r\n\r\n f = 0.994\r\n ca = np.array(0.014) * f\r\n\r\n h = 10 ** (-ph)\r\n oh = kw / h\r\n\r\n print(f\"\\npH:\\n{ph}\")\r\n\r\n da, dha, dh2a = calc_mol(h, beta1, beta2)\r\n print(f\"\\nモル分率\\nG-:\\n{da * 100}\\nHG:\\n{dha*100}\\nH2G+:\\n{dh2a*100}\")\r\n\r\n n = n_bar(dha, dh2a)\r\n\r\n print(f\"\\n平均プロトン数:\\n{n}\")\r\n\r\n vb = calc_vb2(ca, va, n, cb, h, oh)\r\n\r\n print(f\"\\n滴下量:\\n{vb}\")\r\n\r\n alpha = calc_alpha(h, beta1, beta2)\r\n\r\n print(f\"\\n副反応係数:\\n{alpha}\")\r\n\r\n print(f\"\\nlog副反応係数:\\n{np.log(alpha)}\")\r\n\r\n a = calc_a(n, h, oh, ca)\r\n\r\n\r\n if graph:\r\n\r\n plt.plot(ph, vb)\r\n plt.ylim(0, 20)\r\n plt.title(\"vb\")\r\n plt.show()\r\n\r\n plt.plot(ph, da, label=\"G-\")\r\n plt.plot(ph, dha, label=\"HG\")\r\n plt.plot(ph, dh2a, label=\"H2G+\")\r\n plt.title(\"mol %\")\r\n plt.legend()\r\n plt.show()\r\n\r\n plt.plot(ph, np.log(alpha))\r\n plt.title(\"log alpha\")\r\n plt.show()\r\n\r\n plt.plot(ph, n)\r\n plt.title(\"n\")\r\n plt.show()\r\n\r\n\r\n\r\n\r\n\r\ndef calc_mol(h, b1, b2):\r\n a = calc_alpha(h, b1, b2)\r\n dg = 1 / a\r\n\r\n dhg = b1 * h / a\r\n\r\n dh2g = b2 * (h**2) / a\r\n\r\n return dg, dhg, dh2g\r\n\r\ndef n_bar(dha, dh2a):\r\n return dha + 2 * dh2a\r\n\r\ndef calc_vb(va, ca, dha, dh2a, h, oh, cb):\r\n \"\"\"\r\n should not use\r\n \"\"\"\r\n vb = (va * (2 * ca - dha - 2 * dh2a - h + oh)) / (cb + dha + 2 * dh2a + h - oh)\r\n return vb\r\n\r\ndef calc_vb2(ca, va, n, cb, h, oh):\r\n vb = (ca * va * (2 - n) - va * (h - oh)) / (cb + h - oh)\r\n return vb\r\n\r\ndef calc_alpha(h, b1, b2):\r\n return 1 + b1 * h + b2 * (h ** 2)\r\n\r\ndef calc_a(n, h, oh, ca):\r\n return 2- n - (h - oh) / ca\r\n\r\n\r\n\r\ndef plot(x, y):\r\n plt.plot(x, y)\r\n plt.show()\r\n \r\n\r\ndef test():\r\n ph = np.linspace(0, 14)\r\n graph = True\r\n h2so4(ph, graph=graph)\r\n # h2so4(4)\r\n\r\ndef get_value():\r\n ph = np.array([0, 1, 2, 2.23, 2.5, 3, 4, 6, 8, 9, 10, 10.5, 11, 11.25, 11.5, 11.75])\r\n print(len(ph))\r\n h2so4(ph, graph=True)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n # test()\r\n get_value()\r\n # h2so4(4)" ]
[ [ "numpy.array", "numpy.log", "matplotlib.pyplot.ylim", "matplotlib.pyplot.plot", "matplotlib.pyplot.title", "matplotlib.pyplot.legend", "matplotlib.pyplot.show", "numpy.linspace" ] ]
cragkhit/elasticsearch
[ "07fb10bec4614f55bcc39e571d1185fc9ce86242" ]
[ "evaluation/plot_index_query.py" ]
[ "from matplotlib import pyplot as plt\nimport numpy as np\nimport matplotlib\n\n# Index time plot\n\nfiles = [4, 16, 64, 256, 1024, 4096, 16384, 65536, 262144, 1048576]\nmethods = [22, 50, 178, 423, 1723, 6601, 28030, 111190, 442403, 1771183, 4870113]\nnicad_methods = [22, 50, 178, 423, 1723, 6601, 28030, 111190]\niclones_methods = [22, 50, 178, 1723, 6601, 28030]\njplag_methods = [22, 50, 178, 423, 1723, 6601, 28030]\nsimian_methods = [22, 50, 178, 423, 1723, 6601, 28030]\ndeckard_methods = [22, 50, 178, 423, 1723, 6601, 28030]\nccfx_methods = [22, 50, 178, 423, 1723, 6601, 28030, 111190, 442403, 1771183]\n\n# 4,0:02.09,2.09\n# 16,0:04.19,4.19\n# 64,0:07.15,7.15\n# 256,0:12.19,12.19\n# 1024,0:17.71,17.71\n# 4096,1:08.19,68.19\n# 16384,6:29.89,389.89\n# 65536,23:52.99,1439.99\n# 262144,1:13:16,4396\n# 1048576,5:04:59,18299\n\nsiamese = [2.09, 4.19, 7.15, 12.19, 17.71, 68.19, 389.89, 1439.99, 4396, 18299, 65616]\nscc = [(0.58 + 2.03), (0.88 + 0.68), (2.68 + 0.98), (3.81 + 1.37), (9.18 + 2.15),\n (28.40 + 4.96), (110.09 + 16.78), (432.52 + 60.96), (1694.23 + 219.8), (6786 + 870.08),\n (18606 + 2348.9)]\nnicad = [0.34, 0.66, 2.21, 7.89, 26.50, 84.25, 574.91, 6992]\niclones = [0.59, 0.67, 3.06, 4.82, 14.97, 166.95]\njpag = [0.37, 0.91, 0.89, 1.02, 3.83, 57.92, 890.08]\nsimian = [0.25, 0.30, 0.47, 2.14, 25.90, 401.93, 6506]\ndeckard = [1.56, 3.51, 8.29, 18.15, 109.66, 1159.29, 26051.83]\nccfx = [0.45, 0.48, 0.61, 1.92, 6.58, 36.02, 46 * 60 + 17.12, 9 * 60 + 8.97, 42 * 60 + 46.56, 9 * 3600 + 14 * 60 + 15]\n\n# # seconds\n# fig = plt.figure()\n# ax = fig.add_subplot(111)\n# ax.plot(methods, siamese, c=\"b\", marker=\"s\", label=\"Siamese\")\n# ax.plot(methods, scc, c=\"r\", marker=\"x\", label=\"SourcererCC\")\n# ax.plot(methods, nicad, c=\"g\", marker=\"o\", label=\"NiCad\")\n# plt.yscale('log', basey=10)\n# plt.xscale('log', basex=10)\n# plt.xlabel(\"No. of methods\")\n# plt.ylabel(\"Indexing time (s)\")\n# plt.ylim(ymax=100000)\n# plt.legend(loc=2)\n# plt.show()\n#\n# fig = ax.get_figure()\n# fig.savefig('index.pdf', bbox_inches='tight')\n\n# minutes\nsiamese_m = [x / 60 for x in siamese]\nscc_m = [x / 60 for x in scc]\nnicad_m = [x / 60 for x in nicad]\niclones_m = [x / 60 for x in iclones]\njplag_m = [x / 60 for x in jpag]\nsimian_m = [x / 60 for x in simian]\ndeckard_m = [x / 60 for x in deckard]\nccfx_m = [x / 60 for x in ccfx]\n\nfig = plt.figure()\n# ax = fig.add_subplot(111)\nplt.plot(methods, siamese_m, c=\"b\", marker=\"s\", label=\"Siamese\")\nplt.plot(methods, scc_m, c=\"r\", marker=\"x\", label=\"SourcererCC\")\n# plt.plot(ccfx_methods, ccfx_m, c=\"#5B2C6F\", marker=\"p\", linestyle=\":\", label=\"CCFinderX\")\nplt.plot(deckard_methods, deckard_m, c=\"k\", marker=\">\", linestyle=\":\", label=\"Deckard\")\nplt.plot(iclones_methods, iclones_m, c=\"c\", marker=\"v\", linestyle=\":\", label=\"iClones\")\nplt.plot(jplag_methods, jplag_m, c=\"m\", marker=\"^\", linestyle=\":\", label=\"JPlag\")\nplt.plot(nicad_methods, nicad_m, c=\"g\", marker=\"o\", linestyle=\":\", label=\"NiCad\")\nplt.plot(simian_methods, simian_m, c=\"y\", marker=\"<\", linestyle=\":\", label=\"Simian\")\nplt.yscale('log', basey=10)\nplt.xscale('log', basex=10)\nplt.xlabel(\"No. 
of methods\", fontsize=16)\nplt.ylabel(\"Indexing time (m)\", fontsize=16)\nplt.tick_params(axis='both', which='major', labelsize=16)\n# plt.ylim(ymax=1500)\nplt.legend(loc=2, ncol=1, prop={'size': 12})\n# plt.show()\n# fig = ax.get_figure()\n# fig.set_size_inches(6, 3)\nplt.savefig('../index_m.pdf', bbox_inches='tight')\n\n\n# Query time plot\n\nmethods = [22, 50, 178, 423, 1723, 6601, 28030, 111190, 442403, 1771183, 4870113]\n#\nsiamese = [1.8106, 1.8114, 1.8179, 1.8598, 1.9062, 2.0773, 2.4023, 2.4893, 3.2297, 5.0043, 7.9741]\nscc = [1.3006, 1.3773, 1.3937, 1.3412, 1.4536, 1.4913, 2.0718, 3.3621, 9.2613, 28.3169, 60.9813]\nscc_m = [x + 0.2354 for x in scc] # add avg tokenisation time\n\nprint(scc_m)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot(methods, siamese, c=\"b\", marker=\"s\", label=\"Siamese\")\nax.plot(methods, scc_m, c=\"r\", marker=\"x\", label=\"SourcererCC\")\nplt.yscale('log', basey=10)\nplt.xscale('log', basex=10)\nplt.ylim(ymax=100)\nplt.xlabel(\"No. of methods in the index\", fontsize=16)\nplt.ylabel(\"Average query response time (s)\", fontsize=16)\nplt.tick_params(axis='both', which='major', labelsize=16)\n# plt.ylim(ymax=1500)\nplt.legend(loc=2, ncol=1, prop={'size': 12})\n# plt.legend(loc=2)\nplt.show()\n\nfig = ax.get_figure()\nfig.savefig('../query.pdf', bbox_inches='tight')" ]
[ [ "matplotlib.pyplot.xscale", "matplotlib.pyplot.xlabel", "matplotlib.pyplot.savefig", "matplotlib.pyplot.plot", "matplotlib.pyplot.legend", "matplotlib.pyplot.ylim", "matplotlib.pyplot.figure", "matplotlib.pyplot.tick_params", "matplotlib.pyplot.ylabel", "matplotlib.pyplot.show", "matplotlib.pyplot.yscale" ] ]
yepedraza/hm-class-nn
[ "08e9bbd9c2c9ff7faeeb6c317aea16e434d1b233" ]
[ "neuralNetwork.py" ]
[ "from dataset import Dataset\nimport numpy as np\n\nclass NeuralNetwork:\n layers_dim = []\n params = {}\n errors = []\n epochs = int \n alpha = float #Learning rate\n training = True\n train_data = Dataset([],[])\n\n def __init__(self, train_data, layers_dim):\n self.train_data = train_data\n self.layers_dim = layers_dim\n\n def TrainingOFF(self, alpha, epochs):\n self.alpha = alpha\n self.epochs = epochs\n self._Initialize_params()\n\n i=0\n while i <= self.epochs:\n self._ForwardOFF(self.train_data.x_train)\n self._BackPropagationOFF(d = True)\n self._WeightAdjustOFF()\n self.errors.append(self._mse(self.train_data.y_train, self.params['A2']))\n i += 1\n\n def ValidateOFF(self):\n self._forward(self.train_data.x_test)\n return self.params['A2']\n\n def TrainingON(self, alpha, epochs):\n self.alpha = alpha\n self.epochs = epochs\n self._Initialize_params()\n\n i=0\n while i <= self.epochs:\n A2 = self._ForwardON(self.train_data.x_train)\n self.errors.append(self._mse(self.train_data.y_train, A2))\n\n def _Initialize_params(self):\n L = len(self.layers_dim)\n for l in range(0, L-1):\n self.params['W' + str(l+1)] = (np.random.rand(self.layers_dim[l],self.layers_dim[l+1]) * 2 ) - 1\n self.params['b' + str(l+1)] = (np.random.rand(1,self.layers_dim[l+1]) * 2 ) - 1\n\n\n def _tanh(self, x, d=False):\n if d:\n return 4/(np.exp(x)+np.exp(-x))**2\n else:\n return (np.exp(x)-np.exp(-x))/(np.exp(x)+np.exp(-x))\n\n def _mse(self, y, y_hat, d = False):\n if d:\n return y_hat-y\n else:\n return np.mean((y_hat - y)**2)\n\n ######################################## OFFLINE TRAINING (EACH EPOCH) ###################################################\n def _ForwardOFF(self, x_data):\n self.params['A0'] = x_data\n\n self.params['Z1'] = (self.params['A0']@self.params['W1']) + self.params['b1'] \n self.params['A1'] = self._tanh(self.params['Z1']) \n\n self.params['Z2'] = (self.params['A1']@self.params['W2']) + self.params['b2'] \n self.params['A2'] = self._tanh(self.params['Z2']) \n\n def _BackPropagationOFF(self, d):\n self.params['dZ2'] = self._mse(self.train_data.y_train, self.params['A2'], d) * self._tanh(self.params['A2'], d)\n self.params['dW2'] = self.params['A1'][email protected]['dZ2']\n\n self.params['dZ1'] = self.params['dZ2']@self.params['W2'].T * self._tanh(self.params['A1'], d)\n self.params['dW1'] = self.params['A0'][email protected]['dZ1']\n\n def _WeightAdjustOFF(self):\n self.params['W2'] = self.params['W2'] - self.params['dW2'] * self.alpha\n self.params['b2'] = self.params['b2'] - (np.mean(self.params['dZ2'], axis=0, keepdims=True)) * self.alpha\n\n self.params['W1'] = self.params['W1'] - self.params['dW1'] * self.alpha\n self.params['b1'] = self.params['b1'] - (np.mean(self.params['dZ1'], axis=0, keepdims=True)) * self.alpha\n\n ######################################## ONLINE TRAINING (EACH ITER) ###################################################\n def _ForwardON(self, x_data):\n A0 = x_data\n L = len(self.params['A0'])\n Z1, Z2, A1, A2= [], [], [], []\n\n for l in range(0, L-1):\n #print(A0[l])\n #print(A0[0,l+1])\n Z1.append((A0[l]@self.params['W1']) + self.params['b1'])\n A1.append(self._tanh(Z1[l]))\n Z2.append((A1[l]@self.params['W2']) + self.params['b1'])\n A2.append(self._tanh(Z2[l]))\n print(A2[l])\n print(A2[0:l])\n \n\n self._BackPropagationON(l, A1[0,l], A2[0,l], d = True)\n \n return A2\n\n def _BackPropagationON(self, l, A1, A2, d):\n\n Y = self.train_data.y_train\n\n dZ2 = self._mse(Y[l], A2, d) * self._tanh(A2,d)\n dW2 = A1.T@dZ2\n\n dZ1= [email protected]['W2'] * 
self._tanh(A1,d)\n dW1 = A1.T@dZ2\n\n self._WeightAdjustON(dZ1,dZ2,dW1,dW2)\n\n def _WeightAdjustON(self,dZ1,dZ2,dW1,dW2):\n self.params['W2'] = self.params['W2'] + dW2 * self.alpha\n self.params['b2'] = self.params['b2'] - (np.mean(dZ2, axis=0, keepdims=True)) * self.alpha\n\n self.params['W1'] = self.params['W1'] + dW1 * self.alpha\n self.params['b1'] = self.params['b1'] - (np.mean(dZ1, axis=0, keepdims=True)) * self.alpha" ]
[ [ "numpy.random.rand", "numpy.exp", "numpy.mean" ] ]
lukepinkel/pystats
[ "8e8d7588a63c9ca39b6b7ca1e4a6e92f5a1c0c22" ]
[ "pystatsm/pyfa/rotation.py" ]
[ "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Jun 15 22:04:25 2020\n\n@author: lukepinkel\n\"\"\"\n\nimport numpy as np\nimport pandas as pd\nfrom ..utilities.linalg_operations import vec, invec, vecl, vdg\nfrom ..utilities.special_mats import kmat\n\n\n\n\ndef vgq_cf(L, gamma):\n '''\n VgQ subroutine for crawford ferguson\n \n Parameters:\n L: Loadings matrix of p features and q factors\n gamma: Coefficient that determines the type of rotation\n \n Returns:\n ft: Criteria function\n Gq: Gradient of the function at L\n '''\n p, q = L.shape\n L2 = L**2\n N = np.ones((q, q)) - np.eye(q)\n M = np.ones((p, p)) - np.eye(p)\n f1 = (1 - gamma) * np.trace(np.dot(L2.T, L2.dot(N))) / 4.0\n f2 = gamma * np.trace(np.dot(L2.T, M.dot(L2))) / 4.0\n G1 = (1.0 - gamma) * L * (L2.dot(N)) \n G2 = gamma * L * (M.dot(L2))\n ft = f1 + f2\n Gq = G1 + G2\n return ft, Gq\n\n \ndef vgq_ob(L, gamma):\n '''\n VgQ subroutine for oblimin rotation\n \n Parameters:\n L: Loadings matrix of p features and q factors\n gamma: Coefficient that determines the type of rotation\n \n Returns:\n ft: Criteria function\n Gq: Gradient of the function at L\n '''\n p, q = L.shape\n L2 = L**2\n C = np.eye(p) - np.ones((p, p)) * gamma / p\n N = np.ones((q, q)) - np.eye(q)\n B = C.dot(L2.dot(N))\n ft = np.trace(np.dot(L2.T, B)) / 4.0\n Gq = L * B\n return ft, Gq\n\ndef rotate_ortho(A, vgq, T=None, alpha=1.0, gamma=0, tol=1e-9, n_iters=1000):\n '''\n Orthogonal rotation\n \n Parameters:\n A: Loadings matrix\n T: Initial rotation matrix\n alpha: Coefficient that determines the step size\n gamma: Coefficient that determines the type of rotation\n tol: Tolerance that determines convergance\n n_iters: The maximum number of iterations before stopping\n \n Returns:\n T: Rotation matrix\n '''\n if T is None:\n T = np.eye(A.shape[1])\n L = np.dot(A, T)\n ft, Gq = vgq(L, gamma)\n G = np.dot(A.T, Gq)\n opt_hist = []\n for i in range(n_iters):\n M = np.dot(T.T, G)\n S = (M + M.T) / 2.0\n Gp = G - np.dot(T, S)\n s = np.linalg.norm(Gp)\n opt_hist.append([ft, s])\n if s<tol:\n break\n alpha = 2.0 * alpha\n for c in range(10):\n X = T - alpha * Gp\n U, D, V = np.linalg.svd(X, full_matrices=False)\n Tt = np.dot(U, V)\n L = np.dot(A, Tt)\n ft_new, Gq = vgq(L, gamma)\n if ft_new < (ft - 0.5*s**2*alpha):\n break\n else:\n alpha = alpha * 0.5\n ft, T =ft_new, Tt\n G = np.dot(A.T, Gq)\n return T, G, Gq, opt_hist\n\n\n\ndef rotate_obli(A, vgq, T=None, alpha=1.0, gamma=0, tol=1e-9, n_iters=500):\n '''\n Oblique rotation\n \n Parameters:\n A: Loadings matrix\n T: Initial rotation matrix\n alpha: Coefficient that determines the step size\n gamma: Coefficient that determines the type of rotation\n tol: Tolerance that determines convergance\n n_iters: The maximum number of iterations before stopping\n \n Returns:\n T: Rotation matrix\n '''\n if T is None:\n T = np.eye(A.shape[1])\n Tinv = np.linalg.inv(T)\n L = np.dot(A, Tinv.T)\n ft, Gq = vgq(L, gamma)\n G = -np.linalg.multi_dot([L.T, Gq, Tinv]).T\n opt_hist = []\n for i in range(n_iters):\n TG = T*G\n Gp = G - np.dot(T, np.diag(np.sum(TG, axis=0)))\n s = np.linalg.norm(Gp)\n opt_hist.append([ft, s])\n if s<tol:\n break\n alpha = 2.0 * alpha\n for c in range(10):\n X = T - alpha * Gp\n X2 = X**2\n V = np.diag(1 / np.sqrt(np.sum(X2, axis=0)))\n Tt = np.dot(X, V)\n Tinv = np.linalg.pinv(Tt)\n L = np.dot(A, Tinv.T)\n ft_new, Gq = vgq(L, gamma)\n if ft_new < (ft - 0.5*s**2*alpha):\n break\n else:\n alpha = alpha * 0.5\n ft, T =ft_new, Tt\n G = -np.linalg.multi_dot([L.T, Gq, Tinv]).T\n 
return T.T, G, Gq, opt_hist\n\ndef get_gamma_ob(criterion, p, custom_gamma):\n if (criterion == 'quartimin') or (criterion == 'quartimax'):\n gamma = 0.0\n elif (criterion == 'biquartimin') or (criterion == 'biquartimax'):\n gamma = 1.0 / 2.0\n elif (criterion == 'varimax') or (criterion == 'covarimin'):\n gamma = 1.0\n elif (criterion == 'equamax'):\n gamma = p / 2.0\n elif (criterion == 'oblique'):\n gamma = custom_gamma if custom_gamma is not None else -0.1\n else:\n gamma = 0.0\n return gamma\n \ndef get_gamma_cf(criterion, p, k):\n if (criterion == 'quartimax'):\n kappa = 0.0\n elif (criterion == 'varimax'):\n kappa = 1.0 / p\n elif (criterion == 'equamax'):\n kappa = k / (2.0 * p)\n elif (criterion == 'parsimax'):\n kappa = (k - 1.0) / (p + k - 2.0)\n elif (criterion == 'parsimony'):\n kappa = 1\n else:\n kappa = 0.0\n return kappa\n \n\n\n# TODO: figure out which of these are redundant/only used for development, and remove them\ndef jac_approx(f, x, eps=1e-4, tol=None, d=1e-4, args=()):\n tol = np.finfo(float).eps**(1/3) if tol is None else tol\n h = np.abs(d * x) + eps * (np.abs(x) < tol)\n n = len(x)\n m = len(f(x, *args))\n u = np.zeros_like(h)\n J = np.zeros((n, m))\n for i in range(n):\n u[i] = h[i]\n J[i] = (f(x + u, *args) - f(x - u, *args)) / (2.0 * h[i])\n u[i] = 0.0\n return J\n\n\ndef oblique_constraints(lvec, tvec, p, q, gamma, vgq):\n L = invec(lvec, p, q)\n T = invec(tvec, q, q)\n _, Gq = vgq(L, gamma)\n p, q = L.shape\n I = np.eye(q)\n N = np.ones((q, q)) - I\n Phi = np.dot(T, T.T)\n J1 = L.T.dot(Gq).dot(np.linalg.inv(Phi)) * N\n #J2 = Phi * I\n J = J1 #+ J2 - I\n return vec(J)\n\n\ndef oblique_constraint_func(params, model):\n L, Phi, Psi = model.model_matrices_augmented(params)\n T = model.T\n _, Gq = model._vgq(L, model._gamma)\n p, q = L.shape\n I = np.eye(q)\n N = np.ones((q, q)) - I\n Phi = np.dot(T, T.T)\n J1 = L.T.dot(Gq).dot(np.linalg.inv(Phi)) * N\n return vec(J1)\n\n\ndef oblique_constraint_derivs(params, model):\n \"\"\"\n Derivatives of the Oblique Constraints \n \n Parameters\n ----------\n params: ndarray\n vector containing model parameters\n \n model: FactorAnalysis object\n The factor model on which rotation is being performed\n \n Returns\n -------\n \n D: ndarray \n Derivative of the oblique constraint matrix organized in blocks\n \n \n Oblique constraints can be expressed as\n \n \\Lambda^{T} \\frac{dQ}{d\\Lambda}\\Phi^{-1}\n \n where \\Lambda is the loadings matrix, Q is the rotation criterion,\n \\frac{dQ}{d\\Lambda} is the gradient of the rotation criterion wrt \\Lambda,\n and \\Phi^{-1} is the inverse of the implied factor covariance.\n \"\"\"\n L, Phi, Psi = model.model_matrices_augmented(params)\n V = np.linalg.inv(Phi)\n gamma = model._gamma\n p, q = L.shape\n Iq = np.eye(q)\n Kpq = kmat(p, q).A\n L2 = L**2\n A = np.eye(p) - np.ones((p, p)) * gamma / p\n B = np.ones((q, q)) - np.eye(q)\n G = L * A.dot(L2).dot(B)\n dgL = vdg(L)\n DL1A = np.kron(V.T.dot(G.T), Iq).dot(Kpq)\n DL2A = np.kron(V.T, L.T)\n DL2B = 2.0 * dgL.dot(np.kron(B.T, A)).dot(dgL) + vdg(A.dot(L2).dot(B))\n \n DL = DL1A + DL2A.dot(DL2B)\n DPhi = -np.kron(V.T, L.T.dot(G).dot(V))\n l_ind = vecl(np.arange(q*q).reshape(q, q, order='F'))\n \n D = np.concatenate([DL, DPhi[:, l_ind], np.zeros((DL.shape[0], p))], axis=1)\n return D\n\n\ndef approx_oblique_constraint_derivs(params, model):\n J = jac_approx(oblique_constraint_func, params, args=(model,))\n return J\n\n \ndef rotate(A, criterion, method='oblimin', rotation_type='oblique', T=None, \n tol=1e-8, alpha=1.0, n_iters=500, custom_gamma=None):\n '''\n 
Rotation of loadings matrix\n \n Parameters:\n A: Loadings Matrix\n method: Type of rotation\n T: Initial rotation matrix\n tol: Tolerance controlling convergence\n alpha: Parameter controlling step size taken in GPA algorithm\n n_iters: Maximum number of iterations before convergence\n custom_gamma: Coefficient used to customize non-standard oblique rotations\n \n Returns:\n L: Rotated loadings matrix\n T: Rotation matrix\n \n Methods are:\n quartimax\n biquartimax\n varimax\n equamax\n quartimin\n biquartimin\n covarimin\n oblique\n \n '''\n p, k = A.shape\n if method == 'oblimin':\n gamma, vgq = get_gamma_ob(criterion, p, custom_gamma), vgq_ob\n elif method == 'cf':\n gamma, vgq = get_gamma_cf(criterion, p, k), vgq_cf\n \n \n if rotation_type == 'orthogonal':\n T, G, Gq, opt_hist = rotate_ortho(A, vgq, T=T, alpha=alpha, gamma=gamma, \n tol=tol, n_iters=n_iters)\n L = np.dot(A, T)\n \n elif rotation_type == 'oblique':\n T, G, Gq, opt_hist = rotate_obli(A, vgq, T=T, alpha=alpha, gamma=gamma,\n tol=tol, n_iters=n_iters)\n L = np.dot(A, np.linalg.inv(T))\n\n return L, T, G, Gq, opt_hist, vgq, gamma\n\n\n\n\n\n\n\n\n\n\n\n\n\n" ]
[ [ "numpy.zeros_like", "numpy.dot", "numpy.linalg.norm", "numpy.linalg.multi_dot", "numpy.zeros", "numpy.sum", "numpy.ones", "numpy.linalg.pinv", "numpy.eye", "numpy.finfo", "numpy.arange", "numpy.abs", "numpy.linalg.svd", "numpy.linalg.inv", "numpy.kron" ] ]
USE-sum/usesum
[ "eaf6dae0c451459551f728c0a8866777c20ed707" ]
[ "onmt/trainer.py" ]
[ "\"\"\"\n This is the loadable seq2seq trainer library that is\n in charge of training details, loss compute, and statistics.\n See train.py for a use case of this library.\n\n Note: To make this a general library, we implement *only*\n mechanism things here(i.e. what to do), and leave the strategy\n things to users(i.e. how to do it). Also see train.py(one of the\n users of this library) for the strategy things we do.\n\"\"\"\n\nfrom __future__ import division\n\nimport onmt.inputters as inputters\nimport onmt.utils\nimport torch\nimport random\n\nfrom onmt.utils.logging import logger\nfrom torch import autograd\n\n\ndef build_trainer(opt, device_id, model, fields,\n optim, data_type, model_saver=None):\n \"\"\"\n Simplify `Trainer` creation based on user `opt`s*\n\n Args:\n opt (:obj:`Namespace`): user options (usually from argument parsing)\n model (:obj:`onmt.models.NMTModel`): the model to train\n fields (dict): dict of fields\n optim (:obj:`onmt.utils.Optimizer`): optimizer used during training\n data_type (str): string describing the type of data\n e.g. \"text\", \"img\", \"audio\"\n model_saver(:obj:`onmt.models.ModelSaverBase`): the utility object\n used to save the model\n \"\"\"\n train_loss = onmt.utils.loss.build_loss_compute(\n model, fields[\"tgt\"].vocab, opt)\n valid_loss = onmt.utils.loss.build_loss_compute(\n model, fields[\"tgt\"].vocab, opt, train=False)\n\n trunc_size = opt.truncated_decoder # Badly named...\n shard_size = opt.max_generator_batches\n norm_method = opt.normalization\n grad_accum_count = opt.accum_count\n n_gpu = opt.world_size\n if device_id >= 0:\n gpu_rank = opt.gpu_ranks[device_id]\n else:\n gpu_rank = 0\n n_gpu = 0\n gpu_verbose_level = opt.gpu_verbose_level\n\n report_manager = onmt.utils.build_report_manager(opt)\n trainer = onmt.Trainer(model, train_loss, valid_loss, optim, trunc_size,\n shard_size, data_type, norm_method,\n grad_accum_count, n_gpu, gpu_rank,\n gpu_verbose_level, report_manager,\n model_saver=model_saver)\n return trainer\n\n\nclass Trainer(object):\n \"\"\"\n Class that controls the training process.\n\n Args:\n model(:py:class:`onmt.models.model.NMTModel`): translation model\n to train\n train_loss(:obj:`onmt.utils.loss.LossComputeBase`):\n training loss computation\n valid_loss(:obj:`onmt.utils.loss.LossComputeBase`):\n training loss computation\n optim(:obj:`onmt.utils.optimizers.Optimizer`):\n the optimizer responsible for update\n trunc_size(int): length of truncated back propagation through time\n shard_size(int): compute loss in shards of this size for efficiency\n data_type(string): type of the source input: [text|img|audio]\n norm_method(string): normalization methods: [sents|tokens]\n grad_accum_count(int): accumulate gradients this many times.\n report_manager(:obj:`onmt.utils.ReportMgrBase`):\n the object that creates reports, or None\n model_saver(:obj:`onmt.models.ModelSaverBase`): the saver is\n used to save a checkpoint.\n Thus nothing will be saved if this parameter is None\n \"\"\"\n\n def __init__(self, model, train_loss, valid_loss, optim,\n trunc_size=0, shard_size=32, data_type='text',\n norm_method=\"sents\", grad_accum_count=1, n_gpu=1, gpu_rank=1,\n gpu_verbose_level=0, report_manager=None, model_saver=None):\n # Basic attributes.\n self.model = model\n self.train_loss = train_loss\n self.valid_loss = valid_loss\n self.optim = optim\n self.trunc_size = trunc_size\n self.shard_size = shard_size\n self.data_type = data_type\n self.norm_method = norm_method\n self.grad_accum_count = grad_accum_count\n 
self.n_gpu = n_gpu\n self.gpu_rank = gpu_rank\n self.gpu_verbose_level = gpu_verbose_level\n self.report_manager = report_manager\n self.model_saver = model_saver\n\n assert grad_accum_count > 0\n if grad_accum_count > 1:\n assert(self.trunc_size == 0), \\\n \"\"\"To enable accumulated gradients,\n you must disable target sequence truncating.\"\"\"\n\n # Set model in training mode.\n self.model.train()\n\n def train(self, train_iter_fct, valid_iter_fct, train_steps, valid_steps):\n \"\"\"\n The main training loop,\n iterating over training data (i.e. `train_iter_fct`)\n and running validation (i.e. iterating over `valid_iter_fct`).\n\n Args:\n train_iter_fct(function): a function that returns the train\n iterator. e.g. something like\n train_iter_fct = lambda: generator(*args, **kwargs)\n valid_iter_fct(function): same as train_iter_fct, for valid data\n train_steps(int):\n valid_steps(int):\n save_checkpoint_steps(int):\n\n Return:\n None\n \"\"\"\n logger.info('Start training...')\n torch.autograd.set_detect_anomaly(False)\n step = self.optim._step + 1\n true_batchs = []\n accum = 0\n normalization = 0\n train_iter = train_iter_fct()\n\n total_stats = onmt.utils.Statistics()\n report_stats = onmt.utils.Statistics()\n self._start_report_manager(start_time=total_stats.start_time)\n while step <= train_steps:\n\n reduce_counter = 0\n for i, batch in enumerate(train_iter):\n if self.n_gpu == 0 or (i % self.n_gpu == self.gpu_rank):\n if self.gpu_verbose_level > 1:\n logger.info(\"GpuRank %d: index: %d accum: %d\"\n % (self.gpu_rank, i, accum))\n\n true_batchs.append(batch)\n\n if self.norm_method == \"tokens\":\n num_tokens = batch.tgt[1:].ne(\n self.train_loss.padding_idx).sum()\n normalization += num_tokens.item()\n else:\n normalization += batch.batch_size\n accum += 1\n if accum == self.grad_accum_count:\n reduce_counter += 1\n if self.gpu_verbose_level > 0:\n logger.info(\"GpuRank %d: reduce_counter: %d \\\n n_minibatch %d\"\n % (self.gpu_rank, reduce_counter,\n len(true_batchs)))\n if self.n_gpu > 1:\n normalization = sum(onmt.utils.distributed\n .all_gather_list\n (normalization))\n self._gradient_accumulation(\n true_batchs, normalization, total_stats,\n report_stats)\n\n report_stats = self._maybe_report_training(\n step, train_steps,\n self.optim.learning_rate,\n report_stats)\n\n true_batchs = []\n accum = 0\n normalization = 0\n if (step % valid_steps == 0):\n if self.gpu_verbose_level > 0:\n logger.info('GpuRank %d: validate step %d'\n % (self.gpu_rank, step))\n valid_iter = valid_iter_fct()\n valid_stats = self.validate(valid_iter)\n if self.gpu_verbose_level > 0:\n logger.info('GpuRank %d: gather valid stat \\\n step %d' % (self.gpu_rank, step))\n valid_stats = self._maybe_gather_stats(valid_stats)\n if self.gpu_verbose_level > 0:\n logger.info('GpuRank %d: report stat step %d'\n % (self.gpu_rank, step))\n self._report_step(self.optim.learning_rate,\n step, valid_stats=valid_stats)\n\n if self.gpu_rank == 0:\n self._maybe_save(step)\n step += 1\n if step > train_steps:\n break\n if self.gpu_verbose_level > 0:\n logger.info('GpuRank %d: we completed an epoch \\\n at step %d' % (self.gpu_rank, step))\n train_iter = train_iter_fct()\n\n return total_stats\n\n def assemble_src_representation(self, src):\n src_representation = None\n for b_id in range(src.size()[0]):\n a = torch.squeeze(src[b_id], 1)\n sr = torch.cumsum(a, dim=0)[src.size()[1] - 1]\n if src_representation is None:\n src_representation = sr.unsqueeze(0)\n else:\n src_representation = torch.cat((src_representation, 
sr.unsqueeze(0)), 0)\n return src_representation\n\n def validate(self, valid_iter):\n \"\"\" Validate model.\n valid_iter: validate data iterator\n Returns:\n :obj:`nmt.Statistics`: validation loss statistics\n \"\"\"\n # Set model in validating mode.\n self.model.eval()\n\n stats = onmt.utils.Statistics()\n\n for batch in valid_iter:\n src = inputters.make_features(batch, 'src', self.data_type)\n if self.data_type == 'text' and not self.model.decoder.decoder_type.startswith('vecdif'):\n _, src_lengths = batch.src\n elif self.data_type == 'audio':\n src_lengths = batch.src_lengths\n else:\n src_lengths = src.size()\n\n tgt = inputters.make_features(batch, 'tgt', self.data_type)\n\n # F-prop through the model.\n if self.model.decoder.decoder_type.startswith('vecdif'):\n if self.data_type == 'text':\n src = torch.squeeze(src, 2)\n tgt = torch.squeeze(tgt, 2)\n # if self.model.decoder.decoder_type==\"vecdif_multi\":\n all_outputs = []\n if self.n_gpu > 0:\n covered_target = torch.zeros((src_lengths[0], 512), dtype=torch.float).cuda()\n else:\n covered_target = torch.zeros((src_lengths[0], 512), dtype=torch.float)\n src_representation = self.assemble_src_representation(src)\n for target_id in range(0, tgt.size()[1]):\n outputs, scores, covered_target = self.model(-1, src, None,covered_target,src_representation, target_id)\n all_outputs.append(outputs.detach())\n else:\n src_representation = self.assemble_src_representation(src)\n outputs, scores, _= self.model(-1, src, None, source_vector=src_representation )\n\n attns = None\n else:\n outputs, attns, _ = self.model(src, tgt, src_lengths)\n\n # Compute loss.\n if self.model.decoder.decoder_type == \"vecdif_multi\":\n batch_stats = self.valid_loss.monolithic_compute_loss_multivec(\n batch, all_outputs)\n else:\n batch_stats = self.valid_loss.monolithic_compute_loss(\n batch, outputs, attns)\n\n # Update statistics.\n stats.update(batch_stats)\n\n # Set model back to training mode.\n self.model.train()\n\n return stats\n\n # Iteration through vecdif training, both for single- and multi-vector assembly\n def vec_dif_iter(self, j, batch, src, trunc_size, normalization, shuffle_src_order, src_representation, covered_target):\n accum_grad = 1 # 0 = grad at the end, >0 - grad every accum_grad source steps\n src_lengths = src.size()\n\n if self.data_type == 'text': # cnndm case - text is the input, converted on the fly into USE vectors\n src = torch.squeeze(src, 2)\n shuffle_src_order = False\n accum_grad = 1\n\n prev_prediction = None\n last_stats = None\n to_iter = [x for x in range(src_lengths[1])]\n if shuffle_src_order:\n random.shuffle(to_iter)\n ct= 0 # in case of random shuffling\n if len(to_iter)>1:\n for i in to_iter:\n ct += 1\n to_compare = src[:,i,:]\n outputs = self.model(i, src, prev_prediction, covered_prediction=covered_target,\n source_vector=src_representation, target_id=j)\n if covered_target is not None:\n covered_target += outputs.detach()\n\n # 3. Compute loss in shards for memory efficiency.\n batch_stats = self.train_loss.sharded_compute_loss(\n batch, outputs, None, j,\n trunc_size, src_lengths[0], normalization, to_compare=to_compare) # , (i == range(src_lengths[1])-1)\n\n if last_stats is None:\n last_stats = batch_stats\n else:\n last_stats.update(batch_stats)\n\n # 4. 
Update the parameters and statistics.\n if self.grad_accum_count == 1 and (accum_grad>0 and ct % accum_grad ==0):\n # Multi GPU gradient gather\n if self.n_gpu > 1:\n grads = [p.grad.data for p in self.model.parameters()\n if p.requires_grad\n and p.grad is not None]\n onmt.utils.distributed.all_reduce_and_rescale_tensors(\n grads, float(1))\n self.optim.step(0)\n prev_prediction = outputs.detach()\n\n if accum_grad==0 or ct % accum_grad!=0:\n if self.n_gpu > 1:\n grads = [p.grad.data for p in self.model.parameters()\n if p.requires_grad\n and p.grad is not None]\n onmt.utils.distributed.all_reduce_and_rescale_tensors(\n grads, float(1))\n self.optim.step(0)\n return last_stats, covered_target\n\n def _gradient_accumulation(self, true_batchs, normalization, total_stats,\n report_stats):\n if self.grad_accum_count > 1:\n self.model.zero_grad()\n use_vecdif_mode = False\n for batch in true_batchs:\n target_size = batch.tgt.size(0)\n\n # Truncated BPTT: reminder not compatible with accum > 1\n if self.trunc_size:\n trunc_size = self.trunc_size\n else:\n trunc_size = target_size\n\n dec_state = None\n src = inputters.make_features(batch, 'src', self.data_type)\n if self.data_type == 'text':\n if not self.model.decoder.decoder_type.startswith('vecdif') :\n _, src_lengths = batch.src\n report_stats.n_src_words += src_lengths.sum().item()\n else:\n use_vecdif_mode = True\n target_size = batch.tgt.size()[1]\n trunc_size=1\n if src.size()[1] < 3 :\n continue\n elif self.data_type == 'audio':\n src_lengths = batch.src_lengths\n elif self.data_type == 'vector':\n target_size=1\n src_lengths = batch.src.size()\n use_vecdif_mode = True\n\n else:\n src_lengths = None\n\n tgt_outer = inputters.make_features(batch, 'tgt', self.data_type)\n src_representation = self.assemble_src_representation(src)\n covered_target = None\n if use_vecdif_mode and self.data_type == 'text': # cnndm case - text is the input, converted on the fly into USE vectors\n if self.n_gpu > 0:\n covered_target = torch.zeros((src.size()[0], 512), dtype=torch.float).cuda()\n else:\n covered_target = torch.zeros((src.size()[0], 512), dtype=torch.float)\n #print(\"tgt outer len \"+str(len(tgt_outer) )+\" use_vecdif_mode= \"+str(use_vecdif_mode)+\" batch.tgt.size()= \"+str(batch.tgt.size()) )\n for j in range(0, target_size, trunc_size):\n # 1. Create truncated target.\n if not use_vecdif_mode:\n tgt = tgt_outer[j: j + trunc_size]\n\n # 2. F-prop all but generator.\n if self.grad_accum_count == 1:\n self.model.zero_grad()\n if use_vecdif_mode:\n last_stats, covered_target = self.vec_dif_iter(j, batch, src, trunc_size, normalization, False, src_representation, covered_target)\n self.train_loss.prev_distance = None\n\n # example (set of inputs -> target), not per single vector pairs\n if last_stats is not None:\n total_stats.update(last_stats)\n report_stats.update(last_stats)\n else:\n outputs, attns, dec_state = \\\n self.model(src, tgt, src_lengths, dec_state)\n\n # 3. Compute loss in shards for memory efficiency.\n batch_stats = self.train_loss.sharded_compute_loss(\n batch, outputs, attns, j,\n trunc_size, self.shard_size, normalization)\n total_stats.update(batch_stats)\n report_stats.update(batch_stats)\n\n # 4. 
Update the parameters and statistics.\n if self.grad_accum_count == 1:\n # Multi GPU gradient gather\n if self.n_gpu > 1:\n grads = [p.grad.data for p in self.model.parameters()\n if p.requires_grad\n and p.grad is not None]\n onmt.utils.distributed.all_reduce_and_rescale_tensors(\n grads, float(1))\n self.optim.step()\n\n # If truncated, don't backprop fully.\n if dec_state is not None:\n dec_state.detach()\n if use_vecdif_mode:\n self.optim.increase_step(1) # the steps should be counted per training\n # in case of multi step gradient accumulation,\n # update only after accum batches\n if self.grad_accum_count > 1:\n if self.n_gpu > 1:\n grads = [p.grad.data for p in self.model.parameters()\n if p.requires_grad\n and p.grad is not None]\n onmt.utils.distributed.all_reduce_and_rescale_tensors(\n grads, float(1))\n self.optim.step()\n\n def _start_report_manager(self, start_time=None):\n \"\"\"\n Simple function to start report manager (if any)\n \"\"\"\n if self.report_manager is not None:\n if start_time is None:\n self.report_manager.start()\n else:\n self.report_manager.start_time = start_time\n\n def _maybe_gather_stats(self, stat):\n \"\"\"\n Gather statistics in multi-processes cases\n\n Args:\n stat(:obj:onmt.utils.Statistics): a Statistics object to gather\n or None (it returns None in this case)\n\n Returns:\n stat: the updated (or unchanged) stat object\n \"\"\"\n if stat is not None and self.n_gpu > 1:\n return onmt.utils.Statistics.all_gather_stats(stat)\n return stat\n\n def _maybe_report_training(self, step, num_steps, learning_rate,\n report_stats):\n \"\"\"\n Simple function to report training stats (if report_manager is set)\n see `onmt.utils.ReportManagerBase.report_training` for doc\n \"\"\"\n if self.report_manager is not None:\n return self.report_manager.report_training(\n step, num_steps, learning_rate, report_stats,\n multigpu=self.n_gpu > 1)\n\n def _report_step(self, learning_rate, step, train_stats=None,\n valid_stats=None):\n \"\"\"\n Simple function to report stats (if report_manager is set)\n see `onmt.utils.ReportManagerBase.report_step` for doc\n \"\"\"\n if self.report_manager is not None:\n return self.report_manager.report_step(\n learning_rate, step, train_stats=train_stats,\n valid_stats=valid_stats)\n\n def _maybe_save(self, step):\n \"\"\"\n Save the model if a model saver is set\n \"\"\"\n if self.model_saver is not None:\n self.model_saver.maybe_save(step)\n" ]
[ [ "torch.zeros", "torch.squeeze", "torch.autograd.set_detect_anomaly", "torch.cumsum" ] ]
Roemer/CrazyServ
[ "a61769008d6490c16c4c58d25638315698aabdcf" ]
[ "crazyserv/packagegenerator.py" ]
[ "import random\nimport numpy as np\nfrom .arena import Arena\nfrom .deliverylogger import DeliveryLogger\nfrom .drone import Drone\n\n\nclass PackageGenerator:\n def __init__(self):\n self.coordinate_pool = self.define_coordinate_pool()\n self.pool_size = self.coordinate_pool.shape[0]\n self.package_weights = [0.5, 0.75, 1]\n self.rng = {}\n self.delivery_loggers = {}\n\n def define_coordinate_pool(self):\n arena = Arena(0)\n z = arena.min_z\n return np.array([\n [2.6, 0.6, z],\n [2.4, 3.4, z],\n [0.6, 2.2, z],\n [1.4, 3.2, z],\n [1., 1.6, z],\n [3.6, 0.6, z],\n [3.2, 3.2, z],\n [3.4, 1.4, z]\n ])\n\n def initialize_swarm(self, swarm_id, seed):\n self.rng[swarm_id] = random.Random()\n self.rng[swarm_id].seed(seed)\n self.delivery_loggers[swarm_id] = DeliveryLogger()\n return True\n\n def generate_number(self, swarm_id, lower_limit, upper_limit):\n return self.rng[swarm_id].randint(lower_limit, upper_limit)\n\n def generate_hash(self, swarm_id):\n return self.rng[swarm_id].getrandbits(128)\n\n def get_package(self, swarm_id):\n if self.delivery_loggers[swarm_id].log_is_full(swarm_id):\n return None\n\n rand = self.generate_number(swarm_id, 0, self.pool_size - 1)\n weightIndex = self.generate_number(swarm_id, 0, len(self.package_weights)-1)\n weight = self.package_weights[weightIndex]\n id = self.generate_hash(swarm_id)\n\n package = {'id': str(id), 'coordinates': self.coordinate_pool[rand].tolist(), 'weight': weight, 'drone': None, 'picked': False}\n\n self.delivery_loggers[swarm_id].add_package(swarm_id, package)\n return package\n\n def pickup(self, swarm_id, package_id, drone: Drone):\n success = self.delivery_loggers[swarm_id].pickup(swarm_id, package_id, drone)\n return success\n\n def deliver(self, swarm_id, package_id, drone: Drone):\n success = self.delivery_loggers[swarm_id].deliver(swarm_id, package_id, drone)\n return success\n\n def print_deliveries(self, swarm_id):\n success = self.delivery_loggers[swarm_id].print_deliveries()\n return success\n" ]
[ [ "numpy.array" ] ]
RomainGratier/Black-box_Optimization_via_Deep_Generative-Exploratory_Networks
[ "2cce334b473df709eb67d2f351a96cde1addc5a6" ]
[ "src/forward/cnn_models.py" ]
[ "import torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\nfrom src.forward.layers import FlattenLayer\n\ndef conv_init(m):\n classname = m.__class__.__name__\n if classname.find('Conv') != -1:\n #nn.init.xavier_uniform(m.weight, gain=np.sqrt(2))\n nn.init.normal_(m.weight, mean=0, std=1)\n nn.init.constant(m.bias, 0)\n\nclass AlexNet(nn.Module):\n\n def __init__(self, num_classes, inputs=3):\n super(AlexNet, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(inputs, 64, kernel_size=11, stride=4, padding=5),\n nn.ReLU(inplace=True),\n nn.Dropout(p=0.5),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(64, 192, kernel_size=5, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(192, 384, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Dropout(p=0.5),\n nn.Conv2d(384, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(256, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Dropout(p=0.5),\n nn.MaxPool2d(kernel_size=2, stride=2),\n )\n self.classifier = nn.Linear(256, num_classes)\n\n def forward(self, x):\n x = self.features(x)\n x = x.view(x.size(0), -1)\n x = self.classifier(x)\n return x\n\n\nclass LeNet(nn.Module):\n def __init__(self, num_classes, inputs=3):\n super(LeNet, self).__init__()\n self.conv1 = nn.Conv2d(inputs, 6, 5)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16*5*5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, num_classes)\n\n def forward(self, x):\n out = F.relu(self.conv1(x))\n out = F.max_pool2d(out, 2)\n out = F.relu(self.conv2(out))\n out = F.max_pool2d(out, 2)\n out = out.view(out.size(0), -1)\n out = F.relu(self.fc1(out))\n out = F.relu(self.fc2(out))\n out = self.fc3(out)\n\n return(out)\n\n\nclass ThreeConvThreeFC(nn.Module):\n \"\"\"\n To train on CIFAR-10:\n https://arxiv.org/pdf/1207.0580.pdf\n \"\"\"\n def __init__(self, outputs, inputs):\n super(ThreeConvThreeFC, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(inputs, 32, 5, stride=1, padding=2),\n nn.Softplus(),\n nn.MaxPool2d(kernel_size=3, stride=2),\n nn.Conv2d(32, 64, 5, stride=1, padding=2),\n nn.Softplus(),\n nn.MaxPool2d(kernel_size=3, stride=2),\n nn.Conv2d(64, 128, 5, stride=1, padding=1),\n nn.Softplus(),\n nn.MaxPool2d(kernel_size=3, stride=2),\n )\n self.classifier = nn.Sequential(\n FlattenLayer(2 * 2 * 128),\n nn.Linear(2 * 2 * 128, 1000),\n nn.Softplus(),\n nn.Linear(1000, 1000),\n nn.Softplus(),\n nn.Linear(1000, outputs)\n )\n\n def forward(self, x):\n x = self.features(x)\n x = self.classifier(x)\n return x\n\nclass FC(nn.Module):\n def __init__(self, num_classes, inputs=3):\n super(FC, self).__init__()\n self.fc1 = nn.Linear(inputs * 32 * 32, 256)\n self.fc2 = nn.Linear(256, 256)\n self.fc3 = nn.Linear(256, num_classes)\n\n def forward(self, x):\n out = x.view(x.shape[0], -1)\n out = F.relu(self.fc1(out))\n out = F.relu(self.fc2(out))\n out = self.fc3(out)\n return(out)" ]
[ [ "torch.nn.Linear", "torch.nn.Dropout", "torch.nn.init.constant", "torch.nn.MaxPool2d", "torch.nn.ReLU", "torch.nn.Conv2d", "torch.nn.init.normal_", "torch.nn.Softplus", "torch.nn.functional.max_pool2d" ] ]
sandipan1/stable-baselines3
[ "5fe6d54f3bbdfade1e90ff5fc9b2506f3facdc37" ]
[ "stable_baselines3/a2c/a2c.py" ]
[ "from typing import Any, Dict, Optional, Type, Union\n\nimport torch as th\nfrom gym import spaces\nfrom torch.nn import functional as F\n\nfrom stable_baselines3.common.on_policy_algorithm import OnPolicyAlgorithm\nfrom stable_baselines3.common.policies import ActorCriticPolicy\nfrom stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule\nfrom stable_baselines3.common.utils import explained_variance\n\n\nclass A2C(OnPolicyAlgorithm):\n \"\"\"\n Advantage Actor Critic (A2C)\n\n Paper: https://arxiv.org/abs/1602.01783\n Code: This implementation borrows code from https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail and\n and Stable Baselines (https://github.com/hill-a/stable-baselines)\n\n Introduction to A2C: https://hackernoon.com/intuitive-rl-intro-to-advantage-actor-critic-a2c-4ff545978752\n\n :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)\n :param env: The environment to learn from (if registered in Gym, can be str)\n :param learning_rate: The learning rate, it can be a function\n of the current progress remaining (from 1 to 0)\n :param n_steps: The number of steps to run for each environment per update\n (i.e. batch size is n_steps * n_env where n_env is number of environment copies running in parallel)\n :param gamma: Discount factor\n :param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator\n Equivalent to classic advantage when set to 1.\n :param ent_coef: Entropy coefficient for the loss calculation\n :param vf_coef: Value function coefficient for the loss calculation\n :param max_grad_norm: The maximum value for the gradient clipping\n :param rms_prop_eps: RMSProp epsilon. It stabilizes square root computation in denominator\n of RMSProp update\n :param use_rms_prop: Whether to use RMSprop (default) or Adam as optimizer\n :param use_sde: Whether to use generalized State Dependent Exploration (gSDE)\n instead of action noise exploration (default: False)\n :param sde_sample_freq: Sample a new noise matrix every n steps when using gSDE\n Default: -1 (only sample at the beginning of the rollout)\n :param normalize_advantage: Whether to normalize or not the advantage\n :param tensorboard_log: the log location for tensorboard (if None, no logging)\n :param create_eval_env: Whether to create a second environment that will be\n used for evaluating the agent periodically. (Only available when passing string for the environment)\n :param policy_kwargs: additional arguments to be passed to the policy on creation\n :param verbose: the verbosity level: 0 no output, 1 info, 2 debug\n :param seed: Seed for the pseudo random generators\n :param device: Device (cpu, cuda, ...) 
on which the code should be run.\n Setting it to auto, the code will be run on the GPU if possible.\n :param _init_setup_model: Whether or not to build the network at the creation of the instance\n \"\"\"\n\n def __init__(\n self,\n policy: Union[str, Type[ActorCriticPolicy]],\n env: Union[GymEnv, str],\n learning_rate: Union[float, Schedule] = 7e-4,\n n_steps: int = 5,\n gamma: float = 0.99,\n gae_lambda: float = 1.0,\n ent_coef: float = 0.0,\n vf_coef: float = 0.5,\n max_grad_norm: float = 0.5,\n rms_prop_eps: float = 1e-5,\n use_rms_prop: bool = True,\n use_sde: bool = False,\n sde_sample_freq: int = -1,\n normalize_advantage: bool = False,\n tensorboard_log: Optional[str] = None,\n create_eval_env: bool = False,\n policy_kwargs: Optional[Dict[str, Any]] = None,\n verbose: int = 0,\n seed: Optional[int] = None,\n device: Union[th.device, str] = \"auto\",\n _init_setup_model: bool = True,\n ):\n\n super(A2C, self).__init__(\n policy,\n env,\n learning_rate=learning_rate,\n n_steps=n_steps,\n gamma=gamma,\n gae_lambda=gae_lambda,\n ent_coef=ent_coef,\n vf_coef=vf_coef,\n max_grad_norm=max_grad_norm,\n use_sde=use_sde,\n sde_sample_freq=sde_sample_freq,\n tensorboard_log=tensorboard_log,\n policy_kwargs=policy_kwargs,\n verbose=verbose,\n device=device,\n create_eval_env=create_eval_env,\n seed=seed,\n _init_setup_model=False,\n supported_action_spaces=(\n spaces.Box,\n spaces.Discrete,\n spaces.MultiDiscrete,\n spaces.MultiBinary,\n ),\n )\n\n self.normalize_advantage = normalize_advantage\n\n # Update optimizer inside the policy if we want to use RMSProp\n # (original implementation) rather than Adam\n if use_rms_prop and \"optimizer_class\" not in self.policy_kwargs:\n self.policy_kwargs[\"optimizer_class\"] = th.optim.RMSprop\n self.policy_kwargs[\"optimizer_kwargs\"] = dict(alpha=0.99, eps=rms_prop_eps, weight_decay=0)\n\n if _init_setup_model:\n self._setup_model()\n\n def train(self) -> None:\n \"\"\"\n Update policy using the currently gathered\n rollout buffer (one gradient step over whole data).\n \"\"\"\n # Update optimizer learning rate\n self._update_learning_rate(self.policy.optimizer)\n\n # This will only loop once (get all data in one go)\n for rollout_data in self.rollout_buffer.get(batch_size=None):\n\n actions = rollout_data.actions\n if isinstance(self.action_space, spaces.Discrete):\n # Convert discrete action from float to long\n actions = actions.long().flatten()\n\n # TODO: avoid second computation of everything because of the gradient\n values, log_prob, entropy = self.policy.evaluate_actions(rollout_data.observations, actions)\n values = values.flatten()\n\n # Normalize advantage (not present in the original implementation)\n advantages = rollout_data.advantages\n if self.normalize_advantage:\n advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)\n\n # Policy gradient loss\n policy_loss = -(advantages * log_prob).mean()\n\n # Value loss using the TD(gae_lambda) target\n value_loss = F.mse_loss(rollout_data.returns, values)\n\n # Entropy loss favor exploration\n if entropy is None:\n # Approximate entropy when no analytical form\n entropy_loss = -th.mean(-log_prob)\n else:\n entropy_loss = -th.mean(entropy)\n\n loss = policy_loss + self.ent_coef * entropy_loss + self.vf_coef * value_loss\n\n # Optimization step\n self.policy.optimizer.zero_grad()\n loss.backward()\n\n # Clip grad norm\n th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)\n self.policy.optimizer.step()\n\n explained_var = 
explained_variance(self.rollout_buffer.values.flatten(), self.rollout_buffer.returns.flatten())\n\n self._n_updates += 1\n self.logger.record(\"train/n_updates\", self._n_updates, exclude=\"tensorboard\")\n self.logger.record(\"train/explained_variance\", explained_var)\n self.logger.record(\"train/entropy_loss\", entropy_loss.item())\n self.logger.record(\"train/policy_loss\", policy_loss.item())\n self.logger.record(\"train/value_loss\", value_loss.item())\n if hasattr(self.policy, \"log_std\"):\n self.logger.record(\"train/std\", th.exp(self.policy.log_std).mean().item())\n\n def learn(\n self,\n total_timesteps: int,\n callback: MaybeCallback = None,\n log_interval: int = 100,\n eval_env: Optional[GymEnv] = None,\n eval_freq: int = -1,\n n_eval_episodes: int = 5,\n tb_log_name: str = \"A2C\",\n eval_log_path: Optional[str] = None,\n reset_num_timesteps: bool = True,\n ) -> \"A2C\":\n\n return super(A2C, self).learn(\n total_timesteps=total_timesteps,\n callback=callback,\n log_interval=log_interval,\n eval_env=eval_env,\n eval_freq=eval_freq,\n n_eval_episodes=n_eval_episodes,\n tb_log_name=tb_log_name,\n eval_log_path=eval_log_path,\n reset_num_timesteps=reset_num_timesteps,\n )\n" ]
[ [ "torch.nn.functional.mse_loss", "torch.exp", "torch.mean" ] ]
bendichter/hdmf
[ "237721cba907086b4cc52befe2ab616683ffa2c1" ]
[ "src/hdmf/build/map.py" ]
[ "from __future__ import absolute_import\nimport re\nimport numpy as np\nimport warnings\nfrom collections import OrderedDict\nfrom copy import copy\nfrom datetime import datetime\nfrom six import with_metaclass, raise_from, text_type, binary_type, integer_types\n\nfrom ..utils import docval, getargs, ExtenderMeta, get_docval, fmt_docval_args, call_docval_func\nfrom ..container import Container, Data, DataRegion\nfrom ..spec import Spec, AttributeSpec, DatasetSpec, GroupSpec, LinkSpec, NAME_WILDCARD, NamespaceCatalog, RefSpec,\\\n SpecReader\nfrom ..data_utils import DataIO, AbstractDataChunkIterator\nfrom ..spec.spec import BaseStorageSpec\nfrom .builders import DatasetBuilder, GroupBuilder, LinkBuilder, Builder, ReferenceBuilder, RegionBuilder, BaseBuilder\nfrom .warnings import OrphanContainerWarning, MissingRequiredWarning\n\n\nclass Proxy(object):\n \"\"\"\n A temporary object to represent a Container. This gets used when resolving the true location of a\n Container's parent.\n\n Proxy objects allow simple bookkeeping of all potential parents a Container may have.\n\n This object is used by providing all the necessary information for describing the object. This object\n gets passed around and candidates are accumulated. Upon calling resolve, all saved candidates are matched\n against the information (provided to the constructor). The candidate that has an exact match is returned.\n \"\"\"\n\n def __init__(self, manager, source, location, namespace, data_type):\n self.__source = source\n self.__location = location\n self.__namespace = namespace\n self.__data_type = data_type\n self.__manager = manager\n self.__candidates = list()\n\n @property\n def source(self):\n \"\"\"The source of the object e.g. file source\"\"\"\n return self.__source\n\n @property\n def location(self):\n \"\"\"The location of the object. 
This can be thought of as a unique path\"\"\"\n return self.__location\n\n @property\n def namespace(self):\n \"\"\"The namespace from which the data_type of this Proxy came from\"\"\"\n return self.__namespace\n\n @property\n def data_type(self):\n \"\"\"The data_type of Container that should match this Proxy\"\"\"\n return self.__data_type\n\n @docval({\"name\": \"object\", \"type\": (BaseBuilder, Container), \"doc\": \"the container or builder to get a proxy for\"})\n def matches(self, **kwargs):\n obj = getargs('object', kwargs)\n if not isinstance(obj, Proxy):\n obj = self.__manager.get_proxy(obj)\n return self == obj\n\n @docval({\"name\": \"container\", \"type\": Container, \"doc\": \"the Container to add as a candidate match\"})\n def add_candidate(self, **kwargs):\n container = getargs('container', kwargs)\n self.__candidates.append(container)\n\n def resolve(self, **kwargs):\n for candidate in self.__candidates:\n if self.matches(candidate):\n return candidate\n return None\n\n def __eq__(self, other):\n return self.data_type == other.data_type and \\\n self.location == other.location and \\\n self.namespace == other.namespace and \\\n self.source == other.source\n\n def __repr__(self):\n ret = dict()\n for key in ('source', 'location', 'namespace', 'data_type'):\n ret[key] = getattr(self, key, None)\n return str(ret)\n\n\nclass BuildManager(object):\n \"\"\"\n A class for managing builds of Containers\n \"\"\"\n\n def __init__(self, type_map):\n self.__builders = dict()\n self.__containers = dict()\n self.__type_map = type_map\n\n @property\n def namespace_catalog(self):\n return self.__type_map.namespace_catalog\n\n @property\n def type_map(self):\n return self.__type_map\n\n @docval({\"name\": \"object\", \"type\": (BaseBuilder, Container), \"doc\": \"the container or builder to get a proxy for\"},\n {\"name\": \"source\", \"type\": str,\n \"doc\": \"the source of container being built i.e. file path\", 'default': None})\n def get_proxy(self, **kwargs):\n obj = getargs('object', kwargs)\n if isinstance(obj, BaseBuilder):\n return self.__get_proxy_builder(obj)\n elif isinstance(obj, Container):\n return self.__get_proxy_container(obj)\n\n def __get_proxy_builder(self, builder):\n dt = self.__type_map.get_builder_dt(builder)\n ns = self.__type_map.get_builder_ns(builder)\n stack = list()\n tmp = builder\n while tmp is not None:\n stack.append(tmp.name)\n tmp = self.__get_parent_dt_builder(tmp)\n loc = \"/\".join(reversed(stack))\n return Proxy(self, builder.source, loc, ns, dt)\n\n def __get_proxy_container(self, container):\n ns, dt = self.__type_map.get_container_ns_dt(container)\n stack = list()\n tmp = container\n while tmp is not None:\n if isinstance(tmp, Proxy):\n stack.append(tmp.location)\n break\n else:\n stack.append(tmp.name)\n tmp = tmp.parent\n loc = \"/\".join(reversed(stack))\n return Proxy(self, container.container_source, loc, ns, dt)\n\n @docval({\"name\": \"container\", \"type\": Container, \"doc\": \"the container to convert to a Builder\"},\n {\"name\": \"source\", \"type\": str,\n \"doc\": \"the source of container being built i.e. 
file path\", 'default': None})\n def build(self, **kwargs):\n \"\"\" Build the GroupBuilder for the given Container\"\"\"\n container = getargs('container', kwargs)\n container_id = self.__conthash__(container)\n result = self.__builders.get(container_id)\n source = getargs('source', kwargs)\n if result is None:\n if container.container_source is None:\n container.container_source = source\n else:\n if container.container_source != source:\n raise ValueError(\"Can't change container_source once set\")\n result = self.__type_map.build(container, self, source=source)\n self.prebuilt(container, result)\n elif container.modified:\n if isinstance(result, GroupBuilder):\n # TODO: if Datasets attributes are allowed to be modified, we need to\n # figure out how to handle that starting here.\n result = self.__type_map.build(container, self, builder=result, source=source)\n return result\n\n @docval({\"name\": \"container\", \"type\": Container, \"doc\": \"the Container to save as prebuilt\"},\n {'name': 'builder', 'type': (DatasetBuilder, GroupBuilder),\n 'doc': 'the Builder representation of the given container'})\n def prebuilt(self, **kwargs):\n ''' Save the Builder for a given Container for future use '''\n container, builder = getargs('container', 'builder', kwargs)\n container_id = self.__conthash__(container)\n self.__builders[container_id] = builder\n builder_id = self.__bldrhash__(builder)\n self.__containers[builder_id] = container\n\n def __conthash__(self, obj):\n return id(obj)\n\n def __bldrhash__(self, obj):\n return id(obj)\n\n @docval({'name': 'builder', 'type': (DatasetBuilder, GroupBuilder),\n 'doc': 'the builder to construct the Container from'})\n def construct(self, **kwargs):\n \"\"\" Construct the Container represented by the given builder \"\"\"\n builder = getargs('builder', kwargs)\n if isinstance(builder, LinkBuilder):\n builder = builder.target\n builder_id = self.__bldrhash__(builder)\n result = self.__containers.get(builder_id)\n if result is None:\n result = self.__type_map.construct(builder, self)\n parent_builder = self.__get_parent_dt_builder(builder)\n if parent_builder is not None:\n result.parent = self.__get_proxy_builder(parent_builder)\n else:\n # we are at the top of the hierarchy,\n # so it must be time to resolve parents\n self.__resolve_parents(result)\n self.prebuilt(result, builder)\n result.set_modified(False)\n return result\n\n def __resolve_parents(self, container):\n stack = [container]\n while len(stack) > 0:\n tmp = stack.pop()\n if isinstance(tmp.parent, Proxy):\n tmp.parent = tmp.parent.resolve()\n for child in tmp.children:\n stack.append(child)\n\n def __get_parent_dt_builder(self, builder):\n '''\n Get the next builder above the given builder\n that has a data_type\n '''\n tmp = builder.parent\n ret = None\n while tmp is not None:\n ret = tmp\n dt = self.__type_map.get_builder_dt(tmp)\n if dt is not None:\n break\n tmp = tmp.parent\n return ret\n\n @docval({'name': 'builder', 'type': Builder, 'doc': 'the Builder to get the class object for'})\n def get_cls(self, **kwargs):\n ''' Get the class object for the given Builder '''\n builder = getargs('builder', kwargs)\n return self.__type_map.get_cls(builder)\n\n @docval({\"name\": \"container\", \"type\": Container, \"doc\": \"the container to convert to a Builder\"},\n returns='The name a Builder should be given when building this container', rtype=str)\n def get_builder_name(self, **kwargs):\n ''' Get the name a Builder should be given '''\n container = getargs('container', kwargs)\n return 
self.__type_map.get_builder_name(container)\n\n @docval({'name': 'spec', 'type': (DatasetSpec, GroupSpec), 'doc': 'the parent spec to search'},\n {'name': 'builder', 'type': (DatasetBuilder, GroupBuilder, LinkBuilder),\n 'doc': 'the builder to get the sub-specification for'})\n def get_subspec(self, **kwargs):\n '''\n Get the specification from this spec that corresponds to the given builder\n '''\n spec, builder = getargs('spec', 'builder', kwargs)\n return self.__type_map.get_subspec(spec, builder)\n\n @docval({'name': 'builder', 'type': (DatasetBuilder, GroupBuilder, LinkBuilder),\n 'doc': 'the builder to get the sub-specification for'})\n def get_builder_ns(self, **kwargs):\n '''\n Get the namespace of a builder\n '''\n builder = getargs('builder', kwargs)\n return self.__type_map.get_builder_ns(builder)\n\n @docval({'name': 'builder', 'type': (DatasetBuilder, GroupBuilder, LinkBuilder),\n 'doc': 'the builder to get the data_type for'})\n def get_builder_dt(self, **kwargs):\n '''\n Get the data_type of a builder\n '''\n builder = getargs('builder', kwargs)\n return self.__type_map.get_builder_dt(builder)\n\n\n_const_arg = '__constructor_arg'\n\n\n@docval({'name': 'name', 'type': str, 'doc': 'the name of the constructor argument'},\n is_method=False)\ndef _constructor_arg(**kwargs):\n '''Decorator to override the default mapping scheme for a given constructor argument.\n\n Decorate ObjectMapper methods with this function when extending ObjectMapper to override the default\n scheme for mapping between Container and Builder objects. The decorated method should accept as its\n first argument the Builder object that is being mapped. The method should return the value to be passed\n to the target Container class constructor argument given by *name*.\n '''\n name = getargs('name', kwargs)\n\n def _dec(func):\n setattr(func, _const_arg, name)\n return func\n return _dec\n\n\n_obj_attr = '__object_attr'\n\n\n@docval({'name': 'name', 'type': str, 'doc': 'the name of the constructor argument'},\n is_method=False)\ndef _object_attr(**kwargs):\n '''Decorator to override the default mapping scheme for a given object attribute.\n\n Decorate ObjectMapper methods with this function when extending ObjectMapper to override the default\n scheme for mapping between Container and Builder objects. The decorated method should accept as its\n first argument the Container object that is being mapped. 
The method should return the child Builder\n object (or scalar if the object attribute corresponds to an AttributeSpec) that represents the\n attribute given by *name*.\n '''\n name = getargs('name', kwargs)\n\n def _dec(func):\n setattr(func, _obj_attr, name)\n return func\n return _dec\n\n\ndef _unicode(s):\n \"\"\"\n A helper function for converting to Unicode\n \"\"\"\n if isinstance(s, text_type):\n return s\n elif isinstance(s, binary_type):\n return s.decode('utf-8')\n else:\n raise ValueError(\"Expected unicode or ascii string, got %s\" % type(s))\n\n\ndef _ascii(s):\n \"\"\"\n A helper function for converting to ASCII\n \"\"\"\n if isinstance(s, text_type):\n return s.encode('ascii', 'backslashreplace')\n elif isinstance(s, binary_type):\n return s\n else:\n raise ValueError(\"Expected unicode or ascii string, got %s\" % type(s))\n\n\nclass ObjectMapper(with_metaclass(ExtenderMeta, object)):\n '''A class for mapping between Spec objects and Container attributes\n\n '''\n\n __dtypes = {\n \"float\": np.float32,\n \"float32\": np.float32,\n \"double\": np.float64,\n \"float64\": np.float64,\n \"long\": np.int64,\n \"int64\": np.int64,\n \"uint64\": np.uint64,\n \"int\": np.int32,\n \"int32\": np.int32,\n \"int16\": np.int16,\n \"int8\": np.int8,\n \"bool\": np.bool_,\n \"text\": _unicode,\n \"text\": _unicode,\n \"utf\": _unicode,\n \"utf8\": _unicode,\n \"utf-8\": _unicode,\n \"ascii\": _ascii,\n \"str\": _ascii,\n \"isodatetime\": _ascii,\n \"uint32\": np.uint32,\n \"uint16\": np.uint16,\n \"uint8\": np.uint8,\n }\n\n @classmethod\n def __resolve_dtype(cls, given, specified):\n \"\"\"\n Determine the dtype to use from the dtype of the given value and the specified dtype.\n This amounts to determining the greater precision of the two arguments, but also\n checks to make sure the same base dtype is being used.\n \"\"\"\n g = np.dtype(given)\n s = np.dtype(specified)\n if g.itemsize <= s.itemsize:\n return s.type\n else:\n if g.name[:3] != s.name[:3]: # different types\n if s.itemsize < 8:\n msg = \"expected %s, received %s - must supply %s or higher precision\" % (s.name, g.name, s.name)\n else:\n msg = \"expected %s, received %s - must supply %s\" % (s.name, g.name, s.name)\n raise ValueError(msg)\n else:\n return g.type\n\n @classmethod\n def convert_dtype(cls, spec, value):\n \"\"\"\n Convert values to the specified dtype. 
For example, if a literal int\n is passed in to a field that is specified as a unsigned integer, this function\n will convert the Python int to a numpy unsigned int.\n\n :return: The function returns a tuple consisting of 1) the value, and 2) the data type.\n The value is returned as the function may convert the input value to comply\n with the dtype specified in the schema.\n \"\"\"\n if value is None:\n dt = spec.dtype\n if isinstance(dt, RefSpec):\n dt = dt.reftype\n return None, dt\n if isinstance(value, DataIO):\n return value, cls.convert_dtype(spec, value.data)[1]\n if spec.dtype is None:\n return value, None\n if spec.dtype == 'numeric':\n return value, None\n if spec.dtype is not None and spec.dtype not in cls.__dtypes:\n msg = \"unrecognized dtype: %s -- cannot convert value\" % spec.dtype\n raise ValueError(msg)\n ret = None\n ret_dtype = None\n spec_dtype = cls.__dtypes[spec.dtype]\n if isinstance(value, np.ndarray):\n if spec_dtype is _unicode:\n ret = value.astype('U')\n ret_dtype = \"utf8\"\n elif spec_dtype is _ascii:\n ret = value.astype('S')\n ret_dtype = \"ascii\"\n else:\n dtype_func = cls.__resolve_dtype(value.dtype, spec_dtype)\n ret = value.astype(dtype_func)\n ret_dtype = ret.dtype.type\n elif isinstance(value, (tuple, list)):\n ret = list()\n for elem in value:\n tmp, tmp_dtype = cls.convert_dtype(spec, elem)\n ret.append(tmp)\n ret = type(value)(ret)\n ret_dtype = tmp_dtype\n else:\n if spec_dtype in (_unicode, _ascii):\n ret_dtype = 'ascii'\n if spec_dtype == _unicode:\n ret_dtype = 'utf8'\n ret = spec_dtype(value)\n else:\n dtype_func = cls.__resolve_dtype(type(value), spec_dtype)\n ret = dtype_func(value)\n ret_dtype = type(ret)\n return ret, ret_dtype\n\n _const_arg = '__constructor_arg'\n\n @staticmethod\n @docval({'name': 'name', 'type': str, 'doc': 'the name of the constructor argument'},\n is_method=False)\n def constructor_arg(**kwargs):\n '''Decorator to override the default mapping scheme for a given constructor argument.\n\n Decorate ObjectMapper methods with this function when extending ObjectMapper to override the default\n scheme for mapping between Container and Builder objects. The decorated method should accept as its\n first argument the Builder object that is being mapped. The method should return the value to be passed\n to the target Container class constructor argument given by *name*.\n '''\n name = getargs('name', kwargs)\n return _constructor_arg(name)\n\n _obj_attr = '__object_attr'\n\n @staticmethod\n @docval({'name': 'name', 'type': str, 'doc': 'the name of the constructor argument'},\n is_method=False)\n def object_attr(**kwargs):\n '''Decorator to override the default mapping scheme for a given object attribute.\n\n Decorate ObjectMapper methods with this function when extending ObjectMapper to override the default\n scheme for mapping between Container and Builder objects. The decorated method should accept as its\n first argument the Container object that is being mapped. 
The method should return the child Builder\n object (or scalar if the object attribute corresponds to an AttributeSpec) that represents the\n attribute given by *name*.\n '''\n name = getargs('name', kwargs)\n return _object_attr(name)\n\n @staticmethod\n def __is_attr(attr_val):\n return hasattr(attr_val, _obj_attr)\n\n @staticmethod\n def __get_obj_attr(attr_val):\n return getattr(attr_val, _obj_attr)\n\n @staticmethod\n def __is_constructor_arg(attr_val):\n return hasattr(attr_val, _const_arg)\n\n @staticmethod\n def __get_cargname(attr_val):\n return getattr(attr_val, _const_arg)\n\n @ExtenderMeta.post_init\n def __gather_procedures(cls, name, bases, classdict):\n if hasattr(cls, 'constructor_args'):\n cls.constructor_args = copy(cls.constructor_args)\n else:\n cls.constructor_args = dict()\n if hasattr(cls, 'obj_attrs'):\n cls.obj_attrs = copy(cls.obj_attrs)\n else:\n cls.obj_attrs = dict()\n for name, func in cls.__dict__.items():\n if cls.__is_constructor_arg(func):\n cls.constructor_args[cls.__get_cargname(func)] = getattr(cls, name)\n elif cls.__is_attr(func):\n cls.obj_attrs[cls.__get_obj_attr(func)] = getattr(cls, name)\n\n @docval({'name': 'spec', 'type': (DatasetSpec, GroupSpec),\n 'doc': 'The specification for mapping objects to builders'})\n def __init__(self, **kwargs):\n \"\"\" Create a map from Container attributes to NWB specifications \"\"\"\n spec = getargs('spec', kwargs)\n self.__spec = spec\n self.__data_type_key = spec.type_key()\n self.__spec2attr = dict()\n self.__attr2spec = dict()\n self.__spec2carg = dict()\n self.__carg2spec = dict()\n self.__map_spec(spec)\n\n @property\n def spec(self):\n ''' the Spec used in this ObjectMapper '''\n return self.__spec\n\n @_constructor_arg('name')\n def get_container_name(self, *args):\n builder = args[0]\n return builder.name\n\n @classmethod\n @docval({'name': 'spec', 'type': Spec, 'doc': 'the specification to get the name for'})\n def convert_dt_name(cls, **kwargs):\n '''Get the attribute name corresponding to a specification'''\n spec = getargs('spec', kwargs)\n if spec.data_type_def is not None:\n name = spec.data_type_def\n elif spec.data_type_inc is not None:\n name = spec.data_type_inc\n else:\n raise ValueError('found spec without name or data_type')\n s1 = re.sub('(.)([A-Z][a-z]+)', r'\\1_\\2', name)\n name = re.sub('([a-z0-9])([A-Z])', r'\\1_\\2', s1).lower()\n if name[-1] != 's' and spec.is_many():\n name += 's'\n return name\n\n @classmethod\n def __get_fields(cls, name_stack, all_names, spec):\n name = spec.name\n if spec.name is None:\n name = cls.convert_dt_name(spec)\n name_stack.append(name)\n if name in all_names:\n name = \"_\".join(name_stack)\n all_names[name] = spec\n if isinstance(spec, BaseStorageSpec):\n if not (spec.data_type_def is None and spec.data_type_inc is None):\n # don't get names for components in data_types\n return\n for subspec in spec.attributes:\n cls.__get_fields(name_stack, all_names, subspec)\n if isinstance(spec, GroupSpec):\n for subspec in spec.datasets:\n cls.__get_fields(name_stack, all_names, subspec)\n for subspec in spec.groups:\n cls.__get_fields(name_stack, all_names, subspec)\n for subspec in spec.links:\n cls.__get_fields(name_stack, all_names, subspec)\n name_stack.pop()\n\n @classmethod\n @docval({'name': 'spec', 'type': Spec, 'doc': 'the specification to get the object attribute names for'})\n def get_attr_names(cls, **kwargs):\n '''Get the attribute names for each subspecification in a Spec'''\n spec = getargs('spec', kwargs)\n names = OrderedDict()\n for subspec 
in spec.attributes:\n cls.__get_fields(list(), names, subspec)\n if isinstance(spec, GroupSpec):\n for subspec in spec.groups:\n cls.__get_fields(list(), names, subspec)\n for subspec in spec.datasets:\n cls.__get_fields(list(), names, subspec)\n for subspec in spec.links:\n cls.__get_fields(list(), names, subspec)\n return names\n\n def __map_spec(self, spec):\n attr_names = self.get_attr_names(spec)\n for k, v in attr_names.items():\n self.map_spec(k, v)\n\n @docval({\"name\": \"attr_name\", \"type\": str, \"doc\": \"the name of the object to map\"},\n {\"name\": \"spec\", \"type\": Spec, \"doc\": \"the spec to map the attribute to\"})\n def map_attr(self, **kwargs):\n \"\"\" Map an attribute to spec. Use this to override default behavior \"\"\"\n attr_name, spec = getargs('attr_name', 'spec', kwargs)\n if hasattr(spec, 'name') and spec.name is not None:\n n = spec.name\n elif hasattr(spec, 'data_type_def') and spec.data_type_def is not None:\n n = spec.data_type_def # noqa: F841\n self.__spec2attr[spec] = attr_name\n self.__attr2spec[attr_name] = spec\n\n @docval({\"name\": \"attr_name\", \"type\": str, \"doc\": \"the name of the attribute\"})\n def get_attr_spec(self, **kwargs):\n \"\"\" Return the Spec for a given attribute \"\"\"\n attr_name = getargs('attr_name', kwargs)\n return self.__attr2spec.get(attr_name)\n\n @docval({\"name\": \"carg_name\", \"type\": str, \"doc\": \"the name of the constructor argument\"})\n def get_carg_spec(self, **kwargs):\n \"\"\" Return the Spec for a given constructor argument \"\"\"\n carg_name = getargs('carg_name', kwargs)\n return self.__attr2spec.get(carg_name)\n\n @docval({\"name\": \"const_arg\", \"type\": str, \"doc\": \"the name of the constructor argument to map\"},\n {\"name\": \"spec\", \"type\": Spec, \"doc\": \"the spec to map the attribute to\"})\n def map_const_arg(self, **kwargs):\n \"\"\" Map an attribute to spec. Use this to override default behavior \"\"\"\n const_arg, spec = getargs('const_arg', 'spec', kwargs)\n self.__spec2carg[spec] = const_arg\n self.__carg2spec[const_arg] = spec\n\n @docval({\"name\": \"spec\", \"type\": Spec, \"doc\": \"the spec to map the attribute to\"})\n def unmap(self, **kwargs):\n \"\"\" Removing any mapping for a specification. 
Use this to override default mapping \"\"\"\n spec = getargs('spec', kwargs)\n self.__spec2attr.pop(spec, None)\n self.__spec2carg.pop(spec, None)\n\n @docval({\"name\": \"attr_carg\", \"type\": str, \"doc\": \"the constructor argument/object attribute to map this spec to\"},\n {\"name\": \"spec\", \"type\": Spec, \"doc\": \"the spec to map the attribute to\"})\n def map_spec(self, **kwargs):\n \"\"\" Map the given specification to the construct argument and object attribute \"\"\"\n spec, attr_carg = getargs('spec', 'attr_carg', kwargs)\n self.map_const_arg(attr_carg, spec)\n self.map_attr(attr_carg, spec)\n\n def __get_override_carg(self, *args):\n name = args[0]\n remaining_args = tuple(args[1:])\n if name in self.constructor_args:\n func = self.constructor_args[name]\n try:\n # remaining_args is [builder, manager]\n return func(self, *remaining_args)\n except TypeError:\n # LEGACY: remaining_args is [manager]\n return func(self, *remaining_args[:-1])\n return None\n\n def __get_override_attr(self, name, container, manager):\n if name in self.obj_attrs:\n func = self.obj_attrs[name]\n return func(self, container, manager)\n return None\n\n @docval({\"name\": \"spec\", \"type\": Spec, \"doc\": \"the spec to get the attribute for\"},\n returns='the attribute name', rtype=str)\n def get_attribute(self, **kwargs):\n ''' Get the object attribute name for the given Spec '''\n spec = getargs('spec', kwargs)\n val = self.__spec2attr.get(spec, None)\n return val\n\n @docval({\"name\": \"spec\", \"type\": Spec, \"doc\": \"the spec to get the attribute value for\"},\n {\"name\": \"container\", \"type\": Container, \"doc\": \"the container to get the attribute value from\"},\n {\"name\": \"manager\", \"type\": BuildManager, \"doc\": \"the BuildManager used for managing this build\"},\n returns='the value of the attribute')\n def get_attr_value(self, **kwargs):\n ''' Get the value of the attribute corresponding to this spec from the given container '''\n spec, container, manager = getargs('spec', 'container', 'manager', kwargs)\n attr_name = self.get_attribute(spec)\n if attr_name is None:\n return None\n attr_val = self.__get_override_attr(attr_name, container, manager)\n if attr_val is None:\n # TODO: A message like this should be used to warn users when an expected attribute\n # does not exist on a Container object\n #\n # if not hasattr(container, attr_name):\n # msg = \"Container '%s' (%s) does not have attribute '%s'\" \\\n # % (container.name, type(container), attr_name)\n # #warnings.warn(msg)\n attr_val = getattr(container, attr_name, None)\n if attr_val is not None:\n attr_val = self.__convert_value(attr_val, spec)\n return attr_val\n\n def __convert_value(self, value, spec):\n ret = value\n if isinstance(spec, AttributeSpec):\n if 'text' in spec.dtype:\n if spec.shape is not None:\n ret = list(map(text_type, value))\n else:\n ret = text_type(value)\n elif isinstance(spec, DatasetSpec):\n # TODO: make sure we can handle specs with data_type_inc set\n if spec.data_type_inc is not None:\n ret = value\n else:\n if spec.dtype is not None:\n string_type = None\n if 'text' in spec.dtype:\n string_type = text_type\n elif 'ascii' in spec.dtype:\n string_type = binary_type\n elif 'isodatetime' in spec.dtype:\n string_type = datetime.isoformat\n if string_type is not None:\n if spec.dims is not None:\n ret = list(map(string_type, value))\n else:\n ret = string_type(value)\n return ret\n\n @docval({\"name\": \"spec\", \"type\": Spec, \"doc\": \"the spec to get the constructor argument for\"},\n 
returns=\"the name of the constructor argument\", rtype=str)\n def get_const_arg(self, **kwargs):\n ''' Get the constructor argument for the given Spec '''\n spec = getargs('spec', kwargs)\n return self.__spec2carg.get(spec, None)\n\n @docval({\"name\": \"container\", \"type\": Container, \"doc\": \"the container to convert to a Builder\"},\n {\"name\": \"manager\", \"type\": BuildManager, \"doc\": \"the BuildManager to use for managing this build\"},\n {\"name\": \"parent\", \"type\": Builder, \"doc\": \"the parent of the resulting Builder\", 'default': None},\n {\"name\": \"source\", \"type\": str,\n \"doc\": \"the source of container being built i.e. file path\", 'default': None},\n {\"name\": \"builder\", \"type\": GroupBuilder, \"doc\": \"the Builder to build on\", 'default': None},\n {\"name\": \"spec_ext\", \"type\": Spec, \"doc\": \"a spec extension\", 'default': None},\n returns=\"the Builder representing the given Container\", rtype=Builder)\n def build(self, **kwargs):\n ''' Convert a Container to a Builder representation '''\n container, manager, parent, source = getargs('container', 'manager', 'parent', 'source', kwargs)\n builder = getargs('builder', kwargs)\n name = manager.get_builder_name(container)\n if isinstance(self.__spec, GroupSpec):\n if builder is None:\n builder = GroupBuilder(name, parent=parent, source=source)\n self.__add_datasets(builder, self.__spec.datasets, container, manager, source)\n self.__add_groups(builder, self.__spec.groups, container, manager, source)\n self.__add_links(builder, self.__spec.links, container, manager, source)\n else:\n if not isinstance(container, Data):\n msg = \"'container' must be of type Data with DatasetSpec\"\n raise ValueError(msg)\n if isinstance(self.spec.dtype, RefSpec):\n bldr_data = self.__get_ref_builder(self.spec.dtype, self.spec.shape, container, manager)\n try:\n bldr_data, dtype = self.convert_dtype(self.spec, bldr_data)\n except Exception as ex:\n msg = 'could not resolve dtype for %s \\'%s\\'' % (type(container).__name__, container.name)\n raise_from(Exception(msg), ex)\n builder = DatasetBuilder(name, bldr_data, parent=parent, source=source, dtype=dtype)\n elif isinstance(self.spec.dtype, list):\n refs = [(i, subt) for i, subt in enumerate(self.spec.dtype) if isinstance(subt.dtype, RefSpec)]\n bldr_data = copy(container.data)\n bldr_data = list()\n for i, row in enumerate(container.data):\n tmp = list(row)\n for j, subt in refs:\n tmp[j] = self.__get_ref_builder(subt.dtype, None, row[j], manager)\n bldr_data.append(tuple(tmp))\n try:\n bldr_data, dtype = self.convert_dtype(self.spec, bldr_data)\n except Exception as ex:\n msg = 'could not resolve dtype for %s \\'%s\\'' % (type(container).__name__, container.name)\n raise_from(Exception(msg), ex)\n builder = DatasetBuilder(name, bldr_data, parent=parent, source=source, dtype=dtype)\n else:\n if self.__spec.dtype is None and self.__is_reftype(container.data):\n bldr_data = list()\n for d in container.data:\n bldr_data.append(ReferenceBuilder(manager.build(d)))\n builder = DatasetBuilder(name, bldr_data, parent=parent, source=source,\n dtype='object')\n else:\n try:\n bldr_data, dtype = self.convert_dtype(self.spec, container.data)\n except Exception as ex:\n msg = 'could not resolve dtype for %s \\'%s\\'' % (type(container).__name__, container.name)\n raise_from(Exception(msg), ex)\n builder = DatasetBuilder(name, bldr_data, parent=parent, source=source, dtype=dtype)\n self.__add_attributes(builder, self.__spec.attributes, container, manager, source)\n return 
builder\n\n def __is_reftype(self, data):\n tmp = data\n while hasattr(tmp, '__len__') and not isinstance(tmp, (Container, text_type, binary_type)):\n tmptmp = None\n for t in tmp:\n # In case of a numeric array stop the iteration at the first element to avoid long-running loop\n if isinstance(t, (integer_types, float, complex, bool)):\n break\n if hasattr(t, '__len__') and not isinstance(t, (Container, text_type, binary_type)) and len(t) > 0:\n tmptmp = tmp[0]\n break\n if tmptmp is not None:\n break\n else:\n tmp = tmp[0]\n if isinstance(tmp, Container):\n return True\n else:\n return False\n\n def __get_ref_builder(self, dtype, shape, container, manager):\n bldr_data = None\n if dtype.is_region():\n if shape is None:\n if not isinstance(container, DataRegion):\n msg = \"'container' must be of type DataRegion if spec represents region reference\"\n raise ValueError(msg)\n bldr_data = RegionBuilder(container.region, manager.build(container.data))\n else:\n bldr_data = list()\n for d in container.data:\n bldr_data.append(RegionBuilder(d.slice, manager.build(d.target)))\n else:\n if shape is None:\n if isinstance(container, Container):\n bldr_data = ReferenceBuilder(manager.build(container))\n else:\n bldr_data = ReferenceBuilder(manager.build(container.data))\n else:\n bldr_data = list()\n for d in container.data:\n bldr_data.append(ReferenceBuilder(manager.build(d.target)))\n return bldr_data\n\n def __is_null(self, item):\n if item is None:\n return True\n else:\n if any(isinstance(item, t) for t in (list, tuple, dict, set)):\n return len(item) == 0\n return False\n\n def __add_attributes(self, builder, attributes, container, build_manager, source):\n for spec in attributes:\n if spec.value is not None:\n attr_value = spec.value\n else:\n attr_value = self.get_attr_value(spec, container, build_manager)\n if attr_value is None:\n attr_value = spec.default_value\n\n if isinstance(spec.dtype, RefSpec):\n if not self.__is_reftype(attr_value):\n if attr_value is None:\n msg = \"object of data_type %s not found on %s '%s'\" % \\\n (spec.dtype.target_type, type(container).__name__, container.name)\n else:\n msg = \"invalid type for reference '%s' (%s) - must be Container\" % (spec.name, type(attr_value))\n raise ValueError(msg)\n target_builder = build_manager.build(attr_value, source=source)\n attr_value = ReferenceBuilder(target_builder)\n else:\n if attr_value is not None:\n try:\n attr_value, attr_dtype = self.convert_dtype(spec, attr_value)\n except Exception as ex:\n msg = 'could not convert %s for %s %s' % (spec.name, type(container).__name__, container.name)\n raise_from(Exception(msg), ex)\n\n # do not write empty or null valued objects\n if attr_value is None:\n if spec.required:\n msg = \"attribute '%s' for '%s' (%s)\"\\\n % (spec.name, builder.name, self.spec.data_type_def)\n warnings.warn(msg, MissingRequiredWarning)\n continue\n\n builder.set_attribute(spec.name, attr_value)\n\n def __add_links(self, builder, links, container, build_manager, source):\n for spec in links:\n attr_value = self.get_attr_value(spec, container, build_manager)\n if not attr_value:\n continue\n self.__add_containers(builder, spec, attr_value, build_manager, source, container)\n\n def __is_empty(self, val):\n if val is None:\n return True\n if isinstance(val, DataIO):\n val = val.data\n if isinstance(val, AbstractDataChunkIterator):\n return False\n else:\n if (hasattr(val, '__len__') and len(val) == 0):\n return True\n else:\n return False\n\n def __add_datasets(self, builder, datasets, container, 
build_manager, source):\n for spec in datasets:\n attr_value = self.get_attr_value(spec, container, build_manager)\n # TODO: add check for required datasets\n if self.__is_empty(attr_value):\n if spec.required:\n msg = \"dataset '%s' for '%s' of type (%s)\"\\\n % (spec.name, builder.name, self.spec.data_type_def)\n warnings.warn(msg, MissingRequiredWarning)\n continue\n if isinstance(attr_value, Builder):\n builder.set_builder(attr_value)\n elif spec.data_type_def is None and spec.data_type_inc is None:\n if spec.name in builder.datasets:\n sub_builder = builder.datasets[spec.name]\n else:\n try:\n data, dtype = self.convert_dtype(spec, attr_value)\n except Exception as ex:\n msg = 'could not convert \\'%s\\' for %s \\'%s\\''\n msg = msg % (spec.name, type(container).__name__, container.name)\n raise_from(Exception(msg), ex)\n sub_builder = builder.add_dataset(spec.name, data, dtype=dtype)\n self.__add_attributes(sub_builder, spec.attributes, container, build_manager, source)\n else:\n self.__add_containers(builder, spec, attr_value, build_manager, source, container)\n\n def __add_groups(self, builder, groups, container, build_manager, source):\n for spec in groups:\n if spec.data_type_def is None and spec.data_type_inc is None:\n # we don't need to get attr_name since any named\n # group does not have the concept of value\n sub_builder = builder.groups.get(spec.name)\n if sub_builder is None:\n sub_builder = GroupBuilder(spec.name, source=source)\n self.__add_attributes(sub_builder, spec.attributes, container, build_manager, source)\n self.__add_datasets(sub_builder, spec.datasets, container, build_manager, source)\n\n # handle subgroups that are not Containers\n attr_name = self.get_attribute(spec)\n if attr_name is not None:\n attr_value = getattr(container, attr_name, None)\n attr_value = self.get_attr_value(spec, container, build_manager)\n if any(isinstance(attr_value, t) for t in (list, tuple, set, dict)):\n it = iter(attr_value)\n if isinstance(attr_value, dict):\n it = iter(attr_value.values())\n for item in it:\n if isinstance(item, Container):\n self.__add_containers(sub_builder, spec, item, build_manager, source, container)\n self.__add_groups(sub_builder, spec.groups, container, build_manager, source)\n empty = sub_builder.is_empty()\n if not empty or (empty and isinstance(spec.quantity, int)):\n if sub_builder.name not in builder.groups:\n builder.set_group(sub_builder)\n else:\n if spec.data_type_def is not None:\n attr_name = self.get_attribute(spec)\n if attr_name is not None:\n attr_value = getattr(container, attr_name, None)\n if attr_value is not None:\n self.__add_containers(builder, spec, attr_value, build_manager, source, container)\n else:\n attr_name = self.get_attribute(spec)\n attr_value = getattr(container, attr_name, None)\n if attr_value is not None:\n self.__add_containers(builder, spec, attr_value, build_manager, source, container)\n\n def __add_containers(self, builder, spec, value, build_manager, source, parent_container):\n if isinstance(value, Container):\n if value.parent is None:\n msg = \"'%s' (%s) for '%s' (%s)\"\\\n % (value.name, getattr(value, self.spec.type_key()),\n builder.name, self.spec.data_type_def)\n warnings.warn(msg, OrphanContainerWarning)\n if value.modified: # writing a new container\n rendered_obj = build_manager.build(value, source=source)\n # use spec to determine what kind of HDF5\n # object this Container corresponds to\n if isinstance(spec, LinkSpec) or value.parent is not parent_container:\n name = spec.name\n 
builder.set_link(LinkBuilder(rendered_obj, name, builder))\n elif isinstance(spec, DatasetSpec):\n if rendered_obj.dtype is None and spec.dtype is not None:\n val, dtype = self.convert_dtype(spec, None)\n rendered_obj.dtype = dtype\n builder.set_dataset(rendered_obj)\n else:\n builder.set_group(rendered_obj)\n elif value.container_source: # make a link to an existing container\n if value.container_source != parent_container.container_source or\\\n value.parent is not parent_container:\n rendered_obj = build_manager.build(value, source=source)\n builder.set_link(LinkBuilder(rendered_obj, name=spec.name, parent=builder))\n else:\n raise ValueError(\"Found unmodified Container with no source - '%s' with parent '%s'\" %\n (value.name, parent_container.name))\n else:\n if any(isinstance(value, t) for t in (list, tuple)):\n values = value\n elif isinstance(value, dict):\n values = value.values()\n else:\n msg = (\"received %s, expected Container - 'value' \"\n \"must be an Container a list/tuple/dict of \"\n \"Containers if 'spec' is a GroupSpec\")\n raise ValueError(msg % value.__class__.__name__)\n for container in values:\n if container:\n self.__add_containers(builder, spec, container, build_manager, source, parent_container)\n\n def __get_subspec_values(self, builder, spec, manager):\n ret = dict()\n # First get attributes\n attributes = builder.attributes\n for attr_spec in spec.attributes:\n attr_val = attributes.get(attr_spec.name)\n if attr_val is None:\n continue\n if isinstance(attr_val, (GroupBuilder, DatasetBuilder)):\n ret[attr_spec] = manager.construct(attr_val)\n elif isinstance(attr_val, RegionBuilder):\n raise ValueError(\"RegionReferences as attributes is not yet supported\")\n elif isinstance(attr_val, ReferenceBuilder):\n ret[attr_spec] = manager.construct(attr_val.builder)\n else:\n ret[attr_spec] = attr_val\n if isinstance(spec, GroupSpec):\n if not isinstance(builder, GroupBuilder):\n raise ValueError(\"__get_subspec_values - must pass GroupBuilder with GroupSpec\")\n # first aggregate links by data type and separate them\n # by group and dataset\n groups = dict(builder.groups) # make a copy so we can separate links\n datasets = dict(builder.datasets) # make a copy so we can separate links\n links = builder.links\n link_dt = dict()\n for link_builder in links.values():\n target = link_builder.builder\n if isinstance(target, DatasetBuilder):\n datasets[link_builder.name] = target\n else:\n groups[link_builder.name] = target\n dt = manager.get_builder_dt(target)\n if dt is not None:\n link_dt.setdefault(dt, list()).append(target)\n # now assign links to their respective specification\n for subspec in spec.links:\n if subspec.name is not None:\n ret[subspec] = manager.construct(links[subspec.name].builder)\n else:\n sub_builder = link_dt.get(subspec.target_type)\n if sub_builder is not None:\n ret[subspec] = self.__flatten(sub_builder, subspec, manager)\n # now process groups and datasets\n self.__get_sub_builders(groups, spec.groups, manager, ret)\n self.__get_sub_builders(datasets, spec.datasets, manager, ret)\n elif isinstance(spec, DatasetSpec):\n if not isinstance(builder, DatasetBuilder):\n raise ValueError(\"__get_subspec_values - must pass DatasetBuilder with DatasetSpec\")\n ret[spec] = builder.data\n return ret\n\n def __get_sub_builders(self, sub_builders, subspecs, manager, ret):\n # index builders by data_type\n builder_dt = dict()\n for g in sub_builders.values():\n dt = manager.get_builder_dt(g)\n ns = manager.get_builder_ns(g)\n if dt is None or ns is None:\n 
continue\n for parent_dt in manager.namespace_catalog.get_hierarchy(ns, dt):\n builder_dt.setdefault(parent_dt, list()).append(g)\n for subspec in subspecs:\n # first get data type for the spec\n if subspec.data_type_def is not None:\n dt = subspec.data_type_def\n elif subspec.data_type_inc is not None:\n dt = subspec.data_type_inc\n else:\n dt = None\n # use name if we can, otherwise use data_data\n if subspec.name is None:\n sub_builder = builder_dt.get(dt)\n if sub_builder is not None:\n sub_builder = self.__flatten(sub_builder, subspec, manager)\n ret[subspec] = sub_builder\n else:\n sub_builder = sub_builders.get(subspec.name)\n if sub_builder is None:\n continue\n if dt is None:\n # recurse\n ret.update(self.__get_subspec_values(sub_builder, subspec, manager))\n else:\n ret[subspec] = manager.construct(sub_builder)\n\n def __flatten(self, sub_builder, subspec, manager):\n tmp = [manager.construct(b) for b in sub_builder]\n if len(tmp) == 1 and not subspec.is_many():\n tmp = tmp[0]\n return tmp\n\n @docval({'name': 'builder', 'type': (DatasetBuilder, GroupBuilder),\n 'doc': 'the builder to construct the Container from'},\n {'name': 'manager', 'type': BuildManager, 'doc': 'the BuildManager for this build'})\n def construct(self, **kwargs):\n ''' Construct an Container from the given Builder '''\n builder, manager = getargs('builder', 'manager', kwargs)\n cls = manager.get_cls(builder)\n # gather all subspecs\n subspecs = self.__get_subspec_values(builder, self.spec, manager)\n # get the constructor argument that each specification corresponds to\n const_args = dict()\n for subspec, value in subspecs.items():\n const_arg = self.get_const_arg(subspec)\n if const_arg is not None:\n if isinstance(subspec, BaseStorageSpec) and subspec.is_many():\n existing_value = const_args.get(const_arg)\n if isinstance(existing_value, list):\n value = existing_value + value\n const_args[const_arg] = value\n # build kwargs for the constructor\n kwargs = dict()\n for const_arg in get_docval(cls.__init__):\n argname = const_arg['name']\n override = self.__get_override_carg(argname, builder, manager)\n if override is not None:\n val = override\n elif argname in const_args:\n val = const_args[argname]\n else:\n continue\n kwargs[argname] = val\n try:\n obj = cls(**kwargs)\n obj.container_source = builder.source\n except Exception as ex:\n msg = 'Could not construct %s object' % (cls.__name__,)\n raise_from(Exception(msg), ex)\n return obj\n\n @docval({'name': 'container', 'type': Container, 'doc': 'the Container to get the Builder name for'})\n def get_builder_name(self, **kwargs):\n '''Get the name of a Builder that represents a Container'''\n container = getargs('container', kwargs)\n if self.__spec.name not in (NAME_WILDCARD, None):\n ret = self.__spec.name\n else:\n if container.name is None:\n if self.__spec.default_name is not None:\n ret = self.__spec.default_name\n else:\n msg = 'Unable to determine name of container type %s' % self.__spec.data_type_def\n raise ValueError(msg)\n else:\n ret = container.name\n return ret\n\n\nclass TypeSource(object):\n '''A class to indicate the source of a data_type in a namespace.\n\n This class should only be used by TypeMap\n '''\n\n @docval({\"name\": \"namespace\", \"type\": str, \"doc\": \"the namespace the from, which the data_type originated\"},\n {\"name\": \"data_type\", \"type\": str, \"doc\": \"the name of the type\"})\n def __init__(self, **kwargs):\n namespace, data_type = getargs('namespace', 'data_type', kwargs)\n self.__namespace = namespace\n 
self.__data_type = data_type\n\n @property\n def namespace(self):\n return self.__namespace\n\n @property\n def data_type(self):\n return self.__data_type\n\n\nclass TypeMap(object):\n ''' A class to maintain the map between ObjectMappers and Container classes\n '''\n\n @docval({'name': 'namespaces', 'type': NamespaceCatalog, 'doc': 'the NamespaceCatalog to use'},\n {'name': 'mapper_cls', 'type': type, 'doc': 'the ObjectMapper class to use', 'default': ObjectMapper})\n def __init__(self, **kwargs):\n namespaces = getargs('namespaces', kwargs)\n self.__ns_catalog = namespaces\n self.__mappers = dict() # already constructed ObjectMapper classes\n self.__mapper_cls = dict() # the ObjectMapper class to use for each container type\n self.__container_types = OrderedDict()\n self.__data_types = dict()\n self.__default_mapper_cls = getargs('mapper_cls', kwargs)\n\n @property\n def namespace_catalog(self):\n return self.__ns_catalog\n\n def __copy__(self):\n ret = TypeMap(copy(self.__ns_catalog), self.__default_mapper_cls)\n ret.merge(self)\n return ret\n\n def __deepcopy__(self, memo):\n # XXX: From @nicain: All of a sudden legacy tests started\n # needing this argument in deepcopy. Doesn't hurt anything, though.\n return self.__copy__()\n\n def copy_mappers(self, type_map):\n for namespace in self.__ns_catalog.namespaces:\n if namespace not in type_map.__container_types:\n continue\n for data_type in self.__ns_catalog.get_namespace(namespace).get_registered_types():\n container_cls = type_map.__container_types[namespace].get(data_type)\n if container_cls is None:\n continue\n self.register_container_type(namespace, data_type, container_cls)\n if container_cls in type_map.__mapper_cls:\n self.register_map(container_cls, type_map.__mapper_cls[container_cls])\n\n def merge(self, type_map):\n for namespace in type_map.__container_types:\n for data_type in type_map.__container_types[namespace]:\n\n container_cls = type_map.__container_types[namespace][data_type]\n self.register_container_type(namespace, data_type, container_cls)\n\n for container_cls in type_map.__mapper_cls:\n self.register_map(container_cls, type_map.__mapper_cls[container_cls])\n\n @docval({'name': 'namespace_path', 'type': str, 'doc': 'the path to the file containing the namespaces(s) to load'},\n {'name': 'resolve', 'type': bool,\n 'doc': 'whether or not to include objects from included/parent spec objects', 'default': True},\n {'name': 'reader',\n 'type': SpecReader,\n 'doc': 'the class to user for reading specifications', 'default': None},\n returns=\"the namespaces loaded from the given file\", rtype=tuple)\n def load_namespaces(self, **kwargs):\n '''Load namespaces from a namespace file.\n\n This method will call load_namespaces on the NamespaceCatalog used to construct this TypeMap. Additionally,\n it will process the return value to keep track of what types were included in the loaded namespaces. 
Calling\n load_namespaces here has the advantage of being able to keep track of type dependencies across namespaces.\n '''\n deps = call_docval_func(self.__ns_catalog.load_namespaces, kwargs)\n for new_ns, ns_deps in deps.items():\n for src_ns, types in ns_deps.items():\n for dt in types:\n container_cls = self.get_container_cls(src_ns, dt)\n if container_cls is None:\n container_cls = TypeSource(src_ns, dt)\n self.register_container_type(new_ns, dt, container_cls)\n return tuple(deps.keys())\n\n _type_map = {\n 'text': str,\n 'float': float,\n 'float64': float,\n 'int': int,\n 'int32': int,\n 'isodatetime': datetime\n }\n\n def __get_type(self, spec):\n if isinstance(spec, AttributeSpec):\n if isinstance(spec.dtype, RefSpec):\n tgttype = spec.dtype.target_type\n for val in self.__container_types.values():\n container_type = val.get(tgttype)\n if container_type is not None:\n return container_type\n return (Data, Container)\n elif spec.shape is None:\n return self._type_map.get(spec.dtype)\n else:\n return ('array_data',)\n elif isinstance(spec, LinkSpec):\n return Container\n else:\n if not (spec.data_type_def is None and spec.data_type_inc is None):\n if spec.name is not None:\n return (list, tuple, dict, set)\n else:\n return Container\n else:\n return ('array_data', 'data',)\n\n def __get_constructor(self, base, addl_fields):\n # TODO: fix this to be more maintainable and smarter\n existing_args = set()\n docval_args = list()\n new_args = list()\n if base is not None:\n for arg in get_docval(base.__init__):\n existing_args.add(arg['name'])\n if arg['name'] in addl_fields:\n continue\n docval_args.append(arg)\n for f, field_spec in addl_fields.items():\n dtype = self.__get_type(field_spec)\n docval_arg = {'name': f, 'type': dtype, 'doc': field_spec.doc}\n if not field_spec.required:\n docval_arg['default'] = getattr(field_spec, 'default_value', None)\n docval_args.append(docval_arg)\n if f not in existing_args:\n new_args.append(f)\n if base is None:\n @docval(*docval_args)\n def __init__(self, **kwargs):\n for f in new_args:\n setattr(self, f, kwargs.get(f, None))\n return __init__\n else:\n @docval(*docval_args)\n def __init__(self, **kwargs):\n pargs, pkwargs = fmt_docval_args(base.__init__, kwargs)\n super(type(self), self).__init__(*pargs, **pkwargs)\n for f in new_args:\n setattr(self, f, kwargs.get(f, None))\n return __init__\n\n @docval({\"name\": \"namespace\", \"type\": str, \"doc\": \"the namespace containing the data_type\"},\n {\"name\": \"data_type\", \"type\": str, \"doc\": \"the data type to create a Container class for\"},\n returns='the class for the given namespace and data_type', rtype=type)\n def get_container_cls(self, **kwargs):\n '''Get the container class from data type specification\n\n If no class has been associated with the ``data_type`` from ``namespace``,\n a class will be dynamically created and returned.\n '''\n namespace, data_type = getargs('namespace', 'data_type', kwargs)\n cls = self.__get_container_cls(namespace, data_type)\n if cls is None:\n spec = self.__ns_catalog.get_spec(namespace, data_type)\n dt_hier = self.__ns_catalog.get_hierarchy(namespace, data_type)\n parent_cls = None\n for t in dt_hier:\n parent_cls = self.__get_container_cls(namespace, t)\n if parent_cls is not None:\n break\n bases = tuple()\n if parent_cls is not None:\n bases = (parent_cls,)\n else:\n if isinstance(spec, GroupSpec):\n bases = (Container,)\n elif isinstance(spec, DatasetSpec):\n bases = (Data,)\n else:\n raise ValueError(\"Cannot generate class from %s\" %
type(spec))\n parent_cls = bases[0]\n name = data_type\n attr_names = self.__default_mapper_cls.get_attr_names(spec)\n fields = dict()\n for k, field_spec in attr_names.items():\n if not spec.is_inherited_spec(field_spec):\n fields[k] = field_spec\n d = {'__init__': self.__get_constructor(parent_cls, fields)}\n cls = type(str(name), bases, d)\n self.register_container_type(namespace, data_type, cls)\n return cls\n\n def __get_container_cls(self, namespace, data_type):\n if namespace not in self.__container_types:\n return None\n if data_type not in self.__container_types[namespace]:\n return None\n ret = self.__container_types[namespace][data_type]\n if isinstance(ret, TypeSource):\n ret = self.__get_container_cls(ret.namespace, ret.data_type)\n if ret is not None:\n self.register_container_type(namespace, data_type, ret)\n return ret\n\n @docval({'name': 'builder', 'type': (DatasetBuilder, GroupBuilder, LinkBuilder),\n 'doc': 'the builder to get the data_type for'})\n def get_builder_dt(self, **kwargs):\n '''\n Get the data_type of a builder\n '''\n builder = getargs('builder', kwargs)\n ret = builder.attributes.get(self.__ns_catalog.group_spec_cls.type_key())\n if isinstance(ret, bytes):\n ret = ret.decode('UTF-8')\n return ret\n\n @docval({'name': 'builder', 'type': (DatasetBuilder, GroupBuilder, LinkBuilder),\n 'doc': 'the builder to get the sub-specification for'})\n def get_builder_ns(self, **kwargs):\n '''\n Get the namespace of a builder\n '''\n builder = getargs('builder', kwargs)\n if isinstance(builder, LinkBuilder):\n builder = builder.builder\n ret = builder.attributes.get('namespace')\n return ret\n\n @docval({'name': 'builder', 'type': Builder,\n 'doc': 'the Builder object to get the corresponding Container class for'})\n def get_cls(self, **kwargs):\n ''' Get the class object for the given Builder '''\n builder = getargs('builder', kwargs)\n data_type = self.get_builder_dt(builder)\n if data_type is None:\n raise ValueError(\"No data_type found for builder %s\" % builder.path)\n namespace = self.get_builder_ns(builder)\n if namespace is None:\n raise ValueError(\"No namespace found for builder %s\" % builder.path)\n return self.get_container_cls(namespace, data_type)\n\n @docval({'name': 'spec', 'type': (DatasetSpec, GroupSpec), 'doc': 'the parent spec to search'},\n {'name': 'builder', 'type': (DatasetBuilder, GroupBuilder, LinkBuilder),\n 'doc': 'the builder to get the sub-specification for'})\n def get_subspec(self, **kwargs):\n '''\n Get the specification from this spec that corresponds to the given builder\n '''\n spec, builder = getargs('spec', 'builder', kwargs)\n if isinstance(builder, LinkBuilder):\n builder_type = type(builder.builder)\n else:\n builder_type = type(builder)\n if issubclass(builder_type, DatasetBuilder):\n subspec = spec.get_dataset(builder.name)\n else:\n subspec = spec.get_group(builder.name)\n if subspec is None:\n # builder was generated from something with a data_type and a wildcard name\n if isinstance(builder, LinkBuilder):\n dt = self.get_builder_dt(builder.builder)\n else:\n dt = self.get_builder_dt(builder)\n if dt is not None:\n ns = self.get_builder_ns(builder)\n hierarchy = self.__ns_catalog.get_hierarchy(ns, dt)\n for t in hierarchy:\n subspec = spec.get_data_type(t)\n if subspec is not None:\n break\n return subspec\n\n def get_container_ns_dt(self, obj):\n container_cls = obj.__class__\n namespace, data_type = self.get_container_cls_dt(container_cls)\n return namespace, data_type\n\n def get_container_cls_dt(self, cls):\n return 
self.__data_types.get(cls, (None, None))\n\n @docval({'name': 'namespace', 'type': str,\n 'doc': 'the namespace to get the container classes for', 'default': None})\n def get_container_classes(self, **kwargs):\n namespace = getargs('namespace', kwargs)\n ret = self.__data_types.keys()\n if namespace is not None:\n ret = filter(lambda x: self.__data_types[x][0] == namespace, ret)\n return list(ret)\n\n @docval({'name': 'obj', 'type': (Container, Builder), 'doc': 'the object to get the ObjectMapper for'},\n returns='the ObjectMapper to use for mapping the given object', rtype='ObjectMapper')\n def get_map(self, **kwargs):\n \"\"\" Return the ObjectMapper object that should be used for the given container \"\"\"\n obj = getargs('obj', kwargs)\n # get the container class, and namespace/data_type\n if isinstance(obj, Container):\n container_cls = obj.__class__\n namespace, data_type = self.get_container_ns_dt(obj)\n if namespace is None:\n raise ValueError(\"class %s is not mapped to a data_type\" % container_cls)\n else:\n data_type = self.get_builder_dt(obj)\n namespace = self.get_builder_ns(obj)\n container_cls = self.get_cls(obj)\n # now build the ObjectMapper class\n spec = self.__ns_catalog.get_spec(namespace, data_type)\n mapper = self.__mappers.get(container_cls)\n if mapper is None:\n mapper_cls = self.__default_mapper_cls\n for cls in container_cls.__mro__:\n tmp_mapper_cls = self.__mapper_cls.get(cls)\n if tmp_mapper_cls is not None:\n mapper_cls = tmp_mapper_cls\n break\n\n mapper = mapper_cls(spec)\n self.__mappers[container_cls] = mapper\n return mapper\n\n @docval({\"name\": \"namespace\", \"type\": str, \"doc\": \"the namespace containing the data_type to map the class to\"},\n {\"name\": \"data_type\", \"type\": str, \"doc\": \"the data_type to map the class to\"},\n {\"name\": \"container_cls\", \"type\": (TypeSource, type), \"doc\": \"the class to map to the specified data_type\"})\n def register_container_type(self, **kwargs):\n ''' Map a container class to a data_type '''\n namespace, data_type, container_cls = getargs('namespace', 'data_type', 'container_cls', kwargs)\n spec = self.__ns_catalog.get_spec(namespace, data_type) # make sure the spec exists\n self.__container_types.setdefault(namespace, dict())\n self.__container_types[namespace][data_type] = container_cls\n self.__data_types.setdefault(container_cls, (namespace, data_type))\n setattr(container_cls, spec.type_key(), data_type)\n setattr(container_cls, 'namespace', namespace)\n\n @docval({\"name\": \"container_cls\", \"type\": type,\n \"doc\": \"the Container class for which the given ObjectMapper class gets used for\"},\n {\"name\": \"mapper_cls\", \"type\": type, \"doc\": \"the ObjectMapper class to use to map\"})\n def register_map(self, **kwargs):\n ''' Map a container class to an ObjectMapper class '''\n container_cls, mapper_cls = getargs('container_cls', 'mapper_cls', kwargs)\n if self.get_container_cls_dt(container_cls) == (None, None):\n raise ValueError('cannot register map for type %s - no data_type found' % container_cls)\n self.__mapper_cls[container_cls] = mapper_cls\n\n @docval({\"name\": \"container\", \"type\": Container, \"doc\": \"the container to convert to a Builder\"},\n {\"name\": \"manager\", \"type\": BuildManager,\n \"doc\": \"the BuildManager to use for managing this build\", 'default': None},\n {\"name\": \"source\", \"type\": str,\n \"doc\": \"the source of container being built i.e. 
file path\", 'default': None},\n {\"name\": \"builder\", \"type\": GroupBuilder, \"doc\": \"the Builder to build on\", 'default': None})\n def build(self, **kwargs):\n \"\"\" Build the GroupBuilder for the given Container\"\"\"\n container, manager, builder = getargs('container', 'manager', 'builder', kwargs)\n if manager is None:\n manager = BuildManager(self)\n attr_map = self.get_map(container)\n if attr_map is None:\n raise ValueError('No ObjectMapper found for container of type %s' % str(container.__class__.__name__))\n else:\n builder = attr_map.build(container, manager, builder=builder, source=getargs('source', kwargs))\n namespace, data_type = self.get_container_ns_dt(container)\n builder.set_attribute('namespace', namespace)\n builder.set_attribute(attr_map.spec.type_key(), data_type)\n return builder\n\n @docval({'name': 'builder', 'type': (DatasetBuilder, GroupBuilder),\n 'doc': 'the builder to construct the Container from'},\n {'name': 'build_manager', 'type': BuildManager,\n 'doc': 'the BuildManager for constructing', 'default': None})\n def construct(self, **kwargs):\n \"\"\" Construct the Container represented by the given builder \"\"\"\n builder, build_manager = getargs('builder', 'build_manager', kwargs)\n if build_manager is None:\n build_manager = BuildManager(self)\n attr_map = self.get_map(builder)\n if attr_map is None:\n raise ValueError('No ObjectMapper found for builder of type %s'\n % str(container.__class__.__name__)) # noqa: F821\n else:\n return attr_map.construct(builder, build_manager)\n\n @docval({\"name\": \"container\", \"type\": Container, \"doc\": \"the container to convert to a Builder\"},\n returns='The name a Builder should be given when building this container', rtype=str)\n def get_builder_name(self, **kwargs):\n ''' Get the name a Builder should be given '''\n container = getargs('container', kwargs)\n attr_map = self.get_map(container)\n if attr_map is None:\n raise ValueError('No ObjectMapper found for container of type %s' % str(container.__class__.__name__))\n else:\n return attr_map.get_builder_name(container)\n" ]
[ [ "numpy.dtype" ] ]
NicolasHammer/self_driving_car
[ "1a869a6bb8cb5241341eed051a0683ca17abfe44" ]
[ "trained_models/pilot_net.py" ]
[ "import torch.nn as nn\n\nclass PilotNet(nn.Module):\n def __init__(self, num_controls: int, dropout: float = 0.):\n super(PilotNet, self).__init__()\n\n self.convolutional_block = nn.Sequential(\n # 1st layer\n nn.Conv2d(1, 24, kernel_size=(5,5), stride=(2,2)),\n nn.ELU(),\n # 2nd layer\n nn.Conv2d(24, 36, kernel_size=(5,5), stride=(2,2)),\n nn.ELU(),\n # 3rd layer\n nn.Conv2d(36, 48, kernel_size=(5,5), stride=(2,2)),\n nn.ELU(),\n # 4th layer\n nn.Conv2d(48, 64, kernel_size=(3,3)),\n nn.ELU(),\n # 5th layer\n nn.Conv2d(64, 64, kernel_size=(3,3)),\n nn.ELU()\n )\n\n self.MLP = nn.Sequential(\n # 1st dense layer\n nn.Flatten(),\n nn.Linear(6656, 100),\n nn.ELU(),\n nn.Dropout(dropout),\n\n # 2nd dense layer\n nn.Linear(100, 50),\n nn.ELU(),\n nn.Dropout(dropout),\n\n # 3rd dense layer\n nn.Linear(50, 10),\n nn.ELU(),\n nn.Dropout(dropout)\n )\n\n self.output = nn.Linear(10, num_controls)\n\n def forward(self, image):\n conv_out = self.convolutional_block(image)\n MLP_out = self.MLP(conv_out)\n controls = self.output(MLP_out)\n return controls\n" ]
[ [ "torch.nn.Linear", "torch.nn.Dropout", "torch.nn.ELU", "torch.nn.Conv2d", "torch.nn.Flatten" ] ]
troyliu0105/mmselfsup
[ "df907e5ce02951e68e5de825986d8d7e879ac5b4" ]
[ "mmselfsup/models/heads/latent_pred_head.py" ]
[ "# Copyright (c) OpenMMLab. All rights reserved.\nimport torch\nimport torch.nn as nn\nfrom mmcv.runner import BaseModule\n\nfrom ..builder import HEADS, build_neck\n\n\[email protected]_module()\nclass LatentPredictHead(BaseModule):\n \"\"\"Head for latent feature prediction.\n\n This head builds a predictor, which can be any registered neck component.\n For example, BYOL and SimSiam call this head and build NonLinearNeck.\n It also implements similarity loss between two forward features.\n\n Args:\n predictor (dict): Config dict for module of predictor.\n \"\"\"\n\n def __init__(self, predictor):\n super(LatentPredictHead, self).__init__()\n self.predictor = build_neck(predictor)\n\n def forward(self, input, target):\n \"\"\"Forward head.\n\n Args:\n input (Tensor): NxC input features.\n target (Tensor): NxC target features.\n\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n pred = self.predictor([input])[0]\n target = target.detach()\n\n pred_norm = nn.functional.normalize(pred, dim=1)\n target_norm = nn.functional.normalize(target, dim=1)\n loss = -(pred_norm * target_norm).sum(dim=1).mean()\n return dict(loss=loss)\n\n\[email protected]_module\nclass LatentClsHead(BaseModule):\n \"\"\"Head for latent feature classification.\n\n Args:\n in_channels (int): Number of input channels.\n num_classes (int): Number of classes.\n init_cfg (dict or list[dict], optional): Initialization config dict.\n \"\"\"\n\n def __init__(self,\n in_channels,\n num_classes,\n init_cfg=dict(type='Normal', std=0.01, layer='Linear')):\n super(LatentClsHead, self).__init__(init_cfg)\n self.predictor = nn.Linear(in_channels, num_classes)\n self.criterion = nn.CrossEntropyLoss()\n\n def forward(self, input, target):\n \"\"\"Forward head.\n\n Args:\n input (Tensor): NxC input features.\n target (Tensor): NxC target features.\n\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n pred = self.predictor(input)\n with torch.no_grad():\n label = torch.argmax(self.predictor(target), dim=1).detach()\n loss = self.criterion(pred, label)\n return dict(loss=loss)\n" ]
[ [ "torch.nn.Linear", "torch.nn.functional.normalize", "torch.no_grad", "torch.nn.CrossEntropyLoss" ] ]
William-Choe/donkey
[ "733fd842b17b8ea5243031c1db0b88c7bcf759f2" ]
[ "donkeycar/parts/image.py" ]
[ "import os\nimport io\nfrom PIL import Image\nimport numpy as np\nfrom donkeycar.utils import img_to_binary, binary_to_img, arr_to_img, img_to_arr\n\nclass ImgArrToJpg():\n\n def run(self, img_arr):\n if img_arr is None:\n return None\n try:\n image = arr_to_img(img_arr)\n jpg = img_to_binary(image)\n return jpg\n except:\n return None\n\nclass JpgToImgArr():\n\n def run(self, jpg):\n if jpg is None:\n return None\n image = binary_to_img(jpg)\n img_arr = img_to_arr(image)\n return img_arr\n\nclass StereoPair:\n '''\n take two images and put together in a single image\n '''\n def run(self, image_a, image_b):\n '''\n This will take the two images and combine them into a single image\n One in red, the other in green, and diff in blue channel.\n '''\n if image_a is not None and image_b is not None:\n width, height, _ = image_a.shape\n grey_a = dk.utils.rgb2gray(image_a)\n grey_b = dk.utils.rgb2gray(image_b)\n grey_c = grey_a - grey_b\n \n stereo_image = np.zeros([width, height, 3], dtype=np.dtype('B'))\n stereo_image[...,0] = np.reshape(grey_a, (width, height))\n stereo_image[...,1] = np.reshape(grey_b, (width, height))\n stereo_image[...,2] = np.reshape(grey_c, (width, height))\n else:\n stereo_image = []\n\n return np.array(stereo_image)" ]
[ [ "numpy.array", "numpy.reshape", "numpy.dtype" ] ]
mattkjames7/PlanetSpice
[ "5a5c2cb75b9832af42cb746a1a6a886b9cc4da88" ]
[ "PlanetSpice/Earth/Orbit.py" ]
[ "import numpy as np\nfrom ..Tools.RotTrans import RotTrans\nfrom ..Sun.Transform import HAEtoHCI\n\ndef OrbitHAE():\n\t'''\n\tThis function returns the orbit of Earth in HAE coordinates\n\t(Heliocentric Aries Ecliptic) where Z is perpendicular to the \n\tEarth's ecliptic plane (positive northwards), X points towards the\n\tfirst point of Aries (defined by the intersection between Earth's\n\tequatorial plane and the ecliptic).\n\t\n\t'''\n\t\n\ta = 149598023.0\n\te = 0.0167086\n\tb = a*np.sqrt(1 - e**2)\n\tt = np.arange(361,dtype='float32')*np.pi/180.0\n\trp = (1.0 - e)*a\n\tx0 = a*np.cos(t)-(a-rp)\n\ty0 = b*np.sin(t)\n\ti = 7.155*np.pi/180.0\n\tla = -11.26064*np.pi/180.0\n\tap = 114.20783*np.pi/180.0\n\t\n\tx1,y1 = RotTrans(x0,y0,ap)\n\tz1 = np.zeros(361,dtype='float32')\n\t\n\ty2,z2 = RotTrans(y1,z1,i)\n\tx2 = x1\n\t\n\tx3,y3 = RotTrans(x2,y2,la)\n\tz3 = z2\n\treturn (x3,y3,z3)\n\ndef OrbitHCI(Date=20150101):\n\t'''\n\tCalculate the orbit positions in HCI coordinates (Heliocentric\n\tInertial), where Z is the Sun's rotational axis, X is the solar \n\tascending node on the ecliptic.\n\t\n\tInputs\n\t======\n\tDate : int\n\t\tDate in format yyyymmdd\n\t\n\t'''\n\n\tx,y,z = OrbitHAE()\n\treturn HAEtoHCI(Date,0.0,x,y,z)\n" ]
[ [ "numpy.sin", "numpy.zeros", "numpy.arange", "numpy.cos", "numpy.sqrt" ] ]
olliethomas/ml-fairness-gym
[ "adaa878596d3ce7dc0ee821f53f99cdf0cd2ef5f" ]
[ "ml_gym/agents/allocation_agents_test.py" ]
[ "# coding=utf-8\n# Copyright 2020 The ML Fairness Gym Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python2, python3\n\"\"\"Tests for naive_probability_matching_allocator.py.\"\"\"\n\nfrom __future__ import absolute_import, division, print_function\n\nimport gym\nimport numpy as np\nfrom absl.testing import absltest\nfrom six.moves import range\n\nimport core\nimport rewards\nimport test_util\nfrom ml_gym.agents import allocation_agents\nfrom ml_gym.environments import attention_allocation\n\n\nclass NaiveProbabilityMatchingAgentTest(absltest.TestCase):\n def test_update_counts(self):\n \"\"\"Check that counts are updated correctly given an observation.\"\"\"\n env = attention_allocation.LocationAllocationEnv()\n agent_params = allocation_agents.NaiveProbabilityMatchingAgentParams()\n agent_params.decay_prob = 0\n agent = allocation_agents.NaiveProbabilityMatchingAgent(\n action_space=env.action_space,\n observation_space=env.observation_space,\n reward_fn=None,\n params=agent_params,\n )\n counts = [3, 6, 8]\n observation = np.array([1, 2, 0])\n updated_counts = agent._update_beliefs(observation, counts)\n self.assertTrue(np.all(np.equal(updated_counts, [4, 8, 8])))\n\n def test__allocate_by_counts(self):\n \"\"\"Check allocation proportions match probabilities from counts.\"\"\"\n env = attention_allocation.LocationAllocationEnv()\n agent = allocation_agents.NaiveProbabilityMatchingAgent(\n action_space=env.action_space, observation_space=env.observation_space, reward_fn=None\n )\n counts = [3, 6, 8]\n n_resource = 20\n n_samples = 100\n samples = [agent._allocate(n_resource, counts) for _ in range(n_samples)]\n counts_normalized = [(count / float(np.sum(counts))) for count in counts]\n samples_normalized = [(count / float(np.sum(samples))) for count in np.sum(samples, axis=0)]\n self.assertTrue(np.all(np.isclose(counts_normalized, samples_normalized, atol=0.05)))\n\n def test_allocate_by_counts_zero(self):\n \"\"\"Check allocations are even when counts are zero.\"\"\"\n env = attention_allocation.LocationAllocationEnv()\n agent = allocation_agents.NaiveProbabilityMatchingAgent(\n action_space=env.action_space, observation_space=env.observation_space, reward_fn=None\n )\n counts = [0, 0, 0]\n n_resource = 15\n n_samples = 100\n samples = [agent._allocate(n_resource, counts) for _ in range(n_samples)]\n mean_samples = np.sum(samples, axis=0) / float(n_samples)\n expected_mean = n_resource / float(len(counts))\n std_dev = np.std(samples)\n means_close = [np.abs(mean - expected_mean) < std_dev for mean in mean_samples]\n self.assertTrue(np.all(means_close))\n\n def test_can_interact_with_attention_env(self):\n env = attention_allocation.LocationAllocationEnv()\n agent = allocation_agents.NaiveProbabilityMatchingAgent(\n action_space=env.action_space, observation_space=env.observation_space, reward_fn=None\n )\n test_util.run_test_simulation(env=env, agent=agent)\n\n def test_get_added_vector_features(self):\n action_space_len = 2\n observation = 
{\"incidents_seen\": np.array([0, 1]), \"incidents_reported\": np.array([3, 1])}\n features = allocation_agents._get_added_vector_features(observation, action_space_len)\n expected = [3.0, 2.0]\n self.assertSequenceAlmostEqual(features.tolist(), expected)\n features = allocation_agents._get_added_vector_features(\n observation, action_space_len, keys=[\"incidents_reported\"]\n )\n expected = [3.0, 1.0]\n self.assertSequenceAlmostEqual(features.tolist(), expected)\n\n def test_episode_done_raises_error(self):\n env = attention_allocation.LocationAllocationEnv()\n agent = allocation_agents.NaiveProbabilityMatchingAgent(\n action_space=env.action_space, observation_space=env.observation_space, reward_fn=None\n )\n observation = env.reset()\n with self.assertRaises(core.EpisodeDoneError):\n agent.act(observation, done=True)\n\n\nclass MLEProbabilityMatchingAgentTest(absltest.TestCase):\n def test_can_interact_with_attention_env(self):\n env = attention_allocation.LocationAllocationEnv()\n agent = allocation_agents.MLEProbabilityMatchingAgent(\n action_space=env.action_space,\n observation_space=env.observation_space,\n reward_fn=None,\n params=None,\n )\n test_util.run_test_simulation(env=env, agent=agent)\n\n def test_MLE_rate_estimation(self):\n env_params = attention_allocation.Params()\n env_params.prior_incident_counts = (500, 500)\n env_params.n_attention_units = 5\n\n # pylint: disable=g-long-lambda\n agent_params = allocation_agents.MLEProbabilityMatchingAgentParams()\n\n agent_params.feature_selection_fn = lambda obs: allocation_agents._get_added_vector_features(\n obs, env_params.n_locations, keys=[\"incidents_seen\"]\n )\n agent_params.interval = 200\n agent_params.epsilon = 0\n\n env = attention_allocation.LocationAllocationEnv(env_params)\n agent = allocation_agents.MLEProbabilityMatchingAgent(\n action_space=env.action_space,\n reward_fn=lambda x: None,\n observation_space=env.observation_space,\n params=agent_params,\n )\n seed = 0\n agent.rng.seed(seed)\n env.seed(seed)\n observation = env.reset()\n done = False\n steps = 200\n for _ in range(steps):\n action = agent.act(observation, done)\n observation, _, done, _ = env.step(action)\n\n self.assertTrue(\n np.all(np.isclose(list(agent.beliefs), list(env_params.incident_rates), atol=0.5))\n )\n\n\nclass MLEGreedyAgentTest(absltest.TestCase):\n def test_can_interact_with_attention_env(self):\n env = attention_allocation.LocationAllocationEnv()\n agent = allocation_agents.MLEGreedyAgent(\n action_space=env.action_space, observation_space=env.observation_space, reward_fn=None\n )\n test_util.run_test_simulation(env=env, agent=agent)\n\n def test_allocate_beliefs_fair_unsatisfiable(self):\n env_params = attention_allocation.Params(\n n_locations=4,\n prior_incident_counts=(10, 10, 10, 10),\n n_attention_units=5,\n incident_rates=[0, 0, 0, 0],\n )\n env = attention_allocation.LocationAllocationEnv(params=env_params)\n agent_params = allocation_agents.MLEGreedyAgentParams(epsilon=0.0, alpha=0.25)\n agent = allocation_agents.MLEGreedyAgent(\n action_space=env.action_space,\n observation_space=env.observation_space,\n reward_fn=rewards.VectorSumReward(\"incidents_seen\"),\n params=agent_params,\n )\n with self.assertRaises(gym.error.InvalidAction):\n agent._allocate(5, [5, 2, 1, 1])\n\n def test_allocate_beliefs_fair(self):\n env_params = attention_allocation.Params(\n n_locations=4,\n prior_incident_counts=(10, 10, 10, 10),\n n_attention_units=6,\n incident_rates=[0, 0, 0, 0],\n )\n env = 
attention_allocation.LocationAllocationEnv(params=env_params)\n agent_params = allocation_agents.MLEGreedyAgentParams(epsilon=0.0, alpha=0.25)\n agent = allocation_agents.MLEGreedyAgent(\n action_space=env.action_space,\n observation_space=env.observation_space,\n reward_fn=rewards.VectorSumReward(\"incidents_seen\"),\n params=agent_params,\n )\n allocation = agent._allocate(6, [5, 2, 1, 1])\n self.assertTrue(np.all(np.equal(allocation, [3, 1, 1, 1])))\n\n def test_allocate_beliefs_greedy(self):\n env_params = attention_allocation.Params(\n n_locations=4,\n prior_incident_counts=(10, 10, 10, 10),\n n_attention_units=5,\n incident_rates=[0, 0, 0, 0],\n )\n env = attention_allocation.LocationAllocationEnv(params=env_params)\n agent_params = allocation_agents.MLEGreedyAgentParams(epsilon=0.0)\n agent = allocation_agents.MLEGreedyAgent(\n action_space=env.action_space,\n observation_space=env.observation_space,\n reward_fn=rewards.VectorSumReward(\"incidents_seen\"),\n params=agent_params,\n )\n allocation = agent._allocate(5, [5, 2, 1, 1])\n self.assertTrue(np.all(np.equal(allocation, [4, 1, 0, 0])))\n\n\nif __name__ == \"__main__\":\n absltest.main()\n" ]
[ [ "numpy.equal", "numpy.array", "numpy.isclose", "numpy.sum", "numpy.std", "numpy.abs", "numpy.all" ] ]
ANarayan/robustness-gym
[ "eed2800985631fbbe6491b5f6f0731a067eef78e" ]
[ "robustnessgym/core/dataformats/vision.py" ]
[ "from __future__ import annotations\n\nimport copy\nimport gzip\nimport logging\nimport os\nimport pickle\nimport tempfile\nimport uuid\nfrom collections import defaultdict\nfrom types import SimpleNamespace\nfrom typing import Any, Callable, Dict, List, Mapping, Optional, Tuple, Union, cast\n\nimport cytoolz as tz\nimport datasets\nimport numpy as np\nimport pyarrow as pa\nimport torch\n\ntry:\n import torchvision.datasets.folder as folder\n from PIL import Image as im\nexcept ImportError:\n _torchvision_available = False\n folder, im = None, None\nelse:\n _torchvision_available = True\nfrom datasets import DatasetInfo, Features, NamedSplit\nfrom joblib import Parallel, delayed\nfrom tqdm.auto import tqdm\n\nfrom robustnessgym.core.dataformats.abstract import AbstractDataset\nfrom robustnessgym.core.tools import convert_to_batch_fn\n\nlogger = logging.getLogger(__name__)\n\nExample = Dict\nBatch = Dict[str, List]\n\n\nclass RGImage:\n \"\"\"This class acts as an interface to allow the user to manipulate the\n images without actually loading them into memory.\"\"\"\n\n def __init__(self, filepath):\n self.filepath = filepath\n self.name = os.path.split(filepath)[-1]\n\n # Cache the transforms applied on the image when VisionDataset.update\n # gets called\n self.transforms = []\n\n def display(self):\n pass\n\n def load(self):\n return torch.from_numpy(np.array(folder.default_loader(self.filepath)))\n\n def __getitem__(self, idx):\n image = self.load()\n for t in self.transforms:\n image = t(image)\n return image[idx]\n\n def __str__(self):\n return self.filepath\n\n def __repr__(self):\n return \"Image(%s)\" % self.name\n\n def __eq__(self, other):\n filepath_eq = self.filepath == other.filepath\n transforms_eq = self.transforms == other.transforms\n return filepath_eq and transforms_eq\n\n\ndef save_image(image, filename):\n \"\"\"Save 'image' to file 'filename' and return an RGImage object.\"\"\"\n if isinstance(image, torch.Tensor):\n image = image.numpy()\n image = im.fromarray(image.astype(np.uint8))\n if image.mode != \"RGB\":\n image = image.convert(\"RGB\")\n image.save(filename)\n\n return RGImage(filename)\n\n\nclass RGVisionFolder:\n \"\"\"A generic data loader where the samples are a list paths to images: ::\n [\n '0001.jpg',\n '0002.jpg',\n ...\n ]\n Args:\n paths (List[str]]): List of paths to images.\n loader (callable): A function to load an image given its path.\n extensions (tuple[string]): A list of allowed extensions.\n both extensions and is_valid_file should not be passed.\n transform (callable, optional): A function/transform that takes in\n a sample and returns a transformed version.\n E.g, ``transforms.RandomCrop``\n is_valid_file (callable, optional): A function that takes path of a file\n and check if the file is a valid file (used to check of corrupt files)\n both extensions and is_valid_file should not be passed.\n Attributes:\n paths (list): List of paths to images\n \"\"\"\n\n def __init__(\n self,\n paths: List[str],\n loader: Callable[[str], Any] = None,\n extensions: Optional[Tuple[str, ...]] = None,\n transform: Optional[Callable] = None,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n ) -> None:\n if loader is None:\n loader = folder.default_loader\n\n if extensions is None:\n extensions = folder.IMG_EXTENSIONS\n\n if len(paths) == 0:\n raise RuntimeError(\"No file paths were passed\")\n\n instances = self.make_dataset(paths, extensions, is_valid_file)\n\n if len(instances) == 0:\n msg = \"No valid image paths were given.\\n\"\n if 
extensions is not None:\n msg += \"Supported extensions are: {}\".format(\",\".join(extensions))\n raise RuntimeError(msg)\n\n self.loader = loader\n self.extensions = extensions\n\n self.samples = instances\n self.transform = transform\n\n @staticmethod\n def make_dataset(\n paths: List[str],\n extensions: Optional[Tuple[str, ...]] = None,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n ) -> List[Tuple[str, Dict[str, Any]]]:\n instances = []\n both_none = extensions is None and is_valid_file is None\n both_something = extensions is not None and is_valid_file is not None\n if both_none or both_something:\n raise ValueError(\n \"Both extensions and is_valid_file cannot be None or not \"\n \"None at the same time\"\n )\n if extensions is not None:\n\n def is_valid_file(x: str) -> bool:\n return folder.has_file_allowed_extension(\n x, cast(Tuple[str, ...], extensions)\n )\n\n is_valid_file = cast(Callable[[str], bool], is_valid_file)\n for file_path in paths:\n instances.append(file_path)\n return instances\n\n def __getitem__(self, index: int) -> Tuple[Any, Any]:\n \"\"\"\n Args:\n index (int): Index\n Returns:\n tuple: (sample, metadata)\n \"\"\"\n path = self.samples[index]\n sample = self.loader(path)\n if self.transform is not None:\n sample = self.transform(sample)\n\n # Need to convert, otherwise Pytorch will complain\n return np.array(sample)\n\n def __len__(self) -> int:\n return len(self.samples)\n\n\nclass VisionDataset(AbstractDataset):\n \"\"\"Class for vision datasets that are to be stored in memory.\"\"\"\n\n def __init__(\n self,\n *args,\n column_names: List[str] = None,\n info: DatasetInfo = None,\n split: Optional[NamedSplit] = None,\n img_keys: List[str] = [\"image_file\"],\n ):\n\n # Data is a dictionary of lists\n self._data = {}\n\n if isinstance(img_keys, str):\n img_keys = [img_keys]\n self._img_keys = img_keys\n\n # Internal state variables for the update function\n self._updating_images = False\n self._adding_images = False\n self._callstack = []\n\n # Single argument\n if len(args) == 1:\n assert column_names is None, \"Don't pass in column_names.\"\n # The data is passed in\n data = args[0]\n\n # TODO: Discuss whether this is a worthy refactor\n # This replaces the commented elif block below\n if isinstance(data, list) and len(data):\n # Transpose the list of dicts to a dict of lists i.e. a batch\n data = tz.merge_with(list, *data)\n\n # `data` is a dictionary\n if isinstance(data, dict) and len(data):\n # Assert all columns are the same length\n self._assert_columns_all_equal_length(data)\n mask = [key in data for key in self._img_keys]\n if not all(mask):\n idx = mask.index(False)\n raise KeyError(\n \"Key with paths to images not found: %s\" % self._img_keys[idx]\n )\n for key in self._img_keys:\n self._paths_to_Images(data, key)\n self._data = data\n\n # `data` is a list\n # elif isinstance(data, list) and len(data):\n # # Transpose the list of dicts to a dict of lists i.e. 
a batch\n # data = tz.merge_with(list, *data)\n # # Assert all columns are the same length\n # self._assert_columns_all_equal_length(data)\n # if filepath_key not in data:\n # raise KeyError(\n # \"Key with paths to images not found: %s\" % filepath_key\n # )\n # self._paths_to_Images(data, filepath_key)\n # self._data = data\n\n # `data` is a datasets.Dataset\n elif isinstance(data, datasets.Dataset):\n self._data = data[:]\n info, split = data.info, data.split\n\n # No argument\n elif len(args) == 0:\n\n # Use column_names to setup the data dictionary\n if column_names:\n self._data = {k: [] for k in column_names}\n\n else:\n raise NotImplementedError(\n \"Currently only one table is supported when creating a VisionDataSet\"\n )\n\n # Setup the DatasetInfo\n info = info.copy() if info is not None else DatasetInfo()\n AbstractDataset.__init__(self, info=info, split=split)\n\n # Create attributes for all columns and visible columns\n self.all_columns = list(self._data.keys())\n self.visible_columns = None\n\n # Create attributes for visible rows\n self.visible_rows = None\n\n # Initialization\n self._initialize_state()\n\n logger.info(\n f\"Created `VisionDataset` with {len(self)} rows and \"\n f\"{len(self.column_names)} columns.\"\n )\n\n @staticmethod\n def _paths_to_Images(data, key):\n \"\"\"Convert a list of paths to images data[key] into a list of RGImage\n instances.\"\"\"\n if isinstance(data[key][0], RGImage):\n return # Can happen when we're copying a dataset\n data[key] = [RGImage(i) for i in data[key]]\n\n def _set_features(self):\n \"\"\"Set the features of the dataset.\"\"\"\n with self.format():\n d = self[:1].copy()\n for key in self._img_keys:\n if key in d:\n d[key] = list(map(str, d[key]))\n\n self.info.features = Features.from_arrow_schema(\n pa.Table.from_pydict(\n d,\n ).schema\n )\n\n def _materialize(self):\n # Materialize data, instead of using a reference to an ancestor Dataset\n self._data = {k: self[k] for k in self._data}\n\n # Reset visible_rows\n self.set_visible_rows(None)\n\n def add_column(self, column: str, values: List, overwrite=False) -> None:\n \"\"\"Add a column to the dataset.\"\"\"\n\n assert (\n column not in self.all_columns\n ) or overwrite, (\n f\"Column `{column}` already exists, set `overwrite=True` to overwrite.\"\n )\n assert len(values) == len(self), (\n f\"`add_column` failed. 
\"\n f\"Values length {len(values)} != dataset length {len(self)}.\"\n )\n\n if self.visible_rows is not None:\n # Materialize the data\n self._materialize()\n\n # Add the column\n self._data[column] = list(values)\n self.all_columns.append(column)\n self.visible_columns.append(column)\n\n # Set features\n self._set_features()\n\n logging.info(f\"Added column `{column}` with length `{len(values)}`.\")\n\n def remove_column(self, column: str) -> None:\n \"\"\"Remove a column from the dataset.\"\"\"\n assert column in self.all_columns, f\"Column `{column}` does not exist.\"\n\n # Remove the column\n del self._data[column]\n self.all_columns = [col for col in self.all_columns if col != column]\n self.visible_columns = [col for col in self.visible_columns if col != column]\n\n # Set features\n self._set_features()\n\n logging.info(f\"Removed column `{column}`.\")\n\n def select_columns(self, columns: List[str]) -> Batch:\n \"\"\"Select a subset of columns.\"\"\"\n for col in columns:\n assert col in self._data\n return tz.keyfilter(lambda k: k in columns, self._data)\n\n def _append_to_empty_dataset(self, example_or_batch: Union[Example, Batch]) -> None:\n \"\"\"Append a batch of data to the dataset when it's empty.\"\"\"\n # Convert to batch\n batch = self._example_or_batch_to_batch(example_or_batch)\n\n # TODO(karan): what other data properties need to be in sync here\n self.all_columns = list(batch.keys())\n self.visible_columns = list(batch.keys())\n\n # Dataset is empty: create the columns and append the batch\n self._data = {k: [] for k in self.column_names}\n for k in self.column_names:\n self._data[k].extend(batch[k])\n\n def append(\n self,\n example_or_batch: Union[Example, Batch],\n ) -> None:\n \"\"\"Append a batch of data to the dataset.\n\n `batch` must have the same columns as the dataset (regardless of\n what columns are visible).\n \"\"\"\n if not self.column_names:\n return self._append_to_empty_dataset(example_or_batch)\n\n # Check that example_or_batch has the same format as the dataset\n # TODO(karan): require matching on nested features?\n columns = list(example_or_batch.keys())\n assert set(columns) == set(\n self.column_names\n ), f\"Mismatched columns\\nbatch: {columns}\\ndataset: {self.column_names}\"\n\n # Convert to a batch\n batch = self._example_or_batch_to_batch(example_or_batch)\n\n # Append to the dataset\n for k in self.column_names:\n if k in self._img_keys:\n batch[k] = list(map(RGImage, batch[k]))\n self._data[k].extend(batch[k])\n\n def _remap_index(self, index):\n if isinstance(index, int):\n return self.visible_rows[index].item()\n elif isinstance(index, slice):\n return self.visible_rows[index].tolist()\n elif isinstance(index, str):\n return index\n elif (isinstance(index, tuple) or isinstance(index, list)) and len(index):\n return self.visible_rows[index].tolist()\n elif isinstance(index, np.ndarray) and len(index.shape) == 1:\n return self.visible_rows[index].tolist()\n else:\n raise TypeError(\"Invalid argument type: {}\".format(type(index)))\n\n def __getitem__(self, index):\n if self.visible_rows is not None:\n # Remap the index if only some rows are visible\n index = self._remap_index(index)\n\n if (\n isinstance(index, int)\n or isinstance(index, slice)\n or isinstance(index, np.int)\n ):\n # int or slice index => standard list slicing\n return {k: self._data[k][index] for k in self.visible_columns}\n elif isinstance(index, str):\n # str index => column selection\n if index in self.column_names:\n if self.visible_rows is not None:\n return 
[self._data[index][i] for i in self.visible_rows]\n return self._data[index]\n raise AttributeError(f\"Column {index} does not exist.\")\n elif (isinstance(index, tuple) or isinstance(index, list)) and len(index):\n return {k: [self._data[k][i] for i in index] for k in self.visible_columns}\n elif isinstance(index, np.ndarray) and len(index.shape) == 1:\n return {\n k: [self._data[k][int(i)] for i in index] for k in self.visible_columns\n }\n else:\n raise TypeError(\"Invalid argument type: {}\".format(type(index)))\n\n def _inspect_update_function(\n self,\n function: Callable,\n with_indices: bool = False,\n batched: bool = False,\n ) -> SimpleNamespace:\n \"\"\"Load the images before calling _inspect_function, and check if new\n image columns are being added.\"\"\"\n\n with self.format():\n first_row = self.copy()\n first_row.set_visible_rows([0])\n\n # Load the images (WARNING: This changes the original data)\n storage = {}\n for key in self._img_keys:\n storage[key] = first_row._data[key][0]\n first_row._data[key][0] = first_row[0][key].load()\n\n properties = AbstractDataset._inspect_function(\n first_row, function, with_indices, batched\n )\n\n # Check if new columns are added\n if batched:\n if with_indices:\n output = function(self[:2], range(2))\n else:\n output = function(self[:2])\n\n else:\n if with_indices:\n output = function(self[0], 0)\n else:\n output = function(self[0])\n new_columns = set(output.keys()).difference(set(self.all_columns))\n\n # Check if any of those new columns is an image column\n new_img_keys = []\n for key in new_columns:\n val = output[key]\n if isinstance(val, torch.Tensor) and len(val.shape) >= 2:\n new_img_keys.append(key)\n\n properties.new_image_columns = new_img_keys\n\n # Fix the dataset\n for key in self._img_keys:\n first_row._data[key][0] = storage[key]\n\n return properties\n\n def _inspect_filter_function(\n self,\n function: Callable,\n with_indices: bool = False,\n batched: bool = False,\n ) -> SimpleNamespace:\n \"\"\"Load the images before calling _inspect_function.\"\"\"\n\n with self.format():\n first_row = self.copy()\n first_row.set_visible_rows([0])\n\n # Load the images (WARNING: This changes the original data)\n storage = {}\n for key in self._img_keys:\n storage[key] = first_row._data[key][0]\n first_row._data[key][0] = first_row[0][key].load()\n\n # Inspect normally\n properties = AbstractDataset._inspect_function(\n first_row, function, with_indices, batched\n )\n\n # Fix the dataset\n for key in self._img_keys:\n first_row._data[key][0] = storage[key]\n\n return properties\n\n def update(\n self,\n function: Optional[Callable] = None,\n with_indices: bool = False,\n # input_columns: Optional[Union[str, List[str]]] = None,\n batched: bool = False,\n batch_size: Optional[int] = 1000,\n remove_columns: Optional[List[str]] = None,\n cache_dir: str = None,\n **kwargs,\n ) -> Optional[VisionDataset]:\n \"\"\"Update the columns of the dataset.\"\"\"\n # TODO(karan): make this fn go faster\n # most of the time is spent on the merge, speed it up further\n\n # Sanity check when updating the images\n self._callstack.append(\"update\")\n\n # Return if the function is None\n if function is None:\n logger.info(\"`function` None, returning None.\")\n return self\n\n # Return if `self` has no examples\n if not len(self):\n logger.info(\"Dataset empty, returning None.\")\n return self\n\n # Get some information about the function\n function_properties = self._inspect_update_function(\n function, with_indices, batched\n )\n assert (\n 
function_properties.dict_output\n ), f\"`function` {function} must return dict.\"\n\n if not batched:\n # Convert to a batch function\n function = convert_to_batch_fn(function, with_indices=with_indices)\n logger.info(f\"Converting `function` {function} to batched function.\")\n\n updated_columns = function_properties.existing_columns_updated\n changed_images = [key in self._img_keys for key in updated_columns]\n new_image_columns = function_properties.new_image_columns\n\n # Set the internal state for the map function\n self._updating_images = any(changed_images)\n self._adding_images = any(new_image_columns)\n if self._updating_images or self._adding_images:\n # Set the cache directory where the modified images will be stored\n if not cache_dir:\n cache_dir = tempfile.gettempdir()\n logger.warning(\n \"Modifying the images without setting a cache directory.\\n\"\n \"Consider setting it if your dataset is very large.\\n\"\n \"The default image cache location is: {}\".format(cache_dir)\n )\n\n if not os.path.exists(cache_dir):\n os.makedirs(cache_dir)\n\n cache_dir = os.path.join(cache_dir, uuid.uuid4().hex)\n os.mkdir(cache_dir)\n\n # Update always returns a new dataset\n logger.info(\"Running update, a new dataset will be returned.\")\n if self.visible_rows is not None:\n # Run .map() to get updated batches and pass them into a new dataset\n new_dataset = VisionDataset(\n self.map(\n (\n lambda batch, indices: self._merge_batch_and_output(\n batch, function(batch, indices)\n )\n )\n if with_indices\n else (\n lambda batch: self._merge_batch_and_output(\n batch, function(batch)\n )\n ),\n with_indices=with_indices,\n batched=True,\n batch_size=batch_size,\n cache_dir=cache_dir,\n ),\n img_keys=self._img_keys,\n )\n else:\n if function_properties.updates_existing_column:\n # Copy the ._data dict with a reference to the actual columns\n new_dataset = self.copy()\n\n # Calculate the values for the updated columns using a .map()\n output = self.map(\n (\n lambda batch, indices:\n # Only merge columns that get updated\n self._merge_batch_and_output(\n {\n k: v\n for k, v in batch.items()\n if k in function_properties.existing_columns_updated\n },\n function(batch, indices),\n )\n )\n if with_indices\n else (\n lambda batch:\n # Only merge columns that get updated\n self._merge_batch_and_output(\n {\n k: v\n for k, v in batch.items()\n if k in function_properties.existing_columns_updated\n },\n function(batch),\n )\n ),\n with_indices=with_indices,\n batched=True,\n batch_size=batch_size,\n cache_dir=cache_dir,\n new_image_columns=new_image_columns,\n )\n\n # If new image columns were added, update that information\n if self._adding_images:\n new_dataset._img_keys.extend(new_image_columns)\n\n # Add new columns / overwrite existing columns for the update\n for col, vals in output.items():\n if isinstance(vals[0], torch.Tensor) and vals[\n 0\n ].shape == torch.Size([]):\n # Scalar tensor. 
Convert to Python.\n new_vals = []\n for val in vals:\n new_vals.append(val.item())\n vals = new_vals\n new_dataset.add_column(col, vals, overwrite=True)\n else:\n # Copy the ._data dict with a reference to the actual columns\n new_dataset = self.copy()\n\n # Calculate the values for the new columns using a .map()\n output = new_dataset.map(\n function=function,\n with_indices=with_indices,\n batched=True,\n batch_size=batch_size,\n cache_dir=cache_dir,\n new_image_columns=new_image_columns,\n )\n\n # If new image columns were added, update that information\n if self._adding_images:\n new_dataset._img_keys.extend(new_image_columns)\n\n # Add new columns for the update\n for col, vals in output.items():\n if isinstance(vals[0], torch.Tensor) and vals[\n 0\n ].shape == torch.Size([]):\n # Scalar tensor. Convert to Python.\n new_vals = []\n for val in vals:\n new_vals.append(val.item())\n vals = new_vals\n new_dataset.add_column(col, vals)\n\n # Remove columns\n if remove_columns:\n for col in remove_columns:\n new_dataset.remove_column(col)\n logger.info(f\"Removed columns {remove_columns}.\")\n # Reset the format\n # if input_columns:\n # self.set_format(previous_format)\n\n # Remember to reset the internal state\n self._updating_images = False\n self._adding_images = False\n # And remove this call from the callstack\n self._callstack.pop()\n\n # If the new dataset is a copy we also need to reset it\n new_dataset._updating_images = False\n new_dataset._adding_images = False\n new_dataset._callstack.pop()\n\n return new_dataset\n\n def map(\n self,\n function: Optional[Callable] = None,\n with_indices: bool = False,\n input_columns: Optional[Union[str, List[str]]] = None,\n batched: bool = False,\n batch_size: Optional[int] = 1000,\n drop_last_batch: bool = False,\n num_proc: Optional[int] = 64,\n **kwargs,\n ) -> Optional[Dict, List]:\n \"\"\"Apply a map over the dataset.\"\"\"\n\n # Just return if the function is None\n if function is None:\n logger.info(\"`function` None, returning None.\")\n return None\n\n # Ensure that num_proc is not None\n if num_proc is None:\n num_proc = 64\n\n # Return if `self` has no examples\n if not len(self):\n logger.info(\"Dataset empty, returning None.\")\n return None\n\n if isinstance(input_columns, str):\n input_columns = [input_columns]\n\n # Set the format\n previous_format = self.visible_columns\n if input_columns:\n self.set_format(input_columns)\n\n if not batched:\n # Convert to a batch function\n function = convert_to_batch_fn(function, with_indices=with_indices)\n logger.info(f\"Converting `function` {function} to a batched function.\")\n\n # Check if any of the columns is an image column\n if not input_columns:\n input_columns = self.visible_columns\n image_loaders = {}\n for key in input_columns:\n if key in self._img_keys:\n # Load the images\n img_folder = RGVisionFolder(list(map(str, self[key])))\n images = torch.utils.data.DataLoader(\n img_folder,\n batch_size=batch_size,\n shuffle=False,\n drop_last=False,\n num_workers=num_proc,\n )\n images = iter(images)\n image_loaders[key] = images\n\n # If we are updating, prepare image savers and perform sanity checks\n if self._updating_images or self._adding_images:\n assert \"update\" in self._callstack, (\n \"_updating_images and _adding_images can only be set by \"\n \"VisionDataset.update\"\n )\n assert \"cache_dir\" in kwargs, \"No cache directory specified\"\n cache_dir = kwargs[\"cache_dir\"]\n if self._adding_images:\n assert \"new_image_columns\" in kwargs, \"New image column names not 
specified\"\n new_image_columns = kwargs[\"new_image_columns\"]\n\n # Run the map\n logger.info(\"Running `map`, the dataset will be left unchanged.\")\n outputs = None\n for i, batch in tqdm(\n enumerate(self.batch(batch_size, drop_last_batch)),\n total=(len(self) // batch_size) + (1 - int(drop_last_batch)),\n ):\n for key in image_loaders:\n batch[key] = next(image_loaders[key])\n\n # Run `function` on the batch\n output = (\n function(\n batch,\n range(i * batch_size, min(len(self), (i + 1) * batch_size)),\n )\n if with_indices\n else function(batch)\n )\n\n # Save the modified images\n if self._updating_images:\n for key in image_loaders:\n images = output[key]\n\n # Save the images in parallel\n rgimages = Parallel(n_jobs=num_proc)(\n delayed(save_image)(\n images[idx],\n os.path.join(\n cache_dir,\n \"{0}{1}.png\".format(key, i * batch_size + idx),\n ),\n )\n for idx in range(len(images))\n )\n\n output[key] = rgimages\n\n if self._adding_images:\n for key in new_image_columns:\n images = output[key]\n\n # Save the images in parallel\n rgimages = Parallel(n_jobs=num_proc)(\n delayed(save_image)(\n images[idx],\n os.path.join(\n cache_dir,\n \"{0}{1}.png\".format(key, i * batch_size + idx),\n ),\n )\n for idx in range(len(images))\n )\n\n output[key] = rgimages\n\n if i == 0:\n # Create an empty dict or list for the outputs\n outputs = defaultdict(list) if isinstance(output, Mapping) else []\n\n # Append the output\n if output is not None:\n if isinstance(output, Mapping):\n for k in output.keys():\n outputs[k].extend(output[k])\n else:\n outputs.extend(output)\n\n # Reset the format\n if input_columns:\n self.set_format(previous_format)\n\n if not len(outputs):\n return None\n elif isinstance(outputs, dict):\n return dict(outputs)\n return outputs\n\n def filter(\n self,\n function: Optional[Callable] = None,\n with_indices=False,\n input_columns: Optional[Union[str, List[str]]] = None,\n batched: bool = False,\n batch_size: Optional[int] = 1000,\n drop_last_batch: bool = False,\n num_proc: Optional[int] = 64,\n **kwargs,\n ) -> Optional[VisionDataset]:\n \"\"\"Apply a filter over the dataset.\"\"\"\n # Just return if the function is None\n if function is None:\n logger.info(\"`function` None, returning None.\")\n return None\n\n # Return if `self` has no examples\n if not len(self):\n logger.info(\"Dataset empty, returning None.\")\n return None\n\n # Get some information about the function\n function_properties = self._inspect_filter_function(\n function,\n with_indices,\n batched=batched,\n )\n assert function_properties.bool_output, \"function must return boolean.\"\n\n # Map to get the boolean outputs and indices\n logger.info(\"Running `filter`, a new dataset will be returned.\")\n outputs = self.map(\n function=function,\n with_indices=with_indices,\n input_columns=input_columns,\n batched=batched,\n batch_size=batch_size,\n drop_last_batch=drop_last_batch,\n )\n indices = np.where(outputs)[0]\n\n # Reset the format to set visible columns for the filter\n with self.format():\n # Filter returns a new dataset\n new_dataset = self.copy()\n new_dataset.set_visible_rows(indices)\n\n return new_dataset\n\n def copy(self, deepcopy=False):\n \"\"\"Return a copy of the dataset.\"\"\"\n if deepcopy:\n return copy.deepcopy(self)\n else:\n dataset = VisionDataset()\n dataset.__dict__ = {k: copy.copy(v) for k, v in self.__dict__.items()}\n return dataset\n\n @classmethod\n def _state_keys(cls) -> set:\n \"\"\"List of attributes that describe the state of the object.\"\"\"\n return {\n 
\"_data\",\n \"all_columns\",\n \"visible_rows\",\n \"_info\",\n \"_split\",\n \"_img_keys\",\n \"_updating_images\",\n \"_adding_images\",\n \"_callstack\",\n }\n\n @classmethod\n def _assert_state_keys(cls, state: Dict) -> None:\n \"\"\"Assert that a state contains all required keys.\"\"\"\n assert (\n set(state.keys()) == cls._state_keys()\n ), f\"State must contain all state keys: {cls._state_keys()}.\"\n\n def __getstate__(self) -> Dict:\n \"\"\"Get the internal state of the dataset.\"\"\"\n state = {key: getattr(self, key) for key in self._state_keys()}\n self._assert_state_keys(state)\n return state\n\n def __setstate__(self, state: Dict) -> None:\n \"\"\"Set the internal state of the dataset.\"\"\"\n if not isinstance(state, dict):\n raise ValueError(\n f\"`state` must be a dictionary containing \" f\"{self._state_keys()}.\"\n )\n\n self._assert_state_keys(state)\n\n for key in self._state_keys():\n setattr(self, key, state[key])\n\n # Do some initialization\n self._initialize_state()\n\n @classmethod\n def load_from_disk(cls, path: str) -> VisionDataset:\n \"\"\"Load the in-memory dataset from disk.\"\"\"\n\n with gzip.open(os.path.join(path, \"data.gz\")) as f:\n dataset = pickle.load(f)\n # # Empty state dict\n # state = {}\n #\n # # Load the data\n # with gzip.open(os.path.join(path, \"data.gz\")) as f:\n # state['_data'] = pickle.load(f)\n #\n # # Load the metadata\n # metadata = json.load(\n # open(os.path.join(path, \"metadata.json\"))\n # )\n #\n # # Merge the metadata into the state\n # state = {**state, **metadata}\n\n # Create an empty `VisionDataset` and set its state\n # dataset = cls()\n # dataset.__setstate__(state)\n\n return dataset\n\n def save_to_disk(self, path: str):\n \"\"\"Save the in-memory dataset to disk.\"\"\"\n # Create all the directories to the path\n os.makedirs(path, exist_ok=True)\n\n # Store the data in a compressed format\n with gzip.open(os.path.join(path, \"data.gz\"), \"wb\") as f:\n pickle.dump(self, f)\n\n # # Get the dataset state\n # state = self.__getstate__()\n #\n # # Store the data in a compressed format\n # with gzip.open(os.path.join(path, \"data.gz\"), \"wb\") as f:\n # pickle.dump(state['_data'], f)\n #\n # # Store the metadata\n # json.dump(\n # {k: v for k, v in state.items() if k != '_data'},\n # open(os.path.join(path, \"metadata.json\"), 'w'),\n # )\n" ]
[ [ "torch.Size", "numpy.where", "numpy.array", "torch.utils.data.DataLoader" ] ]
lmmx/lattice-obj
[ "76b62e5d36457d9fed341680a4edfc044f734f71" ]
[ "hasse.py" ]
[ "import numpy as np\nimport itertools\n\ndef ranks_dict(poset):\n \"\"\"\n Return a dictionary `r_dict` with rank keys (in descending order, from supremum,\n i.e. `h.vertices()[0]`, to infimum, `h.vertices()[-1]`) and lists of indices of\n the vertices at that level, to be displayed from left to right, suitable to be\n passed as the `heights` argument to `poset.plot`.\n \"\"\"\n h = poset.hasse_diagram()\n ranks = [poset.rank_function()(z) for z in h.vertices()]\n rank_set = set(ranks) # Note that this sorts the ranks ascending (reversed later)\n r_indices = [[i for i, r in enumerate(ranks) if r == s] for s in rank_set]\n r_dict = dict(reversed(list(zip(rank_set, r_indices))))\n return r_dict\n\ndef poset_rel_coords(level_sizes, w_scale):\n max_len = np.max(level_sizes)\n unit_w = (1.0 * w_scale) / (max_len - 1)\n max_x = np.multiply(unit_w, np.subtract(level_sizes, 1))\n x_start = (w_scale - max_x)/2\n max_x += x_start\n # Round these as linspace gives weird floating point errors\n xx = [np.round(np.linspace(f, m, s), 3).tolist() for f,m,s in zip(x_start, max_x, level_sizes)]\n yy = np.round(np.linspace(0, 1, len(level_sizes)), 3).tolist()\n yy = list(map(lambda y: [y], yy))\n coords = [list(map(list, itertools.product(*v))) for v in zip(xx, yy)]\n return coords\n\ndef hasse_coords(poset, return_dict=True, key_by_vertex_index=False, scale_w=True):\n \"\"\"\n poset should be a poset class such as `posets.IntegerPartitions(n)` for some `n`\n\n If `return_dict` is False:\n Returns a list of level lists. Each level list is a list of node tuples\n (the first/last are the singleton lists of the supremum/infimum).\n Each node tuple is a 2-tuple of the vertex index and a coordinate tuple.\n Each coordinate tuple is a 2-tuple of `(x,y)` coordinates.\n\n If `return_dict` is True and `key_by_vertex_index` is True:\n Returns a dictionary whose keys are integers in `range(len(h.vertices()))`,\n the values at each key are the `(x,y)` tuple, by default scaled from `[0.0, 1.0]`.\n\n If `return_dict` is True and `key_by_vertex_index` is False:\n Returns a dictionary whose keys are `h.vertices()` (to match `DiGraph.layout()`),\n the values at each key are the `(x,y)` tuple, by default scaled from `[0.0, 1.0]`.\n\n If `scale_w` is True, the (w:h) aspect ratio will be scaled to the poset's ratio\n of width:height i.e. the length ratio of the maximum antichain to maximum chain.\n \"\"\"\n h = poset.hasse_diagram()\n r_dict = ranks_dict(poset)\n level_sizes = list(map(len, r_dict.values()))\n wh_aspect_ratio = 1 # 1:1 width:height aspect ratio\n if scale_w:\n # Do not check chain and antichain max. 
lengths, fails above n=7 or so\n ph = len(level_sizes)\n pw = max(level_sizes)\n wh_aspect_ratio = pw / ph\n poset_coords = poset_rel_coords(level_sizes, wh_aspect_ratio)\n vi_coords = list(map(lambda z: list(zip(*z)), list(zip(r_dict.values(), poset_coords))))\n node_coords = [[(h.vertices()[c[0]], c[1]) for c in l] for l in vi_coords]\n if not return_dict:\n return vi_coords\n if key_by_vertex_index:\n # return a dict whose keys are the index of the vertex in `h.vertices()`\n return dict(list(itertools.chain.from_iterable(vi_coords)))\n else:\n # return a dict whose keys are the vertex tuples themselves\n return dict(list(itertools.chain.from_iterable(node_coords)))\n\n# Deprecated: Sage posets implement a method `rank_function`, use `ranks_dict` (defined above) instead\ndef partition_len_level_dicts(vertices):\n \"\"\"\n Return a dictionary `v_dict` with length keys and lists of indices of the\n vertices at that level, to be displayed from left to right,\n suitable to be passed as the `heights` argument to `poset.plot`.\n \"\"\"\n v_dict = {}\n for vi, v in enumerate(vertices):\n v_len = len(v)\n v_dict.setdefault(v_len, [])\n v_dict.get(v_len).append(vi)\n return v_dict\n" ]
[ [ "numpy.max", "numpy.linspace", "numpy.subtract" ] ]
craigtaub/DashSensor
[ "848d54d0c9fe7077cdc65075e9160f51c84fe410" ]
[ "input.py" ]
[ "# -*- coding: utf-8 -*-\nimport os\n\nimport numpy as np\nimport cv2\n\nIMAGE_SIZE = 64\n\n\ndef resize_with_pad(image, height=IMAGE_SIZE, width=IMAGE_SIZE):\n\n def get_padding_size(image):\n h, w, _ = image.shape\n longest_edge = max(h, w)\n top, bottom, left, right = (0, 0, 0, 0)\n if h < longest_edge:\n dh = longest_edge - h\n top = dh // 2\n bottom = dh - top\n elif w < longest_edge:\n dw = longest_edge - w\n left = dw // 2\n right = dw - left\n else:\n pass\n return top, bottom, left, right\n\n top, bottom, left, right = get_padding_size(image)\n BLACK = [0, 0, 0]\n constant = cv2.copyMakeBorder(image, top , bottom, left, right, cv2.BORDER_CONSTANT, value=BLACK)\n\n resized_image = cv2.resize(constant, (height, width))\n\n return resized_image\n\n\nimages = []\nlabels = []\ndef traverse_dir(path):\n for file_or_dir in os.listdir(path):\n abs_path = os.path.abspath(os.path.join(path, file_or_dir))\n print(abs_path)\n if os.path.isdir(abs_path): # dir\n traverse_dir(abs_path)\n else: # file\n if file_or_dir.endswith('.jpg'):\n image = read_image(abs_path)\n images.append(image)\n labels.append(path)\n\n return images, labels\n\n\ndef read_image(file_path):\n image = cv2.imread(file_path)\n image = resize_with_pad(image, IMAGE_SIZE, IMAGE_SIZE)\n\n return image\n\n\ndef extract_data(path, userFolder='das'):\n images, labels = traverse_dir(path)\n images = np.array(images)\n # labels is each images folder (i.e. /data/craig, /data/craig)\n labels = np.array([0 if label.endswith(userFolder) else 1 for label in labels])\n # if matches X use 0 else use 1. so 0 for match\n\n return images, labels\n" ]
[ [ "numpy.array" ] ]
FedeClaudi/LocomotionControl
[ "1281f7894825096ad212407351463a2105c5152a" ]
[ "analysis/visuals.py" ]
[ "import matplotlib.pyplot as plt\nfrom typing import Union\nimport pandas as pd\nimport numpy as np\nfrom loguru import logger\nfrom scipy.stats import sem\n\n\n# from fcutils.plot.distributions import plot_kde\nfrom fcutils.plot.elements import plot_mean_and_error\nfrom myterial import blue_grey, blue_grey_dark, grey_darker, pink_dark, blue\n\nfrom data import colors, data_utils\nfrom analysis._visuals import get_window_ticks\n\n\n# ----------------------------------- misc ----------------------------------- #\n\n\ndef regplot(\n data: Union[pd.DataFrame, pd.Series, dict],\n x: str,\n y: str,\n ax: plt.axis,\n scatter_sample: int = 10,\n **kwargs,\n):\n ax.scatter(data[x][::scatter_sample], data[y][::scatter_sample], **kwargs)\n\n\ndef plot_balls_errors(\n x: np.ndarray,\n y: np.ndarray,\n yerr: np.ndarray,\n ax: plt.axis,\n s: int = 150,\n colors: Union[list, str] = None,\n):\n \"\"\"\n Given a serires of XY values and Y errors it plots a scatter for each XY point and a line\n to mark each Y error\n \"\"\"\n if colors is None:\n colors = [blue_grey] * len(x)\n elif isinstance(colors, str):\n colors = [colors] * len(x)\n\n ax.scatter(x, y, s=s, c=colors, zorder=100, lw=1, ec=[0.3, 0.3, 0.3])\n ax.plot(x, y, lw=3, color=colors[0], zorder=-1)\n\n if yerr is not None:\n for n in range(len(x)):\n ax.plot(\n [x[n], x[n]],\n [y[n] - yerr[n], y[n] + yerr[n]],\n lw=4,\n color=[0.3, 0.3, 0.3],\n zorder=96,\n solid_capstyle=\"round\",\n )\n ax.plot(\n [x[n], x[n]],\n [y[n] - yerr[n], y[n] + yerr[n]],\n lw=2,\n color=colors[n],\n zorder=98,\n solid_capstyle=\"round\",\n )\n\n\ndef plot_bin_x_by_y(\n data: pd.DataFrame,\n x: str,\n y: str,\n ax: plt.axis,\n bins: Union[int, np.ndarray] = 10,\n as_counts: bool = False,\n with_errors: bool = True,\n min_count: int = 0,\n **kwargs,\n):\n \"\"\"\n Bin the values of a column X of a dataframe based on the values of\n another column Y and plot as a balls and errors plot\n \"\"\"\n # bin\n bins, means, errors, counts = data_utils.bin_x_by_y(\n data, x, y, bins=bins, min_count=min_count\n )\n\n # plot\n if not as_counts:\n plot_balls_errors(\n bins, means, errors if with_errors else None, ax, **kwargs\n )\n else:\n plot_balls_errors(bins, counts, None, ax, **kwargs)\n\n\ndef plot_aligned(\n x: np.ndarray,\n indices: Union[np.ndarray, list],\n ax: plt.axis,\n mode: str,\n window: int = 120,\n mean_kwargs: dict = None,\n **kwargs,\n):\n \"\"\"\n Given a 1d array and a series of indices it plots the values of \n the array aligned to the timestamps.\n \"\"\"\n pre, aft = int(window / 2), int(window / 2)\n mean_kwargs = mean_kwargs or dict(lw=4, zorder=100, color=pink_dark)\n\n # get plotting params\n if mode == \"pre\":\n pre_c, pre_lw = kwargs.pop(\"color\", \"salmon\"), kwargs.pop(\"lw\", 2)\n aft_c, aft_lw = blue_grey, 1\n ax.axvspan(0, pre, fc=blue, alpha=0.25, zorder=-20)\n else:\n aft_c, aft_lw = kwargs.pop(\"color\", \"salmon\"), kwargs.pop(\"lw\", 2)\n pre_c, pre_lw = blue_grey, 1\n ax.axvspan(aft, window, fc=blue, alpha=0.25, zorder=-20)\n\n # plot each trace\n X = [] # collect data to plot mean\n for idx in indices:\n x_pre = x[idx - pre : idx]\n x_aft = x[idx - 1 : idx + aft]\n\n if len(x_pre) != pre or len(x_aft) != aft + 1:\n logger.warning(f\"Could not plot data aligned to index: {idx}\")\n continue\n X.append(x[idx - pre : idx + aft])\n\n ax.plot(x_pre, color=pre_c, lw=pre_lw, **kwargs)\n ax.plot(\n np.arange(aft + 1) + aft, x_aft, color=aft_c, lw=aft_lw, **kwargs\n )\n\n # plot mean and line\n X = np.vstack(X)\n plot_mean_and_error(\n np.mean(X, 
axis=0), np.std(X, axis=0), ax, **mean_kwargs\n )\n ax.axvline(pre, lw=2, color=blue_grey_dark, zorder=-1)\n\n ax.set(**get_window_ticks(window, shifted=False))\n\n\ndef plot_heatmap_2d(\n data: Union[dict, pd.DataFrame],\n key: str = None,\n ax: plt.axis = None,\n x_key: str = \"x\",\n y_key: str = \"y\",\n cmap: str = \"inferno\",\n vmin: int = 0,\n vmax: int = None,\n gridsize: int = 30,\n mincnt: int = 1,\n **kwargs,\n):\n # bin data in 2d\n try:\n ax.hexbin(\n data[x_key],\n data[y_key],\n data[key] if key is not None else None,\n cmap=cmap,\n gridsize=gridsize,\n vmin=vmin,\n vmax=vmax,\n mincnt=mincnt,\n **kwargs,\n )\n except ValueError: # likely the data was nested arrays\n ax.hexbin(\n np.hstack(data[x_key]),\n np.hstack(data[y_key]),\n np.hstack(data[key]) if key is not None else None,\n cmap=cmap,\n gridsize=gridsize,\n vmin=vmin,\n vmax=vmax,\n mincnt=mincnt,\n **kwargs,\n )\n\n\n# ---------------------------------------------------------------------------- #\n# EPHYS #\n# ---------------------------------------------------------------------------- #\ndef plot_probe_electrodes(\n rsites: pd.DataFrame,\n ax: plt.axis,\n TARGETS: list = [],\n annotate_every: Union[int, bool] = 5,\n x_shift: bool = True,\n s: int = 25,\n lw: float = 0.25,\n x_pos_delta: float = 0,\n):\n x = np.ones(len(rsites)) * 1.025 + x_pos_delta\n if x_shift:\n x[::2] = 1.025 + x_pos_delta - 0.05\n x[2::4] = 1.025 + x_pos_delta - 0.025\n x[1::4] = 1.025 + x_pos_delta + 0.025\n else:\n x = (\n x_pos_delta\n + np.tile(\n [0.975, 1.025, 1.0, 1.05], int(np.ceil(len(rsites) / 4))\n )[: len(rsites)]\n )\n\n if TARGETS is not None:\n colors = [\n rs.color\n if rs.brain_region in TARGETS\n else (\n [0.3, 0.3, 0.3]\n if rs.brain_region in (\"unknown\", \"OUT\")\n else blue_grey\n )\n for i, rs in rsites.iterrows()\n ]\n else:\n colors = [rs.color for i, rs in rsites.iterrows()]\n\n ax.scatter(\n x,\n rsites.probe_coordinates,\n s=s,\n lw=lw,\n ec=[0.3, 0.3, 0.3],\n marker=\"s\",\n c=colors,\n )\n\n if annotate_every:\n for i in np.arange(len(x))[::annotate_every]:\n ax.annotate(\n f\"{rsites.site_id.iloc[i]} - {rsites.brain_region.iloc[i]}\",\n (0.6, rsites.probe_coordinates.iloc[i]),\n color=colors[i],\n )\n ax.set(xlim=[0.5, 1.25], ylabel=\"probe coordinates (um)\")\n\n\ndef plot_raster(\n spikes: np.ndarray,\n events: Union[np.ndarray, list],\n ax: plt.axis,\n window: int = 120,\n s=5,\n color=grey_darker,\n kde: bool = True,\n bw: int = 6,\n **kwargs,\n):\n \"\"\"\n Plots a raster plot of spikes aligned to timestamps events\n\n It assumes that all event and spike times are in frames and framerate is 60\n \"\"\"\n half_window = window / 2\n yticks_step = int(np.ceil(len(events) / 8)) if len(events) > 8 else 2\n X = []\n for n, event in enumerate(events):\n event_spikes = (\n spikes[\n (spikes >= event - half_window)\n & (spikes <= event + half_window)\n ]\n - event\n )\n X.extend(list(event_spikes))\n y = np.ones_like(event_spikes) * n\n ax.scatter(event_spikes, y, s=s, color=color, **kwargs)\n ax.axvline(0, ls=\":\", color=\"k\", lw=0.75)\n\n # plot KDE\n if kde:\n raise NotImplementedError(\"KDE env setup incorrect\")\n # plot_kde(\n # ax=ax,\n # z=-len(events) / 4,\n # data=X,\n # normto=len(events) / 5,\n # color=blue_grey_dark,\n # kde_kwargs=dict(bw=bw, cut=0),\n # alpha=0.6,\n # invert=False,\n # )\n\n # set x axis properties\n ax.set(\n yticks=np.arange(0, len(events), yticks_step),\n ylabel=\"event number\",\n **get_window_ticks(window),\n )\n\n\ndef plot_avg_firing_rate_based_on_movement(\n tracking: 
pd.DataFrame, unit: pd.Series, ax: plt.axis\n):\n \"\"\"\n Plots a units average firing rate during different kinds of movements\n \"\"\"\n movements = (\n \"moving\",\n \"walking\",\n \"turning_left\",\n \"turning_right\",\n \"tone_on\",\n )\n for n, movement in enumerate(movements):\n left = unit.firing_rate[tracking[movement] == 0].mean()\n left_std = sem(unit.firing_rate[tracking[movement] == 0])\n right = unit.firing_rate[tracking[movement] == 1].mean()\n right_std = sem(unit.firing_rate[tracking[movement] == 1])\n\n plot_balls_errors(\n [n - 0.25, n + 0.25],\n [left, right],\n [left_std, right_std],\n ax,\n s=150,\n colors=colors.movements[movement],\n )\n ax.set(\n xticks=np.arange(len(movements)),\n xticklabels=movements,\n ylabel=\"avg firing rate\",\n )\n\n\n# ---------------------------------------------------------------------------- #\n# TRACKING #\n# ---------------------------------------------------------------------------- #\n# -------------------------------- linearized -------------------------------- #\ndef plot_tracking_linearized(\n tracking: Union[dict, pd.DataFrame],\n ax: plt.axis = None,\n plot: bool = True,\n **kwargs,\n):\n ax = ax or plt.subplots(figsize=(9, 9))[1]\n\n x = tracking[\"global_coord\"]\n y = np.linspace(1, 0, len(x))\n\n if not plot:\n ax.scatter(x, y, **kwargs)\n else:\n ax.plot(x, y, **kwargs)\n\n\ndef plot_bouts_1d(\n tracking: Union[dict, pd.DataFrame],\n bouts: pd.DataFrame,\n ax: plt.axis,\n direction: bool = None,\n zorder: int = 100,\n lw: float = 2,\n alpha: float = 1,\n **kwargs,\n):\n # select bouts by direction\n if direction is not None:\n bouts = bouts.loc[bouts.direction == direction]\n\n # get coords\n x = tracking[\"global_coord\"]\n y = np.linspace(1, 0, len(x))\n\n # plot\n for i, bout in bouts.iterrows():\n _x = x[bout.start_frame : bout.end_frame]\n _y = y[bout.start_frame : bout.end_frame]\n\n ax.plot(\n _x,\n _y,\n color=colors.bout_direction_colors[bout.direction],\n zorder=zorder,\n lw=lw,\n alpha=alpha,\n **kwargs,\n )\n ax.scatter(\n _x[0],\n _y[0],\n color=\"white\",\n lw=1,\n ec=colors.bout_direction_colors[bout.direction],\n s=30,\n zorder=101,\n alpha=0.85,\n **kwargs,\n )\n ax.scatter(\n _x[-1],\n _y[-1],\n color=[0.2, 0.2, 0.2],\n lw=1,\n ec=colors.bout_direction_colors[bout.direction],\n s=30,\n zorder=101,\n alpha=0.85,\n **kwargs,\n )\n\n\n# ------------------------------------ 2D ------------------------------------ #\ndef plot_tracking_xy(\n tracking: Union[dict, pd.DataFrame],\n key: str = None,\n skip_frames: int = 1,\n ax: plt.axis = None,\n plot: bool = False,\n **kwargs,\n):\n ax = ax or plt.subplots(figsize=(9, 9))[1]\n\n if key is None:\n if not plot:\n ax.scatter(\n tracking[\"x\"][::skip_frames],\n tracking[\"y\"][::skip_frames],\n color=[0.3, 0.3, 0.3],\n **kwargs,\n )\n else:\n ax.plot(\n tracking[\"x\"][::skip_frames],\n tracking[\"y\"][::skip_frames],\n **kwargs,\n )\n else:\n ax.scatter(\n tracking[\"x\"][::skip_frames],\n tracking[\"y\"][::skip_frames],\n c=tracking[key][::skip_frames],\n **kwargs,\n )\n\n if \"orientation\" in key or \"angle\" in key:\n # draw arrows to mark the angles/colors mapping\n angles = np.linspace(0, 2 * np.pi, 16)\n x = 2 * np.cos(angles[::-1] + np.pi / 2) + 25\n y = 2 * np.sin(angles + np.pi / 2) + 2\n ax.scatter(\n x, y, s=80, zorder=50, c=np.degrees(angles), alpha=1, **kwargs\n )\n\n\n# ---------------------------------------------------------------------------- #\n# BOUTS #\n# ---------------------------------------------------------------------------- #\n\n\ndef 
plot_bouts_2d(\n tracking: Union[dict, pd.DataFrame],\n bouts: pd.DataFrame,\n ax: plt.axis,\n direction: bool = None,\n zorder: int = 100,\n lw: float = 2,\n c: str = None,\n unit: pd.Series = None,\n **kwargs,\n):\n # select bouts by direction\n if direction is not None:\n bouts = bouts.loc[bouts.direction == direction]\n\n # plot\n for i, bout in bouts.iterrows():\n # prepare data\n x = tracking[\"x\"][bout.start_frame : bout.end_frame]\n y = tracking[\"y\"][bout.start_frame : bout.end_frame]\n\n if c is None:\n color = colors.bout_direction_colors[bout.direction]\n else:\n color = c\n\n # plot tracking\n ax.plot(x, y, color=color, zorder=zorder, lw=lw, **kwargs)\n\n if unit is None:\n # mark start and end\n ax.scatter(\n x[0],\n y[0],\n color=\"white\",\n lw=1,\n ec=color,\n s=25,\n zorder=101,\n **kwargs,\n )\n ax.scatter(\n x[-1],\n y[-1],\n color=[0.2, 0.2, 0.2],\n lw=1,\n ec=color,\n s=25,\n zorder=101,\n **kwargs,\n )\n else:\n # mark unit spikes\n spikes = unit.spikes[\n (unit.spikes > bout.start_frame)\n & (unit.spikes < bout.end_frame)\n ]\n ax.scatter(\n tracking[\"x\"][spikes],\n tracking[\"y\"][spikes],\n s=15,\n zorder=101,\n color=unit.color,\n )\n\n\ndef plot_bouts_aligned(\n tracking: Union[dict, pd.DataFrame],\n bouts: pd.DataFrame,\n ax: plt.axis,\n color: str = blue_grey,\n **kwargs,\n):\n \"\"\"\n Aligns bouts such that they start from the same position and with the same\n orientation.\n \"\"\"\n raise NotImplementedError(\"this doesnt work\")\n # keys = [\"x\", \"y\", \"speed\", \"orientation\", \"angular_velocity\"]\n\n # for i, bout in bouts.iterrows():\n # xy = np.vstack(tracking[[\"x\", \"y\"]].values).T[\n # bout.start_frame : bout.end_frame\n # ]\n\n # # center\n # xy -= xy[0]\n\n # # rotate\n # # R = coordinates.R(tracking['orientation'][bout.start_frame])\n # # xy = (R.T @ xy.T).T\n # # xy = xy[:20]\n\n # ax.plot(xy[:, 0], xy[:, 1], color=color, **kwargs)\n\n\ndef plot_bouts_x(\n tracking_data: pd.DataFrame,\n bouts: pd.DataFrame,\n ax: plt.axis,\n variable: str,\n color: str = blue_grey,\n **kwargs,\n):\n \"\"\"\n Plots a variable from the tracking data for each bout\n \"\"\"\n for i, bout in bouts.iterrows():\n ax.plot(\n tracking_data[variable][bout.start_frame : bout.end_frame],\n color=color,\n **kwargs,\n )\n\n\ndef plot_bouts_x_by_y(\n tracking_data: pd.DataFrame,\n bouts: pd.DataFrame,\n ax: plt.axis,\n x: str,\n y: str,\n color: str = blue_grey,\n **kwargs,\n):\n \"\"\"\n Plots two tracking variables one against the other for each bout\n \"\"\"\n for i, bout in bouts.iterrows():\n ax.plot(\n tracking_data[x][bout.start_frame : bout.end_frame],\n tracking_data[y][bout.start_frame : bout.end_frame],\n color=color,\n **kwargs,\n )\n\n\ndef plot_bouts_heatmap_2d(\n tracking_data: pd.DataFrame,\n bouts: pd.DataFrame,\n var: str,\n ax: plt.axis,\n **kwargs,\n):\n # stack the data for each bout\n data = dict(x=[], y=[], var=[])\n for i, bout in bouts.iterrows():\n data[\"x\"].extend(\n list(tracking_data.x[bout.start_frame : bout.end_frame])\n )\n data[\"y\"].extend(\n list(tracking_data.y[bout.start_frame : bout.end_frame])\n )\n data[\"var\"].extend(\n list(tracking_data[var][bout.start_frame : bout.end_frame])\n )\n\n # plot\n plot_heatmap_2d(pd.DataFrame(data), \"var\", ax, **kwargs)\n" ]
[ [ "numpy.sin", "numpy.ones_like", "pandas.DataFrame", "numpy.mean", "matplotlib.pyplot.subplots", "numpy.degrees", "numpy.std", "numpy.arange", "numpy.cos", "scipy.stats.sem", "numpy.hstack", "numpy.linspace", "numpy.vstack" ] ]
yuxiang-gao/crowdenv_training
[ "8d8f4f46c905155783195430175723a2a7755630" ]
[ "utils/exp_manager.py" ]
[ "import argparse\nimport os\nimport pickle as pkl\nimport time\nimport warnings\nfrom collections import OrderedDict\nfrom pprint import pprint\nfrom typing import Any, Callable, Dict, List, Optional, Tuple\n\nimport gym\nimport numpy as np\nimport optuna\nimport yaml\nfrom optuna.integration.skopt import SkoptSampler\nfrom optuna.pruners import BasePruner, MedianPruner, SuccessiveHalvingPruner\nfrom optuna.samplers import BaseSampler, RandomSampler, TPESampler\nfrom optuna.visualization import (\n plot_optimization_history,\n plot_param_importances,\n)\n\n# For using HER with GoalEnv\nfrom stable_baselines3 import HerReplayBuffer # noqa: F401\nfrom stable_baselines3.common.base_class import BaseAlgorithm\nfrom stable_baselines3.common.callbacks import (\n BaseCallback,\n CheckpointCallback,\n EvalCallback,\n)\nfrom stable_baselines3.common.env_util import make_vec_env\nfrom stable_baselines3.common.noise import (\n NormalActionNoise,\n OrnsteinUhlenbeckActionNoise,\n)\nfrom stable_baselines3.common.preprocessing import (\n is_image_space,\n is_image_space_channels_first,\n)\nfrom stable_baselines3.common.sb2_compat.rmsprop_tf_like import (\n RMSpropTFLike,\n) # noqa: F401\nfrom stable_baselines3.common.utils import constant_fn\nfrom stable_baselines3.common.vec_env import (\n DummyVecEnv,\n SubprocVecEnv,\n VecEnv,\n VecFrameStack,\n VecNormalize,\n VecTransposeImage,\n)\n\n# For custom activation fn\nfrom torch import nn as nn # noqa: F401\n\n# Register custom envs\nimport utils.import_envs # noqa: F401 pytype: disable=import-error\nfrom utils.callbacks import SaveVecNormalizeCallback, TrialEvalCallback\nfrom utils.hyperparams_opt import HYPERPARAMS_SAMPLER\nfrom utils.utils import (\n ALGOS,\n get_callback_list,\n get_latest_run_id,\n get_wrapper_class,\n linear_schedule,\n)\n\n# import custom feature extractor\nfrom hetero_crowd_nav.utils.custom_policy import (\n PairwiseAttentionFeaturesExtractor,\n)\n\n\nclass ExperimentManager(object):\n \"\"\"\n Experiment manager: read the hyperparameters,\n preprocess them, create the environment and the RL model.\n\n Please take a look at `train.py` to have the details for each argument.\n \"\"\"\n\n def __init__(\n self,\n args: argparse.Namespace,\n algo: str,\n env_id: str,\n log_folder: str,\n tensorboard_log: str = \"\",\n n_timesteps: int = 0,\n eval_freq: int = 10000,\n n_eval_episodes: int = 5,\n save_freq: int = -1,\n hyperparams: Optional[Dict[str, Any]] = None,\n env_kwargs: Optional[Dict[str, Any]] = None,\n trained_agent: str = \"\",\n optimize_hyperparameters: bool = False,\n storage: Optional[str] = None,\n study_name: Optional[str] = None,\n n_trials: int = 1,\n n_jobs: int = 1,\n sampler: str = \"tpe\",\n pruner: str = \"median\",\n optimization_log_path: Optional[str] = None,\n n_startup_trials: int = 0,\n n_evaluations: int = 1,\n truncate_last_trajectory: bool = False,\n uuid_str: str = \"\",\n seed: int = 0,\n log_interval: int = 0,\n save_replay_buffer: bool = False,\n verbose: int = 1,\n vec_env_type: str = \"dummy\",\n n_eval_envs: int = 1,\n no_optim_plots: bool = False,\n ):\n super(ExperimentManager, self).__init__()\n self.algo = algo\n self.env_id = env_id\n # Custom params\n self.custom_hyperparams = hyperparams\n self.env_kwargs = {} if env_kwargs is None else env_kwargs\n self.n_timesteps = n_timesteps\n self.normalize = False\n self.normalize_kwargs = {}\n self.env_wrapper = None\n self.frame_stack = None\n self.seed = seed\n self.optimization_log_path = optimization_log_path\n\n self.vec_env_class = 
{\"dummy\": DummyVecEnv, \"subproc\": SubprocVecEnv}[\n vec_env_type\n ]\n\n self.vec_env_kwargs = {}\n # self.vec_env_kwargs = {} if vec_env_type == \"dummy\" else {\"start_method\": \"fork\"}\n\n # Callbacks\n self.specified_callbacks = []\n self.callbacks = []\n self.save_freq = save_freq\n self.eval_freq = eval_freq\n self.n_eval_episodes = n_eval_episodes\n self.n_eval_envs = n_eval_envs\n\n self.n_envs = 1 # it will be updated when reading hyperparams\n self.n_actions = None # For DDPG/TD3 action noise objects\n self._hyperparams = {}\n\n self.trained_agent = trained_agent\n self.continue_training = trained_agent.endswith(\n \".zip\"\n ) and os.path.isfile(trained_agent)\n self.truncate_last_trajectory = truncate_last_trajectory\n\n self._is_atari = self.is_atari(env_id)\n # Hyperparameter optimization config\n self.optimize_hyperparameters = optimize_hyperparameters\n self.storage = storage\n self.study_name = study_name\n self.no_optim_plots = no_optim_plots\n # maximum number of trials for finding the best hyperparams\n self.n_trials = n_trials\n # number of parallel jobs when doing hyperparameter search\n self.n_jobs = n_jobs\n self.sampler = sampler\n self.pruner = pruner\n self.n_startup_trials = n_startup_trials\n self.n_evaluations = n_evaluations\n self.deterministic_eval = not self.is_atari(self.env_id)\n\n # Logging\n self.log_folder = log_folder\n self.tensorboard_log = (\n None\n if tensorboard_log == \"\"\n else os.path.join(tensorboard_log, env_id)\n )\n self.verbose = verbose\n self.args = args\n self.log_interval = log_interval\n self.save_replay_buffer = save_replay_buffer\n\n self.log_path = f\"{log_folder}/{self.algo}/\"\n self.save_path = os.path.join(\n self.log_path,\n f\"{self.env_id}_{get_latest_run_id(self.log_path, self.env_id) + 1}{uuid_str}\",\n )\n self.params_path = f\"{self.save_path}/{self.env_id}\"\n\n def setup_experiment(self) -> Optional[BaseAlgorithm]:\n \"\"\"\n Read hyperparameters, pre-process them (create schedules, wrappers, callbacks, action noise objects)\n create the environment and possibly the model.\n\n :return: the initialized RL model\n \"\"\"\n hyperparams, saved_hyperparams = self.read_hyperparameters()\n (\n hyperparams,\n self.env_wrapper,\n self.callbacks,\n ) = self._preprocess_hyperparams(hyperparams)\n\n self.create_log_folder()\n self.create_callbacks()\n\n # Create env to have access to action space for action noise\n env = self.create_envs(self.n_envs, no_log=False)\n\n self._hyperparams = self._preprocess_action_noise(\n hyperparams, saved_hyperparams, env\n )\n\n if self.continue_training:\n model = self._load_pretrained_agent(self._hyperparams, env)\n elif self.optimize_hyperparameters:\n return None\n else:\n # Train an agent from scratch\n model = ALGOS[self.algo](\n env=env,\n tensorboard_log=self.tensorboard_log,\n seed=self.seed,\n verbose=self.verbose,\n **self._hyperparams,\n )\n\n self._save_config(saved_hyperparams)\n return model\n\n def learn(self, model: BaseAlgorithm) -> None:\n \"\"\"\n :param model: an initialized RL model\n \"\"\"\n kwargs = {}\n if self.log_interval > -1:\n kwargs = {\"log_interval\": self.log_interval}\n\n if len(self.callbacks) > 0:\n kwargs[\"callback\"] = self.callbacks\n\n try:\n model.learn(self.n_timesteps, **kwargs)\n except KeyboardInterrupt:\n # this allows to save the model when interrupting training\n pass\n finally:\n # Release resources\n try:\n model.env.close()\n except EOFError:\n pass\n\n def save_trained_model(self, model: BaseAlgorithm) -> None:\n \"\"\"\n Save 
trained model optionally with its replay buffer\n and ``VecNormalize`` statistics\n\n :param model:\n \"\"\"\n print(f\"Saving to {self.save_path}\")\n model.save(f\"{self.save_path}/{self.env_id}\")\n\n if hasattr(model, \"save_replay_buffer\") and self.save_replay_buffer:\n print(\"Saving replay buffer\")\n model.save_replay_buffer(\n os.path.join(self.save_path, \"replay_buffer.pkl\")\n )\n\n if self.normalize:\n # Important: save the running average, for testing the agent we need that normalization\n model.get_vec_normalize_env().save(\n os.path.join(self.params_path, \"vecnormalize.pkl\")\n )\n\n def _save_config(self, saved_hyperparams: Dict[str, Any]) -> None:\n \"\"\"\n Save unprocessed hyperparameters, this can be use later\n to reproduce an experiment.\n\n :param saved_hyperparams:\n \"\"\"\n # Save hyperparams\n with open(os.path.join(self.params_path, \"config.yml\"), \"w\") as f:\n yaml.dump(saved_hyperparams, f)\n\n # save command line arguments\n with open(os.path.join(self.params_path, \"args.yml\"), \"w\") as f:\n ordered_args = OrderedDict(\n [\n (key, vars(self.args)[key])\n for key in sorted(vars(self.args).keys())\n ]\n )\n yaml.dump(ordered_args, f)\n\n print(f\"Log path: {self.save_path}\")\n\n def read_hyperparameters(self) -> Tuple[Dict[str, Any], Dict[str, Any]]:\n # Load hyperparameters from yaml file\n with open(f\"hyperparams/{self.algo}.yml\", \"r\") as f:\n hyperparams_dict = yaml.safe_load(f)\n if self.env_id in list(hyperparams_dict.keys()):\n hyperparams = hyperparams_dict[self.env_id]\n elif self._is_atari:\n hyperparams = hyperparams_dict[\"atari\"]\n else:\n raise ValueError(\n f\"Hyperparameters not found for {self.algo}-{self.env_id}\"\n )\n\n if self.custom_hyperparams is not None:\n # Overwrite hyperparams if needed\n hyperparams.update(self.custom_hyperparams)\n # Sort hyperparams that will be saved\n saved_hyperparams = OrderedDict(\n [(key, hyperparams[key]) for key in sorted(hyperparams.keys())]\n )\n\n if self.verbose > 0:\n print(\n \"Default hyperparameters for environment (ones being tuned will be overridden):\"\n )\n pprint(saved_hyperparams)\n\n return hyperparams, saved_hyperparams\n\n @staticmethod\n def _preprocess_schedules(hyperparams: Dict[str, Any]) -> Dict[str, Any]:\n # Create schedules\n for key in [\"learning_rate\", \"clip_range\", \"clip_range_vf\"]:\n if key not in hyperparams:\n continue\n if isinstance(hyperparams[key], str):\n schedule, initial_value = hyperparams[key].split(\"_\")\n initial_value = float(initial_value)\n hyperparams[key] = linear_schedule(initial_value)\n elif isinstance(hyperparams[key], (float, int)):\n # Negative value: ignore (ex: for clipping)\n if hyperparams[key] < 0:\n continue\n hyperparams[key] = constant_fn(float(hyperparams[key]))\n else:\n raise ValueError(f\"Invalid value for {key}: {hyperparams[key]}\")\n return hyperparams\n\n def _preprocess_normalization(\n self, hyperparams: Dict[str, Any]\n ) -> Dict[str, Any]:\n if \"normalize\" in hyperparams.keys():\n self.normalize = hyperparams[\"normalize\"]\n\n # Special case, instead of both normalizing\n # both observation and reward, we can normalize one of the two.\n # in that case `hyperparams[\"normalize\"]` is a string\n # that can be evaluated as python,\n # ex: \"dict(norm_obs=False, norm_reward=True)\"\n if isinstance(self.normalize, str):\n self.normalize_kwargs = eval(self.normalize)\n self.normalize = True\n\n # Use the same discount factor as for the algorithm\n if \"gamma\" in hyperparams:\n self.normalize_kwargs[\"gamma\"] = 
hyperparams[\"gamma\"]\n\n del hyperparams[\"normalize\"]\n return hyperparams\n\n def _preprocess_hyperparams(\n self, hyperparams: Dict[str, Any]\n ) -> Tuple[Dict[str, Any], Optional[Callable], List[BaseCallback]]:\n self.n_envs = hyperparams.get(\"n_envs\", 1)\n\n if self.verbose > 0:\n print(f\"Using {self.n_envs} environments\")\n\n # Convert schedule strings to objects\n hyperparams = self._preprocess_schedules(hyperparams)\n\n # Pre-process train_freq\n if \"train_freq\" in hyperparams and isinstance(\n hyperparams[\"train_freq\"], list\n ):\n hyperparams[\"train_freq\"] = tuple(hyperparams[\"train_freq\"])\n\n # Should we overwrite the number of timesteps?\n if self.n_timesteps > 0:\n if self.verbose:\n print(f\"Overwriting n_timesteps with n={self.n_timesteps}\")\n else:\n self.n_timesteps = int(hyperparams[\"n_timesteps\"])\n\n # Pre-process normalize config\n hyperparams = self._preprocess_normalization(hyperparams)\n\n # Pre-process policy/buffer keyword arguments\n # Convert to python object if needed\n for kwargs_key in {\n \"policy_kwargs\",\n \"replay_buffer_class\",\n \"replay_buffer_kwargs\",\n }:\n if kwargs_key in hyperparams.keys() and isinstance(\n hyperparams[kwargs_key], str\n ):\n hyperparams[kwargs_key] = eval(hyperparams[kwargs_key])\n\n # Delete keys so the dict can be pass to the model constructor\n if \"n_envs\" in hyperparams.keys():\n del hyperparams[\"n_envs\"]\n del hyperparams[\"n_timesteps\"]\n\n if \"frame_stack\" in hyperparams.keys():\n self.frame_stack = hyperparams[\"frame_stack\"]\n del hyperparams[\"frame_stack\"]\n\n # obtain a class object from a wrapper name string in hyperparams\n # and delete the entry\n env_wrapper = get_wrapper_class(hyperparams)\n if \"env_wrapper\" in hyperparams.keys():\n del hyperparams[\"env_wrapper\"]\n\n callbacks = get_callback_list(hyperparams)\n if \"callback\" in hyperparams.keys():\n self.specified_callbacks = hyperparams[\"callback\"]\n del hyperparams[\"callback\"]\n\n return hyperparams, env_wrapper, callbacks\n\n def _preprocess_action_noise(\n self,\n hyperparams: Dict[str, Any],\n saved_hyperparams: Dict[str, Any],\n env: VecEnv,\n ) -> Dict[str, Any]:\n # Parse noise string\n # Note: only off-policy algorithms are supported\n if hyperparams.get(\"noise_type\") is not None:\n noise_type = hyperparams[\"noise_type\"].strip()\n noise_std = hyperparams[\"noise_std\"]\n\n # Save for later (hyperparameter optimization)\n self.n_actions = env.action_space.shape[0]\n\n if \"normal\" in noise_type:\n hyperparams[\"action_noise\"] = NormalActionNoise(\n mean=np.zeros(self.n_actions),\n sigma=noise_std * np.ones(self.n_actions),\n )\n elif \"ornstein-uhlenbeck\" in noise_type:\n hyperparams[\"action_noise\"] = OrnsteinUhlenbeckActionNoise(\n mean=np.zeros(self.n_actions),\n sigma=noise_std * np.ones(self.n_actions),\n )\n else:\n raise RuntimeError(f'Unknown noise type \"{noise_type}\"')\n\n print(f\"Applying {noise_type} noise with std {noise_std}\")\n\n del hyperparams[\"noise_type\"]\n del hyperparams[\"noise_std\"]\n\n return hyperparams\n\n def create_log_folder(self):\n os.makedirs(self.params_path, exist_ok=True)\n\n def create_callbacks(self):\n\n if self.save_freq > 0:\n # Account for the number of parallel environments\n self.save_freq = max(self.save_freq // self.n_envs, 1)\n self.callbacks.append(\n CheckpointCallback(\n save_freq=self.save_freq,\n save_path=self.save_path,\n name_prefix=\"rl_model\",\n verbose=1,\n )\n )\n\n # Create test env if needed, do not normalize reward\n if self.eval_freq > 0 
and not self.optimize_hyperparameters:\n # Account for the number of parallel environments\n self.eval_freq = max(self.eval_freq // self.n_envs, 1)\n\n if self.verbose > 0:\n print(\"Creating test environment\")\n\n save_vec_normalize = SaveVecNormalizeCallback(\n save_freq=1, save_path=self.params_path\n )\n eval_callback = EvalCallback(\n self.create_envs(self.n_eval_envs, eval_env=True),\n callback_on_new_best=save_vec_normalize,\n best_model_save_path=self.save_path,\n n_eval_episodes=self.n_eval_episodes,\n log_path=self.save_path,\n eval_freq=self.eval_freq,\n deterministic=self.deterministic_eval,\n )\n\n self.callbacks.append(eval_callback)\n\n @staticmethod\n def is_atari(env_id: str) -> bool:\n entry_point = gym.envs.registry.env_specs[env_id].entry_point\n return \"AtariEnv\" in str(entry_point)\n\n @staticmethod\n def is_bullet(env_id: str) -> bool:\n entry_point = gym.envs.registry.env_specs[env_id].entry_point\n return \"pybullet_envs\" in str(entry_point)\n\n @staticmethod\n def is_robotics_env(env_id: str) -> bool:\n entry_point = gym.envs.registry.env_specs[env_id].entry_point\n return \"gym.envs.robotics\" in str(\n entry_point\n ) or \"panda_gym.envs\" in str(entry_point)\n\n def _maybe_normalize(self, env: VecEnv, eval_env: bool) -> VecEnv:\n \"\"\"\n Wrap the env into a VecNormalize wrapper if needed\n and load saved statistics when present.\n\n :param env:\n :param eval_env:\n :return:\n \"\"\"\n # Pretrained model, load normalization\n path_ = os.path.join(os.path.dirname(self.trained_agent), self.env_id)\n path_ = os.path.join(path_, \"vecnormalize.pkl\")\n\n if os.path.exists(path_):\n print(\"Loading saved VecNormalize stats\")\n env = VecNormalize.load(path_, env)\n # Deactivate training and reward normalization\n if eval_env:\n env.training = False\n env.norm_reward = False\n\n elif self.normalize:\n # Copy to avoid changing default values by reference\n local_normalize_kwargs = self.normalize_kwargs.copy()\n # Do not normalize reward for env used for evaluation\n if eval_env:\n if len(local_normalize_kwargs) > 0:\n local_normalize_kwargs[\"norm_reward\"] = False\n else:\n local_normalize_kwargs = {\"norm_reward\": False}\n\n if self.verbose > 0:\n if len(local_normalize_kwargs) > 0:\n print(f\"Normalization activated: {local_normalize_kwargs}\")\n else:\n print(\"Normalizing input and reward\")\n env = VecNormalize(env, **local_normalize_kwargs)\n return env\n\n def create_envs(\n self, n_envs: int, eval_env: bool = False, no_log: bool = False\n ) -> VecEnv:\n \"\"\"\n Create the environment and wrap it if necessary.\n\n :param n_envs:\n :param eval_env: Whether is it an environment used for evaluation or not\n :param no_log: Do not log training when doing hyperparameter optim\n (issue with writing the same file)\n :return: the vectorized environment, with appropriate wrappers\n \"\"\"\n # Do not log eval env (issue with writing the same file)\n log_dir = None if eval_env or no_log else self.save_path\n\n monitor_kwargs = {}\n # Special case for GoalEnvs: log success rate too\n if (\n \"Neck\" in self.env_id\n or self.is_robotics_env(self.env_id)\n or \"parking-v0\" in self.env_id\n ):\n monitor_kwargs = dict(info_keywords=(\"is_success\",))\n\n # On most env, SubprocVecEnv does not help and is quite memory hungry\n # therefore we use DummyVecEnv by default\n env = make_vec_env(\n env_id=self.env_id,\n n_envs=n_envs,\n seed=self.seed,\n env_kwargs=self.env_kwargs,\n monitor_dir=log_dir,\n wrapper_class=self.env_wrapper,\n vec_env_cls=self.vec_env_class,\n 
vec_env_kwargs=self.vec_env_kwargs,\n monitor_kwargs=monitor_kwargs,\n )\n\n # Wrap the env into a VecNormalize wrapper if needed\n # and load saved statistics when present\n env = self._maybe_normalize(env, eval_env)\n\n # Optional Frame-stacking\n if self.frame_stack is not None:\n n_stack = self.frame_stack\n env = VecFrameStack(env, n_stack)\n if self.verbose > 0:\n print(f\"Stacking {n_stack} frames\")\n\n # Wrap if needed to re-order channels\n # (switch from channel last to channel first convention)\n if is_image_space(\n env.observation_space\n ) and not is_image_space_channels_first(env.observation_space):\n if self.verbose > 0:\n print(\"Wrapping into a VecTransposeImage\")\n env = VecTransposeImage(env)\n\n return env\n\n def _load_pretrained_agent(\n self, hyperparams: Dict[str, Any], env: VecEnv\n ) -> BaseAlgorithm:\n # Continue training\n print(\"Loading pretrained agent\")\n # Policy should not be changed\n del hyperparams[\"policy\"]\n\n if \"policy_kwargs\" in hyperparams.keys():\n del hyperparams[\"policy_kwargs\"]\n\n model = ALGOS[self.algo].load(\n self.trained_agent,\n env=env,\n seed=self.seed,\n tensorboard_log=self.tensorboard_log,\n verbose=self.verbose,\n **hyperparams,\n )\n\n replay_buffer_path = os.path.join(\n os.path.dirname(self.trained_agent), \"replay_buffer.pkl\"\n )\n\n if os.path.exists(replay_buffer_path):\n print(\"Loading replay buffer\")\n # `truncate_last_traj` will be taken into account only if we use HER replay buffer\n model.load_replay_buffer(\n replay_buffer_path,\n truncate_last_traj=self.truncate_last_trajectory,\n )\n return model\n\n def _create_sampler(self, sampler_method: str) -> BaseSampler:\n # n_warmup_steps: Disable pruner until the trial reaches the given number of step.\n if sampler_method == \"random\":\n sampler = RandomSampler(seed=self.seed)\n elif sampler_method == \"tpe\":\n # TODO: try with multivariate=True\n sampler = TPESampler(\n n_startup_trials=self.n_startup_trials, seed=self.seed\n )\n elif sampler_method == \"skopt\":\n # cf https://scikit-optimize.github.io/#skopt.Optimizer\n # GP: gaussian process\n # Gradient boosted regression: GBRT\n sampler = SkoptSampler(\n skopt_kwargs={\"base_estimator\": \"GP\", \"acq_func\": \"gp_hedge\"}\n )\n else:\n raise ValueError(f\"Unknown sampler: {sampler_method}\")\n return sampler\n\n def _create_pruner(self, pruner_method: str) -> BasePruner:\n if pruner_method == \"halving\":\n pruner = SuccessiveHalvingPruner(\n min_resource=1, reduction_factor=4, min_early_stopping_rate=0\n )\n elif pruner_method == \"median\":\n pruner = MedianPruner(\n n_startup_trials=self.n_startup_trials,\n n_warmup_steps=self.n_evaluations // 3,\n )\n elif pruner_method == \"none\":\n # Do not prune\n pruner = MedianPruner(\n n_startup_trials=self.n_trials,\n n_warmup_steps=self.n_evaluations,\n )\n else:\n raise ValueError(f\"Unknown pruner: {pruner_method}\")\n return pruner\n\n def objective(self, trial: optuna.Trial) -> float:\n\n kwargs = self._hyperparams.copy()\n\n # Hack to use DDPG/TD3 noise sampler\n trial.n_actions = self.n_actions\n # Hack when using HerReplayBuffer\n trial.using_her_replay_buffer = (\n kwargs.get(\"replay_buffer_class\") == HerReplayBuffer\n )\n if trial.using_her_replay_buffer:\n trial.her_kwargs = kwargs.get(\"replay_buffer_kwargs\", {})\n # Sample candidate hyperparameters\n sampled_hyperparams = HYPERPARAMS_SAMPLER[self.algo](trial)\n kwargs.update(sampled_hyperparams)\n\n model = ALGOS[self.algo](\n env=self.create_envs(self.n_envs, no_log=True),\n 
tensorboard_log=None,\n # We do not seed the trial\n seed=None,\n verbose=0,\n **kwargs,\n )\n\n model.trial = trial\n\n eval_env = self.create_envs(n_envs=self.n_eval_envs, eval_env=True)\n\n optuna_eval_freq = int(self.n_timesteps / self.n_evaluations)\n # Account for parallel envs\n optuna_eval_freq = max(optuna_eval_freq // model.get_env().num_envs, 1)\n # Use non-deterministic eval for Atari\n path = None\n if self.optimization_log_path is not None:\n path = os.path.join(\n self.optimization_log_path, f\"trial_{str(trial.number)}\"\n )\n callbacks = get_callback_list({\"callback\": self.specified_callbacks})\n eval_callback = TrialEvalCallback(\n eval_env,\n trial,\n best_model_save_path=path,\n log_path=path,\n n_eval_episodes=self.n_eval_episodes,\n eval_freq=optuna_eval_freq,\n deterministic=self.deterministic_eval,\n )\n callbacks.append(eval_callback)\n\n try:\n model.learn(self.n_timesteps, callback=callbacks)\n # Free memory\n model.env.close()\n eval_env.close()\n except (AssertionError, ValueError) as e:\n # Sometimes, random hyperparams can generate NaN\n # Free memory\n model.env.close()\n eval_env.close()\n # Prune hyperparams that generate NaNs\n print(e)\n print(\"============\")\n print(\"Sampled hyperparams:\")\n pprint(sampled_hyperparams)\n raise optuna.exceptions.TrialPruned()\n is_pruned = eval_callback.is_pruned\n reward = eval_callback.last_mean_reward\n\n del model.env, eval_env\n del model\n\n if is_pruned:\n raise optuna.exceptions.TrialPruned()\n\n return reward\n\n def hyperparameters_optimization(self) -> None:\n\n if self.verbose > 0:\n print(\"Optimizing hyperparameters\")\n\n if self.storage is not None and self.study_name is None:\n warnings.warn(\n f\"You passed a remote storage: {self.storage} but no `--study-name`.\"\n \"The study name will be generated by Optuna, make sure to re-use the same study name \"\n \"when you want to do distributed hyperparameter optimization.\"\n )\n\n if self.tensorboard_log is not None:\n warnings.warn(\n \"Tensorboard log is deactivated when running hyperparameter optimization\"\n )\n self.tensorboard_log = None\n\n # TODO: eval each hyperparams several times to account for noisy evaluation\n sampler = self._create_sampler(self.sampler)\n pruner = self._create_pruner(self.pruner)\n\n if self.verbose > 0:\n print(f\"Sampler: {self.sampler} - Pruner: {self.pruner}\")\n\n study = optuna.create_study(\n sampler=sampler,\n pruner=pruner,\n storage=self.storage,\n study_name=self.study_name,\n load_if_exists=True,\n direction=\"maximize\",\n )\n\n try:\n study.optimize(\n self.objective,\n n_trials=self.n_trials,\n n_jobs=self.n_jobs,\n show_progress_bar=True,\n )\n except KeyboardInterrupt:\n pass\n\n print(\"Number of finished trials: \", len(study.trials))\n\n print(\"Best trial:\")\n trial = study.best_trial\n\n print(\"Value: \", trial.value)\n\n print(\"Params: \")\n for key, value in trial.params.items():\n print(f\" {key}: {value}\")\n\n report_name = (\n f\"report_{self.env_id}_{self.n_trials}-trials-{self.n_timesteps}\"\n f\"-{self.sampler}-{self.pruner}_{int(time.time())}\"\n )\n\n log_path = os.path.join(self.log_folder, self.algo, report_name)\n\n if self.verbose:\n print(f\"Writing report to {log_path}\")\n\n # Write report\n os.makedirs(os.path.dirname(log_path), exist_ok=True)\n study.trials_dataframe().to_csv(f\"{log_path}.csv\")\n\n # Save python object to inspect/re-use it later\n with open(f\"{log_path}.pkl\", \"wb+\") as f:\n pkl.dump(study, f)\n\n # Skip plots\n if self.no_optim_plots:\n return\n\n # 
Plot optimization result\n try:\n fig1 = plot_optimization_history(study)\n fig2 = plot_param_importances(study)\n\n fig1.show()\n fig2.show()\n except (ValueError, ImportError, RuntimeError):\n pass\n" ]
[ [ "numpy.ones", "numpy.zeros" ] ]
jgh9094/Benchmarks
[ "508598ab0354acf0655f6109d347a86b71e11faf" ]
[ "Pilot3/JGH/Data/Code/gather.py" ]
[ "'''\r\nCreated by: Jose Guadalupe Hernandez\r\nEmail: [email protected]\r\n\r\nexport PATH=$HOME/anaconda3/bin:$PATH\r\n\r\nPython file will aggregate\r\n'''\r\n\r\n# general python imports\r\nimport numpy as np\r\nimport argparse\r\nimport pickle as pk\r\nimport psutil\r\n\r\nimport os.path\r\nfrom os import path\r\n\r\nimport pandas as pd\r\n\r\nfrom sklearn.metrics import f1_score\r\n\r\n# global variables for data storing\r\ndata = {'seed': [], 'Beh_Mic': [], 'Beh_Mac': [], 'His_Mic': [], 'His_Mac': [], 'Lat_Mic': [],\r\n 'Lat_Mac': [], 'Site_Mic': [], 'Site_Mac': [], 'Subs_Mic': [], 'Subs_Mac': [], 'model': []}\r\nheader = ['seed', 'Beh_Mic', 'Beh_Mac', 'His_Mic', 'His_Mac', 'Lat_Mic', 'Lat_Mac', 'Site_Mic',\r\n 'Site_Mac', 'Subs_Mic', 'Subs_Mac', 'model']\r\n\r\nPvals = ['P-1', 'P-2', 'P-5']\r\n\r\ntemp = [1,2,5,7,10,13,15,17,20,22,25,30]\r\ntemp01 = [0.1,0.2,0.5,0.7,0.9,1.0,1.1,1.2,1.5,1.7,1.9,2.0]\r\n\r\ndef GetModelType(c, n):\r\n if c == 0:\r\n return 'MTModel-'+ str(n) +'_Rank-'\r\n elif c == 1 or c == 2:\r\n return 'MicMacTest_R.csv'\r\n elif c == 3:\r\n return 'MTDistilled-'+ str(n) +'-'\r\n elif c == 4:\r\n return 'MTDistilledN-'+ str(n) +'-'\r\n elif c == 5:\r\n return 'MTDistilledT-'+ str(n) +'-'\r\n else:\r\n print('UNKNOWN MODEL TYPE')\r\n\r\n# data from 276 models\r\ndef Get276(args):\r\n # iterate through all the models and gather the data\r\n for r in range(args.models):\r\n # load data\r\n print(args.data_dir + GetModelType(args.model, args.cfg) + str(r) + '/MicMacTest_R' + str(r) + '.csv')\r\n file = args.data_dir + GetModelType(args.model, args.cfg) + str(r) + '/MicMacTest_R' + str(r) + '.csv'\r\n df = pd.read_csv(file, index_col=False)\r\n # store and update data\r\n x = df.iloc[1].to_list()\r\n x[0] = r\r\n x.append(args.name)\r\n # store data\r\n for i in range(len(header)):\r\n data[header[i]].append(x[i])\r\n\r\n print(data)\r\n pd.DataFrame(data).to_csv(args.dump_dir + args.name + '.csv', index = False)\r\n\r\n# Random subsample of 276 models\r\ndef GetP(args):\r\n # iterate through all the models and gather the data\r\n for r in range(len(Pvals)):\r\n # load data\r\n print(args.data_dir + Pvals[r] + '/' + GetModelType(args.model, args.cfg))\r\n file = args.data_dir + Pvals[r] + '/' + GetModelType(args.model, args.cfg)\r\n df = pd.read_csv(file, index_col=False)\r\n # store and update data\r\n x = df.iloc[1].to_list()\r\n x[0] = r\r\n x.append(Pvals[r])\r\n # store data\r\n for i in range(len(header)):\r\n data[header[i]].append(x[i])\r\n\r\n print(data)\r\n pd.DataFrame(data).to_csv(args.dump_dir + args.name + '.csv', index = False)\r\n\r\n# 276 aggregated data results\r\ndef GetA(args):\r\n # load data\r\n print(args.data_dir + GetModelType(args.model, args.cfg))\r\n file = args.data_dir + GetModelType(args.model, args.cfg)\r\n df = pd.read_csv(file, index_col=False)\r\n # store and update data\r\n x = df.iloc[1].to_list()\r\n x[0] = 0\r\n x.append(args.name)\r\n # store data\r\n for i in range(len(header)):\r\n data[header[i]].append(x[i])\r\n\r\n print(data)\r\n pd.DataFrame(data).to_csv(args.dump_dir + args.name + '.csv', index = False)\r\n\r\n# distilled model from 276 aggregated models\r\ndef GetDisAgg(args):\r\n for i in range(args.models):\r\n file = args.data_dir + GetModelType(args.model, args.cfg) + str(i) + '/' + 'MicMacTest_R'+ str(i) +'.csv'\r\n print (file +\"exists:\"+str(path.exists(file)))\r\n\r\n if not path.exists(file):\r\n continue\r\n\r\n df = pd.read_csv(file, index_col=False)\r\n # store and update data\r\n x = 
df.iloc[1].to_list()\r\n x[0] = i\r\n x.append('t-'+str(temp[i]))\r\n # store data\r\n for i in range(len(header)):\r\n data[header[i]].append(x[i])\r\n\r\n print(data)\r\n pd.DataFrame(data).to_csv(args.dump_dir + args.name + '.csv', index = False)\r\n\r\ndef GetN(args):\r\n # iterate through all the models and gather the data\r\n for i in range(args.models):\r\n file = args.data_dir + GetModelType(args.model, args.cfg) + str(i) + '/' + 'MicMacTest_R'+ str(i) +'.csv'\r\n print (file +\"exists:\"+str(path.exists(file)))\r\n\r\n if not path.exists(file):\r\n continue\r\n\r\n df = pd.read_csv(file, index_col=False)\r\n # store and update data\r\n x = df.iloc[1].to_list()\r\n x[0] = i\r\n x.append('t-'+str(temp[i]))\r\n # store data\r\n for i in range(len(header)):\r\n data[header[i]].append(x[i])\r\n\r\n print(data)\r\n pd.DataFrame(data).to_csv(args.dump_dir + args.name + '.csv', index = False)\r\n\r\ndef Get01(args):\r\n # iterate through all the models and gather the data\r\n for i in range(args.models):\r\n file = args.data_dir + GetModelType(args.model, args.cfg) + str(i) + '/' + 'MicMacTest_R'+ str(i) +'.csv'\r\n print (file +\"exists:\"+str(path.exists(file)))\r\n\r\n if not path.exists(file):\r\n continue\r\n\r\n df = pd.read_csv(file, index_col=False)\r\n # store and update data\r\n x = df.iloc[1].to_list()\r\n x[0] = i\r\n x.append('t-'+str(temp01[i]))\r\n # store data\r\n for i in range(len(header)):\r\n data[header[i]].append(x[i])\r\n\r\n print(data)\r\n pd.DataFrame(data).to_csv(args.dump_dir + args.name + '.csv', index = False)\r\n\r\ndef main():\r\n # generate and get arguments\r\n parser = argparse.ArgumentParser(description='Process arguments for model training.')\r\n parser.add_argument('data_dir', type=str, help='Where is the data?')\r\n parser.add_argument('dump_dir', type=str, help='Where are we dumping the output?')\r\n parser.add_argument('model', type=int, help='0: 276 models, 1: Partial % models, 2: aggregated, 3: distilled, 4: basic distilled ')\r\n parser.add_argument('models', type=int, help='How many models')\r\n parser.add_argument('name', type=str, help='Name of file to output')\r\n parser.add_argument('cfg', type=int, help='Configuration we used')\r\n\r\n # parse all the argument\r\n args = parser.parse_args()\r\n print(args)\r\n\r\n # what are the inputs\r\n print('data_dir:', args.data_dir, flush= True)\r\n\r\n if args.model == 0:\r\n Get276(args)\r\n elif args.model == 1:\r\n GetP(args)\r\n elif args.model == 2:\r\n GetA(args)\r\n elif args.model == 3:\r\n GetDisAgg(args)\r\n elif args.model == 4:\r\n GetN(args)\r\n elif args.model == 5:\r\n Get01(args)\r\n else:\r\n print('UNKNOWN')\r\n exit(-1)\r\n\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n" ]
[ [ "pandas.DataFrame", "pandas.read_csv" ] ]
cutz-j/DL
[ "de425477979cd41ecd82e651a9add8b1ba9ee650" ]
[ "DLp/Assignment1-1[reg].py" ]
[ "# import packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec\nfrom reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters\nimport sklearn\nimport sklearn.datasets\nimport scipy.io\nfrom testCases import *\n\nnp.random.seed(77)\n\nplt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\ntrain_X, train_Y, test_X, test_Y = load_2D_dataset()\n\ndef model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):\n \"\"\"\n Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.\n \n Arguments:\n X -- input data, of shape (input size, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)\n learning_rate -- learning rate of the optimization\n num_iterations -- number of iterations of the optimization loop\n print_cost -- If True, print the cost every 10000 iterations\n lambd -- regularization hyperparameter, scalar\n keep_prob - probability of keeping a neuron active during drop-out, scalar.\n \n Returns:\n parameters -- parameters learned by the model. They can then be used to predict.\n \"\"\"\n \n grads = {}\n costs = [] # to keep track of the cost\n m = X.shape[1] # number of examples\n layers_dims = [X.shape[0], 20, 3, 1]\n \n # Initialize parameters dictionary.\n parameters = initialize_parameters(layers_dims)\n\n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n\n # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.\n if keep_prob == 1:\n a3, cache = forward_propagation(X, parameters)\n elif keep_prob < 1:\n a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)\n \n # Cost function\n if lambd == 0:\n cost = compute_cost(a3, Y)\n else:\n cost = compute_cost_with_regularization(a3, Y, parameters, lambd)\n \n # Backward propagation.\n assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout, \n # but this assignment will only explore one at a time\n if lambd == 0 and keep_prob == 1:\n grads = backward_propagation(X, Y, cache)\n elif lambd != 0:\n grads = backward_propagation_with_regularization(X, Y, cache, lambd)\n elif keep_prob < 1:\n grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)\n \n # Update parameters.\n parameters = update_parameters(parameters, grads, learning_rate)\n \n # Print the loss every 10000 iterations\n if print_cost and i % 10000 == 0:\n print(\"Cost after iteration {}: {}\".format(i, cost))\n if print_cost and i % 1000 == 0:\n costs.append(cost)\n \n # plot the cost\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('iterations (x1,000)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n \n return parameters\n\n# GRADED FUNCTION: compute_cost_with_regularization\n\nparameters = model(train_X, train_Y)\nprint (\"On the training set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)\n\nplt.title(\"Model without regularization\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)\n\ndef compute_cost_with_regularization(A3, Y, 
parameters, lambd):\n \"\"\"\n Implement the cost function with L2 regularization. See formula (2) above.\n \n Arguments:\n A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n parameters -- python dictionary containing parameters of the model\n \n Returns:\n cost - value of the regularized loss function (formula (2))\n \"\"\"\n m = Y.shape[1]\n W1 = parameters[\"W1\"]\n W2 = parameters[\"W2\"]\n W3 = parameters[\"W3\"]\n \n cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost\n \n ### START CODE HERE ### (approx. 1 line)\n \n L2_reg = (1. / m) * (lambd / 2) * (np.sum(np.square(W1))\\\n + np.sum(np.square(W2)) + np.sum(np.square(W3)))\n cost = cross_entropy_cost + L2_reg\n \n return cost\n\nA3, Y_assess, parameters = compute_cost_with_regularization_test_case()\n\nprint(\"cost = \" + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))\n\n\n# GRADED FUNCTION: backward_propagation_with_regularization\n\ndef backward_propagation_with_regularization(X, Y, cache, lambd):\n \"\"\"\n Implements the backward propagation of our baseline model to which we added an L2 regularization.\n \n Arguments:\n X -- input dataset, of shape (input size, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n cache -- cache output from forward_propagation()\n lambd -- regularization hyperparameter, scalar\n \n Returns:\n gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables\n \"\"\"\n \n m = X.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n \n ### START CODE HERE ### (approx. 1 line)\n dW3 = 1./m * (np.dot(dZ3, A2.T) + lambd * W3)\n ### END CODE HERE ###\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n \n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n ### START CODE HERE ### (approx. 1 line)\n dW2 = 1./m * (np.dot(dZ2, A1.T) + lambd * W2 )\n ### END CODE HERE ###\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n ### START CODE HERE ### (approx. 
1 line)\n dW1 = 1./m * (np.dot(dZ1, X.T) + lambd * W1 )\n ### END CODE HERE ###\n db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1, \n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients\n\nX_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()\n\ngrads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"dW2 = \"+ str(grads[\"dW2\"]))\nprint (\"dW3 = \"+ str(grads[\"dW3\"]))\n\nparameters = model(train_X, train_Y, lambd = 0.7)\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)\n\nplt.title(\"Model with L2-regularization\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)\n\n# GRADED FUNCTION: forward_propagation_with_dropout\ndef forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):\n \"\"\"\n Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.\n \n Arguments:\n X -- input dataset, of shape (2, number of examples)\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n W1 -- weight matrix of shape (20, 2)\n b1 -- bias vector of shape (20, 1)\n W2 -- weight matrix of shape (3, 20)\n b2 -- bias vector of shape (3, 1)\n W3 -- weight matrix of shape (1, 3)\n b3 -- bias vector of shape (1, 1)\n keep_prob - probability of keeping a neuron active during drop-out, scalar\n \n Returns:\n A3 -- last activation value, output of the forward propagation, of shape (1, number of examples)\n cache -- tuple, information stored for computing the backward propagation\n \"\"\"\n np.random.seed(1)\n W1 = parameters['W1']\n b1 = parameters['b1']\n W2 = parameters['W2']\n b2 = parameters['b2']\n W3 = parameters['W3']\n b3 = parameters['b3']\n \n z1 = np.matmul(W1, X) + b1 # shape (20, m)\n a1 = relu(z1)\n \n # 1) d[1] with same shape as a[1] np.random.rand() #\n d1 = np.random.rand(a1.shape[0], a1.shape[1])\n # 2) keep a neuron iff its entry of d[1] is below keep_prob #\n d1 = d1 < keep_prob\n # 3) element wise product #\n a1 *= d1\n # 4) divide by keep_prob (inverted dropout preserves the expected activation scale) #\n a1 /= keep_prob\n \n z2 = np.matmul(W2, a1) + b2 # shape (3, m)\n a2 = relu(z2)\n d2 = np.random.rand(a2.shape[0], a2.shape[1])\n d2 = d2 < keep_prob\n a2 *= d2\n a2 /= keep_prob\n \n z3 = np.matmul(W3, a2) + b3 # shape (1, m)\n a3 = sigmoid(z3)\n \n cache = (z1, d1, a1, W1, b1, z2, d2, a2, W2, b2, z3, a3, W3, b3)\n return a3, cache\n\nX_assess, parameters = forward_propagation_with_dropout_test_case()\n\nA3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)\nprint (\"A3 = \" + str(A3)) \n \n# GRADED FUNCTION: backward_propagation_with_dropout\n\ndef backward_propagation_with_dropout(X, Y, cache, keep_prob):\n \"\"\"\n Implements the backward propagation of our baseline model to which we added dropout.\n \n Arguments:\n X -- input dataset, of shape (2, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n cache -- cache output from forward_propagation_with_dropout()\n keep_prob - probability of keeping a neuron active during drop-out, scalar\n \n Returns:\n gradients -- A dictionary with the gradients with respect to each parameter, 
activation and pre-activation variables\n \"\"\"\n m = X.shape[1] # number of examples; examples are stored as columns, so axis 1, not axis 0\n (z1, d1, a1, W1, b1, z2, d2, a2, W2, b2, z3, a3, W3, b3) = cache\n dz3 = a3 - Y\n dW3 = 1./m * np.dot(dz3, a2.T) # shape (1, m) x (m, 3) = (1, 3)\n db3 = 1./m * np.sum(dz3, axis=1, keepdims=True)\n \n da2 = np.dot(W3.T, dz3)\n # dropout backprop: re-apply the mask d2 and the 1/keep_prob scaling from the forward pass #\n da2 *= d2\n da2 /= keep_prob\n \n dz2 = np.multiply(da2, np.int64(a2 > 0))\n dW2 = 1./m * np.dot(dz2, a1.T)\n db2 = 1./m * np.sum(dz2, axis=1, keepdims=True)\n \n da1 = np.dot(W2.T, dz2)\n # same mask and scaling for layer 1 #\n da1 *= d1\n da1 /= keep_prob\n dz1 = np.multiply(da1, np.int64(a1 > 0))\n dW1 = 1./m * np.dot(dz1, X.T)\n db1 = 1./m * np.sum(dz1, axis=1, keepdims=True)\n \n gradients = {\"dZ3\": dz3, \"dW3\": dW3, \"db3\": db3,\"dA2\": da2,\n \"dZ2\": dz2, \"dW2\": dW2, \"db2\": db2, \"dA1\": da1, \n \"dZ1\": dz1, \"dW1\": dW1, \"db1\": db1}\n return gradients\n \nX_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()\n\ngradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)\n\nprint (\"dA1 = \" + str(gradients[\"dA1\"]))\nprint (\"dA2 = \" + str(gradients[\"dA2\"])) \n \n \nparameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.003)\n\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters) \n\n\nplt.title(\"Model with dropout\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)\n\n\n" ]
[ [ "numpy.square", "numpy.dot", "numpy.random.rand", "numpy.matmul", "numpy.random.seed", "matplotlib.pyplot.xlabel", "numpy.sum", "matplotlib.pyplot.title", "matplotlib.pyplot.plot", "numpy.int64", "matplotlib.pyplot.ylabel", "matplotlib.pyplot.show", "matplotlib.pyplot.gca" ] ]
yashgarg2107/Index-structures-for-k-NN-search
[ "25efa4582de4a933d64ee0ff7a600ce5887ed322" ]
[ "Scripts/prec_100_eval.py" ]
[ "import numpy as np \n\nkdt = np.loadtxt(\"results_kdt.txt\",delimiter=' ')\nlsh = np.loadtxt(\"results_lsh.txt\",delimiter=' ')\nseq = np.loadtxt(\"results_seq.txt\",delimiter=' ')\n\ncount=0\nfor i in range(100):\n\tfor j in range(100):\n\t\tfor k in range(100):\n\t\t\tif(np.array_equal(seq[i*100+j],lsh[i*100+k])):\n\t\t\t\tcount+=1 \n\t\t\t\tbreak\n\nprint(count/100)" ]
[ [ "numpy.loadtxt", "numpy.array_equal" ] ]
sebastianmattar/LedFx
[ "c087862666c4cbc1eca9e9e1f741ed741c5bae3a" ]
[ "ledfx/devices/FXMatrix.py" ]
[ "from ledfx.devices import Device\nimport logging\nimport voluptuous as vol\nimport numpy as np\nimport socket\n\n_LOGGER = logging.getLogger(__name__)\n\nclass FXMatrix(Device):\n \"\"\"FXMatrix device support\"\"\"\n\n CONFIG_SCHEMA = vol.Schema({\n vol.Required('ip_address', description='Hostname or IP address of the device'): str,\n vol.Required('port', description='Port for the UDP device'): vol.All(vol.Coerce(int), vol.Range(min=1, max=65535)),\n })\n\n def activate(self):\n self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n super().activate()\n\n def deactivate(self):\n super().deactivate()\n self._sock = None\n\n def flush(self, data):\n udpData = bytearray()\n byteData = data.astype(np.dtype('B'))\n # Append all of the pixel data\n udpData.extend(byteData.flatten().tobytes())\n\n self._sock.sendto(bytes(udpData), (self._config['ip_address'], self._config['port']))" ]
[ [ "numpy.dtype" ] ]
igormcsouza/serialize-images
[ "2fc567a7eb380c7cb4c92463cbeae663393361cb" ]
[ "tests/test_serialize_images.py" ]
[ "import numpy\n\nfrom serialize_images import __version__\nfrom serialize_images.core import MainArraySerializable\n\n\ndef test_version():\n assert __version__ == '0.1.1'\n\ndef test_core_encode():\n result = MainArraySerializable.encode_to_string(numpy.ndarray([0]))\n print(result)\n assert type(result) == str\n\ndef test_core_decode():\n result = MainArraySerializable.decode_to_ndarray(\n \"\\x93NUMPY\\x01\\x00v\\x00{'descr': '<f8', 'fortran_order': False, 'shape': (0,), } \\n\"\n )\n assert type(result) == numpy.ndarray\n" ]
[ [ "numpy.ndarray" ] ]
wr1/b3p
[ "3c77a2fda329fa539393188e5045b5fdc91cd8b5" ]
[ "b3p/blade.py" ]
[ "from b3p import splining\nfrom b3p import loft_utils\nfrom b3p import blade_section\n\n\nimport numpy as np\nfrom copy import deepcopy as dc\nfrom matplotlib import pyplot as plt\nimport pickle\nimport math\nimport vtk\n\n\nclass blade:\n def __init__(\n self,\n chord,\n thickness,\n twist,\n dx,\n dy,\n z,\n airfoils,\n np_spanwise=100,\n chordwise_sampling=[],\n offset_optimal=True,\n interpolate_method=1,\n flatten_lw=True,\n barrel_length=0.0,\n offset_clamp_points=[0.32, 0.55, 0.7],\n ):\n \"\"\"\n sequence of what needs to be done:\n\n #. load in airfoils\n\n #. resample airfoils to fixed number of points around circumference\n using spline interpolation\n\n #. sample thickness, chord, twist and centerline at required z positions\n\n #. create linear interpolations of the airfoil table at sampled thickness\n values\n\n #. scale, rotate, offset sections to exist on the\n\n Args:\n chord : chord distribution [(r,chord),..]\n\n thickness : thickness distribution [(r,t),..] --> relative\n\n twist : twist distribution [(r,twist),..]\n\n centerline : centerline spline [(x,y,z),..]\n\n airfoils : table of airfoils (key is thickness) {.35:'naca35'}\n\n np_chordwise (int) : number of chordwise points\n\n np_spanwise (int): number of spanwise points\n\n chordwise_sampling (list) : chordwise list of t coordinates\n (overrides np_chordwise)\n\n offset_optimal (bool): flag indicating whether to run optimal\n offsetting of the sparcap\n\n interpolate_method (int): Type of interpolation (1 being linear, 2\n KochanekSpline , 3 CardinalSpline, 4 SmoothCurve )\n\n flatten_lw (bool): flag indicating whether to run LW side flattening\n\n barrel_length (float): length of root that stays cylindrical\n\n offset_clamp_points (list[float]): list of points closest to which\n the optimal offsetting points are taken\n\n \"\"\"\n self.np_spanwise = np_spanwise\n self.barrel_length = barrel_length\n self.np_chordwise = len(chordwise_sampling)\n\n self._load_airfoils(airfoils, chordwise_sampling)\n self._interpolate_planform(\n chord, thickness, twist, dx, dy, z, flatten_lw=flatten_lw\n )\n self._interpolate_airfoils(\n offset_optimal=offset_optimal,\n interpolate_method=interpolate_method,\n offset_clamp_points=offset_clamp_points,\n )\n\n def to_table(self, prefix=\"prebend_out\", x=[]):\n if x == []:\n x = self.dy[0]\n\n f = open(\"%s.csv\" % prefix, \"w\")\n f.write(\n \"relative_r;z;prebend;chord;relative_thickness;absolute_thickness;twist;dx\\n\"\n )\n\n z, dy, ch, th, thr, tw, dxf = [\n np.interp(x, i[0], i[1])\n for i in [\n self.z,\n self.dx,\n self.chord,\n self.thickness,\n self.absolute_thickness,\n self.twist,\n (\n self.dy[0],\n [i[1] - i[2] for i in zip(self.chord[1], self.dxf[1], self.dy[1])],\n ),\n ]\n ]\n\n for i in zip(x, z, dy, ch, th, thr, tw, dxf):\n f.write(\"%f;%f;%f;%f;%f;%f;%f;%f\\n\" % i)\n f.close()\n\n def _load_airfoils(self, airfoils, x=[]):\n \"\"\"\n fill self.airfoil by interpolating from input airfoil set\n\n args:\n airfoils (dict) : airfoils dict\n\n x (list) : list of sections where airfoils are to be interpolated\n \"\"\"\n self.airfoils = {}\n for i in sorted(airfoils):\n if airfoils[i].find(\"du\") != -1:\n print(\"load %s normalised\" % airfoils[i])\n t = loft_utils.load(airfoils[i], normalise=True)\n else:\n print(\"load %s unnormalised\" % airfoils[i])\n t = loft_utils.load(airfoils[i], normalise=False) # fix so we don't\n # normalise flatback root\n self.airfoils[i] = loft_utils.interp(x, t)[:2]\n\n def _interpolate_planform(\n self, chord, thickness, twist, dx, dy, 
z, flatten_lw=True\n ):\n \"\"\"\n spline the planform based on parameters\n \"\"\"\n self.x = np.linspace(0, 1.0, self.np_spanwise)\n (\n self.input_chord,\n self.input_thickness,\n self.input_twist,\n self.input_dx,\n self.input_dy,\n ) = (\n list(zip(*dc(chord))),\n list(zip(*dc(thickness))),\n list(zip(*dc(twist))),\n list(zip(*dc(dx))),\n list(zip(*dc(dy))),\n )\n\n self.chord = splining.intp_c(self.x, chord)\n self.twist = splining.intp_c(self.x, twist)\n self.thickness = splining.intp_c(self.x, thickness)\n self.dx = splining.intp_c(self.x, dx)\n self.dy = splining.intp_c(self.x, dy)\n self.z = splining.intp_c(self.x, z)\n self.absolute_thickness = [\n self.x,\n [i[0] * i[1] for i in zip(self.chord[1], self.thickness[1])],\n ]\n\n # here we take the thickness, and determine a y offset such that the\n # suction side sparcap runs flat for the first half of the blade\n dy_flat = [\n self.x,\n [\n 0.5 * self.absolute_thickness[1][0] - 0.5 * i\n for i in self.absolute_thickness[1]\n ],\n ]\n\n if flatten_lw:\n mid_offset = dy_flat[1][int(len(dy_flat[1]) / 2)]\n dy_flat[1] = [\n i[1] - 2.0 * i[0] * mid_offset for i in zip(dy_flat[0], dy_flat[1])\n ]\n\n dynew = list(zip(dy_flat[0], dy_flat[1]))[: int(len(self.x) / 2)]\n\n for i in dy:\n if i[0] > 0.7:\n dynew.append(i)\n\n if self.barrel_length > 0.00001:\n bpnts = []\n xm = 0.0\n for i in np.linspace(\n 0, self.barrel_length / (self.z[1][-1] - self.z[1][0]), 20\n ):\n bpnts.append((i, 0))\n xm = i\n\n for i in dynew:\n if i[0] > 0.2:\n bpnts.append(i)\n dynew = bpnts\n self.dy = splining.intp_c(self.x, dynew)\n\n def plot(self, name=\"_dum\", fname=\"_out.png\"):\n \"\"\"\n plot parameter distributions\n\n args:\n name (str) : blade candidate name\n\n fname (str): plot output file name\n\n \"\"\"\n plt.figure(figsize=(15, 15))\n plt.subplot(3, 3, 1)\n plt.title(\"chord rotor_diam=%.3f\" % (2.0 * max(self.z[1])))\n plt.plot(self.x, self.chord[1], label=name)\n plt.plot(self.input_chord[0], self.input_chord[1], \"o\", label=name + \"_input\")\n plt.xlabel(\"rel span (-)\")\n plt.ylabel(\"chord (m)\")\n plt.grid(True)\n plt.legend(loc=\"best\").get_frame().set_alpha(0.5)\n\n plt.subplot(3, 3, 8)\n plt.plot(self.x, self.chord[1], label=name)\n plt.plot(self.input_chord[0], self.input_chord[1], \"o\", label=\"%s_input\" % name)\n plt.grid(True)\n plt.xlim(0.9, 1)\n\n plt.title(\"tip chord\")\n plt.subplot(3, 3, 2)\n plt.plot(self.x, self.twist[1], label=name)\n plt.plot(self.input_twist[0], self.input_twist[1], \"o\", label=\"%s_input\" % name)\n plt.title(\"twist\")\n plt.grid(True)\n\n plt.subplot(3, 3, 3)\n plt.plot(self.x, self.thickness[1], label=name)\n plt.plot(\n self.input_thickness[0],\n self.input_thickness[1],\n \"o\",\n label=\"%s_input\" % name,\n )\n plt.title(\"rel thickness\")\n plt.grid(True)\n\n plt.subplot(3, 3, 4)\n plt.plot(self.absolute_thickness[0], self.absolute_thickness[1], label=name)\n plt.title(\"abs thickness\")\n plt.grid(True)\n\n plt.subplot(3, 3, 7)\n plt.plot(self.absolute_thickness[0], self.absolute_thickness[1], label=name)\n plt.title(\"abs thickness\")\n plt.grid(True)\n plt.xlim(0, 0.2)\n\n plt.subplot(3, 3, 5)\n plt.plot(self.dx[0], self.dx[1], label=\"%s_x\" % name)\n plt.plot(self.input_dx[0], self.input_dx[1], \"o\", label=\"%s_input\" % name)\n plt.plot(self.dy[0], self.dy[1], label=\"%s_y\" % name)\n plt.plot(self.input_dy[0], self.input_dy[1], \"o\", label=\"%s_input\" % name)\n plt.legend(loc=\"best\").get_frame().set_alpha(0.5)\n plt.title(\"xy offsets\")\n plt.grid(True)\n\n plt.subplot(3, 3, 
6)\n plt.plot(\n list(zip(*self.mx_thickness_loc))[0],\n list(zip(*self.mx_thickness_loc))[1],\n \"o\",\n label=\"control points\",\n )\n plt.plot(self.x, self.mmxt, \".\", label=\"sectionwise optimal\")\n plt.plot(self.dxf[0], self.dxf[1], label=\"offset used\")\n plt.ylabel(\"dist to 0.3 x chord (m)\")\n plt.legend(loc=\"best\")\n plt.grid(True)\n\n plt.savefig(fname, dpi=100)\n\n def _interpolate_airfoils(\n self,\n interpolate_method=1,\n offset_optimal=True,\n offset_clamp_points=[0.32, 0.55, 0.7],\n ):\n v = []\n for i in sorted(self.airfoils):\n v.append(\n list(\n zip(\n [i for j in self.airfoils[i][0]],\n self.airfoils[i][0],\n self.airfoils[i][1],\n )\n )\n )\n\n nv = []\n for i in zip(*v):\n t, x, y = zip(*i)\n if interpolate_method == 0:\n nx = np.interp(self.thickness[1], t, x)\n ny = np.interp(self.thickness[1], t, y)\n elif interpolate_method == 1:\n # interpolate along the length of the blade using a splining.intp_k\n dum, nx = splining.intp_k(self.thickness[1], zip(t, x))\n dum, ny = splining.intp_k(self.thickness[1], zip(t, y))\n elif interpolate_method == 2:\n dum, nx = splining.intp_c(self.thickness[1], zip(t, x))\n dum, ny = splining.intp_c(self.thickness[1], zip(t, y))\n elif interpolate_method == 3:\n dum, nx = splining.intp_sc(self.thickness[1], zip(t, x))\n dum, ny = splining.intp_sc(self.thickness[1], zip(t, y))\n else:\n # report the offending argument (i is the zip tuple here, not the method id)\n exit(\"%i is not a valid interpolate_method\" % interpolate_method)\n nv.append(list(zip(nx, ny)))\n\n self.sections = []\n for i in zip(*nv):\n x, y = list(zip(*i))\n self.sections.append(blade_section.section(x, y))\n\n # build the blade up out of sections in two loops\n # first, scale and twist the section\n mx_thickness_loc = []\n c = 0\n\n # a variable is created, analogous to focus, which represents the\n # fraction of the chord around which the twist is defined\n twist_center = 0.3\n\n self.mmxt = []\n for i in zip(self.x, self.chord[1], self.twist[1], self.sections):\n # with -0.3 (since the section is not scaled, this is 0.3*chord)\n i[3].translate(-twist_center, 0.0, 0.0)\n i[3].twist(i[2])\n i[3].scale((i[1], i[1], 1.0))\n mxt = i[3].get_max_thickness()\n self.mmxt.append(mxt)\n for j in sorted(offset_clamp_points):\n if i[0] >= j:\n mx_thickness_loc.append((i[0], mxt))\n offset_clamp_points.remove(j)\n break\n c += 1\n\n mx_thickness_loc.append((i[0], i[3].get_max_thickness()))\n\n dxf = list(splining.intp_c(self.x, mx_thickness_loc))\n\n dxf[1] = np.array(dxf[1])\n\n if not offset_optimal:\n dxf[1] = np.zeros(len(dxf[1]))\n\n # use the local coordinate to offset the section so that the thickest\n # point lines up with the pitch axis\n if offset_optimal:\n fpx, fpy = [], []\n for i in zip(self.sections, dxf[1], self.twist[1]):\n fpx.append(math.sin(math.radians(i[2])) * i[1])\n fpy.append(-i[1] + (1.0 - np.cos(np.radians(i[2]))) * i[1])\n\n self.dx = (self.dx[0], np.array(self.dx[1]) - np.array(fpx))\n self.dy = (self.dy[0], np.array(self.dy[1]) + np.array(fpy))\n\n for i in self.sections:\n i.local_to_global()\n\n # then, translate the section coordinates to global\n for i in zip(self.sections, self.dx[1], self.dy[1], self.z[1]):\n i[0].translate(i[1], i[2], i[3])\n\n self.dxf = dxf\n self.mx_thickness_loc = mx_thickness_loc\n\n def export_variables(self, fname):\n var = {\n \"dx\": self.dx,\n \"dxf\": self.dxf,\n \"dy\": self.dy,\n \"z\": self.z,\n \"twist\": self.twist,\n \"chord\": self.chord,\n \"thickness\": self.thickness,\n \"absolute_thickness\": self.absolute_thickness,\n }\n open(fname, \"w\").write(str(var))\n return var\n\n def 
dump(self, fname=\"__sections.txt\", z_rotation=0.0):\n \" dump to a sections list for use in FEA (with webs) \"\n lst = []\n for i in self.sections:\n lst.append(i.get_pointlist(z_rotation=z_rotation))\n\n if fname.endswith(\".txt\"):\n open(fname, \"wb\").write(str(lst).encode(\"utf-8\"))\n elif fname.endswith(\".pck\"):\n pickle.dump(lst, open(fname, \"wb\"))\n\n def export_xfoil(self, prefix=\"airfoil_out/_xf\"):\n for i in zip(self.sections, self.thickness[1], self.z[1]):\n nm = prefix + \"_t_%.3f_r_%.3f\" % (i[1], i[2])\n i[0].to_xfoil(nm.replace(\".\", \"_\") + \".dat\")\n\n def mesh(self, fname=\"\"):\n \"join up the sections\"\n n_points = self.np_chordwise\n vp = vtk.vtkPoints()\n for i in self.sections:\n for j in range(i.polydata.GetNumberOfPoints()):\n pt = i.polydata.GetPoint(j)\n vp.InsertNextPoint((pt[0], pt[1], pt[2]))\n\n self.poly = vtk.vtkPolyData()\n self.poly.SetPoints(vp)\n\n cells = vtk.vtkCellArray()\n for i in range(1, len(self.sections)):\n s0 = range((i - 1) * n_points, i * n_points)\n s1 = range(i * n_points, (i + 1) * n_points)\n for j in range(n_points):\n cells.InsertNextCell(3)\n cells.InsertCellPoint(s0[j])\n cells.InsertCellPoint(s1[j])\n cells.InsertCellPoint(s1[(j + 1) % n_points])\n\n cells.InsertNextCell(3)\n cells.InsertCellPoint(s0[j])\n cells.InsertCellPoint(s1[(j + 1) % n_points])\n cells.InsertCellPoint(s0[(j + 1) % n_points])\n\n # bottom and top caps\n s0 = range(0, n_points)\n for i in range(0, int(math.ceil(0.5 * n_points) - 1)):\n t1 = (s0[i], s0[n_points - i - 2], s0[n_points - i - 1])\n t2 = (s0[i], s0[i + 1], s0[n_points - i - 2])\n\n for j in [t1, t2]:\n if j[1] != j[2]:\n cells.InsertNextCell(3)\n for k in j:\n cells.InsertCellPoint(k)\n\n s0 = s1\n for i in range(0, int(math.ceil(0.5 * n_points) - 1)):\n t1 = (s0[i], s0[n_points - i - 1], s0[n_points - i - 2])\n t2 = (s0[i], s0[n_points - i - 2], s0[i + 1])\n\n for j in [t1, t2]:\n if j[1] != j[2]:\n cells.InsertNextCell(3)\n for k in j:\n cells.InsertCellPoint(k)\n\n self.poly.SetPolys(cells)\n\n if fname != \"\":\n wr = vtk.vtkSTLWriter()\n wr.SetFileTypeToBinary()\n wr.SetFileName(fname)\n wr.SetInputData(self.poly)\n wr.Write()\n\n wr = vtk.vtkXMLPolyDataWriter()\n wr.SetFileName(fname.replace(\".stl\", \".vtp\"))\n wr.SetInputData(self.poly)\n wr.Write()\n" ]
[ [ "numpy.array", "matplotlib.pyplot.xlim", "matplotlib.pyplot.xlabel", "matplotlib.pyplot.grid", "matplotlib.pyplot.plot", "matplotlib.pyplot.title", "matplotlib.pyplot.legend", "matplotlib.pyplot.savefig", "matplotlib.pyplot.figure", "numpy.interp", "numpy.radians", "matplotlib.pyplot.ylabel", "numpy.linspace", "matplotlib.pyplot.subplot" ] ]
felipessalvatore/vol4life
[ "76add3842007c41f4edcc2bdd449922d6ea708ca" ]
[ "vol4life/stats.py" ]
[ "import numpy as np\nimport scipy as sp\nimport pandas as pd\nfrom scipy import stats\n\n\ndef get_ecdf(series_):\n return lambda x: (series_.sort_values() < x).astype(int).mean()\n\n\ndef get_sample_acvf(x, h):\n \"\"\"\n x: series\n h: shift param\n return: autocovariance estimator\n \"\"\"\n n = x.shape[0]\n shift_x = x.shift(h)\n mean = x.mean()\n result = (x - mean) * (shift_x - mean)\n return result.sum() / n\n\n\ndef autocovariance_f(x, nlags):\n \"\"\"\n x: series\n nlags: range of lags param\n return: array of autocovariance estimators\n \"\"\"\n results = np.array([get_sample_acvf(x, h) for h in range(nlags)])\n return results\n\n\ndef autocorrelation_f(x, nlags):\n \"\"\"\n x: series\n nlags: range of lags param\n return: array of autocorrelation estimators\n \"\"\"\n gammas = autocovariance_f(x, nlags)\n gamma_0 = get_sample_acvf(x, 0)\n return gammas / gamma_0\n\n\ndef rank_acf(x, nlags):\n \"\"\"\n x: series\n nlags: range of lags param\n return: array of autocorrelation estimators\n using Spearman rank-order correlation\n \"\"\"\n results = [sp.stats.spearmanr(x.shift(h), x, nan_policy='omit')[\n 0] for h in range(nlags)]\n return np.array(results)\n\n\ndef get_sample_ccvf(x, y, h):\n \"\"\"\n x: series\n y: series\n h: shift param\n return: cross-covariance estimator\n \"\"\"\n n = x.shape[0]\n shift_x = x.shift(h)\n mean_x = x.mean()\n mean_y = y.mean()\n result = (shift_x - mean_x) * (y - mean_y)\n return result.sum() / n\n\n\ndef crosscorrelation_f(x, y, nlags):\n \"\"\"\n x: series\n y: series\n nlags: range of lags param\n return: array of cross-correlation estimators\n \"\"\"\n results = np.array([get_sample_ccvf(x, y, h) for h in range(nlags)])\n gamma_x_0 = get_sample_acvf(x, 0)\n gamma_y_0 = get_sample_acvf(y, 0)\n denominator = np.sqrt(gamma_x_0 * gamma_y_0)\n return results / denominator\n\n\ndef stats_ccf(x, y, nlags):\n return stats.ccf(y, x, unbiased=False)[:nlags]\n\n\ndef rank_sample_ccf(x, y, h):\n \"\"\"\n x: series that we will perform the lag\n y: series\n h: lag param\n return: cross-correlation estimator\n using Spearman rank-order correlation\n \"\"\"\n x_h = x.shift(h)\n return sp.stats.spearmanr(x_h, y, nan_policy='omit')[0]\n\n\ndef rank_ccf(x, y, nlags):\n \"\"\"\n x: series\n y: series\n nlags: range of lags param\n return: array of cross-correlation estimators\n \"\"\"\n results = [rank_sample_ccf(x, y, h) for h in range(nlags)]\n return np.array(results)\n\n\ndef test_normality_skewness(returns, alpha=0.05):\n \"\"\"\n Let $\\{x_1 ,\\dots , x_T \\}$ be a random sample of $X$ with $T$ observations.\n Under the normality assumption, the sample skewness is distributed asymptotically\n as normal with zero mean and variances $6/T$.Given an asset return series $\\{r_1 ,\\dots , r_T\\}$,\n to test the skewness of the returns,\n we consider the null hypothesis $H_0 : S(r) = 0$\n versus the alternative hypothesis $H_a : S(r) \\not= 0$.\n The t-ratio statistic of the sample is\n\n \\begin{equation}\n t = \\frac{\\hat{S}(r)}{\\sqrt{6/T}}\n \\end{equation}\n\n where $\\hat{S}(r)$ is the sample skewness. 
The decision rule is as follows.\n Reject the null hypothesis at the $\\alpha$ significance level, if $|t| > Z_{\\alpha/2}$ ,\n where $Z_{\\alpha/2}$ is the upper $100(\\alpha/2)$th quantile of the standard normal distribution.\n\n :param returns: daily returns\n :type returns: pd.Series\n :param alpha: significant level\n :type alpha: float\n :return: test results\n :rtype: pd.DataFrame\n \"\"\"\n size = returns.shape[0]\n skew = returns.skew()\n name = returns.name\n\n test_statistic = skew / np.sqrt(6 / size)\n abs_test_statistic = np.abs(test_statistic)\n z_alpha = stats.norm.ppf(1 - (alpha / 2))\n p_value = (1 - stats.norm.cdf(abs_test_statistic)) * 2\n if abs_test_statistic > z_alpha:\n decision = r\"Reject $H_0$\"\n else:\n decision = r\"Retain $H_0$\"\n df = pd.DataFrame([(name, skew, test_statistic, p_value, decision)],\n columns=[\"name\", \"sample skewness\", \"test_statistic\",\n \"p_value\", \"decision\"])\n return df\n" ]
[ [ "numpy.array", "scipy.stats.norm.ppf", "pandas.DataFrame", "scipy.stats.spearmanr", "numpy.abs", "numpy.sqrt", "scipy.stats.norm.cdf", "scipy.stats.ccf" ] ]
franp9am/pytorch-lightning
[ "d2aaf6b4cc420a4ef2aa4d1db29a0e881cea9406" ]
[ "pytorch_lightning/callbacks/early_stopping.py" ]
[ "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nr\"\"\"\nEarly Stopping\n^^^^^^^^^^^^^^\n\nMonitor a metric and stop training when it stops improving.\n\n\"\"\"\nimport logging\nfrom typing import Any, Callable, Dict, Optional, Tuple\n\nimport numpy as np\nimport torch\n\nimport pytorch_lightning as pl\nfrom pytorch_lightning.callbacks.base import Callback\nfrom pytorch_lightning.utilities import rank_zero_warn\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\nlog = logging.getLogger(__name__)\n\n\nclass EarlyStopping(Callback):\n r\"\"\"\n Monitor a metric and stop training when it stops improving.\n\n Args:\n monitor: quantity to be monitored.\n min_delta: minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute\n change of less than or equal to `min_delta`, will count as no improvement.\n patience: number of checks with no improvement\n after which training will be stopped. Under the default configuration, one check happens after\n every training epoch. However, the frequency of validation can be modified by setting various parameters on\n the ``Trainer``, for example ``check_val_every_n_epoch`` and ``val_check_interval``.\n\n .. note::\n\n It must be noted that the patience parameter counts the number of validation checks with\n no improvement, and not the number of training epochs. Therefore, with parameters\n ``check_val_every_n_epoch=10`` and ``patience=3``, the trainer will perform at least 40 training\n epochs before being stopped.\n\n verbose: verbosity mode.\n mode: one of ``'min'``, ``'max'``. In ``'min'`` mode, training will stop when the quantity\n monitored has stopped decreasing and in ``'max'`` mode it will stop when the quantity\n monitored has stopped increasing.\n strict: whether to crash the training if `monitor` is not found in the validation metrics.\n check_finite: When set ``True``, stops training when the monitor becomes NaN or infinite.\n stopping_threshold: Stop training immediately once the monitored quantity reaches this threshold.\n divergence_threshold: Stop training as soon as the monitored quantity becomes worse than this threshold.\n check_on_train_epoch_end: whether to run early stopping at the end of the training epoch.\n If this is ``False``, then the check runs at the end of the validation.\n\n Raises:\n MisconfigurationException:\n If ``mode`` is none of ``\"min\"`` or ``\"max\"``.\n RuntimeError:\n If the metric ``monitor`` is not available.\n\n Example::\n\n >>> from pytorch_lightning import Trainer\n >>> from pytorch_lightning.callbacks import EarlyStopping\n >>> early_stopping = EarlyStopping('val_loss')\n >>> trainer = Trainer(callbacks=[early_stopping])\n\n .. 
tip:: Saving and restoring multiple early stopping callbacks at the same time is supported under variation in the\n following arguments:\n\n *monitor, mode*\n\n Read more: :ref:`Persisting Callback State`\n \"\"\"\n mode_dict = {\"min\": torch.lt, \"max\": torch.gt}\n\n order_dict = {\"min\": \"<\", \"max\": \">\"}\n\n def __init__(\n self,\n monitor: str,\n min_delta: float = 0.0,\n patience: int = 3,\n verbose: bool = False,\n mode: str = \"min\",\n strict: bool = True,\n check_finite: bool = True,\n stopping_threshold: Optional[float] = None,\n divergence_threshold: Optional[float] = None,\n check_on_train_epoch_end: Optional[bool] = None,\n ):\n super().__init__()\n self.monitor = monitor\n self.min_delta = min_delta\n self.patience = patience\n self.verbose = verbose\n self.mode = mode\n self.strict = strict\n self.check_finite = check_finite\n self.stopping_threshold = stopping_threshold\n self.divergence_threshold = divergence_threshold\n self.wait_count = 0\n self.stopped_epoch = 0\n self._check_on_train_epoch_end = check_on_train_epoch_end\n\n if self.mode not in self.mode_dict:\n raise MisconfigurationException(f\"`mode` can be {', '.join(self.mode_dict.keys())}, got {self.mode}\")\n\n self.min_delta *= 1 if self.monitor_op == torch.gt else -1\n torch_inf = torch.tensor(np.Inf)\n self.best_score = torch_inf if self.monitor_op == torch.lt else -torch_inf\n\n @property\n def state_key(self) -> str:\n return self._generate_state_key(monitor=self.monitor, mode=self.mode)\n\n def on_init_end(self, trainer: \"pl.Trainer\") -> None:\n if self._check_on_train_epoch_end is None:\n # if the user runs validation multiple times per training epoch or multiple training epochs without\n # validation, then we run after validation instead of on train epoch end\n self._check_on_train_epoch_end = trainer.val_check_interval == 1.0 and trainer.check_val_every_n_epoch == 1\n\n def _validate_condition_metric(self, logs: Dict[str, float]) -> bool:\n monitor_val = logs.get(self.monitor)\n\n error_msg = (\n f\"Early stopping conditioned on metric `{self.monitor}` which is not available.\"\n \" Pass in or modify your `EarlyStopping` callback to use any of the following:\"\n f' `{\"`, `\".join(list(logs.keys()))}`'\n )\n\n if monitor_val is None:\n if self.strict:\n raise RuntimeError(error_msg)\n if self.verbose > 0:\n rank_zero_warn(error_msg, RuntimeWarning)\n\n return False\n\n return True\n\n @property\n def monitor_op(self) -> Callable:\n return self.mode_dict[self.mode]\n\n def on_save_checkpoint(\n self, trainer: \"pl.Trainer\", pl_module: \"pl.LightningModule\", checkpoint: Dict[str, Any]\n ) -> Dict[str, Any]:\n return {\n \"wait_count\": self.wait_count,\n \"stopped_epoch\": self.stopped_epoch,\n \"best_score\": self.best_score,\n \"patience\": self.patience,\n }\n\n def on_load_checkpoint(\n self, trainer: \"pl.Trainer\", pl_module: \"pl.LightningModule\", callback_state: Dict[str, Any]\n ) -> None:\n self.wait_count = callback_state[\"wait_count\"]\n self.stopped_epoch = callback_state[\"stopped_epoch\"]\n self.best_score = callback_state[\"best_score\"]\n self.patience = callback_state[\"patience\"]\n\n def _should_skip_check(self, trainer: \"pl.Trainer\") -> bool:\n from pytorch_lightning.trainer.states import TrainerFn\n\n return trainer.state.fn != TrainerFn.FITTING or trainer.sanity_checking\n\n def on_train_epoch_end(self, trainer: \"pl.Trainer\", pl_module: \"pl.LightningModule\") -> None:\n if not self._check_on_train_epoch_end or self._should_skip_check(trainer):\n return\n 
self._run_early_stopping_check(trainer)\n\n def on_validation_end(self, trainer: \"pl.Trainer\", pl_module: \"pl.LightningModule\") -> None:\n if self._check_on_train_epoch_end or self._should_skip_check(trainer):\n return\n self._run_early_stopping_check(trainer)\n\n def _run_early_stopping_check(self, trainer: \"pl.Trainer\") -> None:\n \"\"\"Checks whether the early stopping condition is met and if so tells the trainer to stop the training.\"\"\"\n logs = trainer.callback_metrics\n\n if trainer.fast_dev_run or not self._validate_condition_metric( # disable early_stopping with fast_dev_run\n logs\n ): # short circuit if metric not present\n return\n\n current = logs.get(self.monitor)\n should_stop, reason = self._evaluate_stopping_criteria(current)\n\n # stop every ddp process if any world process decides to stop\n should_stop = trainer.training_type_plugin.reduce_boolean_decision(should_stop)\n trainer.should_stop = trainer.should_stop or should_stop\n if should_stop:\n self.stopped_epoch = trainer.current_epoch\n if reason and self.verbose:\n self._log_info(trainer, reason)\n\n def _evaluate_stopping_criteria(self, current: torch.Tensor) -> Tuple[bool, Optional[str]]:\n should_stop = False\n reason = None\n if self.check_finite and not torch.isfinite(current):\n should_stop = True\n reason = (\n f\"Monitored metric {self.monitor} = {current} is not finite.\"\n f\" Previous best value was {self.best_score:.3f}. Signaling Trainer to stop.\"\n )\n elif self.stopping_threshold is not None and self.monitor_op(current, self.stopping_threshold):\n should_stop = True\n reason = (\n \"Stopping threshold reached:\"\n f\" {self.monitor} = {current} {self.order_dict[self.mode]} {self.stopping_threshold}.\"\n \" Signaling Trainer to stop.\"\n )\n elif self.divergence_threshold is not None and self.monitor_op(-current, -self.divergence_threshold):\n should_stop = True\n reason = (\n \"Divergence threshold reached:\"\n f\" {self.monitor} = {current} {self.order_dict[self.mode]} {self.divergence_threshold}.\"\n \" Signaling Trainer to stop.\"\n )\n elif self.monitor_op(current - self.min_delta, self.best_score.to(current.device)):\n should_stop = False\n reason = self._improvement_message(current)\n self.best_score = current\n self.wait_count = 0\n else:\n self.wait_count += 1\n if self.wait_count >= self.patience:\n should_stop = True\n reason = (\n f\"Monitored metric {self.monitor} did not improve in the last {self.wait_count} records.\"\n f\" Best score: {self.best_score:.3f}. Signaling Trainer to stop.\"\n )\n\n return should_stop, reason\n\n def _improvement_message(self, current: torch.Tensor) -> str:\n \"\"\"Formats a log message that informs the user about an improvement in the monitored score.\"\"\"\n if torch.isfinite(self.best_score):\n msg = (\n f\"Metric {self.monitor} improved by {abs(self.best_score - current):.3f} >=\"\n f\" min_delta = {abs(self.min_delta)}. New best score: {current:.3f}\"\n )\n else:\n msg = f\"Metric {self.monitor} improved. New best score: {current:.3f}\"\n return msg\n\n @staticmethod\n def _log_info(trainer: Optional[\"pl.Trainer\"], message: str) -> None:\n if trainer is not None and trainer.world_size > 1:\n log.info(f\"[rank: {trainer.global_rank}] {message}\")\n else:\n log.info(message)\n" ]
[ [ "torch.tensor", "torch.isfinite" ] ]
lsrock1/slowfast-detection.
[ "70ceb071bbcc90d472f15076fe917f1328a303ef" ]
[ "slowfast/datasets/loader.py" ]
[ "#!/usr/bin/env python3\n# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.\n\n\"\"\"Data loader.\"\"\"\n\nimport itertools\nimport numpy as np\nimport torch\nfrom torch.utils.data._utils.collate import default_collate\nfrom torch.utils.data.distributed import DistributedSampler\nfrom torch.utils.data.sampler import RandomSampler\n\nfrom .build import build_dataset\n\n\ndef detection_collate(batch):\n \"\"\"\n Collate function for detection task. Concatanate bboxes, labels and\n metadata from different samples in the first dimension instead of\n stacking them to have a batch-size dimension.\n Args:\n batch (tuple or list): data batch to collate.\n Returns:\n (tuple): collated detection data batch.\n \"\"\"\n inputs, labels, video_idx, extra_data = zip(*batch)\n inputs, video_idx = default_collate(inputs), default_collate(video_idx)\n labels = torch.tensor(np.concatenate(labels, axis=0)).float()\n\n collated_extra_data = {}\n for key in extra_data[0].keys():\n data = [d[key] for d in extra_data]\n if key == \"boxes\" or key == \"ori_boxes\":\n if not torch.is_tensor(data[0]) and not isinstance(data[0], np.ndarray):\n collated_extra_data[key] = data\n else:\n # Append idx info to the bboxes before concatenating them.\n bboxes = [\n np.concatenate(\n [np.full((data[i].shape[0], 1), float(i)), data[i]], axis=1\n )\n for i in range(len(data))\n ]\n bboxes = np.concatenate(bboxes, axis=0)\n collated_extra_data[key] = torch.tensor(bboxes).float()\n elif key == \"metadata\":\n collated_extra_data[key] = torch.tensor(\n list(itertools.chain(*data))\n ).view(-1, 2)\n else:\n collated_extra_data[key] = default_collate(data)\n\n return inputs, labels, video_idx, collated_extra_data\n\n\ndef construct_loader(cfg, split):\n \"\"\"\n Constructs the data loader for the given dataset.\n Args:\n cfg (CfgNode): configs. Details can be found in\n slowfast/config/defaults.py\n split (str): the split of the data loader. 
Options include `train`,\n `val`, and `test`.\n \"\"\"\n assert split in [\"train\", \"val\", \"test\"]\n if split in [\"train\"]:\n dataset_name = cfg.TRAIN.DATASET\n batch_size = int(cfg.TRAIN.BATCH_SIZE / cfg.NUM_GPUS)\n shuffle = True\n drop_last = True\n elif split in [\"val\"]:\n dataset_name = cfg.TRAIN.DATASET\n batch_size = int(cfg.TRAIN.BATCH_SIZE / cfg.NUM_GPUS)\n shuffle = False\n drop_last = False\n elif split in [\"test\"]:\n dataset_name = cfg.TEST.DATASET\n batch_size = int(cfg.TEST.BATCH_SIZE / cfg.NUM_GPUS)\n shuffle = False\n drop_last = False\n\n # Construct the dataset\n dataset = build_dataset(dataset_name, cfg, split)\n # Create a sampler for multi-process training\n sampler = DistributedSampler(dataset) if cfg.NUM_GPUS > 1 else None\n # Create a loader\n loader = torch.utils.data.DataLoader(\n dataset,\n batch_size=batch_size,\n shuffle=(False if sampler else shuffle),\n sampler=sampler,\n num_workers=cfg.DATA_LOADER.NUM_WORKERS,\n pin_memory=cfg.DATA_LOADER.PIN_MEMORY,\n drop_last=drop_last,\n collate_fn=detection_collate if cfg.DETECTION.ENABLE or cfg.FCOS.ENABLE else None,\n )\n return loader\n\n\ndef shuffle_dataset(loader, cur_epoch):\n \"\"\"\n Shuffles the data.\n Args:\n loader (loader): data loader to perform shuffle.\n cur_epoch (int): number of the current epoch.\n \"\"\"\n assert isinstance(\n loader.sampler, (RandomSampler, DistributedSampler)\n ), \"Sampler type '{}' not supported\".format(type(loader.sampler))\n # RandomSampler handles shuffling automatically\n if isinstance(loader.sampler, DistributedSampler):\n # DistributedSampler shuffles data based on epoch\n loader.sampler.set_epoch(cur_epoch)\n" ]
[ [ "numpy.concatenate", "torch.is_tensor", "torch.utils.data._utils.collate.default_collate", "torch.tensor", "torch.utils.data.DataLoader", "torch.utils.data.distributed.DistributedSampler" ] ]
hansupark/tensorflow-yolov4
[ "d8b1ceb0049b080e20c70aea37655006021f7ef5" ]
[ "py_src/yolov4/model/neck.py" ]
[ "\"\"\"\nMIT License\n\nCopyright (c) 2020 Hyeonki Hong <[email protected]>\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\"\"\"\nfrom tensorflow.keras import layers, Model\n\nfrom .common import YOLOConv2D\n\n\nclass PANet(Model):\n def __init__(\n self, num_classes, activation: str = \"leaky\", kernel_regularizer=None,\n ):\n super(PANet, self).__init__(name=\"PANet\")\n self.conv78 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.upSampling78 = layers.UpSampling2D(interpolation=\"bilinear\")\n self.conv79 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.concat78_79 = layers.Concatenate(axis=-1)\n\n self.conv80 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv81 = YOLOConv2D(\n filters=512,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv82 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv83 = YOLOConv2D(\n filters=512,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv84 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n\n self.conv85 = YOLOConv2D(\n filters=128,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.upSampling85 = layers.UpSampling2D(interpolation=\"bilinear\")\n self.conv86 = YOLOConv2D(\n filters=128,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.concat85_86 = layers.Concatenate(axis=-1)\n\n self.conv87 = YOLOConv2D(\n filters=128,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv88 = YOLOConv2D(\n filters=256,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv89 = YOLOConv2D(\n filters=128,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv90 = YOLOConv2D(\n filters=256,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv91 = YOLOConv2D(\n filters=128,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n\n self.conv92 = YOLOConv2D(\n filters=256,\n kernel_size=3,\n activation=activation,\n 
kernel_regularizer=kernel_regularizer,\n )\n self.conv93 = YOLOConv2D(\n filters=3 * (num_classes + 5),\n kernel_size=1,\n activation=None,\n kernel_regularizer=kernel_regularizer,\n )\n\n self.conv94 = YOLOConv2D(\n filters=256,\n kernel_size=3,\n strides=2,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.concat84_94 = layers.Concatenate(axis=-1)\n\n self.conv95 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv96 = YOLOConv2D(\n filters=512,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv97 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv98 = YOLOConv2D(\n filters=512,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv99 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n\n self.conv100 = YOLOConv2D(\n filters=512,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv101 = YOLOConv2D(\n filters=3 * (num_classes + 5),\n kernel_size=1,\n activation=None,\n kernel_regularizer=kernel_regularizer,\n )\n\n self.conv102 = YOLOConv2D(\n filters=512,\n kernel_size=3,\n strides=2,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.concat77_102 = layers.Concatenate(axis=-1)\n\n self.conv103 = YOLOConv2D(\n filters=512,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv104 = YOLOConv2D(\n filters=1024,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv105 = YOLOConv2D(\n filters=512,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv106 = YOLOConv2D(\n filters=1024,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv107 = YOLOConv2D(\n filters=512,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n\n self.conv108 = YOLOConv2D(\n filters=1024,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv109 = YOLOConv2D(\n filters=3 * (num_classes + 5),\n kernel_size=1,\n activation=None,\n kernel_regularizer=kernel_regularizer,\n )\n\n def call(self, x):\n route1, route2, route3 = x\n\n x1 = self.conv78(route3)\n part2 = self.upSampling78(x1)\n part1 = self.conv79(route2)\n x1 = self.concat78_79([part1, part2])\n\n x1 = self.conv80(x1)\n x1 = self.conv81(x1)\n x1 = self.conv82(x1)\n x1 = self.conv83(x1)\n x1 = self.conv84(x1)\n\n x2 = self.conv85(x1)\n part2 = self.upSampling85(x2)\n part1 = self.conv86(route1)\n x2 = self.concat85_86([part1, part2])\n\n x2 = self.conv87(x2)\n x2 = self.conv88(x2)\n x2 = self.conv89(x2)\n x2 = self.conv90(x2)\n x2 = self.conv91(x2)\n\n pred_s = self.conv92(x2)\n pred_s = self.conv93(pred_s)\n\n x2 = self.conv94(x2)\n x2 = self.concat84_94([x2, x1])\n\n x2 = self.conv95(x2)\n x2 = self.conv96(x2)\n x2 = self.conv97(x2)\n x2 = self.conv98(x2)\n x2 = self.conv99(x2)\n\n pred_m = self.conv100(x2)\n pred_m = self.conv101(pred_m)\n\n x2 = self.conv102(x2)\n x2 = self.concat77_102([x2, route3])\n\n x2 = self.conv103(x2)\n x2 = self.conv104(x2)\n x2 = self.conv105(x2)\n x2 = self.conv106(x2)\n x2 = self.conv107(x2)\n\n pred_l = self.conv108(x2)\n pred_l = self.conv109(pred_l)\n\n return pred_s, 
pred_m, pred_l\n\n\nclass PANetTiny(Model):\n def __init__(\n self, num_classes, activation: str = \"leaky\", kernel_regularizer=None\n ):\n super(PANetTiny, self).__init__(name=\"PANetTiny\")\n self.conv15 = YOLOConv2D(\n filters=256,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n\n self.conv16 = YOLOConv2D(\n filters=512,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv17 = YOLOConv2D(\n filters=3 * (num_classes + 5),\n kernel_size=1,\n activation=None,\n kernel_regularizer=kernel_regularizer,\n )\n\n self.conv18 = YOLOConv2D(\n filters=128,\n kernel_size=1,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.upSampling18 = layers.UpSampling2D(interpolation=\"bilinear\")\n self.concat13_18 = layers.Concatenate(axis=-1)\n\n self.conv19 = YOLOConv2D(\n filters=256,\n kernel_size=3,\n activation=activation,\n kernel_regularizer=kernel_regularizer,\n )\n self.conv20 = YOLOConv2D(\n filters=3 * (num_classes + 5),\n kernel_size=1,\n activation=None,\n kernel_regularizer=kernel_regularizer,\n )\n\n def call(self, x):\n route1, route2 = x\n\n x1 = self.conv15(route2)\n\n x2 = self.conv16(x1)\n pred_l = self.conv17(x2)\n\n x1 = self.conv18(x1)\n x1 = self.upSampling18(x1)\n x1 = self.concat13_18([x1, route1])\n\n x1 = self.conv19(x1)\n pred_m = self.conv20(x1)\n\n return pred_m, pred_l\n" ]
[ [ "tensorflow.keras.layers.Concatenate", "tensorflow.keras.layers.UpSampling2D" ] ]
SolarLiner/reconvolve
[ "fbe5419543d0e5fd9b8569714ecc07a78c2e7e51" ]
[ "bin/process.py" ]
[ "from argparse import ArgumentParser, Namespace, FileType\nimport logging\n\nimport numpy as np\nimport scipy.signal as sig\nfrom scipy.io import wavfile\n\nfrom reconvolve.probe import Probe\nfrom . import BaseCommand\n\nlogger = logging.getLogger(__name__)\n\n\nclass ProcessCommand(BaseCommand):\n name = \"process\"\n description = \"Process the recording into a convolver impulse response. Note that both files must have the same sample rate.\"\n\n def setup(self, parser: ArgumentParser):\n parser.add_argument(\"sweep\",\n type=FileType(\"rb\"),\n help=\"Generated sweep file\")\n parser.add_argument(\"record\",\n type=FileType(\"rb\"),\n help=\"Recorded response\")\n parser.add_argument(\"output\",\n type=FileType(\"wb\"),\n help=\"Output impulse response\")\n\n def run(self, args: Namespace):\n sweep_rate, sweep_in = wavfile.read(args.sweep)\n record_rate, record_in = wavfile.read(args.record)\n\n if sweep_rate != record_rate:\n raise ValueError(\"Sample rates do not match, IR cannot be generated\")\n\n try:\n p = Probe(sweep_rate)\n result = p.process(np.float32(sweep_in), np.float32(record_in))\n p.write(args.output, result)\n return 0\n except ValueError as e:\n logger.critical(f\"E: {str(e)}\")\n return 1\n except:\n logger.critical(\"Unknown error.\")\n return 10\n" ]
[ [ "numpy.float32", "scipy.io.wavfile.read" ] ]
lucho8908/adaptive_template_systems
[ "3e217d151d09e5eed213f927960986854331c148" ]
[ "Examples/Shapes/uli_data_random_forest.py" ]
[ "import numpy as np\r\nimport multidim\r\nimport itertools\r\nimport os\r\nimport hdbscan\r\nimport sys\r\nimport time\r\nimport pandas as pd\r\nimport itertools\r\n\r\nfrom copy import deepcopy\r\nfrom matplotlib.patches import Ellipse\r\nfrom ripser import ripser\r\nfrom persim import plot_diagrams\r\nfrom numba import jit, njit, prange\r\nfrom sklearn import mixture\r\nfrom sklearn.model_selection import train_test_split, cross_val_score\r\nfrom sklearn.linear_model import RidgeClassifier, LogisticRegression\r\nfrom sklearn.preprocessing import PolynomialFeatures\r\nfrom sklearn.svm import LinearSVC, SVC\r\nfrom sklearn.ensemble import RandomForestClassifier\r\n\r\nfrom multidim.covertree import CoverTree\r\nfrom multidim.models import CDER\r\n\r\nimport matplotlib.pyplot as plt\r\n\r\nnp.set_printoptions(precision=2)\r\n\r\nsys.path.append('../..')\r\nfrom ATS import *\r\n\r\n# -----------------------------------------------------------------------------\r\n# -------------- ARGUMENTS ----------------------------------------------------\r\n# -----------------------------------------------------------------------------\r\n\r\nadptative_feature = str(sys.argv[1])\r\n\r\nkernel = str(sys.argv[2])\r\n\r\n# -----------------------------------------------------------------------------\r\n# -------------- IMPORT DATA --------------------------------------------------\r\n# -----------------------------------------------------------------------------\r\n\r\nData = pd.read_csv('Uli_data/Uli_data.csv')\r\n\r\n# Code to reshape the data in the groupby command below\r\ndef reshapeVec(g):\r\n A = np.array([g.dim,g.birth,g.death])\r\n A = A.T\r\n return A\r\n\r\nDgmsDF = Data.groupby(['freq', 'trial']).apply(reshapeVec)\r\nDgmsDF = DgmsDF.reset_index()\r\nDgmsDF = DgmsDF.rename(columns = {0:'CollectedDgm'})\r\n\r\ndef getDgm(A, dim = 0):\r\n if type(dim) != str:\r\n A = A[np.where(A[:,0] == dim)[0],1:]\r\n elif dim == 'essential':\r\n A = A[np.where(A[:,0] <0)[0],:]\r\n return(A)\r\n\r\nDgmsDF['Dgm1'] = DgmsDF.CollectedDgm.apply(lambda x: getDgm(x, dim = 1))\r\nDgmsDF['Dgm0'] = DgmsDF.CollectedDgm.apply(lambda x: getDgm(x, dim = 0))\r\nDgmsDF['DgmInf'] = DgmsDF.CollectedDgm.apply(lambda x: getDgm(x, dim = 'essential'))\r\n\r\ndef label(index):\r\n if 0 <= index <= 19:\r\n return 'male_neutral'\r\n elif 20<= index <=39:\r\n return 'male_bodybuilder'\r\n elif 40<= index <=59:\r\n return 'male_fat'\r\n elif 60<= index <=79:\r\n return 'male_thin'\r\n elif 80<= index <=99:\r\n return 'male_average'\r\n elif 100<= index <=119:\r\n return 'female_neutral'\r\n elif 120<= index <=139:\r\n return 'female_bodybuilder'\r\n elif 140<= index <=159:\r\n return 'female_fat'\r\n elif 160<= index <=179:\r\n return 'female_thin'\r\n elif 180<= index <=199:\r\n return 'female_average'\r\n elif 200<= index <=219:\r\n return 'child_neutral'\r\n elif 220<= index <=239:\r\n return 'child_bodybuilder'\r\n elif 240<= index <=259:\r\n return 'child_fat'\r\n elif 260<= index <=279:\r\n return 'child_thin'\r\n elif 280<= index <=299:\r\n return 'child_average'\r\n else:\r\n print('What are you giving me?')\r\n\r\nDgmsDF['TrainingLabel'] = DgmsDF.freq.apply(label)\r\nDgmsDF= DgmsDF.sample(frac=1)\r\n\r\nscore_train_per_frequency = []\r\nscore_test_per_frequency = []\r\n\r\nfor i in range(1,11):\r\n\r\n\tfreq = i\r\n\r\n\tSampleDF = DgmsDF[DgmsDF.trial == freq].sample(frac=1)\r\n\r\n\tX_dgm0 = SampleDF['Dgm0'].tolist()\r\n\r\n\tX_dgm1 = SampleDF['Dgm1'].tolist()\r\n\r\n\t# Lets change the birth-death to 
birth-persistence\r\n\r\n\tfor j in range(len(X_dgm1)):\r\n\t\ttemp_dgm = X_dgm1[j]\r\n\r\n\t\ttemp_dgm[:,1] = temp_dgm[:,1] - temp_dgm[:,0]\r\n\r\n\t\ttemp_dgm[np.isclose(temp_dgm, 0, rtol=1e-05, atol=1e-05)] = 1e-05\r\n\r\n\t\ttemp_dgm = np.log(temp_dgm)\r\n\r\n\t\tX_dgm1[j] = temp_dgm\r\n\r\n\r\n\tlabels = list(set(SampleDF.TrainingLabel))\r\n\tlabels.sort()\r\n\r\n\tmapping = {}\r\n\tfor i in range(len(labels)):\r\n\t\tmapping[labels[i]] = i\r\n\r\n\tSampleDF = SampleDF.replace({'TrainingLabel': mapping})\r\n\r\n\tF = SampleDF['TrainingLabel'].tolist()\r\n\r\n\td = 10\r\n\r\n\tscore_train_per_iteration = []\r\n\tscore_test_per_iteration = []\r\n\r\n\tfor rep in range(10):\r\n\t\r\n\t\tX_train_0, X_test_0, X_train_1, X_test_1, F_train, F_test = train_test_split(X_dgm0, X_dgm1, F, test_size=0.30)\r\n\r\n\t\t# -----------------------------------------------------------------------------\r\n\t\t# ------------------------------ H_0 ------------------------------------------\r\n\t\t# -----------------------------------------------------------------------------\r\n\r\n\t\t# -----------------------------------------------------------------------------\r\n\t\t# ------------------------------ GMM ------------------------------------------\r\n\t\t# -----------------------------------------------------------------------------\r\n\r\n\t\tprint('Begin GMM...')\r\n\t\tt0 = time.time()\r\n\t\tX_train_temp = np.vstack(X_train_0)\r\n\r\n\t\tX_train_temp = X_train_temp[:,1]\r\n\r\n\t\tX_train_temp = X_train_temp.reshape((-1,1))\r\n\r\n\t\tgmm_f_train=[]\r\n\t\tfor i in range(len(X_train_0)):\r\n\t\t\tgmm_f_train.append(F_train[i]*np.ones(len(X_train_0[i])))\r\n\t\tgmm_f_train = np.concatenate(gmm_f_train)\r\n\r\n\t\tgmm = mixture.BayesianGaussianMixture(n_components=d, covariance_type='full', max_iter=int(10e4)).fit(X_train_temp, gmm_f_train)\r\n\r\n\t\tellipses = []\r\n\t\tfor i in range(len(gmm.means_)):\r\n\t\t\tL, v = np.linalg.eig(gmm.covariances_[i])\r\n\t\t\ttemp = {'mean':gmm.means_[i], 'std':np.sqrt(L), 'rotation':v.transpose(), 'radius':max(np.sqrt(L)), 'entropy':gmm.weights_[i]}\r\n\t\t\tellipses.append(temp)\r\n\t\tt1 = time.time()\r\n\t\tprint('Finish GMM. 
Time: {}'.format(t1-t0))\r\n\r\n\t\t# -----------------------------------------------------------------------------\r\n\t\t# ------------------------------ Features -------------------------------------\r\n\t\t# -----------------------------------------------------------------------------\r\n\t\tt0 = time.time()\r\n\r\n\t\tX_train_temp = [dgm[:,1] for dgm in X_train_0]\r\n\t\tX_train_features_0 = get_all_features(X_train_temp, ellipses, f_gaussian)\r\n\r\n\t\tX_test_temp = [dgm[:,1] for dgm in X_test_0]\r\n\t\tX_test_features_0 = get_all_features(X_test_temp, ellipses, f_gaussian)\r\n\r\n\t\tt1 = time.time()\r\n\t\tprint('Features H_0:{}'.format(t1-t0))\r\n\r\n\t\t# -----------------------------------------------------------------------------\r\n\t\t# ------------------------------ H_1 ------------------------------------------\r\n\t\t# -----------------------------------------------------------------------------\r\n\r\n\t\tif adptative_feature=='cder':\r\n\t\t\t# ----------------------------------------------------------------------------\r\n\t\t\t# ------------------------------ CDER ----------------------------------------\r\n\t\t\t# ----------------------------------------------------------------------------\r\n\r\n\t\t\tprint('Begin CDER...')\r\n\t\t\tt0 = time.time()\r\n\t\t\tpc_train = multidim.PointCloud.from_multisample_multilabel(X_train_1, F_train)\r\n\t\t\tct_train = CoverTree(pc_train)\r\n\r\n\t\t\tcder = CDER(parsimonious=True)\r\n\r\n\t\t\tcder.fit(ct_train)\r\n\r\n\t\t\tcder_result = cder.gaussians\r\n\r\n\t\t\tellipses = []\r\n\t\t\tfor c in cder_result:\r\n\t\t\t\ttemp = {key:c[key] for key in ['mean', 'std', 'rotation', 'radius', 'entropy']}\r\n\t\t\t\ttemp['std'] = 2*temp['std']\r\n\t\t\t\t#temp['std'] = np.array([temp['radius'], temp['radius']])\r\n\t\t\t\tellipses.append(temp)\r\n\r\n\t\t\tt1 = time.time()\r\n\t\t\tprint('Finish CDER. Time: {}'.format(t1-t0))\r\n\r\n\t\tif adptative_feature=='gmm':\r\n\t\t\t# ----------------------------------------------------------------------------\r\n\t\t\t# ------------------------------ GMM -----------------------------------------\r\n\t\t\t# ----------------------------------------------------------------------------\r\n\t\t\tprint('Begin GMM...')\r\n\t\t\tt0 = time.time()\r\n\t\t\tX_train_temp = np.vstack(X_train_1)\r\n\r\n\t\t\tgmm_f_train=[]\r\n\t\t\tfor i in range(len(X_train_1)):\r\n\t\t\t\tgmm_f_train.append(F_train[i]*np.ones(len(X_train_1[i])))\r\n\t\t\tgmm_f_train = np.concatenate(gmm_f_train)\r\n\r\n\t\t\tgmm = mixture.BayesianGaussianMixture(n_components=d*d, max_iter=int(10e4)).fit(X_train_temp, gmm_f_train)\r\n\r\n\t\t\tellipses = []\r\n\t\t\tfor i in range(len(gmm.means_)):\r\n\t\t\t\tL, v = np.linalg.eig(gmm.covariances_[i])\r\n\t\t\t\ttemp = {'mean':gmm.means_[i], 'std':np.sqrt(L), 'rotation':v.transpose(), 'radius':max(np.sqrt(L)), 'entropy':gmm.weights_[i]}\r\n\t\t\t\ttemp['std'] = 3*temp['std']\r\n\t\t\t\tellipses.append(temp)\r\n\t\t\tt1 = time.time()\r\n\t\t\tprint('Finish GMM. 
Time: {}'.format(t1-t0))\r\n\r\n\t\tif adptative_feature=='hdbscan':\r\n\t\t\t# ----------------------------------------------------------------------------\r\n\t\t\t# ------------------------------ HDBSCAN -------------------------------------\r\n\t\t\t# ----------------------------------------------------------------------------\r\n\t\t\tprint('Begin HDBSCAN...')\r\n\t\t\tt0 = time.time()\r\n\t\t\tX_train_temp = np.vstack(X_train_1)\r\n\r\n\t\t\tclusterer = hdbscan.HDBSCAN(min_samples=d*d, min_cluster_size=d*d)\r\n\r\n\t\t\tclusterer.fit(X_train_temp)\r\n\r\n\t\t\tnum_clusters = clusterer.labels_.max()\r\n\r\n\t\t\tellipses = []\r\n\t\t\tfor i in range(num_clusters):\r\n\t\t\t\tcluster_i = X_train_temp[clusterer.labels_ == i]\r\n\r\n\t\t\t\ten = np.mean(clusterer.probabilities_[clusterer.labels_ == i])\r\n\t\t\t\t\r\n\t\t\t\tmean = np.mean(cluster_i, axis=0)\r\n\t\t\t\tcov_matrix = np.cov(cluster_i.transpose())\r\n\r\n\t\t\t\tL,v = np.linalg.eig(cov_matrix)\r\n\r\n\t\t\t\ttemp = {'mean':mean, 'std':np.sqrt(L), 'rotation':v.transpose(), 'radius':max(np.sqrt(L)), 'entropy':en}\r\n\t\t\t\ttemp['std'] = 3*temp['std']\r\n\t\t\t\tellipses.append(temp)\r\n\r\n\t\t\tt1 = time.time()\r\n\t\t\tprint('Finish HDBSCAN. Time: {}'.format(t1-t0))\r\n\r\n\t\t# ----------------------------------------------------------------------------\r\n\t\t# ------------------------------ Features ------------------------------------\r\n\t\t# ----------------------------------------------------------------------------\r\n\t\tt0 = time.time()\r\n\t\t\r\n\t\tX_train_features_1 = get_all_features(X_train_1, ellipses, f_ellipse)\r\n\r\n\t\tX_test_features_1 = get_all_features(X_test_1, ellipses, f_ellipse)\r\n\r\n\t\tt1 = time.time()\r\n\t\tprint('Features H_1:{}'.format(t1-t0))\r\n\r\n\t\tX_train_features = np.column_stack((X_train_features_0, X_train_features_1))\r\n\r\n\t\tX_test_features = np.column_stack((X_test_features_0, X_test_features_1))\r\n\r\n\t\t# ----------------------------------------------------------------------------\r\n\t\t# ------------------------------ Classification -----------------------------\r\n\t\t# ----------------------------------------------------------------------------\r\n\r\n\t\tregularization_constants = range(0,20)\r\n\t\t\r\n\t\tdegrees = range(1,202,10)\r\n\r\n\t\tscore_train = np.zeros(( len(regularization_constants), len(degrees) ))\r\n\t\tscore_test = np.zeros(( len(regularization_constants), len(degrees) ))\r\n\r\n\t\tposition_constant = 0\r\n\t\tfor regularization_c in regularization_constants:\r\n\t\t\tprint(regularization_c)\r\n\t\t\tposition_degree = 0 \r\n\t\t\tfor poly_degree in degrees:\r\n\t\t\t\tprint(poly_degree)\r\n\t\t\t\t\r\n\t\t\t\tX_train_features_transformed = X_train_features\r\n\r\n\t\t\t\tX_test_features_transformed = X_test_features\r\n\r\n\t\t\t\tridge_model = RandomForestClassifier(criterion='entropy', min_impurity_decrease=10**(-1*regularization_c), n_estimators=poly_degree , n_jobs=16).fit(X_train_features_transformed, F_train)\r\n\r\n\t\t\t\tscore_train[position_constant, position_degree] = ridge_model.score(X_train_features_transformed, F_train)\r\n\t\t\t\t\r\n\t\t\t\tscore_test[position_constant, position_degree] = ridge_model.score(X_test_features_transformed, F_test)\r\n\r\n\t\t\t\tposition_degree += 1\r\n\r\n\t\t\tposition_constant 
+=1\r\n\r\n\t\tscore_train_per_iteration.append(score_train)\r\n\t\tscore_test_per_iteration.append(score_test)\r\n\t\t\r\n\tscore_train_per_frequency.append(score_train_per_iteration)\r\n\tscore_test_per_frequency.append(score_test_per_iteration)\r\n\r\nscore_train_per_frequency = np.array(score_train_per_frequency)\r\nscore_test_per_frequency = np.array(score_test_per_frequency)\r\n\r\ntrain_means = np.mean(score_train_per_frequency, 1)\r\ntest_means = np.mean(score_test_per_frequency, 1)\r\n\r\ntrain_stds = np.std(score_train_per_frequency, 1)\r\ntest_stds = np.std(score_test_per_frequency, 1)\r\n\r\n# -----------------------------------------------------------------------------\r\n# ----------- Save Figures ----------------------------------------------------\r\n# -----------------------------------------------------------------------------\r\nos.makedirs('rforest_results', exist_ok=True)\r\n\r\nfor i in range(train_means.shape[0]):\r\n\ttrain_m = train_means[i,:,:]\r\n\ttrain_s = train_stds[i,:,:]\r\n\r\n\ttest_m = test_means[i,:,:]\r\n\ttest_s = test_stds[i,:,:]\r\n\r\n\t# Combine all data\r\n\tcombined_data = np.array([train_m, test_m])\r\n\t# Get the min and max of all your data\r\n\t_min, _max = np.amin(combined_data), np.amax(combined_data)\r\n\r\n\tfig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10*2,10*1), dpi=100)\r\n\t\r\n\tim = ax[0].imshow(train_m, vmin = _min, vmax = _max)\r\n\tax[0].set_title('Train score')\r\n\r\n\t# set labels for y-axis on plot \r\n\tax[0].set_yticks(range(0,len(regularization_constants)))\r\n\tax[0].set_yticklabels(regularization_constants)\r\n\tax[0].set_ylabel('Entropy split 10^(-i)')\r\n\r\n\t# set labels for x-axis on plot \r\n\tax[0].set_xticks(range(0,len(degrees)))\r\n\tax[0].set_xticklabels(degrees)\r\n\tax[0].set_xlabel('Num. estimators')\r\n\r\n\t# set color bar in each plot\r\n\t# fig.colorbar(im, ax=ax[0])\r\n\r\n\t# include the value of the matrix in each entry of the plot\r\n\tfor (k,j),label in np.ndenumerate(train_m):\r\n\t\tannotation = str(round(label,2)) + ' ± ' + str(round(train_s[k,j],2))\r\n\t\tax[0].text(j, k, annotation, ha='center', va='center', rotation=45)\r\n\r\n\r\n\tim = ax[1].imshow(test_m, vmin = _min, vmax = _max)\r\n\tax[1].set_title('Test score')\r\n\r\n\t# set labels for y-axis on plot \r\n\tax[1].set_yticks(range(0,len(regularization_constants)))\r\n\tax[1].set_yticklabels(regularization_constants)\r\n\tax[1].set_ylabel('Entropy split 10^(-i)')\r\n\r\n\t# set labels for x-axis on plot \r\n\tax[1].set_xticks(range(0,len(degrees)))\r\n\tax[1].set_xticklabels(degrees)\r\n\tax[1].set_xlabel('Num. estimators')\r\n\r\n\t# set color bar in each plot\r\n\t# fig.colorbar(im, ax=ax[1])\r\n\r\n\tfig.colorbar(im, ax=ax.ravel().tolist())\r\n\r\n\t# include the value of the matrix in each entry of the plot\r\n\tfor (k,j),label in np.ndenumerate(test_m):\r\n\t\tannotation = str(round(label,2)) + ' ± ' + str(round(test_s[k,j],2))\r\n\t\tax[1].text(j, k, annotation, ha='center', va='center', rotation=45)\r\n\r\n\r\n\tbest = np.unravel_index(np.argmin(np.abs(train_m - test_m), axis=None), train_m.shape)\r\n\r\n\t# add title to the figure\r\n\tplt.suptitle('Frequency {} \\n {} // {} \\n {}'.format(i, train_m[best], test_m[best], best) )\r\n\t\r\n\tplt.savefig('rforest_results/f_{}_gmm.png'.format(i))\r\n" ]
[ [ "numpy.isclose", "numpy.set_printoptions", "numpy.mean", "numpy.where", "pandas.read_csv", "numpy.concatenate", "numpy.log", "matplotlib.pyplot.subplots", "numpy.sqrt", "numpy.column_stack", "numpy.vstack", "numpy.array", "sklearn.ensemble.RandomForestClassifier", "numpy.std", "numpy.amax", "numpy.amin", "sklearn.model_selection.train_test_split", "numpy.ndenumerate", "numpy.linalg.eig", "numpy.abs" ] ]
GT-AcerZhang/PaddlePaddle-Classification
[ "90ff2188aa0501637fe9e36ee9b5fdf57e4ad60a" ]
[ "eval.py" ]
[ "import argparse\nimport time\nimport numpy as np\nimport paddle.fluid as fluid\nfrom utils import utils\nfrom tqdm import tqdm\n\n\ndef parse_args():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--use_gpu\", type=bool, default=True)\n parser.add_argument(\"--img_size\", type=int, default=224)\n return parser.parse_args()\n\n\ndef evaluate_quant(data_list, model_path):\n # 加载模型\n def create_predictor(args):\n place = fluid.CUDAPlace(0) if args.use_gpu else fluid.CPUPlace()\n exe = fluid.Executor(place)\n\n [program, feed_names, fetch_lists] = fluid.io.load_inference_model(dirname=model_path,\n executor=exe,\n model_filename='__model__',\n params_filename='__params__')\n compiled_program = fluid.compiler.CompiledProgram(program)\n\n return exe, compiled_program, feed_names, fetch_lists\n\n # 获取预处理op\n def create_operators(args):\n img_mean = [0.485, 0.456, 0.406]\n img_std = [0.229, 0.224, 0.225]\n img_scale = 1.0 / 255.0\n\n decode_op = utils.DecodeImage()\n resize_op = utils.ResizeImage(resize_short=256)\n crop_op = utils.CropImage(size=(args.img_size, args.img_size))\n normalize_op = utils.NormalizeImage(scale=img_scale, mean=img_mean, std=img_std)\n totensor_op = utils.ToTensor()\n\n return [decode_op, resize_op, crop_op, normalize_op, totensor_op]\n\n # 执行预处理\n def preprocess(fname, ops):\n data = open(fname, 'rb').read()\n for op in ops:\n data = op(data)\n return data\n\n # 提取预测结果\n def postprocess(outputs, topk=5):\n output = outputs[0]\n prob = np.array(output).flatten()\n index = prob.argsort(axis=0)[-topk:][::-1].astype('int32')\n return zip(index, prob[index])\n\n args = parse_args()\n operators = create_operators(args)\n exe, program, feed_names, fetch_lists = create_predictor(args)\n\n # 开始预测评估\n with open(data_list, 'r', encoding='utf-8') as f:\n lines = f.readlines()\n results = []\n print('start test %s model accuracy rate...' 
% model_path)\n start = time.time()\n for line in tqdm(lines):\n path, id = line.replace('\\n', '').split(' ')\n data = preprocess(path, operators)\n data = np.expand_dims(data, axis=0)\n outputs = exe.run(program,\n feed={feed_names[0]: data},\n fetch_list=fetch_lists,\n return_numpy=False)\n lab, prob = postprocess(outputs).__next__()\n if lab == int(id):\n results.append(1)\n end = time.time()\n t = int(round((end - start) * 1000)) / len(lines)\n print(\"Accuracy: %0.5f, average inference time: %d ms\" % (sum(results) / len(lines), t))\n print('=' * 70)\n\n\ndef evaluate_infer(data_list, model_path):\n # Load the inference model\n def create_predictor(args):\n place = fluid.CUDAPlace(0) if args.use_gpu else fluid.CPUPlace()\n exe = fluid.Executor(place)\n\n [program, feed_names, fetch_lists] = fluid.io.load_inference_model(dirname=model_path,\n executor=exe,\n model_filename='__model__',\n params_filename='__params__')\n compiled_program = fluid.compiler.CompiledProgram(program)\n\n return exe, compiled_program, feed_names, fetch_lists\n\n # Build the preprocessing ops\n def create_operators(args):\n img_mean = [0.485, 0.456, 0.406]\n img_std = [0.229, 0.224, 0.225]\n img_scale = 1.0 / 255.0\n\n decode_op = utils.DecodeImage()\n resize_op = utils.ResizeImage(resize_short=256)\n crop_op = utils.CropImage(size=(args.img_size, args.img_size))\n normalize_op = utils.NormalizeImage(scale=img_scale, mean=img_mean, std=img_std)\n totensor_op = utils.ToTensor()\n\n return [decode_op, resize_op, crop_op, normalize_op, totensor_op]\n\n # Run the preprocessing\n def preprocess(fname, ops):\n data = open(fname, 'rb').read()\n for op in ops:\n data = op(data)\n return data\n\n # Extract the prediction results\n def postprocess(outputs, topk=5):\n output = outputs[0]\n prob = np.array(output).flatten()\n index = prob.argsort(axis=0)[-topk:][::-1].astype('int32')\n return zip(index, prob[index])\n\n args = parse_args()\n operators = create_operators(args)\n exe, program, feed_names, fetch_lists = create_predictor(args)\n\n # Start prediction and evaluation\n with open(data_list, 'r', encoding='utf-8') as f:\n lines = f.readlines()\n results = []\n print('start test %s model accuracy rate...' % model_path)\n start = time.time()\n for line in tqdm(lines):\n path, id = line.replace('\\n', '').split(' ')\n data = preprocess(path, operators)\n data = np.expand_dims(data, axis=0)\n outputs = exe.run(program,\n feed={feed_names[0]: data},\n fetch_list=fetch_lists,\n return_numpy=False)\n lab, prob = postprocess(outputs).__next__()\n if lab == int(id):\n results.append(1)\n end = time.time()\n t = int(round((end - start) * 1000)) / len(lines)\n print(\"Accuracy: %0.5f, average inference time: %d ms\" % (sum(results) / len(lines), t))\n print('=' * 70)\n\n\nif __name__ == '__main__':\n evaluate_quant('dataset/test_list.txt', 'output/quant_inference_model')\n evaluate_infer('dataset/test_list.txt', 'output/inference_model')\n" ]
[ [ "numpy.array", "numpy.expand_dims" ] ]
ToluClassics/transformers
[ "aa19f478acc8f46b7f70e9c525a2bd07dfed548c" ]
[ "src/transformers/models/sew_d/modeling_sew_d.py" ]
[ "# coding=utf-8\n# Copyright 2021 ASAPP Inc. and the HuggingFace Inc. team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\" PyTorch SEW model.\"\"\"\n\nimport math\nimport warnings\nfrom collections.abc import Sequence\nfrom typing import Optional, Tuple, Union\n\nimport numpy as np\nimport torch\nimport torch.utils.checkpoint\nfrom torch import _softmax_backward_data, nn\nfrom torch.nn import CrossEntropyLoss, LayerNorm\n\nfrom transformers.deepspeed import is_deepspeed_zero3_enabled\n\nfrom ...activations import ACT2FN\nfrom ...file_utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward\nfrom ...modeling_outputs import BaseModelOutput, CausalLMOutput, SequenceClassifierOutput\nfrom ...modeling_utils import PreTrainedModel, torch_int_div\nfrom ...utils import logging\nfrom .configuration_sew_d import SEWDConfig\n\n\nlogger = logging.get_logger(__name__)\n\n\n_HIDDEN_STATES_START_POSITION = 1\n\n\n# General docstring\n_CONFIG_FOR_DOC = \"SEWDConfig\"\n_PROCESSOR_FOR_DOC = \"Wav2Vec2Processor\"\n\n# Base docstring\n_CHECKPOINT_FOR_DOC = \"asapp/sew-d-tiny-100k-ft-ls100h\"\n_EXPECTED_OUTPUT_SHAPE = [1, 292, 384]\n\n# CTC docstring\n_CTC_EXPECTED_OUTPUT = \"'MISTER QUILTER IS THE APOSTIL OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'\"\n_CTC_EXPECTED_LOSS = 0.21\n\n# Audio class docstring\n_FEAT_EXTRACTOR_FOR_DOC = \"Wav2Vec2FeatureExtractor\"\n_SEQ_CLASS_CHECKPOINT = \"anton-l/sew-d-mid-400k-ft-keyword-spotting\"\n_SEQ_CLASS_EXPECTED_OUTPUT = \"'_unknown_'\"\n_SEQ_CLASS_EXPECTED_LOSS = 3.16\n\nSEW_D_PRETRAINED_MODEL_ARCHIVE_LIST = [\n \"asapp/sew-d-tiny-100k\",\n \"asapp/sew-d-small-100k\",\n \"asapp/sew-d-mid-100k\",\n \"asapp/sew-d-mid-k127-100k\",\n \"asapp/sew-d-base-100k\",\n \"asapp/sew-d-base-plus-100k\",\n \"asapp/sew-d-mid-400k\",\n \"asapp/sew-d-mid-k127-400k\",\n \"asapp/sew-d-base-plus-400k\",\n # See all SEW models at https://huggingface.co/models?filter=sew-d\n]\n\n\n# Copied from transformers.models.wav2vec2.modeling_wav2vec2._compute_mask_indices\ndef _compute_mask_indices(\n shape: Tuple[int, int],\n mask_prob: float,\n mask_length: int,\n attention_mask: Optional[torch.LongTensor] = None,\n min_masks: int = 0,\n) -> np.ndarray:\n \"\"\"\n Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for\n ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on\n CPU as part of the preprocessing during training.\n\n Args:\n shape: The shape for which to compute masks. This should be of a tuple of size 2 where\n the first element is the batch size and the second element is the length of the axis to span.\n mask_prob: The percentage of the whole axis (between 0 and 1) which will be masked. The number of\n independently generated mask spans of length `mask_length` is computed by\n `mask_prob*shape[1]/mask_length`. 
Note that due to overlaps, `mask_prob` is an upper bound and the\n actual percentage will be smaller.\n mask_length: size of the mask\n min_masks: minimum number of masked spans\n attention_mask: A (right-padded) attention mask which independently shortens the feature axis of\n each batch dimension.\n \"\"\"\n batch_size, sequence_length = shape\n\n if mask_length < 1:\n raise ValueError(\"`mask_length` has to be bigger than 0.\")\n\n if mask_length > sequence_length:\n raise ValueError(\n f\"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length}\"\n f\" and `sequence_length`: {sequence_length}\"\n )\n\n # epsilon is used for probabilistic rounding\n epsilon = np.random.rand(1).item()\n\n def compute_num_masked_span(input_length):\n \"\"\"Given input length, compute how many spans should be masked\"\"\"\n num_masked_span = int(mask_prob * input_length / mask_length + epsilon)\n num_masked_span = max(num_masked_span, min_masks)\n\n # make sure num masked indices <= sequence_length\n if num_masked_span * mask_length > sequence_length:\n num_masked_span = sequence_length // mask_length\n\n return num_masked_span\n\n # compute number of masked spans in batch\n input_lengths = (\n attention_mask.sum(-1).detach().tolist()\n if attention_mask is not None\n else [sequence_length for _ in range(batch_size)]\n )\n\n # SpecAugment mask to fill\n spec_aug_mask = np.zeros((batch_size, sequence_length), dtype=bool)\n spec_aug_mask_idxs = []\n\n max_num_masked_span = compute_num_masked_span(sequence_length)\n\n if max_num_masked_span == 0:\n return spec_aug_mask\n\n for input_length in input_lengths:\n # compute num of masked spans for this input\n num_masked_span = compute_num_masked_span(input_length)\n\n # get random indices to mask\n spec_aug_mask_idx = np.random.choice(\n np.arange(input_length - (mask_length - 1)), num_masked_span, replace=False\n )\n\n # pick first sampled index that will serve as a dummy index to pad vector\n # to ensure same dimension for all batches due to probabilistic rounding\n # Picking first sample just pads those vectors twice.\n dummy_mask_idx = spec_aug_mask_idx[0]\n\n spec_aug_mask_idx = np.concatenate(\n [spec_aug_mask_idx, np.ones(max_num_masked_span - num_masked_span, dtype=np.int32) * dummy_mask_idx]\n )\n spec_aug_mask_idxs.append(spec_aug_mask_idx)\n\n spec_aug_mask_idxs = np.array(spec_aug_mask_idxs)\n\n # expand masked indices to masked spans\n spec_aug_mask_idxs = np.broadcast_to(\n spec_aug_mask_idxs[:, :, None], (batch_size, max_num_masked_span, mask_length)\n )\n spec_aug_mask_idxs = spec_aug_mask_idxs.reshape(batch_size, max_num_masked_span * mask_length)\n\n # add offset to the starting indexes so that the indexes now create a span\n offsets = np.arange(mask_length)[None, None, :]\n offsets = np.broadcast_to(offsets, (batch_size, max_num_masked_span, mask_length)).reshape(\n batch_size, max_num_masked_span * mask_length\n )\n spec_aug_mask_idxs = spec_aug_mask_idxs + offsets\n\n # scatter indices to mask\n np.put_along_axis(spec_aug_mask, spec_aug_mask_idxs, 1, -1)\n\n return spec_aug_mask\n\n\n# Copied from transformers.models.deberta_v2.modeling_deberta_v2.make_log_bucket_position\ndef make_log_bucket_position(relative_pos, bucket_size, max_position):\n sign = np.sign(relative_pos)\n mid = bucket_size // 2\n abs_pos = np.where((relative_pos < mid) & (relative_pos > -mid), mid - 1, np.abs(relative_pos))\n log_pos = np.ceil(np.log(abs_pos / mid) / np.log((max_position - 1) / mid) * (mid - 1)) + mid\n bucket_pos = 
np.where(abs_pos <= mid, relative_pos, log_pos * sign).astype(int)\n return bucket_pos\n\n\n# Copied from transformers.models.deberta_v2.modeling_deberta_v2.build_relative_position\ndef build_relative_position(query_size, key_size, bucket_size=-1, max_position=-1):\n \"\"\"\n Build relative position according to the query and key\n\n We assume the absolute position of query \\\\(P_q\\\\) is range from (0, query_size) and the absolute position of key\n \\\\(P_k\\\\) is range from (0, key_size), The relative positions from query to key is \\\\(R_{q \\\\rightarrow k} = P_q -\n P_k\\\\)\n\n Args:\n query_size (int): the length of query\n key_size (int): the length of key\n bucket_size (int): the size of position bucket\n max_position (int): the maximum allowed absolute position\n\n Return:\n `torch.LongTensor`: A tensor with shape [1, query_size, key_size]\n\n \"\"\"\n q_ids = np.arange(0, query_size)\n k_ids = np.arange(0, key_size)\n rel_pos_ids = q_ids[:, None] - np.tile(k_ids, (q_ids.shape[0], 1))\n if bucket_size > 0 and max_position > 0:\n rel_pos_ids = make_log_bucket_position(rel_pos_ids, bucket_size, max_position)\n rel_pos_ids = torch.tensor(rel_pos_ids, dtype=torch.long)\n rel_pos_ids = rel_pos_ids[:query_size, :]\n rel_pos_ids = rel_pos_ids.unsqueeze(0)\n return rel_pos_ids\n\n\n@torch.jit.script\n# Copied from transformers.models.deberta.modeling_deberta.c2p_dynamic_expand\ndef c2p_dynamic_expand(c2p_pos, query_layer, relative_pos):\n return c2p_pos.expand([query_layer.size(0), query_layer.size(1), query_layer.size(2), relative_pos.size(-1)])\n\n\n@torch.jit.script\n# Copied from transformers.models.deberta.modeling_deberta.p2c_dynamic_expand\ndef p2c_dynamic_expand(c2p_pos, query_layer, key_layer):\n return c2p_pos.expand([query_layer.size(0), query_layer.size(1), key_layer.size(-2), key_layer.size(-2)])\n\n\n@torch.jit.script\n# Copied from transformers.models.deberta.modeling_deberta.pos_dynamic_expand\ndef pos_dynamic_expand(pos_index, p2c_att, key_layer):\n return pos_index.expand(p2c_att.size()[:2] + (pos_index.size(-2), key_layer.size(-2)))\n\n\n# Copied from transformers.models.deberta.modeling_deberta.get_mask\ndef get_mask(input, local_context):\n if not isinstance(local_context, DropoutContext):\n dropout = local_context\n mask = None\n else:\n dropout = local_context.dropout\n dropout *= local_context.scale\n mask = local_context.mask if local_context.reuse_mask else None\n\n if dropout > 0 and mask is None:\n mask = (1 - torch.empty_like(input).bernoulli_(1 - dropout)).bool()\n\n if isinstance(local_context, DropoutContext):\n if local_context.mask is None:\n local_context.mask = mask\n\n return mask, dropout\n\n\n# Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2NoLayerNormConvLayer with Wav2Vec2->SEWD\nclass SEWDNoLayerNormConvLayer(nn.Module):\n def __init__(self, config, layer_id=0):\n super().__init__()\n self.in_conv_dim = config.conv_dim[layer_id - 1] if layer_id > 0 else 1\n self.out_conv_dim = config.conv_dim[layer_id]\n\n self.conv = nn.Conv1d(\n self.in_conv_dim,\n self.out_conv_dim,\n kernel_size=config.conv_kernel[layer_id],\n stride=config.conv_stride[layer_id],\n bias=config.conv_bias,\n )\n self.activation = ACT2FN[config.feat_extract_activation]\n\n def forward(self, hidden_states):\n hidden_states = self.conv(hidden_states)\n hidden_states = self.activation(hidden_states)\n return hidden_states\n\n\n# Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2LayerNormConvLayer with Wav2Vec2->SEWD\nclass 
SEWDLayerNormConvLayer(nn.Module):\n def __init__(self, config, layer_id=0):\n super().__init__()\n self.in_conv_dim = config.conv_dim[layer_id - 1] if layer_id > 0 else 1\n self.out_conv_dim = config.conv_dim[layer_id]\n\n self.conv = nn.Conv1d(\n self.in_conv_dim,\n self.out_conv_dim,\n kernel_size=config.conv_kernel[layer_id],\n stride=config.conv_stride[layer_id],\n bias=config.conv_bias,\n )\n self.layer_norm = nn.LayerNorm(self.out_conv_dim, elementwise_affine=True)\n self.activation = ACT2FN[config.feat_extract_activation]\n\n def forward(self, hidden_states):\n hidden_states = self.conv(hidden_states)\n\n hidden_states = hidden_states.transpose(-2, -1)\n hidden_states = self.layer_norm(hidden_states)\n hidden_states = hidden_states.transpose(-2, -1)\n\n hidden_states = self.activation(hidden_states)\n return hidden_states\n\n\n# Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2GroupNormConvLayer with Wav2Vec2->SEWD\nclass SEWDGroupNormConvLayer(nn.Module):\n def __init__(self, config, layer_id=0):\n super().__init__()\n self.in_conv_dim = config.conv_dim[layer_id - 1] if layer_id > 0 else 1\n self.out_conv_dim = config.conv_dim[layer_id]\n\n self.conv = nn.Conv1d(\n self.in_conv_dim,\n self.out_conv_dim,\n kernel_size=config.conv_kernel[layer_id],\n stride=config.conv_stride[layer_id],\n bias=config.conv_bias,\n )\n self.activation = ACT2FN[config.feat_extract_activation]\n\n self.layer_norm = nn.GroupNorm(num_groups=self.out_conv_dim, num_channels=self.out_conv_dim, affine=True)\n\n def forward(self, hidden_states):\n hidden_states = self.conv(hidden_states)\n hidden_states = self.layer_norm(hidden_states)\n hidden_states = self.activation(hidden_states)\n return hidden_states\n\n\n# Copied from transformers.models.sew.modeling_sew.SEWPositionalConvEmbedding with SEW->SEWD\nclass SEWDPositionalConvEmbedding(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.conv = nn.Conv1d(\n config.hidden_size,\n config.hidden_size,\n kernel_size=config.num_conv_pos_embeddings,\n padding=config.num_conv_pos_embeddings // 2,\n groups=config.num_conv_pos_embedding_groups,\n stride=config.squeeze_factor,\n )\n\n if is_deepspeed_zero3_enabled():\n import deepspeed\n\n with deepspeed.zero.GatheredParameters(self.conv.weight, modifier_rank=0):\n self.conv = nn.utils.weight_norm(self.conv, name=\"weight\", dim=2)\n deepspeed.zero.register_external_parameter(self, self.conv.weight_v)\n deepspeed.zero.register_external_parameter(self, self.conv.weight_g)\n else:\n self.conv = nn.utils.weight_norm(self.conv, name=\"weight\", dim=2)\n\n self.padding = SEWDSamePadLayer(config.num_conv_pos_embeddings)\n self.activation = ACT2FN[config.feat_extract_activation]\n\n def forward(self, hidden_states):\n hidden_states = self.conv(hidden_states)\n hidden_states = self.padding(hidden_states)\n hidden_states = self.activation(hidden_states)\n\n return hidden_states\n\n\n# Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2SamePadLayer with Wav2Vec2->SEW\nclass SEWDSamePadLayer(nn.Module):\n def __init__(self, num_conv_pos_embeddings):\n super().__init__()\n self.num_pad_remove = 1 if num_conv_pos_embeddings % 2 == 0 else 0\n\n def forward(self, hidden_states):\n if self.num_pad_remove > 0:\n hidden_states = hidden_states[:, :, : -self.num_pad_remove]\n return hidden_states\n\n\n# Copied from transformers.models.sew.modeling_sew.SEWUpsampling with SEW->SEWD\nclass SEWDUpsampling(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.projection = 
nn.Linear(config.hidden_size, config.hidden_size * config.squeeze_factor)\n self.activation = ACT2FN[config.feat_extract_activation]\n self.squeeze_factor = config.squeeze_factor\n\n def forward(self, hidden_states):\n hidden_states = self.projection(hidden_states)\n hidden_states = self.activation(hidden_states)\n\n if self.squeeze_factor > 1:\n # transform embedding channels to sequence length\n bsz, src_len, src_embed_dim = hidden_states.size()\n tgt_len = src_len * self.squeeze_factor\n tgt_embed_dim = src_embed_dim // self.squeeze_factor\n hidden_states = hidden_states.reshape(bsz, src_len, self.squeeze_factor, tgt_embed_dim)\n hidden_states = hidden_states.reshape(bsz, tgt_len, tgt_embed_dim)\n\n return hidden_states\n\n\n# Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2FeatureEncoder with Wav2Vec2->SEWD\nclass SEWDFeatureEncoder(nn.Module):\n \"\"\"Construct the features from raw audio waveform\"\"\"\n\n def __init__(self, config):\n super().__init__()\n\n if config.feat_extract_norm == \"group\":\n conv_layers = [SEWDGroupNormConvLayer(config, layer_id=0)] + [\n SEWDNoLayerNormConvLayer(config, layer_id=i + 1) for i in range(config.num_feat_extract_layers - 1)\n ]\n elif config.feat_extract_norm == \"layer\":\n conv_layers = [SEWDLayerNormConvLayer(config, layer_id=i) for i in range(config.num_feat_extract_layers)]\n else:\n raise ValueError(\n f\"`config.feat_extract_norm` is {config.feat_extract_norm}, but has to be one of ['group', 'layer']\"\n )\n self.conv_layers = nn.ModuleList(conv_layers)\n self.gradient_checkpointing = False\n self._requires_grad = True\n\n def _freeze_parameters(self):\n for param in self.parameters():\n param.requires_grad = False\n self._requires_grad = False\n\n def forward(self, input_values):\n hidden_states = input_values[:, None]\n\n # make sure hidden_states require grad for gradient_checkpointing\n if self._requires_grad and self.training:\n hidden_states.requires_grad = True\n\n for conv_layer in self.conv_layers:\n if self._requires_grad and self.gradient_checkpointing and self.training:\n\n def create_custom_forward(module):\n def custom_forward(*inputs):\n return module(*inputs)\n\n return custom_forward\n\n hidden_states = torch.utils.checkpoint.checkpoint(\n create_custom_forward(conv_layer),\n hidden_states,\n )\n else:\n hidden_states = conv_layer(hidden_states)\n\n return hidden_states\n\n\nclass SEWDFeatureExtractor(SEWDFeatureEncoder):\n def __init__(self, config):\n super().__init__(config)\n warnings.warn(\n f\"The class `{self.__class__.__name__}` has been deprecated \"\n \"and will be removed in Transformers v5. 
\"\n f\"Use `{self.__class__.__bases__[0].__name__}` instead.\",\n FutureWarning,\n )\n\n\n# Copied from transformers.models.deberta.modeling_deberta.ContextPooler\nclass ContextPooler(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.dense = nn.Linear(config.pooler_hidden_size, config.pooler_hidden_size)\n self.dropout = StableDropout(config.pooler_dropout)\n self.config = config\n\n def forward(self, hidden_states):\n # We \"pool\" the model by simply taking the hidden state corresponding\n # to the first token.\n\n context_token = hidden_states[:, 0]\n context_token = self.dropout(context_token)\n pooled_output = self.dense(context_token)\n pooled_output = ACT2FN[self.config.pooler_hidden_act](pooled_output)\n return pooled_output\n\n @property\n def output_dim(self):\n return self.config.hidden_size\n\n\n# Copied from transformers.models.deberta.modeling_deberta.XSoftmax with deberta->deberta_v2\nclass XSoftmax(torch.autograd.Function):\n \"\"\"\n Masked Softmax which is optimized for saving memory\n\n Args:\n input (`torch.tensor`): The input tensor that will apply softmax.\n mask (`torch.IntTensor`):\n The mask matrix where 0 indicate that element will be ignored in the softmax calculation.\n dim (int): The dimension that will apply softmax\n\n Example:\n\n ```python\n >>> import torch\n >>> from transformers.models.deberta_v2.modeling_deberta_v2 import XSoftmax\n\n >>> # Make a tensor\n >>> x = torch.randn([4, 20, 100])\n\n >>> # Create a mask\n >>> mask = (x > 0).int()\n\n >>> # Specify the dimension to apply softmax\n >>> dim = -1\n\n >>> y = XSoftmax.apply(x, mask, dim)\n ```\"\"\"\n\n @staticmethod\n def forward(self, input, mask, dim):\n self.dim = dim\n rmask = ~(mask.bool())\n\n output = input.masked_fill(rmask, float(\"-inf\"))\n output = torch.softmax(output, self.dim)\n output.masked_fill_(rmask, 0)\n self.save_for_backward(output)\n return output\n\n @staticmethod\n def backward(self, grad_output):\n (output,) = self.saved_tensors\n inputGrad = _softmax_backward_data(grad_output, output, self.dim, output)\n return inputGrad, None, None\n\n @staticmethod\n def symbolic(g, self, mask, dim):\n import torch.onnx.symbolic_helper as sym_help\n from torch.onnx.symbolic_opset9 import masked_fill, softmax\n\n mask_cast_value = g.op(\"Cast\", mask, to_i=sym_help.cast_pytorch_to_onnx[\"Long\"])\n r_mask = g.op(\n \"Cast\",\n g.op(\"Sub\", g.op(\"Constant\", value_t=torch.tensor(1, dtype=torch.int64)), mask_cast_value),\n to_i=sym_help.cast_pytorch_to_onnx[\"Byte\"],\n )\n output = masked_fill(g, self, r_mask, g.op(\"Constant\", value_t=torch.tensor(float(\"-inf\"))))\n output = softmax(g, output, dim)\n return masked_fill(g, output, r_mask, g.op(\"Constant\", value_t=torch.tensor(0, dtype=torch.uint8)))\n\n\n# Copied from transformers.models.deberta.modeling_deberta.DropoutContext\nclass DropoutContext(object):\n def __init__(self):\n self.dropout = 0\n self.mask = None\n self.scale = 1\n self.reuse_mask = True\n\n\n# Copied from transformers.models.deberta.modeling_deberta.XDropout\nclass XDropout(torch.autograd.Function):\n \"\"\"Optimized dropout function to save computation and memory by using mask operation instead of multiplication.\"\"\"\n\n @staticmethod\n def forward(ctx, input, local_ctx):\n mask, dropout = get_mask(input, local_ctx)\n ctx.scale = 1.0 / (1 - dropout)\n if dropout > 0:\n ctx.save_for_backward(mask)\n return input.masked_fill(mask, 0) * ctx.scale\n else:\n return input\n\n @staticmethod\n def backward(ctx, grad_output):\n if ctx.scale > 
1:\n (mask,) = ctx.saved_tensors\n return grad_output.masked_fill(mask, 0) * ctx.scale, None\n else:\n return grad_output, None\n\n\n# Copied from transformers.models.deberta.modeling_deberta.StableDropout\nclass StableDropout(nn.Module):\n \"\"\"\n Optimized dropout module for stabilizing the training\n\n Args:\n drop_prob (float): the dropout probabilities\n \"\"\"\n\n def __init__(self, drop_prob):\n super().__init__()\n self.drop_prob = drop_prob\n self.count = 0\n self.context_stack = None\n\n def forward(self, x):\n \"\"\"\n Call the module\n\n Args:\n x (`torch.tensor`): The input tensor to apply dropout\n \"\"\"\n if self.training and self.drop_prob > 0:\n return XDropout.apply(x, self.get_context())\n return x\n\n def clear_context(self):\n self.count = 0\n self.context_stack = None\n\n def init_context(self, reuse_mask=True, scale=1):\n if self.context_stack is None:\n self.context_stack = []\n self.count = 0\n for c in self.context_stack:\n c.reuse_mask = reuse_mask\n c.scale = scale\n\n def get_context(self):\n if self.context_stack is not None:\n if self.count >= len(self.context_stack):\n self.context_stack.append(DropoutContext())\n ctx = self.context_stack[self.count]\n ctx.dropout = self.drop_prob\n self.count += 1\n return ctx\n else:\n return self.drop_prob\n\n\n# Copied from transformers.models.deberta.modeling_deberta.DebertaSelfOutput with DebertaV2->SEWD, DebertaLayerNorm->LayerNorm, hidden_dropout_prob->activation_dropout\nclass SEWDSelfOutput(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.dense = nn.Linear(config.hidden_size, config.hidden_size)\n self.LayerNorm = LayerNorm(config.hidden_size, config.layer_norm_eps)\n self.dropout = StableDropout(config.activation_dropout)\n\n def forward(self, hidden_states, input_tensor):\n hidden_states = self.dense(hidden_states)\n hidden_states = self.dropout(hidden_states)\n hidden_states = self.LayerNorm(hidden_states + input_tensor)\n return hidden_states\n\n\n# Copied from transformers.models.deberta_v2.modeling_deberta_v2.DisentangledSelfAttention with attention_probs_dropout_prob->attention_dropout, hidden_dropout_prob->activation_dropout\nclass DisentangledSelfAttention(nn.Module):\n \"\"\"\n Disentangled self-attention module\n\n Parameters:\n config (`DebertaV2Config`):\n A model config class instance with the configuration to build a new model. 
The schema is similar to\n *BertConfig*, for more details, please refer [`DebertaV2Config`]\n\n \"\"\"\n\n def __init__(self, config):\n super().__init__()\n if config.hidden_size % config.num_attention_heads != 0:\n raise ValueError(\n f\"The hidden size ({config.hidden_size}) is not a multiple of the number of attention \"\n f\"heads ({config.num_attention_heads})\"\n )\n self.num_attention_heads = config.num_attention_heads\n _attention_head_size = config.hidden_size // config.num_attention_heads\n self.attention_head_size = getattr(config, \"attention_head_size\", _attention_head_size)\n self.all_head_size = self.num_attention_heads * self.attention_head_size\n self.query_proj = nn.Linear(config.hidden_size, self.all_head_size, bias=True)\n self.key_proj = nn.Linear(config.hidden_size, self.all_head_size, bias=True)\n self.value_proj = nn.Linear(config.hidden_size, self.all_head_size, bias=True)\n\n self.share_att_key = getattr(config, \"share_att_key\", False)\n self.pos_att_type = config.pos_att_type if config.pos_att_type is not None else []\n self.relative_attention = getattr(config, \"relative_attention\", False)\n\n if self.relative_attention:\n self.position_buckets = getattr(config, \"position_buckets\", -1)\n self.max_relative_positions = getattr(config, \"max_relative_positions\", -1)\n if self.max_relative_positions < 1:\n self.max_relative_positions = config.max_position_embeddings\n self.pos_ebd_size = self.max_relative_positions\n if self.position_buckets > 0:\n self.pos_ebd_size = self.position_buckets\n\n self.pos_dropout = StableDropout(config.activation_dropout)\n\n if not self.share_att_key:\n if \"c2p\" in self.pos_att_type or \"p2p\" in self.pos_att_type:\n self.pos_key_proj = nn.Linear(config.hidden_size, self.all_head_size, bias=True)\n if \"p2c\" in self.pos_att_type or \"p2p\" in self.pos_att_type:\n self.pos_query_proj = nn.Linear(config.hidden_size, self.all_head_size)\n\n self.dropout = StableDropout(config.attention_dropout)\n\n def transpose_for_scores(self, x, attention_heads):\n new_x_shape = x.size()[:-1] + (attention_heads, -1)\n x = x.view(*new_x_shape)\n return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))\n\n def forward(\n self,\n hidden_states,\n attention_mask,\n output_attentions=False,\n query_states=None,\n relative_pos=None,\n rel_embeddings=None,\n ):\n \"\"\"\n Call the module\n\n Args:\n hidden_states (`torch.FloatTensor`):\n Input states to the module usually the output from previous layer, it will be the Q,K and V in\n *Attention(Q,K,V)*\n\n attention_mask (`torch.ByteTensor`):\n An attention mask matrix of shape [*B*, *N*, *N*] where *B* is the batch size, *N* is the maximum\n sequence length in which element [i,j] = *1* means the *i* th token in the input can attend to the *j*\n th token.\n\n output_attentions (`bool`, optional):\n Whether return the attention matrix.\n\n query_states (`torch.FloatTensor`, optional):\n The *Q* state in *Attention(Q,K,V)*.\n\n relative_pos (`torch.LongTensor`):\n The relative position encoding between the tokens in the sequence. It's of shape [*B*, *N*, *N*] with\n values ranging in [*-max_relative_positions*, *max_relative_positions*].\n\n rel_embeddings (`torch.FloatTensor`):\n The embedding of relative distances. 
It's a tensor of shape [\\\\(2 \\\\times\n \\\\text{max_relative_positions}\\\\), *hidden_size*].\n\n\n \"\"\"\n if query_states is None:\n query_states = hidden_states\n query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads)\n key_layer = self.transpose_for_scores(self.key_proj(hidden_states), self.num_attention_heads)\n value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads)\n\n rel_att = None\n # Take the dot product between \"query\" and \"key\" to get the raw attention scores.\n scale_factor = 1\n if \"c2p\" in self.pos_att_type:\n scale_factor += 1\n if \"p2c\" in self.pos_att_type:\n scale_factor += 1\n if \"p2p\" in self.pos_att_type:\n scale_factor += 1\n scale = math.sqrt(query_layer.size(-1) * scale_factor)\n attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2)) / scale\n if self.relative_attention:\n rel_embeddings = self.pos_dropout(rel_embeddings)\n rel_att = self.disentangled_attention_bias(\n query_layer, key_layer, relative_pos, rel_embeddings, scale_factor\n )\n\n if rel_att is not None:\n attention_scores = attention_scores + rel_att\n attention_scores = attention_scores\n attention_scores = attention_scores.view(\n -1, self.num_attention_heads, attention_scores.size(-2), attention_scores.size(-1)\n )\n\n # bsz x height x length x dimension\n attention_probs = XSoftmax.apply(attention_scores, attention_mask, -1)\n attention_probs = self.dropout(attention_probs)\n context_layer = torch.bmm(\n attention_probs.view(-1, attention_probs.size(-2), attention_probs.size(-1)), value_layer\n )\n context_layer = (\n context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1))\n .permute(0, 2, 1, 3)\n .contiguous()\n )\n new_context_layer_shape = context_layer.size()[:-2] + (-1,)\n context_layer = context_layer.view(*new_context_layer_shape)\n if output_attentions:\n return (context_layer, attention_probs)\n else:\n return context_layer\n\n def disentangled_attention_bias(self, query_layer, key_layer, relative_pos, rel_embeddings, scale_factor):\n if relative_pos is None:\n q = query_layer.size(-2)\n relative_pos = build_relative_position(\n q, key_layer.size(-2), bucket_size=self.position_buckets, max_position=self.max_relative_positions\n )\n if relative_pos.dim() == 2:\n relative_pos = relative_pos.unsqueeze(0).unsqueeze(0)\n elif relative_pos.dim() == 3:\n relative_pos = relative_pos.unsqueeze(1)\n # bsz x height x query x key\n elif relative_pos.dim() != 4:\n raise ValueError(f\"Relative position ids must be of dim 2 or 3 or 4. 
{relative_pos.dim()}\")\n\n att_span = self.pos_ebd_size\n relative_pos = relative_pos.long().to(query_layer.device)\n\n rel_embeddings = rel_embeddings[self.pos_ebd_size - att_span : self.pos_ebd_size + att_span, :].unsqueeze(0)\n if self.share_att_key:\n pos_query_layer = self.transpose_for_scores(\n self.query_proj(rel_embeddings), self.num_attention_heads\n ).repeat(query_layer.size(0) // self.num_attention_heads, 1, 1)\n pos_key_layer = self.transpose_for_scores(self.key_proj(rel_embeddings), self.num_attention_heads).repeat(\n query_layer.size(0) // self.num_attention_heads, 1, 1\n )\n else:\n if \"c2p\" in self.pos_att_type or \"p2p\" in self.pos_att_type:\n pos_key_layer = self.transpose_for_scores(\n self.pos_key_proj(rel_embeddings), self.num_attention_heads\n ).repeat(\n query_layer.size(0) // self.num_attention_heads, 1, 1\n ) # .split(self.all_head_size, dim=-1)\n if \"p2c\" in self.pos_att_type or \"p2p\" in self.pos_att_type:\n pos_query_layer = self.transpose_for_scores(\n self.pos_query_proj(rel_embeddings), self.num_attention_heads\n ).repeat(\n query_layer.size(0) // self.num_attention_heads, 1, 1\n ) # .split(self.all_head_size, dim=-1)\n\n score = 0\n # content->position\n if \"c2p\" in self.pos_att_type:\n scale = math.sqrt(pos_key_layer.size(-1) * scale_factor)\n c2p_att = torch.bmm(query_layer, pos_key_layer.transpose(-1, -2))\n c2p_pos = torch.clamp(relative_pos + att_span, 0, att_span * 2 - 1)\n c2p_att = torch.gather(\n c2p_att,\n dim=-1,\n index=c2p_pos.squeeze(0).expand([query_layer.size(0), query_layer.size(1), relative_pos.size(-1)]),\n )\n score += c2p_att / scale\n\n # position->content\n if \"p2c\" in self.pos_att_type or \"p2p\" in self.pos_att_type:\n scale = math.sqrt(pos_query_layer.size(-1) * scale_factor)\n if key_layer.size(-2) != query_layer.size(-2):\n r_pos = build_relative_position(\n key_layer.size(-2),\n key_layer.size(-2),\n bucket_size=self.position_buckets,\n max_position=self.max_relative_positions,\n ).to(query_layer.device)\n r_pos = r_pos.unsqueeze(0)\n else:\n r_pos = relative_pos\n\n p2c_pos = torch.clamp(-r_pos + att_span, 0, att_span * 2 - 1)\n\n if \"p2c\" in self.pos_att_type:\n p2c_att = torch.bmm(key_layer, pos_query_layer.transpose(-1, -2))\n p2c_att = torch.gather(\n p2c_att,\n dim=-1,\n index=p2c_pos.squeeze(0).expand([query_layer.size(0), key_layer.size(-2), key_layer.size(-2)]),\n ).transpose(-1, -2)\n score += p2c_att / scale\n\n # position->position\n if \"p2p\" in self.pos_att_type:\n pos_query = pos_query_layer[:, :, att_span:, :]\n p2p_att = torch.matmul(pos_query, pos_key_layer.transpose(-1, -2))\n p2p_att = p2p_att.expand(query_layer.size()[:2] + p2p_att.size()[2:])\n p2p_att = torch.gather(\n p2p_att,\n dim=-1,\n index=c2p_pos.expand(\n [query_layer.size(0), query_layer.size(1), query_layer.size(2), relative_pos.size(-1)]\n ),\n )\n score += p2p_att\n\n return score\n\n\n# Copied from transformers.models.deberta.modeling_deberta.DebertaAttention with Deberta->SEWD\nclass SEWDAttention(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.self = DisentangledSelfAttention(config)\n self.output = SEWDSelfOutput(config)\n self.config = config\n\n def forward(\n self,\n hidden_states,\n attention_mask,\n output_attentions=False,\n query_states=None,\n relative_pos=None,\n rel_embeddings=None,\n ):\n self_output = self.self(\n hidden_states,\n attention_mask,\n output_attentions,\n query_states=query_states,\n relative_pos=relative_pos,\n rel_embeddings=rel_embeddings,\n )\n if output_attentions:\n 
self_output, att_matrix = self_output\n if query_states is None:\n query_states = hidden_states\n attention_output = self.output(self_output, query_states)\n\n if output_attentions:\n return (attention_output, att_matrix)\n else:\n return attention_output\n\n\n# Copied from transformers.models.bert.modeling_bert.BertIntermediate with Bert->SEWD\nclass SEWDIntermediate(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.dense = nn.Linear(config.hidden_size, config.intermediate_size)\n if isinstance(config.hidden_act, str):\n self.intermediate_act_fn = ACT2FN[config.hidden_act]\n else:\n self.intermediate_act_fn = config.hidden_act\n\n def forward(self, hidden_states):\n hidden_states = self.dense(hidden_states)\n hidden_states = self.intermediate_act_fn(hidden_states)\n return hidden_states\n\n\n# Copied from transformers.models.deberta.modeling_deberta.DebertaOutput with DebertaLayerNorm->LayerNorm, hidden_dropout_prob->activation_dropout\nclass SEWDOutput(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.dense = nn.Linear(config.intermediate_size, config.hidden_size)\n self.LayerNorm = LayerNorm(config.hidden_size, config.layer_norm_eps)\n self.dropout = StableDropout(config.activation_dropout)\n self.config = config\n\n def forward(self, hidden_states, input_tensor):\n hidden_states = self.dense(hidden_states)\n hidden_states = self.dropout(hidden_states)\n hidden_states = self.LayerNorm(hidden_states + input_tensor)\n return hidden_states\n\n\n# Copied from transformers.models.deberta.modeling_deberta.DebertaLayer with Deberta->SEWD\nclass SEWDLayer(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.attention = SEWDAttention(config)\n self.intermediate = SEWDIntermediate(config)\n self.output = SEWDOutput(config)\n\n def forward(\n self,\n hidden_states,\n attention_mask,\n query_states=None,\n relative_pos=None,\n rel_embeddings=None,\n output_attentions=False,\n ):\n attention_output = self.attention(\n hidden_states,\n attention_mask,\n output_attentions=output_attentions,\n query_states=query_states,\n relative_pos=relative_pos,\n rel_embeddings=rel_embeddings,\n )\n if output_attentions:\n attention_output, att_matrix = attention_output\n intermediate_output = self.intermediate(attention_output)\n layer_output = self.output(intermediate_output, attention_output)\n if output_attentions:\n return (layer_output, att_matrix)\n else:\n return layer_output\n\n\n# Copied from transformers.models.deberta_v2.modeling_deberta_v2.ConvLayer\nclass ConvLayer(nn.Module):\n def __init__(self, config):\n super().__init__()\n kernel_size = getattr(config, \"conv_kernel_size\", 3)\n groups = getattr(config, \"conv_groups\", 1)\n self.conv_act = getattr(config, \"conv_act\", \"tanh\")\n self.conv = nn.Conv1d(\n config.hidden_size, config.hidden_size, kernel_size, padding=(kernel_size - 1) // 2, groups=groups\n )\n self.LayerNorm = LayerNorm(config.hidden_size, config.layer_norm_eps)\n self.dropout = StableDropout(config.hidden_dropout_prob)\n self.config = config\n\n def forward(self, hidden_states, residual_states, input_mask):\n out = self.conv(hidden_states.permute(0, 2, 1).contiguous()).permute(0, 2, 1).contiguous()\n rmask = (1 - input_mask).bool()\n out.masked_fill_(rmask.unsqueeze(-1).expand(out.size()), 0)\n out = ACT2FN[self.conv_act](self.dropout(out))\n\n layer_norm_input = residual_states + out\n output = self.LayerNorm(layer_norm_input).to(layer_norm_input)\n\n if input_mask is None:\n output_states = output\n else:\n if 
input_mask.dim() != layer_norm_input.dim():\n if input_mask.dim() == 4:\n input_mask = input_mask.squeeze(1).squeeze(1)\n input_mask = input_mask.unsqueeze(2)\n\n input_mask = input_mask.to(output.dtype)\n output_states = output * input_mask\n\n return output_states\n\n\n# Copied from transformers.models.deberta_v2.modeling_deberta_v2.DebertaV2Encoder with DebertaV2->SEWD\nclass SEWDTransformerEncoder(nn.Module):\n \"\"\"Modified BertEncoder with relative position bias support\"\"\"\n\n def __init__(self, config):\n super().__init__()\n\n self.layer = nn.ModuleList([SEWDLayer(config) for _ in range(config.num_hidden_layers)])\n self.relative_attention = getattr(config, \"relative_attention\", False)\n\n if self.relative_attention:\n self.max_relative_positions = getattr(config, \"max_relative_positions\", -1)\n if self.max_relative_positions < 1:\n self.max_relative_positions = config.max_position_embeddings\n\n self.position_buckets = getattr(config, \"position_buckets\", -1)\n pos_ebd_size = self.max_relative_positions * 2\n\n if self.position_buckets > 0:\n pos_ebd_size = self.position_buckets * 2\n\n self.rel_embeddings = nn.Embedding(pos_ebd_size, config.hidden_size)\n\n self.norm_rel_ebd = [x.strip() for x in getattr(config, \"norm_rel_ebd\", \"none\").lower().split(\"|\")]\n\n if \"layer_norm\" in self.norm_rel_ebd:\n self.LayerNorm = LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=True)\n\n self.conv = ConvLayer(config) if getattr(config, \"conv_kernel_size\", 0) > 0 else None\n self.gradient_checkpointing = False\n\n def get_rel_embedding(self):\n rel_embeddings = self.rel_embeddings.weight if self.relative_attention else None\n if rel_embeddings is not None and (\"layer_norm\" in self.norm_rel_ebd):\n rel_embeddings = self.LayerNorm(rel_embeddings)\n return rel_embeddings\n\n def get_attention_mask(self, attention_mask):\n if attention_mask.dim() <= 2:\n extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)\n attention_mask = extended_attention_mask * extended_attention_mask.squeeze(-2).unsqueeze(-1)\n attention_mask = attention_mask.byte()\n elif attention_mask.dim() == 3:\n attention_mask = attention_mask.unsqueeze(1)\n\n return attention_mask\n\n def get_rel_pos(self, hidden_states, query_states=None, relative_pos=None):\n if self.relative_attention and relative_pos is None:\n q = query_states.size(-2) if query_states is not None else hidden_states.size(-2)\n relative_pos = build_relative_position(\n q, hidden_states.size(-2), bucket_size=self.position_buckets, max_position=self.max_relative_positions\n )\n return relative_pos\n\n def forward(\n self,\n hidden_states,\n attention_mask,\n output_hidden_states=True,\n output_attentions=False,\n query_states=None,\n relative_pos=None,\n return_dict=True,\n ):\n if attention_mask.dim() <= 2:\n input_mask = attention_mask\n else:\n input_mask = (attention_mask.sum(-2) > 0).byte()\n attention_mask = self.get_attention_mask(attention_mask)\n relative_pos = self.get_rel_pos(hidden_states, query_states, relative_pos)\n\n all_hidden_states = () if output_hidden_states else None\n all_attentions = () if output_attentions else None\n\n if isinstance(hidden_states, Sequence):\n next_kv = hidden_states[0]\n else:\n next_kv = hidden_states\n rel_embeddings = self.get_rel_embedding()\n output_states = next_kv\n for i, layer_module in enumerate(self.layer):\n\n if output_hidden_states:\n all_hidden_states = all_hidden_states + (output_states,)\n\n if self.gradient_checkpointing and self.training:\n\n def 
create_custom_forward(module):\n def custom_forward(*inputs):\n return module(*inputs, output_attentions)\n\n return custom_forward\n\n output_states = torch.utils.checkpoint.checkpoint(\n create_custom_forward(layer_module),\n next_kv,\n attention_mask,\n query_states,\n relative_pos,\n rel_embeddings,\n )\n else:\n output_states = layer_module(\n next_kv,\n attention_mask,\n query_states=query_states,\n relative_pos=relative_pos,\n rel_embeddings=rel_embeddings,\n output_attentions=output_attentions,\n )\n\n if output_attentions:\n output_states, att_m = output_states\n\n if i == 0 and self.conv is not None:\n output_states = self.conv(hidden_states, output_states, input_mask)\n\n if query_states is not None:\n query_states = output_states\n if isinstance(hidden_states, Sequence):\n next_kv = hidden_states[i + 1] if i + 1 < len(self.layer) else None\n else:\n next_kv = output_states\n\n if output_attentions:\n all_attentions = all_attentions + (att_m,)\n\n if output_hidden_states:\n all_hidden_states = all_hidden_states + (output_states,)\n\n if not return_dict:\n return tuple(v for v in [output_states, all_hidden_states, all_attentions] if v is not None)\n return BaseModelOutput(\n last_hidden_state=output_states, hidden_states=all_hidden_states, attentions=all_attentions\n )\n\n\nclass SEWDEncoder(nn.Module):\n def __init__(self, config):\n super().__init__()\n self.config = config\n self.pos_conv_embed = SEWDPositionalConvEmbedding(config)\n self.pool = nn.AvgPool1d(config.squeeze_factor, config.squeeze_factor)\n self.encoder = SEWDTransformerEncoder(config)\n self.upsample = SEWDUpsampling(config)\n self.gradient_checkpointing = False\n\n def forward(\n self,\n hidden_states,\n attention_mask=None,\n output_attentions=False,\n output_hidden_states=False,\n return_dict=True,\n ):\n max_encoder_length = hidden_states.shape[1] // self.config.squeeze_factor\n if attention_mask is None:\n attention_mask = torch.ones(\n (hidden_states.shape[0], max_encoder_length), dtype=torch.long, device=hidden_states.device\n )\n else:\n # make sure padded tokens output 0\n hidden_states[~attention_mask.bool()] = 0.0\n\n input_lengths = (attention_mask.long()).sum(-1)\n # apply pooling formula to get real output_lengths\n output_lengths = input_lengths // self.config.squeeze_factor\n attention_ids = (\n torch.arange(0, max_encoder_length, device=output_lengths.device)\n .view(1, -1)\n .expand(output_lengths.shape[0], -1)\n )\n attention_mask = (attention_ids < output_lengths.view(-1, 1)).long()\n\n n_input_timesteps = hidden_states.shape[1]\n\n hidden_states = hidden_states.transpose(1, 2)\n position_embeddings = self.pos_conv_embed(hidden_states)\n pooled_hidden_states = self.pool(hidden_states)\n min_length = min(position_embeddings.size(-1), pooled_hidden_states.size(-1))\n hidden_states = pooled_hidden_states[..., :min_length] + position_embeddings[..., :min_length]\n hidden_states = hidden_states.transpose(1, 2)\n\n encoder_outputs = self.encoder(hidden_states, attention_mask, output_hidden_states, output_attentions)\n\n hidden_states = self.upsample(encoder_outputs.last_hidden_state)\n if hidden_states.shape[1] < n_input_timesteps:\n hidden_states = nn.functional.pad(hidden_states, (0, 0, 0, n_input_timesteps - hidden_states.shape[1]))\n\n if not return_dict:\n return tuple(\n v for v in [hidden_states, encoder_outputs.hidden_states, encoder_outputs.attentions] if v is not None\n )\n return BaseModelOutput(\n last_hidden_state=hidden_states,\n hidden_states=encoder_outputs.hidden_states,\n 
attentions=encoder_outputs.attentions,\n )\n\n\nclass SEWDPreTrainedModel(PreTrainedModel):\n \"\"\"\n An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained\n models.\n \"\"\"\n\n config_class = SEWDConfig\n base_model_prefix = \"sew-d\"\n main_input_name = \"input_values\"\n _keys_to_ignore_on_load_missing = [r\"position_ids\"]\n supports_gradient_checkpointing = True\n\n def _init_weights(self, module):\n \"\"\"Initialize the weights\"\"\"\n if isinstance(module, SEWDPositionalConvEmbedding):\n nn.init.normal_(\n module.conv.weight,\n mean=0,\n std=2 * math.sqrt(1 / (module.conv.kernel_size[0] * module.conv.in_channels)),\n )\n nn.init.constant_(module.conv.bias, 0)\n elif isinstance(module, nn.Linear):\n # Slightly different from the TF version which uses truncated_normal for initialization\n # cf https://github.com/pytorch/pytorch/pull/5617\n module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\n elif isinstance(module, (nn.LayerNorm, nn.GroupNorm)):\n module.bias.data.zero_()\n module.weight.data.fill_(1.0)\n elif isinstance(module, nn.Conv1d):\n if is_deepspeed_zero3_enabled():\n import deepspeed\n\n if hasattr(module, \"weight_v\") and hasattr(module, \"weight_g\"):\n with deepspeed.zero.GatheredParameters([module.weight_v, module.weight_g], modifier_rank=0):\n nn.init.kaiming_normal_(module.weight.data)\n else:\n with deepspeed.zero.GatheredParameters(module.weight, modifier_rank=0):\n nn.init.kaiming_normal_(module.weight.data)\n else:\n nn.init.kaiming_normal_(module.weight.data)\n elif isinstance(module, nn.Embedding):\n module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\n if module.padding_idx is not None:\n module.weight.data[module.padding_idx].zero_()\n\n if isinstance(module, (nn.Linear, nn.Conv1d)) and module.bias is not None:\n module.bias.data.zero_()\n\n def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):\n \"\"\"\n Computes the output length of the convolutional layers\n \"\"\"\n\n def _conv_out_length(input_length, kernel_size, stride):\n # 1D convolutional layer output length formula taken\n # from https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html\n return torch_int_div(input_length - kernel_size, stride) + 1\n\n for kernel_size, stride in zip(self.config.conv_kernel, self.config.conv_stride):\n input_lengths = _conv_out_length(input_lengths, kernel_size, stride)\n\n return input_lengths\n\n def _get_feature_vector_attention_mask(self, feature_vector_length: int, attention_mask: torch.LongTensor):\n output_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(-1)).to(torch.long)\n batch_size = attention_mask.shape[0]\n\n attention_mask = torch.zeros(\n (batch_size, feature_vector_length), dtype=attention_mask.dtype, device=attention_mask.device\n )\n # these two operations makes sure that all values before the output lengths idxs are attended to\n attention_mask[(torch.arange(attention_mask.shape[0], device=attention_mask.device), output_lengths - 1)] = 1\n attention_mask = attention_mask.flip([-1]).cumsum(-1).flip([-1]).bool()\n return attention_mask\n\n def _set_gradient_checkpointing(self, module, value=False):\n if isinstance(module, SEWDTransformerEncoder):\n module.gradient_checkpointing = value\n\n\nSEWD_START_DOCSTRING = r\"\"\"\n SEW-D was proposed in [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech\n Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun 
Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,\n Yoav Artzi.\n\n This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the\n library implements for all its model (such as downloading or saving etc.).\n\n This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use\n it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and\n behavior.\n\n Parameters:\n config ([`SEWDConfig`]): Model configuration class with all the parameters of the model.\n Initializing with a config file does not load the weights associated with the model, only the\n configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.\n\"\"\"\n\n\nSEWD_INPUTS_DOCSTRING = r\"\"\"\n Args:\n input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):\n Float values of input raw speech waveform. Values can be obtained by loading a *.flac* or *.wav* audio file\n into an array of type *List[float]* or a *numpy.ndarray*, *e.g.* via the soundfile library (*pip install\n soundfile*). To prepare the array into *input_values*, the [`Wav2Vec2Processor`] should be used for padding\n and conversion into a tensor of type *torch.FloatTensor*. See [`Wav2Vec2Processor.__call__`] for details.\n attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):\n Mask to avoid performing convolution and attention on padding token indices. Mask values selected in `[0,\n 1]`:\n\n - 1 for tokens that are **not masked**,\n - 0 for tokens that are **masked**.\n\n [What are attention masks?](../glossary#attention-mask)\n\n output_attentions (`bool`, *optional*):\n Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned\n tensors for more detail.\n output_hidden_states (`bool`, *optional*):\n Whether or not to return the hidden states of all layers. 
See `hidden_states` under returned tensors for\n more detail.\n return_dict (`bool`, *optional*):\n Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.\n\"\"\"\n\n\n@add_start_docstrings(\n \"The bare SEW-D Model transformer outputting raw hidden-states without any specific head on top.\",\n SEWD_START_DOCSTRING,\n)\n# Copied from transformers.models.sew.modeling_sew.SEWModel with SEW->SEWD, layer_norm_eps->feature_layer_norm_eps\nclass SEWDModel(SEWDPreTrainedModel):\n def __init__(self, config: SEWDConfig):\n super().__init__(config)\n self.config = config\n self.feature_extractor = SEWDFeatureEncoder(config)\n self.layer_norm = nn.LayerNorm(config.conv_dim[-1], eps=config.feature_layer_norm_eps)\n\n self.project_features = config.conv_dim[-1] != config.hidden_size\n if self.project_features:\n self.feature_projection = nn.Linear(config.conv_dim[-1], config.hidden_size)\n self.feature_dropout = nn.Dropout(config.feat_proj_dropout)\n\n if config.mask_time_prob > 0.0 or config.mask_feature_prob > 0.0:\n self.masked_spec_embed = nn.Parameter(torch.FloatTensor(config.hidden_size).uniform_())\n\n self.encoder = SEWDEncoder(config)\n\n # Initialize weights and apply final processing\n self.post_init()\n\n # Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2Model._mask_hidden_states\n def _mask_hidden_states(\n self,\n hidden_states: torch.FloatTensor,\n mask_time_indices: Optional[torch.FloatTensor] = None,\n attention_mask: Optional[torch.LongTensor] = None,\n ):\n \"\"\"\n Masks extracted features along time axis and/or along feature axis according to\n [SpecAugment](https://arxiv.org/abs/1904.08779).\n \"\"\"\n\n # `config.apply_spec_augment` can set masking to False\n if not getattr(self.config, \"apply_spec_augment\", True):\n return hidden_states\n\n # generate indices & apply SpecAugment along time axis\n batch_size, sequence_length, hidden_size = hidden_states.size()\n\n if mask_time_indices is not None:\n # apply SpecAugment along time axis with given mask_time_indices\n hidden_states[mask_time_indices] = self.masked_spec_embed.to(hidden_states.dtype)\n elif self.config.mask_time_prob > 0 and self.training:\n mask_time_indices = _compute_mask_indices(\n (batch_size, sequence_length),\n mask_prob=self.config.mask_time_prob,\n mask_length=self.config.mask_time_length,\n attention_mask=attention_mask,\n min_masks=self.config.mask_time_min_masks,\n )\n mask_time_indices = torch.tensor(mask_time_indices, device=hidden_states.device, dtype=torch.bool)\n hidden_states[mask_time_indices] = self.masked_spec_embed.to(hidden_states.dtype)\n\n if self.config.mask_feature_prob > 0 and self.training:\n # generate indices & apply SpecAugment along feature axis\n mask_feature_indices = _compute_mask_indices(\n (batch_size, hidden_size),\n mask_prob=self.config.mask_feature_prob,\n mask_length=self.config.mask_feature_length,\n min_masks=self.config.mask_feature_min_masks,\n )\n mask_feature_indices = torch.tensor(mask_feature_indices, device=hidden_states.device, dtype=torch.bool)\n mask_feature_indices = mask_feature_indices[:, None].expand(-1, sequence_length, -1)\n hidden_states[mask_feature_indices] = 0\n\n return hidden_states\n\n @add_start_docstrings_to_model_forward(SEWD_INPUTS_DOCSTRING)\n @add_code_sample_docstrings(\n processor_class=_PROCESSOR_FOR_DOC,\n checkpoint=_CHECKPOINT_FOR_DOC,\n output_type=BaseModelOutput,\n config_class=_CONFIG_FOR_DOC,\n modality=\"audio\",\n expected_output=_EXPECTED_OUTPUT_SHAPE,\n )\n def forward(\n self,\n 
input_values,\n attention_mask=None,\n mask_time_indices=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\n output_hidden_states = (\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\n )\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n extract_features = self.feature_extractor(input_values)\n extract_features = extract_features.transpose(1, 2)\n extract_features = self.layer_norm(extract_features)\n\n if self.project_features:\n extract_features = self.feature_projection(extract_features)\n hidden_states = self.feature_dropout(extract_features)\n\n if attention_mask is not None:\n # compute reduced attention_mask corresponding to feature vectors\n attention_mask = self._get_feature_vector_attention_mask(hidden_states.shape[1], attention_mask)\n\n hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices)\n\n encoder_outputs = self.encoder(\n hidden_states,\n attention_mask=attention_mask,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n hidden_states = encoder_outputs[0]\n\n if not return_dict:\n return (hidden_states,) + encoder_outputs[1:]\n\n return BaseModelOutput(\n last_hidden_state=hidden_states,\n hidden_states=encoder_outputs.hidden_states,\n attentions=encoder_outputs.attentions,\n )\n\n\n@add_start_docstrings(\n \"\"\"SEW-D Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC).\"\"\",\n SEWD_START_DOCSTRING,\n)\n# Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC with Wav2Vec2->SEWD, wav2vec2->sew_d, WAV_2_VEC_2->SEWD\nclass SEWDForCTC(SEWDPreTrainedModel):\n def __init__(self, config):\n super().__init__(config)\n\n self.sew_d = SEWDModel(config)\n self.dropout = nn.Dropout(config.final_dropout)\n\n if config.vocab_size is None:\n raise ValueError(\n f\"You are trying to instantiate {self.__class__} with a configuration that \"\n \"does not define the vocabulary size of the language model head. Please \"\n \"instantiate the model as follows: `SEWDForCTC.from_pretrained(..., vocab_size=vocab_size)`. 
\"\n \"or define `vocab_size` of your model's configuration.\"\n )\n self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)\n\n # Initialize weights and apply final processing\n self.post_init()\n\n def freeze_feature_extractor(self):\n \"\"\"\n Calling this function will disable the gradient computation for the feature encoder so that its parameter will\n not be updated during training.\n \"\"\"\n warnings.warn(\n \"The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5.\"\n \"Please use the equivalent `freeze_feature_encoder` method instead.\",\n FutureWarning,\n )\n self.freeze_feature_encoder()\n\n def freeze_feature_encoder(self):\n \"\"\"\n Calling this function will disable the gradient computation for the feature encoder so that its parameter will\n not be updated during training.\n \"\"\"\n self.sew_d.feature_extractor._freeze_parameters()\n\n @add_start_docstrings_to_model_forward(SEWD_INPUTS_DOCSTRING)\n @add_code_sample_docstrings(\n processor_class=_PROCESSOR_FOR_DOC,\n checkpoint=_CHECKPOINT_FOR_DOC,\n output_type=CausalLMOutput,\n config_class=_CONFIG_FOR_DOC,\n expected_output=_CTC_EXPECTED_OUTPUT,\n expected_loss=_CTC_EXPECTED_LOSS,\n )\n def forward(\n self,\n input_values,\n attention_mask=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n labels=None,\n ):\n r\"\"\"\n labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*):\n Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to\n the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`.\n All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ...,\n config.vocab_size - 1]`.\n \"\"\"\n\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n outputs = self.sew_d(\n input_values,\n attention_mask=attention_mask,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n hidden_states = outputs[0]\n hidden_states = self.dropout(hidden_states)\n\n logits = self.lm_head(hidden_states)\n\n loss = None\n if labels is not None:\n\n if labels.max() >= self.config.vocab_size:\n raise ValueError(f\"Label values must be <= vocab_size: {self.config.vocab_size}\")\n\n # retrieve loss input_lengths from attention_mask\n attention_mask = (\n attention_mask if attention_mask is not None else torch.ones_like(input_values, dtype=torch.long)\n )\n input_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(-1)).to(torch.long)\n\n # assuming that padded tokens are filled with -100\n # when not being attended to\n labels_mask = labels >= 0\n target_lengths = labels_mask.sum(-1)\n flattened_targets = labels.masked_select(labels_mask)\n\n # ctc_loss doesn't support fp16\n log_probs = nn.functional.log_softmax(logits, dim=-1, dtype=torch.float32).transpose(0, 1)\n\n with torch.backends.cudnn.flags(enabled=False):\n loss = nn.functional.ctc_loss(\n log_probs,\n flattened_targets,\n input_lengths,\n target_lengths,\n blank=self.config.pad_token_id,\n reduction=self.config.ctc_loss_reduction,\n zero_infinity=self.config.ctc_zero_infinity,\n )\n\n if not return_dict:\n output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:]\n return ((loss,) + output) if loss is not None else output\n\n return CausalLMOutput(\n loss=loss, logits=logits, hidden_states=outputs.hidden_states, 
attentions=outputs.attentions\n )\n\n\n@add_start_docstrings(\n \"\"\"\n SEWD Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB\n Keyword Spotting.\n \"\"\",\n SEWD_START_DOCSTRING,\n)\n# Copied from transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification with Wav2Vec2->SEWD, wav2vec2->sew_d, WAV_2_VEC_2->SEWD\nclass SEWDForSequenceClassification(SEWDPreTrainedModel):\n def __init__(self, config):\n super().__init__(config)\n\n self.sew_d = SEWDModel(config)\n num_layers = config.num_hidden_layers + 1 # transformer layers + input embeddings\n if config.use_weighted_layer_sum:\n self.layer_weights = nn.Parameter(torch.ones(num_layers) / num_layers)\n self.projector = nn.Linear(config.hidden_size, config.classifier_proj_size)\n self.classifier = nn.Linear(config.classifier_proj_size, config.num_labels)\n\n # Initialize weights and apply final processing\n self.post_init()\n\n def freeze_feature_extractor(self):\n \"\"\"\n Calling this function will disable the gradient computation for the feature encoder so that its parameters will\n not be updated during training.\n \"\"\"\n warnings.warn(\n \"The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5.\"\n \"Please use the equivalent `freeze_feature_encoder` method instead.\",\n FutureWarning,\n )\n self.freeze_feature_encoder()\n\n def freeze_feature_encoder(self):\n \"\"\"\n Calling this function will disable the gradient computation for the feature encoder so that its parameter will\n not be updated during training.\n \"\"\"\n self.sew_d.feature_extractor._freeze_parameters()\n\n def freeze_base_model(self):\n \"\"\"\n Calling this function will disable the gradient computation for the base model so that its parameters will not\n be updated during training. Only the classification head will be updated.\n \"\"\"\n for param in self.sew_d.parameters():\n param.requires_grad = False\n\n @add_start_docstrings_to_model_forward(SEWD_INPUTS_DOCSTRING)\n @add_code_sample_docstrings(\n processor_class=_FEAT_EXTRACTOR_FOR_DOC,\n checkpoint=_SEQ_CLASS_CHECKPOINT,\n output_type=SequenceClassifierOutput,\n config_class=_CONFIG_FOR_DOC,\n modality=\"audio\",\n expected_output=_SEQ_CLASS_EXPECTED_OUTPUT,\n expected_loss=_SEQ_CLASS_EXPECTED_LOSS,\n )\n def forward(\n self,\n input_values,\n attention_mask=None,\n output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n labels=None,\n ):\n r\"\"\"\n labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):\n Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,\n config.num_labels - 1]`. 
If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If\n `config.num_labels > 1` a classification loss is computed (Cross-Entropy).\n \"\"\"\n\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states\n\n outputs = self.sew_d(\n input_values,\n attention_mask=attention_mask,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n\n if self.config.use_weighted_layer_sum:\n hidden_states = outputs[_HIDDEN_STATES_START_POSITION]\n hidden_states = torch.stack(hidden_states, dim=1)\n norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)\n hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)\n else:\n hidden_states = outputs[0]\n\n hidden_states = self.projector(hidden_states)\n if attention_mask is None:\n pooled_output = hidden_states.mean(dim=1)\n else:\n padding_mask = self._get_feature_vector_attention_mask(hidden_states.shape[1], attention_mask)\n hidden_states[~padding_mask] = 0.0\n pooled_output = hidden_states.sum(dim=1) / padding_mask.sum(dim=1).view(-1, 1)\n\n logits = self.classifier(pooled_output)\n\n loss = None\n if labels is not None:\n loss_fct = CrossEntropyLoss()\n loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))\n\n if not return_dict:\n output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:]\n return ((loss,) + output) if loss is not None else output\n\n return SequenceClassifierOutput(\n loss=loss,\n logits=logits,\n hidden_states=outputs.hidden_states,\n attentions=outputs.attentions,\n )\n" ]
[ [ "torch.nn.Linear", "numpy.random.rand", "torch.stack", "torch.nn.ModuleList", "torch.nn.init.kaiming_normal_", "numpy.tile", "torch.ones", "numpy.sign", "numpy.where", "torch.nn.functional.pad", "numpy.broadcast_to", "torch.nn.CrossEntropyLoss", "torch.nn.functional.ctc_loss", "torch.nn.LayerNorm", "torch.nn.AvgPool1d", "numpy.log", "torch.nn.Conv1d", "torch.nn.init.constant_", "torch.onnx.symbolic_opset9.softmax", "torch.FloatTensor", "numpy.arange", "torch.tensor", "torch._softmax_backward_data", "torch.zeros", "numpy.array", "numpy.zeros", "torch.nn.GroupNorm", "torch.clamp", "torch.nn.functional.log_softmax", "torch.nn.functional.softmax", "torch.nn.utils.weight_norm", "torch.backends.cudnn.flags", "torch.nn.Dropout", "torch.arange", "numpy.ones", "numpy.put_along_axis", "torch.softmax", "numpy.abs", "torch.ones_like", "torch.nn.Embedding", "torch.empty_like" ] ]
eikaramba/segmentation_models.pytorch
[ "7a061d7c73021bfdf881b407817d014de8eea2f0" ]
[ "segmentation_models_pytorch/losses/dice.py" ]
[ "from typing import Optional, List\n\nimport torch\nimport torch.nn.functional as F\nfrom torch.nn.modules.loss import _Loss\nfrom ._functional import soft_dice_score, to_tensor\nfrom .constants import BINARY_MODE, MULTICLASS_MODE, MULTILABEL_MODE\n\n__all__ = [\"DiceLoss\"]\n\n\nclass DiceLoss(_Loss):\n\n def __init__(\n self,\n mode: str,\n classes: Optional[List[int]] = None,\n log_loss: bool = False,\n from_logits: bool = True,\n smooth: float = 0.0,\n ignore_index: Optional[int] = None,\n eps: float = 1e-7,\n ):\n \"\"\"Implementation of Dice loss for image segmentation task.\n It supports binary, multiclass and multilabel cases\n\n Args:\n mode: Loss mode 'binary', 'multiclass' or 'multilabel'\n classes: List of classes that contribute in loss computation. By default, all channels are included.\n log_loss: If True, loss computed as `- log(dice_coeff)`, otherwise `1 - dice_coeff`\n from_logits: If True, assumes input is raw logits\n smooth: Smoothness constant for dice coefficient (a)\n ignore_index: Label that indicates ignored pixels (does not contribute to loss)\n eps: A small epsilon for numerical stability to avoid zero division error \n (denominator will be always greater or equal to eps)\n\n Shape\n - **y_pred** - torch.Tensor of shape (N, C, H, W)\n - **y_true** - torch.Tensor of shape (N, H, W) or (N, C, H, W)\n\n Reference\n https://github.com/BloodAxe/pytorch-toolbelt\n \"\"\"\n assert mode in {BINARY_MODE, MULTILABEL_MODE, MULTICLASS_MODE}\n super(DiceLoss, self).__init__()\n self.mode = mode\n if classes is not None:\n assert mode != BINARY_MODE, \"Masking classes is not supported with mode=binary\"\n classes = to_tensor(classes, dtype=torch.long)\n\n self.classes = classes\n self.from_logits = from_logits\n self.smooth = smooth\n self.eps = eps\n self.log_loss = log_loss\n\n def forward(self, y_pred: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:\n\n assert y_true.size(0) == y_pred.size(0)\n\n if self.from_logits:\n # Apply activations to get [0..1] class probabilities\n # Using Log-Exp as this gives more numerically stable result and does not cause vanishing gradient on\n # extreme values 0 and 1\n if self.mode == MULTICLASS_MODE:\n y_pred = y_pred.log_softmax(dim=1).exp()\n else:\n y_pred = F.logsigmoid(y_pred).exp()\n\n bs = y_true.size(0)\n num_classes = y_pred.size(1)\n dims = (0, 2)\n\n if self.mode == BINARY_MODE:\n y_true = y_true.view(bs, 1, -1)\n y_pred = y_pred.view(bs, 1, -1)\n\n if self.mode == MULTICLASS_MODE:\n y_true = y_true.view(bs, -1)\n y_pred = y_pred.view(bs, num_classes, -1)\n\n y_true = F.one_hot(y_true, num_classes) # N,H*W -> N,H*W, C\n y_true = y_true.permute(0, 2, 1) # H, C, H*W\n\n if self.mode == MULTILABEL_MODE:\n y_true = y_true.view(bs, num_classes, -1)\n y_pred = y_pred.view(bs, num_classes, -1)\n\n if self.ignore_index is not None:\n mask = y_true != self.ignore_index\n y_pred = y_pred * mask\n y_true = y_true * mask\n\n scores = self.compute_score(y_pred, y_true.type_as(y_pred), smooth=self.smooth, eps=self.eps, dims=dims)\n\n if self.log_loss:\n loss = -torch.log(scores.clamp_min(self.eps))\n else:\n loss = 1.0 - scores\n\n # Dice loss is undefined for non-empty classes\n # So we zero contribution of channel that does not have true pixels\n # NOTE: A better workaround would be to use loss term `mean(y_pred)`\n # for this case, however it will be a modified jaccard loss\n\n mask = y_true.sum(dims) > 0\n loss *= mask.to(loss.dtype)\n\n if self.classes is not None:\n loss = loss[self.classes]\n\n return 
self.aggregate_loss(loss)\n\n def aggregate_loss(self, loss):\n return loss.mean()\n\n def compute_score(self, output, target, smooth=0.0, eps=1e-7, dims=None) -> torch.Tensor:\n return soft_dice_score(output, target, smooth, eps, dims)\n" ]
[ [ "torch.nn.functional.one_hot", "torch.nn.functional.logsigmoid" ] ]
shlu2019/meta_learning_pacoh
[ "b4c2c37d9715e74542bab556ac1f5d778cc3409c" ]
[ "third_party/neural_processes/neural_process.py" ]
[ "import torch\nfrom third_party.neural_processes.models import Encoder, MuSigmaEncoder, Decoder\nfrom torch import nn\nfrom torch.distributions import Normal\nfrom third_party.neural_processes.utils import img_mask_to_np_input\n\n\nclass NeuralProcess(nn.Module):\n \"\"\"\n Implements Neural Process for functions of arbitrary dimensions.\n\n Parameters\n ----------\n x_dim : int\n Dimension of x values.\n\n y_dim : int\n Dimension of y values.\n\n r_dim : int\n Dimension of output representation r.\n\n z_dim : int\n Dimension of latent variable z.\n\n h_dim : int\n Dimension of hidden layer in encoder and decoder.\n \"\"\"\n def __init__(self, x_dim, y_dim, r_dim, z_dim, h_dim):\n super(NeuralProcess, self).__init__()\n self.x_dim = x_dim\n self.y_dim = y_dim\n self.r_dim = r_dim\n self.z_dim = z_dim\n self.h_dim = h_dim\n\n # Initialize networks\n self.xy_to_r = Encoder(x_dim, y_dim, h_dim, r_dim)\n self.r_to_mu_sigma = MuSigmaEncoder(r_dim, z_dim)\n self.xz_to_y = Decoder(x_dim, z_dim, h_dim, y_dim)\n\n def aggregate(self, r_i):\n \"\"\"\n Aggregates representations for every (x_i, y_i) pair into a single\n representation.\n\n Parameters\n ----------\n r_i : torch.Tensor\n Shape (batch_size, num_points, r_dim)\n \"\"\"\n return torch.mean(r_i, dim=1)\n\n def xy_to_mu_sigma(self, x, y):\n \"\"\"\n Maps (x, y) pairs into the mu and sigma parameters defining the normal\n distribution of the latent variables z.\n\n Parameters\n ----------\n x : torch.Tensor\n Shape (batch_size, num_points, x_dim)\n\n y : torch.Tensor\n Shape (batch_size, num_points, y_dim)\n \"\"\"\n batch_size, num_points, _ = x.size()\n # Flatten tensors, as encoder expects one dimensional inputs\n x_flat = x.view(batch_size * num_points, self.x_dim)\n y_flat = y.contiguous().view(batch_size * num_points, self.y_dim)\n # Encode each point into a representation r_i\n r_i_flat = self.xy_to_r(x_flat, y_flat)\n # Reshape tensors into batches\n r_i = r_i_flat.view(batch_size, num_points, self.r_dim)\n # Aggregate representations r_i into a single representation r\n r = self.aggregate(r_i)\n # Return parameters of distribution\n return self.r_to_mu_sigma(r)\n\n def forward(self, x_context, y_context, x_target, y_target=None):\n \"\"\"\n Given context pairs (x_context, y_context) and target points x_target,\n returns a distribution over target points y_target.\n\n Parameters\n ----------\n x_context : torch.Tensor\n Shape (batch_size, num_context, x_dim). Note that x_context is a\n subset of x_target.\n\n y_context : torch.Tensor\n Shape (batch_size, num_context, y_dim)\n\n x_target : torch.Tensor\n Shape (batch_size, num_target, x_dim)\n\n y_target : torch.Tensor or None\n Shape (batch_size, num_target, y_dim). Only used during training.\n\n Note\n ----\n We follow the convention given in \"Empirical Evaluation of Neural\n Process Objectives\" where context is a subset of target points. 
This was\n shown to work best empirically.\n \"\"\"\n # Infer quantities from tensor dimensions\n batch_size, num_context, x_dim = x_context.size()\n _, num_target, _ = x_target.size()\n _, _, y_dim = y_context.size()\n\n if self.training:\n # Encode target and context (context needs to be encoded to\n # calculate kl term)\n mu_target, sigma_target = self.xy_to_mu_sigma(x_target, y_target)\n mu_context, sigma_context = self.xy_to_mu_sigma(x_context, y_context)\n # Sample from encoded distribution using reparameterization trick\n q_target = Normal(mu_target, sigma_target)\n q_context = Normal(mu_context, sigma_context)\n z_sample = q_target.rsample()\n # Get parameters of output distribution\n y_pred_mu, y_pred_sigma = self.xz_to_y(x_target, z_sample)\n p_y_pred = Normal(y_pred_mu, y_pred_sigma)\n\n return p_y_pred, q_target, q_context\n else:\n # At testing time, encode only context\n mu_context, sigma_context = self.xy_to_mu_sigma(x_context, y_context)\n # Sample from distribution based on context\n q_context = Normal(mu_context, sigma_context)\n z_sample = q_context.rsample()\n # Predict target points based on context\n y_pred_mu, y_pred_sigma = self.xz_to_y(x_target, z_sample)\n p_y_pred = Normal(y_pred_mu, y_pred_sigma)\n\n return p_y_pred\n\n\nclass NeuralProcessImg(nn.Module):\n \"\"\"\n Wraps regular Neural Process for image processing.\n\n Parameters\n ----------\n img_size : tuple of ints\n E.g. (1, 28, 28) or (3, 32, 32)\n\n r_dim : int\n Dimension of output representation r.\n\n z_dim : int\n Dimension of latent variable z.\n\n h_dim : int\n Dimension of hidden layer in encoder and decoder.\n \"\"\"\n def __init__(self, img_size, r_dim, z_dim, h_dim):\n super(NeuralProcessImg, self).__init__()\n self.img_size = img_size\n self.num_channels, self.height, self.width = img_size\n self.r_dim = r_dim\n self.z_dim = z_dim\n self.h_dim = h_dim\n\n self.neural_process = NeuralProcess(x_dim=2, y_dim=self.num_channels,\n r_dim=r_dim, z_dim=z_dim,\n h_dim=h_dim)\n\n def forward(self, img, context_mask, target_mask):\n \"\"\"\n Given an image and masks of context and target points, returns a\n distribution over pixel intensities at the target points.\n\n Parameters\n ----------\n img : torch.Tensor\n Shape (batch_size, channels, height, width)\n\n context_mask : torch.ByteTensor\n Shape (batch_size, height, width). Binary mask indicating\n the pixels to be used as context.\n\n target_mask : torch.ByteTensor\n Shape (batch_size, height, width). Binary mask indicating\n the pixels to be used as target.\n \"\"\"\n x_context, y_context = img_mask_to_np_input(img, context_mask)\n x_target, y_target = img_mask_to_np_input(img, target_mask)\n return self.neural_process(x_context, y_context, x_target, y_target)\n" ]
[ [ "torch.distributions.Normal", "torch.mean" ] ]
energy-modelling-club/times2spine
[ "12831a78f21a4c696a798aa0990019ff3ea613ac" ]
[ "ImportFunction.py" ]
[ "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Jun 3 16:28:52 2019\n\n@author: Matteo D'Andrea (s180192)\n\"\"\"\n###################### LYBRARIES AND PACKAGES #################################\nimport openpyxl\nimport numpy as np\nimport pandas as pd\n###############################################################################\n\n# a function to extract selected tables from each sheet of a whole excel file \n# input: filename (str) \n# output: pandas dataframe \n\n\ndef tableImport(filename): \n \n # the excel file is open \n # with data_only=True the equations in the cell are ignored\n wb = openpyxl.load_workbook(filename, data_only=True)\n \n ################ SHEETS FILTERING AND TILDE LOCATION ####################\n \n # z is necessary to set a unique id when a sheet has more than one table\n z=0\n # an empty list is initialized \n key_sheet=[]\n # each cell that starts with ~ in the whole excel file is appended to the list\n # along with the cell value and coordinates\n for j in wb.sheetnames:\n if j != 'LOG' or j != 'INTRO':\n for row in wb[j].iter_rows():\n for cell in row:\n if type(cell.value) is str:\n if cell.value.startswith('~'):\n cell_row=cell.row\n cell_col=cell.column\n key_sheet.append((z,j,cell.value,cell_row,cell_col))\n z+=1\n \n # the dictionary will contain each table exported\n dataframe_collection={} \n \n # for each sheet the same operation are repeated \n for i in key_sheet:\n \n # the code is a used as unique dictionary key for each table \n \n code=str(i[0])+'-'+filename.split('.')[0]+'-'+i[1]+i[2]\n sheetname=i[1]\n \n # a specific sheet is selected \n ws = wb[sheetname]\n \n ##################### TABLE DIMENSIONS AND IMPORT ######################\n \n # from the tilde location the first empty cell on the right is found\n for k in ws.iter_cols(min_col=i[4],min_row=i[3]+1,\\\n max_row=i[3]+1):\n if k[0].value == None:\n rightEnd=[(k[0].row,k[0].column-1)]\n break\n rightEnd=[(k[0].row,k[0].column)]\n \n # from the tilde location the first empty cell on the left is found\n for z in reversed(list(ws.iter_cols(max_col=i[4],min_row=i[3]+1,\\\n max_row=i[3]+1))):\n if z[0].value == None:\n leftEnd=[(z[0].row,z[0].column+1)]\n break\n leftEnd=[(z[0].row,z[0].column)]\n \n # an empty list collects the cells \n sel=[]\n # given the table width, the rows are collected until an empty row is found\n for s in ws.iter_rows(min_col=leftEnd[0][1],min_row=i[3]+1,\\\n max_col=rightEnd[0][1]):\n if all(j.value == None for j in s) == True:\n lowerEnd=s[0].row-1\n break \n else:\n for x in s:\n sel.append(x.value)\n lowerEnd=s[0].row\n \n # the dataframe is created reshaping the list into a matrix\n matrix=pd.DataFrame(np.reshape(sel,(lowerEnd-i[3],rightEnd[0][1] \\\n -(leftEnd[0][1]-1))))\n matrix = matrix.applymap(str)\n \n ##################### TABLE SLICING ################################\n # the row and columns which are not necessary are discarded\n selrow=np.char.find(matrix.iloc[:,0].tolist(),'*')==-1\n selcol=np.char.find(matrix.iloc[0,:].tolist(),'*')==-1\n matrix=matrix[selrow]\n matrix=matrix[matrix.columns.values[selcol]]\n selcol2=np.char.find(matrix.iloc[0,:].tolist(),'\\I:')==-1\n matrix=matrix[matrix.columns.values[selcol2]]\n # the column names are set \n matrix.rename(columns=matrix.iloc[0,:],inplace=True)\n matrix=matrix.drop(matrix.index[0])\n # the index is reset after the slicing \n matrix.reset_index(drop=True,inplace=True)\n # the commodity set column is renamed\n matrix.rename(index=str, 
columns={'Csets':'Cset','CSet':'Cset'},\\\n inplace=True)\n \n # each table is stored in the dictionary with a unique key that \n # describes the source and the type of the table\n dataframe_collection[code]=matrix\n \n \n return dataframe_collection\n" ]
[ [ "numpy.reshape" ] ]
gowthaman25/Image-Noise-Removal-using-CNN
[ "da8141a81281d701b565e17a3d2a186988a99569" ]
[ "Main.py" ]
[ "import numpy as np \nimport glob\nimport cv2\nimport pandas as pd\nimport tensorflow as tf\nimport tensorflow.keras \nfrom keras.models import load_model,Sequential\nfrom keras.layers.advanced_activations import LeakyReLU, PReLU\nfrom keras.optimizers import Adam\nfrom skimage.measure import compare_psnr, compare_ssim\nfrom keras.layers import Input,BatchNormalization,Subtract,Conv2D,Lambda,Activation\nfrom keras.callbacks import CSVLogger, ModelCheckpoint, LearningRateScheduler\nfrom cnn_model import cnn_model\n\nsrc_dir = '/content/drive/My Drive/code/data/Train100/'\nfile_list = glob.glob(src_dir+'*.png') # get name list of all .png files\n\nbatch_size=128\npat_size=40\nstride=4\nstep=0\nscales=[1,0.9,0.8,0.7]\ncount=0\n#calculate the number of patches\nfor i in range(len(file_list)):\n img = cv2.imread(file_list[i],0) \n for s in range(len(scales)):\n newsize=(int(img.shape[0]*scales[s]),int(img.shape[1]*scales[s]))\n img_s=cv2.resize(img,newsize,interpolation=cv2.INTER_CUBIC)\n im_h,im_w=img_s.shape\n for x in range(0+step,(im_h-pat_size),stride):\n for y in range(0+step,(im_w-pat_size),stride):\n count +=1\n\norigin_patch_num=count\nif origin_patch_num % batch_size !=0:\n numPatches=(origin_patch_num/batch_size +1)*batch_size\nelse:\n numPatches=origin_patch_num\nprint('filelist=%d ,total patches=%d, batch_size=%d, total_batches=%d' % (len(file_list),numPatches, batch_size, numPatches/batch_size))\n\n#numpy array to contain patches for training\ninputs=np.zeros((int(numPatches), int(pat_size), int(pat_size),1),dtype=np.uint8)\n\n#generate patches\ncount=0\nfor i in range(len(file_list)):\n img = cv2.imread(file_list[i],0) \n for s in range(len(scales)):\n newsize=(int(img.shape[0]*scales[s]),int(img.shape[1]*scales[s]))\n img_s=cv2.resize(img,newsize,interpolation=cv2.INTER_CUBIC)\n img_s=np.reshape(np.array(img_s,dtype=\"uint8\"),\n (img_s.shape[0],img_s.shape[1],1))\n im_h,im_w,_ = img_s.shape\n for x in range(0+step,im_h-pat_size,stride):\n for y in range(0+step,im_w-pat_size,stride):\n inputs[count,:,:]=img_s[x:x+pat_size,\n y:y+pat_size]\n count += 1\n\n#pad the batch\nif count < numPatches:\n to_pad=int(numPatches-count)\n inputs[-to_pad:,:,:,:]=inputs[:to_pad,:,:,:]\nnp.save(\"img_clean_pats.npy\",inputs)\n\n# load the data and normalize it\ncleanImages=np.load('img_clean_pats.npy')\nprint(cleanImages.dtype)\ncleanImages=cleanImages/255.0\ncleanImages=cleanImages.astype('float32')\ndata = cleanImages\n\nepoch = 10\nsave_every = 10\nlr = 1e-3\nsigma = 25\n\npsnr_val = []\nssim_val = []\nname =[]\ntest_fol = '/content/drive/My Drive/code/data/Test/Set68/'\nout_dir = '/content/drive/My Drive/code/data/Outimg/'\n\ndef data_aug(img, mode=0):\n \n if mode == 0:\n return img\n \ndef gen_patches(file_name):\n\n # read image\n img = cv2.imread(file_name, 0) # gray scale\n h, w = img.shape\n scales = [1, 0.9, 0.8, 0.7]\n patches = []\n\n for s in scales:\n h_scaled, w_scaled = int(h*s),int(w*s)\n img_scaled = cv2.resize(img, (h_scaled,w_scaled), interpolation=cv2.INTER_CUBIC)\n # extract patches\n for i in range(0, h_scaled-patch_size+1, stride):\n for j in range(0, w_scaled-patch_size+1, stride):\n x = img_scaled[i:i+patch_size, j:j+patch_size]\n # data aug\n for k in range(0, aug_times):\n x_aug = data_aug(x, mode=np.random.randint(0,8))\n patches.append(x_aug)\n \n return patches\n\ndef train_datagen(y_,batch_size=8):\n indices = list(range(y_.shape[0]))\n while(True):\n np.random.shuffle(indices) # shuffle\n for i in range(0, len(indices), batch_size):\n ge_batch_y = 
y_[indices[i:i+batch_size]]\n noise = np.random.normal(0, sigma/255.0, ge_batch_y.shape) # noise\n ge_batch_x = ge_batch_y + noise # input image = clean image + noise\n yield ge_batch_x, ge_batch_y\n\ndef lr_scheduler(epoch, lr):\n decay_rate = 0.1\n decay_step = 90\n if epoch % decay_step == 0 and epoch:\n return lr * decay_rate\n return lr\n\ndef train():\n models.compile(optimizer=Adam(), loss=['mse'])\n callbacks=[LearningRateScheduler(lr_scheduler)]\n # use call back functions\n #ckpt = ModelCheckpoint('/model_{epoch:02d}.h5', monitor='val_loss',verbose=0, period=save_every)\n #csv_logger = CSVLogger('/log.csv', append=True, separator=',')\n history = models.fit_generator(train_datagen(data, batch_size=batch_size),\n steps_per_epoch=len(data)//batch_size, epochs=epoch, verbose=0, \n callbacks=callbacks)\n models.save('myModel.h5')\n return models \n\ndef test(models):\n out_dir = '/content/drive/My Drive/code/data/Outimg/'\n psnr_val = []\n ssim_val = []\n test_dir = glob.glob(test_fol+'*.png')\n for t in range(len(test_dir)):\n print('test dir',len(test_dir))\n img_clean = cv2.imread(str(test_dir[t]),0)\n img_test = np.array(img_clean,dtype='float32')/255\n noise = np.random.normal(0, sigma/255.0, img_test.shape) # noise\n img_test = img_test.astype('float32')\n # predict\n x_test = img_test.reshape(1, img_test.shape[0], img_test.shape[1], 1) \n y_predict = models.predict(x_test)\n # calculate numeric metrics\n img_out = y_predict.reshape(img_clean.shape)\n img_out = np.clip(img_out, 0, 1)\n img_out = np.array((img_out*255).astype('uint8')) \n filename = (str(test_dir[t])).split('/')[-1].split('.')[0] # get the name of image file\n cv2.imwrite(out_dir+str(t)+'.png',img_out)\n psnr_noise, psnr_denoised = compare_psnr(img_clean, img_test), compare_psnr(img_clean, img_out)\n ssim_noise, ssim_denoised = compare_ssim(img_clean, img_test), compare_ssim(img_clean, img_out)\n psnr_val.append(psnr_denoised)\n ssim_val.append(ssim_denoised)\n if len(psnr_val) != 0 : \n psnr_avg = sum(psnr_val)/len(psnr_val)\n else :\n psnr_avg = 0\n if len(ssim_val) != 0 :\n ssim_avg = sum(ssim_val)/len(ssim_val)\n else :\n ssim_avg = 0\n psnr_val.append(psnr_avg)\n ssim_val.append(ssim_avg)\n print('Average PSNR = {0:.2f}, SSIM = {1:.2f}'.format(psnr_avg, ssim_avg))\n return psnr_val , ssim_val\n\nif __name__=='__main__':\n models = cnn_model()\n models = train()\n psnr_val,ssim_val = test(models)\n\n \n" ]
[ [ "numpy.random.normal", "numpy.array", "numpy.load", "numpy.random.shuffle", "numpy.save", "numpy.random.randint", "numpy.clip" ] ]
lacostya86/aphantasia
[ "c287ffd30d19ff1b51111143be7c1d3d98bac962" ]
[ "illustra.py" ]
[ "import os\nimport argparse\nimport math\nimport numpy as np\nimport shutil\nfrom imageio import imsave\nfrom googletrans import Translator, constants\n\nimport torch\nimport torchvision\nimport torch.nn.functional as F\n\nimport clip\nos.environ['KMP_DUPLICATE_LIB_OK']='True'\nfrom sentence_transformers import SentenceTransformer\n\nfrom clip_fft import to_valid_rgb, fft_image\nfrom utils import slice_imgs, derivat, checkout, cvshow, pad_up_to, basename, file_list, img_list, img_read, txt_clean, plot_text\nimport transforms\ntry: # progress bar for notebooks \n get_ipython().__class__.__name__\n from progress_bar import ProgressIPy as ProgressBar\nexcept: # normal console\n from progress_bar import ProgressBar\n\nclip_models = ['ViT-B/16', 'ViT-B/32', 'RN101', 'RN50x16', 'RN50x4', 'RN50']\n\ndef get_args():\n parser = argparse.ArgumentParser()\n parser.add_argument('-i', '--in_txt', default=None, help='Text file to process')\n parser.add_argument('-t2', '--in_txt2', default=None, help='input text - style')\n parser.add_argument('-t0', '--in_txt0', default=None, help='input text to subtract')\n parser.add_argument( '--out_dir', default='_out')\n parser.add_argument('-s', '--size', default='1280-720', help='Output resolution')\n parser.add_argument('-r', '--resume', default=None, help='Path to saved FFT snapshots, to resume from')\n parser.add_argument('-l', '--length', default=180, type=int, help='Total length in sec')\n parser.add_argument( '--fstep', default=1, type=int, help='Saving step')\n parser.add_argument('-tr', '--translate', action='store_true', help='Translate text with Google Translate')\n parser.add_argument('-ml', '--multilang', action='store_true', help='Use SBERT multilanguage model for text')\n parser.add_argument( '--save_pt', action='store_true', help='Save FFT snapshots for further use')\n parser.add_argument( '--fps', default=25, type=int)\n parser.add_argument('-v', '--verbose', default=True, type=bool)\n # training\n parser.add_argument('-m', '--model', default='ViT-B/32', choices=clip_models, help='Select CLIP model to use')\n parser.add_argument( '--steps', default=200, type=int, help='Total iterations')\n parser.add_argument( '--samples', default=200, type=int, help='Samples to evaluate')\n parser.add_argument('-lr', '--lrate', default=0.05, type=float, help='Learning rate')\n parser.add_argument('-p', '--prog', action='store_true', help='Enable progressive lrate growth (up to double a.lrate)')\n # tweaks\n parser.add_argument('-a', '--align', default='uniform', choices=['central', 'uniform', 'overscan'], help='Sampling distribution')\n parser.add_argument('-tf', '--transform', action='store_true', help='use augmenting transforms?')\n parser.add_argument( '--keep', default=0, type=float, help='Accumulate imagery: 0 = random, 1 = prev ema')\n parser.add_argument( '--contrast', default=0.9, type=float)\n parser.add_argument( '--colors', default=1.5, type=float)\n parser.add_argument( '--decay', default=1.5, type=float)\n parser.add_argument('-sh', '--sharp', default=0.3, type=float)\n parser.add_argument('-mm', '--macro', default=0.4, type=float, help='Endorse macro forms 0..1 ')\n parser.add_argument('-e', '--enhance', default=0, type=float, help='Enhance consistency, boosts training')\n parser.add_argument('-n', '--noise', default=0.2, type=float, help='Add noise to suppress accumulation')\n parser.add_argument('-nt', '--notext', default=0, type=float, help='Subtract typed text as image (avoiding graffiti?), [0..1]') # 0.15\n a = parser.parse_args()\n\n if 
a.size is not None: a.size = [int(s) for s in a.size.split('-')][::-1]\n if len(a.size)==1: a.size = a.size * 2\n if a.multilang is True: a.model = 'ViT-B/32' # sbert model is trained with ViT\n a.diverse = -a.enhance\n a.expand = abs(a.enhance)\n return a\n\ndef ema(base, next, step):\n scale_ma = 1. / (step + 1)\n return next * scale_ma + base * (1.- scale_ma)\n\ndef load_params(file):\n if not os.path.isfile(file):\n print(' Snapshot not found:', file); exit()\n params = torch.load(file)\n if isinstance(params, list): params = params[0]\n return params.detach().clone()\n\ndef main():\n a = get_args()\n\n # Load CLIP models\n use_jit = True if float(torch.__version__[:3]) < 1.8 else False\n model_clip, _ = clip.load(a.model, jit=use_jit)\n try:\n a.modsize = model_clip.visual.input_resolution \n except:\n a.modsize = 288 if a.model == 'RN50x4' else 384 if a.model == 'RN50x16' else 224\n if a.verbose is True: print(' using model', a.model)\n xmem = {'ViT-B/16':0.25, 'RN50':0.5, 'RN50x4':0.16, 'RN50x16':0.06, 'RN101':0.33}\n if a.model in xmem.keys():\n a.samples = int(a.samples * xmem[a.model])\n workdir = os.path.join(a.out_dir, basename(a.in_txt))\n workdir += '-%s' % a.model if 'RN' in a.model.upper() else ''\n os.makedirs(workdir, exist_ok=True)\n\n def enc_text(txt):\n if a.multilang is True:\n model_lang = SentenceTransformer('clip-ViT-B-32-multilingual-v1').cuda()\n emb = model_lang.encode([txt], convert_to_tensor=True, show_progress_bar=False)\n del model_lang\n else:\n emb = model_clip.encode_text(clip.tokenize(txt).cuda())\n return emb.detach().clone()\n \n if a.diverse != 0:\n a.samples = int(a.samples * 0.5)\n \n if a.transform is True:\n # trform_f = transforms.transforms_custom \n trform_f = transforms.transforms_elastic\n a.samples = int(a.samples * 0.95)\n else:\n trform_f = transforms.normalize()\n\n if a.in_txt2 is not None:\n if a.verbose is True: print(' style:', basename(a.in_txt2))\n # a.samples = int(a.samples * 0.75)\n if a.translate:\n translator = Translator()\n a.in_txt2 = translator.translate(a.in_txt2, dest='en').text\n if a.verbose is True: print(' translated to:', a.in_txt2)\n txt_enc2 = enc_text(a.in_txt2)\n\n if a.in_txt0 is not None:\n if a.verbose is True: print(' subtract text:', basename(a.in_txt0))\n if a.translate:\n translator = Translator()\n a.in_txt0 = translator.translate(a.in_txt0, dest='en').text\n if a.verbose is True: print(' translated to:', a.in_txt0) \n txt_enc0 = enc_text(a.in_txt0)\n\n # make init\n global params_start, params_ema\n params_shape = [1, 3, a.size[0], a.size[1]//2+1, 2]\n params_start = torch.randn(*params_shape).cuda() # random init\n params_ema = 0.\n if a.resume is not None and os.path.isfile(a.resume):\n if a.verbose is True: print(' resuming from', a.resume)\n params_start = load_params(a.resume).cuda()\n if a.keep > 0:\n params_ema = params_start[0].detach().clone()\n else:\n a.resume = 'init.pt'\n\n torch.save(params_start, 'init.pt') # final init\n shutil.copy(a.resume, os.path.join(workdir, '000-%s.pt' % basename(a.resume)))\n \n prev_enc = 0\n def process(txt, num):\n\n sd = 0.01\n if a.keep > 0: sd = a.keep + (1-a.keep) * sd\n params, image_f, _ = fft_image([1, 3, *a.size], resume='init.pt', sd=sd, decay_power=a.decay)\n image_f = to_valid_rgb(image_f, colors = a.colors)\n\n if a.prog is True:\n lr1 = a.lrate * 2\n lr0 = a.lrate * 0.1\n else:\n lr0 = a.lrate\n optimizer = torch.optim.AdamW(params, lr0, weight_decay=0.01, amsgrad=True)\n \n if a.verbose is True: print(' topic: ', txt)\n if a.translate:\n 
translator = Translator()\n txt = translator.translate(txt, dest='en').text\n if a.verbose is True: print(' translated to:', txt)\n txt_enc = enc_text(txt)\n if a.notext > 0:\n txt_plot = torch.from_numpy(plot_text(txt, a.modsize)/255.).unsqueeze(0).permute(0,3,1,2).cuda()\n txt_plot_enc = model_clip.encode_image(txt_plot).detach().clone()\n else: txt_plot_enc = None\n\n out_name = '%03d-%s' % (num+1, txt_clean(txt))\n out_name += '-%s' % a.model if 'RN' in a.model.upper() else ''\n tempdir = os.path.join(workdir, out_name)\n os.makedirs(tempdir, exist_ok=True)\n \n pbar = ProgressBar(a.steps // a.fstep)\n for i in range(a.steps):\n loss = 0\n noise = a.noise * torch.randn(1, 1, *params[0].shape[2:4], 1).cuda() if a.noise > 0 else None\n img_out = image_f(noise)\n img_sliced = slice_imgs([img_out], a.samples, a.modsize, trform_f, a.align, macro=a.macro)[0]\n out_enc = model_clip.encode_image(img_sliced)\n\n loss -= torch.cosine_similarity(txt_enc, out_enc, dim=-1).mean()\n if a.in_txt2 is not None: # input text - style\n loss -= 0.5 * torch.cosine_similarity(txt_enc2, out_enc, dim=-1).mean()\n if a.in_txt0 is not None: # subtract text\n loss += 0.5 * torch.cosine_similarity(txt_enc0, out_enc, dim=-1).mean()\n if a.notext > 0:\n loss += a.notext * torch.cosine_similarity(txt_plot_enc, out_enc, dim=-1).mean()\n if a.sharp != 0: # mode = scharr|sobel|default\n loss -= a.sharp * derivat(img_out, mode='sobel')\n # loss -= a.sharp * derivat(img_sliced, mode='scharr')\n if a.diverse != 0:\n img_sliced = slice_imgs([image_f(noise)], a.samples, a.modsize, trform_f, a.align, macro=a.macro)[0]\n out_enc2 = model_clip.encode_image(img_sliced)\n loss += a.diverse * torch.cosine_similarity(out_enc, out_enc2, dim=-1).mean()\n del out_enc2; torch.cuda.empty_cache()\n if a.expand > 0:\n global prev_enc\n if i > 0:\n loss += a.expand * torch.cosine_similarity(out_enc, prev_enc, dim=-1).mean()\n prev_enc = out_enc.detach().clone()\n del img_out, img_sliced, out_enc; torch.cuda.empty_cache()\n\n if a.prog is True:\n lr_cur = lr0 + (i / a.steps) * (lr1 - lr0)\n for g in optimizer.param_groups: \n g['lr'] = lr_cur\n \n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n if i % a.fstep == 0:\n with torch.no_grad():\n img = image_f(contrast=a.contrast).cpu().numpy()[0]\n if a.sharp != 0:\n img = img ** (1 + a.sharp/2.) 
# empirical tone mapping\n checkout(img, os.path.join(tempdir, '%04d.jpg' % (i // a.fstep)), verbose=a.verbose)\n pbar.upd()\n del img\n\n if a.keep > 0:\n global params_start, params_ema\n params_ema = ema(params_ema, params[0].detach().clone(), num+1)\n torch.save((1-a.keep) * params_start + a.keep * params_ema, 'init.pt')\n \n torch.save(params[0], '%s.pt' % os.path.join(workdir, out_name))\n shutil.copy(img_list(tempdir)[-1], os.path.join(workdir, '%s-%d.jpg' % (out_name, a.steps)))\n os.system('ffmpeg -v warning -y -i %s\\%%04d.jpg \"%s.mp4\"' % (tempdir, os.path.join(workdir, out_name)))\n\n with open(a.in_txt, 'r', encoding=\"utf-8\") as f:\n texts = f.readlines()\n texts = [tt.strip() for tt in texts if len(tt.strip()) > 0 and tt[0] != '#']\n if a.verbose is True: \n print(' total lines:', len(texts))\n print(' samples:', a.samples)\n\n for i, txt in enumerate(texts):\n process(txt, i)\n\n vsteps = int(a.length * 25 / len(texts)) # 25 fps\n tempdir = os.path.join(workdir, '_final')\n os.makedirs(tempdir, exist_ok=True)\n \n def read_pt(file):\n return torch.load(file).cuda()\n\n if a.verbose is True: print(' rendering complete piece')\n ptfiles = file_list(workdir, 'pt')\n pbar = ProgressBar(vsteps * len(ptfiles))\n for px in range(len(ptfiles)):\n params1 = read_pt(ptfiles[px])\n params2 = read_pt(ptfiles[(px+1) % len(ptfiles)])\n\n params, image_f, _ = fft_image([1, 3, *a.size], resume=params1, sd=1., decay_power=a.decay)\n image_f = to_valid_rgb(image_f, colors = a.colors)\n\n for i in range(vsteps):\n with torch.no_grad():\n img = image_f((params2 - params1) * math.sin(1.5708 * i/vsteps)**2)[0].permute(1,2,0)\n img = torch.clip(img*255, 0, 255).cpu().numpy().astype(np.uint8)\n imsave(os.path.join(tempdir, '%05d.jpg' % (px * vsteps + i)), img)\n if a.verbose is True: cvshow(img)\n pbar.upd()\n\n os.system('ffmpeg -v warning -y -i %s\\%%05d.jpg \"%s.mp4\"' % (tempdir, os.path.join(a.out_dir, basename(a.in_txt))))\n if a.keep > 0: os.remove('init.pt')\n\n\nif __name__ == '__main__':\n main()\n" ]
[ [ "torch.cosine_similarity", "torch.optim.AdamW", "torch.save", "torch.no_grad", "torch.cuda.empty_cache", "torch.load", "torch.randn", "torch.clip" ] ]
ddemaio/pycryptobot
[ "33353c72280816896d3f1e6465e4a5f5ebe64d4b" ]
[ "models/exchange/binance/api.py" ]
[ "from models.exchange.coinbase_pro.api import FREQUENCY_EQUIVALENTS, SUPPORTED_GRANULARITY\nimport math\nimport re\nimport numpy as np\nimport pandas as pd\nimport sys\nfrom datetime import datetime, timedelta\nfrom binance.client import Client\nfrom time import sleep\nfrom models.helper.LogHelper import Logger\n\n\nDEFAULT_MAKER_FEE_RATE = 0.0015 # added 0.0005 to allow for price movements\nDEFAULT_TAKER_FEE_RATE = 0.0015 # added 0.0005 to allow for price movements\nDEFAULT_TRADE_FEE_RATE = 0.0015 # added 0.0005 to allow for price movements\nDEFAULT_GRANULARITY=\"1h\"\nSUPPORTED_GRANULARITY = ['1m', '5m', '15m', '1h', '6h', '1d']\nMULTIPLIER_EQUIVALENTS = [1, 5, 15, 60, 360, 1440]\nFREQUENCY_EQUIVALENTS = [\"T\", \"5T\", \"15T\", \"H\", \"6H\", \"D\"]\nDEFAULT_MARKET = \"BTCGBP\"\n\n\nclass AuthAPIBase():\n def _isMarketValid(self, market: str) -> bool:\n p = re.compile(r\"^[A-Z0-9]{5,12}$\")\n if p.match(market):\n return True\n return False\n\n\nclass AuthAPI(AuthAPIBase):\n def __init__(self, api_key: str='', api_secret: str='', api_url: str='https://api.binance.com') -> None:\n \"\"\"Binance API object model\n \n Parameters\n ----------\n api_key : str\n Your Binance account portfolio API key\n api_secret : str\n Your Binance account portfolio API secret\n \"\"\"\n \n # options\n self.debug = False\n self.die_on_api_error = False\n\n valid_urls = [\n 'https://api.binance.com/',\n 'https://api.binance.us/',\n 'https://testnet.binance.vision/api/'\n ]\n\n if len(api_url) > 1 and api_url[-1] != '/':\n api_url = api_url + '/'\n\n # validate Binance API\n if api_url not in valid_urls:\n raise ValueError('Binance API URL is invalid')\n\n # validates the api key is syntactically correct\n p = re.compile(r\"^[A-z0-9]{64,64}$\")\n if not p.match(api_key):\n self.handle_init_error('Binance API key is invalid')\n \n # validates the api secret is syntactically correct\n p = re.compile(r\"^[A-z0-9]{64,64}$\")\n if not p.match(api_secret):\n self.handle_init_error('Binance API secret is invalid')\n\n self.mode = 'live' # TODO: check if this needs to be set here\n self.api_url = api_url\n self.api_key = api_key\n self.api_secret = api_secret\n\n for i in range(10):\n try:\n sys.tracebacklimit = 0\n if 'api.binance.us' in api_url:\n self.client = Client(self.api_key, self.api_secret, { 'verify': False, 'timeout': 20 }, tld='us')\n else:\n self.client = Client(self.api_key, self.api_secret, { 'verify': False, 'timeout': 20 })\n break\n except Exception as e:\n if i == 9:\n raise SystemExit(\"Can not create instance of AuthAPI client.\")\n Logger.error('Exception: ' + str(e)) \n Logger.error('Error on creating instance of AuthAPI Client. Trying again... 
Attempt: ' + str(i))\n sleep(0.1)\n\n sys.tracebacklimit = 1\n\n\n def handle_init_error(self, err: str) -> None:\n if self.debug:\n raise TypeError(err)\n else:\n raise SystemExit(err)\n\n\n def getClient(self) -> Client:\n return self.client\n\n\n def getAccount(self):\n \"\"\"Retrieves a specific account\"\"\"\n account = self.client.get_account()\n if 'balances' in account:\n df = pd.DataFrame(account['balances'])\n df = df[(df['free'] != '0.00000000') & (df['free'] != '0.00')]\n df['free'] = df['free'].astype(float)\n df['locked'] = df['locked'].astype(float)\n df['balance'] = df['free'] - df['locked']\n df.columns = ['currency', 'available', 'hold', 'balance']\n df = df[['currency', 'balance', 'hold', 'available']]\n df = df.reset_index(drop=True)\n return df\n else:\n return 0.0\n\n\n def getFees(self, market: str='') -> pd.DataFrame:\n if market != '':\n resp = self.client.get_trade_fee(symbol=market)\n if 'tradeFee' in resp and len(resp['tradeFee']) > 0:\n df = pd.DataFrame(resp['tradeFee'][0], index=[0])\n df['usd_volume'] = None\n df.columns = [ 'maker_fee_rate', 'market', 'taker_fee_rate', 'usd_volume' ]\n return df[[ 'maker_fee_rate', 'taker_fee_rate', 'usd_volume', 'market' ]]\n return pd.DataFrame(columns=[ 'maker_fee_rate', 'taker_fee_rate', 'market' ])\n else:\n resp = self.client.get_trade_fee()\n if 'tradeFee' in resp:\n df = pd.DataFrame(resp['tradeFee'])\n df['usd_volume'] = None\n df.columns = [ 'maker_fee_rate', 'market', 'taker_fee_rate', 'usd_volume' ]\n return df[[ 'maker_fee_rate', 'taker_fee_rate', 'usd_volume', 'market' ]]\n return pd.DataFrame(columns=[ 'maker_fee_rate', 'taker_fee_rate', 'market' ])\n\n\n def getMakerFee(self, market: str='') -> float:\n if market == '':\n fees = self.getFees()\n else:\n fees = self.getFees(market)\n \n if len(fees) == 0 or 'maker_fee_rate' not in fees:\n Logger.error(f\"error: 'maker_fee_rate' not in fees (using {DEFAULT_MAKER_FEE_RATE} as a fallback)\")\n return DEFAULT_MAKER_FEE_RATE\n\n if market == '':\n return fees\n else:\n return float(fees['maker_fee_rate'].to_string(index=False).strip())\n\n\n def getTakerFee(self, market: str='') -> float:\n if market == '':\n return DEFAULT_TAKER_FEE_RATE\n else:\n fees = self.getFees(market)\n\n if len(fees) == 0 or 'taker_fee_rate' not in fees:\n Logger.error(f\"error: 'taker_fee_rate' not in fees (using {DEFAULT_TAKER_FEE_RATE} as a fallback)\")\n return DEFAULT_TAKER_FEE_RATE\n\n return float(fees['taker_fee_rate'].to_string(index=False).strip())\n\n\n def __convertStatus(self, val: str) -> str:\n if val == 'filled':\n return 'done'\n else:\n return val\n\n\n def getOrders(self, market: str='', action: str='', status: str='all') -> pd.DataFrame:\n \"\"\"Retrieves your list of orders with optional filtering\"\"\"\n\n # if market provided\n if market != '':\n # validates the market is syntactically correct\n if not self._isMarketValid(market):\n raise ValueError('Binance market is invalid.')\n\n # if action provided\n if action != '':\n # validates action is either a buy or sell\n if not action in ['buy', 'sell']:\n raise ValueError('Invalid order action.')\n\n # validates status is either open, pending, done, active, or all\n if not status in ['open', 'pending', 'done', 'active', 'all']:\n raise ValueError('Invalid order status.')\n\n resp = self.client.get_all_orders(symbol=market)\n if len(resp) > 0:\n df = pd.DataFrame(resp)\n else:\n df = pd.DataFrame()\n\n if len(df) == 0:\n return pd.DataFrame()\n\n # fix: float division by zero\n # remove pending orders with status new 
(i.e. order made by user at limit)\n df = df[df['status'] == \"FILLED\"]\n\n # replace null NaN values with 0\n df.fillna(0, inplace=True)\n\n df = df[[ 'time', 'symbol', 'side', 'type', 'executedQty', 'cummulativeQuoteQty', 'status' ]]\n df.columns = [ 'created_at', 'market', 'action', 'type', 'filled', 'size', 'status' ]\n df['created_at'] = df['created_at'].apply(lambda x: int(str(x)[:10]))\n df['created_at'] = df['created_at'].astype(\"datetime64[s]\")\n df['size'] = df['size'].astype(float)\n df['filled'] = df['filled'].astype(float)\n df['action'] = df['action'].str.lower()\n df['type'] = df['type'].str.lower()\n df['status'] = df['status'].str.lower()\n df['price'] = df['size'] / df['filled']\n\n # pylint: disable=unused-variable\n for k, v in df.items():\n if k == 'status':\n df[k] = df[k].map(self.__convertStatus)\n\n if action != '':\n df = df[df['action'] == action]\n df = df.reset_index(drop=True)\n\n if status != 'all' and status != '':\n df = df[df['status'] == status]\n df = df.reset_index(drop=True)\n\n return df\n\n\n def marketBuy(self, market: str='', quote_quantity: float=0) -> list:\n \"\"\"Executes a market buy providing a funding amount\"\"\"\n\n # validates the market is syntactically correct\n if not self._isMarketValid(market):\n raise ValueError('Binance market is invalid.')\n\n # validates quote_quantity is either an integer or float\n if not isinstance(quote_quantity, int) and not isinstance(quote_quantity, float):\n raise TypeError('The funding amount is not numeric.')\n\n try:\n current_price = self.getTicker(market)[1]\n\n base_quantity = np.divide(quote_quantity, current_price)\n\n df_filters = self.getMarketInfoFilters(market)\n step_size = float(df_filters.loc[df_filters['filterType'] == 'LOT_SIZE']['stepSize'])\n precision = int(round(-math.log(step_size, 10), 0))\n\n # remove fees\n base_quantity = base_quantity - (base_quantity * self.getTradeFee(market))\n\n # execute market buy\n stepper = 10.0 ** precision\n truncated = math.trunc(stepper * base_quantity) / stepper\n Logger.info('Order quantity after rounding and fees: ' + str(truncated))\n\n return self.client.order_market_buy(symbol=market, quantity=truncated)\n except Exception as err:\n ts = datetime.now().strftime(\"%d-%m-%Y %H:%M:%S\")\n Logger.error(ts + ' Binance ' + ' marketBuy ' + str(err))\n return [] \n\n\n def marketSell(self, market: str='', base_quantity: float=0) -> list:\n \"\"\"Executes a market sell providing a crypto amount\"\"\"\n\n # validates the market is syntactically correct\n if not self._isMarketValid(market):\n raise ValueError('Binance market is invalid.')\n\n if not isinstance(base_quantity, int) and not isinstance(base_quantity, float):\n raise TypeError('The crypto amount is not numeric.')\n\n try:\n df_filters = self.getMarketInfoFilters(market)\n step_size = float(df_filters.loc[df_filters['filterType'] == 'LOT_SIZE']['stepSize'])\n precision = int(round(-math.log(step_size, 10), 0))\n\n # remove fees\n base_quantity = base_quantity - (base_quantity * self.getTradeFee(market))\n\n # execute market sell\n stepper = 10.0 ** precision\n truncated = math.trunc(stepper * base_quantity) / stepper\n Logger.info('Order quantity after rounding and fees: ' + str(truncated))\n return self.client.order_market_sell(symbol=market, quantity=truncated)\n except Exception as err:\n ts = datetime.now().strftime(\"%d-%m-%Y %H:%M:%S\")\n Logger.error(ts + ' Binance ' + ' marketSell ' + str(err))\n return []\n\n\n def getTradeFee(self, market: str) -> float:\n resp = 
self.client.get_trade_fee(symbol=market, timestamp=self.getTime())\n \n if 'tradeFee' not in resp:\n Logger.info('*** getTradeFee(' + market + ') - missing \"tradeFee\" ***')\n Logger.info(resp)\n else:\n if len(resp['tradeFee']) == 0:\n Logger.info('*** getTradeFee(' + market + ') - \"tradeFee\" empty ***') \n Logger.info(resp)\n else:\n if 'taker' not in resp['tradeFee'][0]:\n Logger.info('*** getTradeFee(' + market + ') - missing \"trader\" ***')\n Logger.info(resp) \n\n if 'success' in resp:\n return resp['tradeFee'][0]['taker']\n else:\n return DEFAULT_TRADE_FEE_RATE\n\n\n def getMarketInfo(self, market: str) -> dict:\n # validates the market is syntactically correct\n if not self._isMarketValid(market):\n raise TypeError('Binance market required.')\n\n return self.client.get_symbol_info(symbol=market)\n\n\n def getMarketInfoFilters(self, market: str) -> pd.DataFrame:\n return pd.DataFrame(self.client.get_symbol_info(symbol=market)['filters'])\n\n\n def getTicker(self, market:str) -> tuple:\n # validates the market is syntactically correct\n if not self._isMarketValid(market):\n raise TypeError('Binance market required.')\n\n resp = self.client.get_symbol_ticker(symbol=market)\n\n if 'price' in resp:\n return (self.getTime().strftime('%Y-%m-%d %H:%M:%S'), float('{:.8f}'.format(float(resp['price']))))\n\n now = datetime.today().strftime('%Y-%m-%d %H:%M:%S')\n return (now, 0.0)\n\n\n def getTime(self) -> datetime:\n \"\"\"Retrieves the exchange time\"\"\"\n \n try:\n resp = self.client.get_server_time()\n epoch = int(str(resp['serverTime'])[0:10])\n return datetime.fromtimestamp(epoch)\n except:\n return None\n\n\nclass PublicAPI(AuthAPIBase):\n def __init__(self) -> None:\n for i in range(10):\n try:\n self.client = Client()\n break\n except Exception as e:\n if i == 9:\n raise SystemExit(\"Can not create instance of AuthAPI client.\")\n Logger.error('Exception: ' + str(e)) \n Logger.error('Error on creating instance of AuthAPI Client. Trying again... 
Attempt: ' + str(i))\n sleep(0.1)\n \n\n def __truncate(self, f, n) -> int:\n return math.floor(f * 10 ** n) / 10 ** n\n\n \n def getClient(self) -> Client:\n return self.client\n\n \n def getHistoricalData(self, market: str=DEFAULT_MARKET, granularity: str=DEFAULT_GRANULARITY, iso8601start: str='', iso8601end: str='') -> pd.DataFrame:\n # validates the market is syntactically correct\n if not self._isMarketValid(market):\n raise TypeError('Binance market required.')\n\n # validates granularity is a string\n if not isinstance(granularity, str):\n raise TypeError('Granularity string required.')\n\n # validates the granularity is supported by Binance\n if not granularity in SUPPORTED_GRANULARITY:\n raise TypeError('Granularity options: ' + \", \".join(map(str, SUPPORTED_GRANULARITY)))\n\n # validates the ISO 8601 start date is a string (if provided)\n if not isinstance(iso8601start, str):\n raise TypeError('ISO8601 start integer as string required.')\n\n # validates the ISO 8601 end date is a string (if provided)\n if not isinstance(iso8601end, str):\n raise TypeError('ISO8601 end integer as string required.')\n\n # if only a start date is provided\n if iso8601start != '' and iso8601end == '':\n try:\n multiplier = MULTIPLIER_EQUIVALENTS[SUPPORTED_GRANULARITY.index(granularity)] \n except:\n multiplier = 1\n \n # calculate the end date using the granularity\n iso8601end = str((datetime.strptime(iso8601start, '%Y-%m-%dT%H:%M:%S.%f') + timedelta(minutes=granularity * multiplier)).isoformat())\n\n if iso8601start != '' and iso8601end != '':\n Logger.info('Attempting to retrieve data from ' + iso8601start)\n resp = self.client.get_historical_klines(market, granularity, iso8601start, iso8601end)\n\n if len(resp) > 300:\n resp = resp[:300]\n else: # TODO: replace with a KLINE_MESSAGE_FOO equivalent\n if granularity == '1m':\n resp = self.client.get_historical_klines(market, granularity, '12 hours ago UTC')\n resp = resp[-300:]\n elif granularity == '5m':\n resp = self.client.get_historical_klines(market, granularity, '2 days ago UTC')\n resp = resp[-300:]\n elif granularity == '15m':\n resp = self.client.get_historical_klines(market, granularity, '4 days ago UTC')\n resp = resp[-300:]\n elif granularity == '1h':\n resp = self.client.get_historical_klines(market, granularity, '13 days ago UTC')\n resp = resp[-300:]\n elif granularity == '6h':\n resp = self.client.get_historical_klines(market, granularity, '75 days ago UTC')\n resp = resp[-300:]\n elif granularity == '1d':\n resp = self.client.get_historical_klines(market, granularity, '251 days ago UTC')\n else:\n raise Exception('Something went wrong!')\n \n # convert the API response into a Pandas DataFrame\n df = pd.DataFrame(resp, columns=[ 'open_time', 'open', 'high', 'low', 'close', 'volume', 'close_time', 'quote_asset_volume', 'number_of_trades', 'taker_buy_base_asset_volume', 'traker_buy_quote_asset_volume', 'ignore' ])\n df['market'] = market\n df['granularity'] = granularity\n\n # binance epoch is too long\n df['open_time'] = df['open_time'] + 1\n df['open_time'] = df['open_time'].astype(str)\n df['open_time'] = df['open_time'].str.replace(r'\\d{3}$', '', regex=True) \n\n try:\n freq = FREQUENCY_EQUIVALENTS[SUPPORTED_GRANULARITY.index(granularity)]\n except:\n freq = \"D\"\n\n # convert the DataFrame into a time series with the date as the index/key\n try:\n tsidx = pd.DatetimeIndex(pd.to_datetime(df['open_time'], unit='s'), dtype='datetime64[ns]', freq=freq)\n df.set_index(tsidx, inplace=True)\n df = df.drop(columns=['open_time'])\n 
df.index.names = ['ts']\n            df['date'] = tsidx\n        except ValueError:\n            tsidx = pd.DatetimeIndex(pd.to_datetime(df['open_time'], unit='s'), dtype='datetime64[ns]')\n            df.set_index(tsidx, inplace=True)\n            df = df.drop(columns=['open_time'])\n            df.index.names = ['ts']\n            df['date'] = tsidx\n\n        # re-order columns\n        df = df[[ 'date', 'market', 'granularity', 'low', 'high', 'open', 'close', 'volume' ]]\n\n        # correct column types\n        df['low'] = df['low'].astype(float)\n        df['high'] = df['high'].astype(float)\n        df['open'] = df['open'].astype(float)\n        df['close'] = df['close'].astype(float)\n        df['volume'] = df['volume'].astype(float)\n\n        return df\n\n\n    def getTicker(self, market: str) -> tuple:\n        # validates the market is syntactically correct\n        if not self._isMarketValid(market):\n            raise TypeError('Binance market required.')\n\n        resp = self.client.get_symbol_ticker(symbol=market)\n\n        if 'price' in resp:\n            return (self.getTime().strftime('%Y-%m-%d %H:%M:%S'), float('{:.8f}'.format(float(resp['price']))))\n\n        now = datetime.today().strftime('%Y-%m-%d %H:%M:%S')\n        return (now, 0.0)\n\n\n    def getTime(self) -> datetime:\n        \"\"\"Retrieves the exchange time\"\"\"\n\n        try:\n            resp = self.client.get_server_time()\n            epoch = int(str(resp['serverTime'])[0:10])\n            return datetime.fromtimestamp(epoch)\n        except Exception:\n            return None\n\n" ]
[ [ "numpy.divide", "pandas.DataFrame", "pandas.to_datetime" ] ]
Aarya-Create/PBL-Mesh
[ "978bcac47b9c925da4caba22d1f64fc254d13916" ]
[ "ctdr/optimize.py" ]
[ "from ctdr.utils import util_vis, util_mesh, render_objs\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nimport time\n# from mesh_intersection.loss import DistanceFieldPenetrationLoss\n\nfrom ctdr.utils import util_vis, util_mesh\n\ndef get_params(model, exclude_mus=False):\n return model.parameters()\n\ndef keep_topology(model, search_tree, displace_prev):\n vertices = model.get_displaced_vertices()\n faces = model.faces\n triangles = vertices[faces].unsqueeze(dim=0)\n \n with torch.no_grad():\n torch.cuda.synchronize()\n collision_idxs = search_tree(triangles)\n torch.cuda.synchronize()\n \n output = collision_idxs.squeeze()\n collisions = output[output[:, 0] >= 0, :]\n #print(\"@ total number of collisions:\", collisions.shape[0])\n mask_coll_faces = model.labels[collisions[:, 0],:] != model.labels[collisions[:, 1],:]\n mask_coll_faces = torch.prod(mask_coll_faces, dim=1)==1\n collisions = collisions[mask_coll_faces, :]\n \n if collisions.shape[0] == 0:\n return None, None\n else:\n print(\"@ collisions: \", collisions.shape[0])\n \n all_intr_verts = []\n all_recv_verts = []\n \n # case 2: topology-preserving constraint\n cnt = 0\n while collisions.shape[0] > 0 and cnt <= 400:\n cnt += 1\n assert cnt <= 300\n with torch.no_grad():\n intr_verts_vx3_int = model.faces[collisions[:,0]] # face number\n recv_verts_vx3_int = model.faces[collisions[:,1]]\n\n intr_verts = torch.unique(intr_verts_vx3_int)\n recv_verts = torch.unique(recv_verts_vx3_int)\n \n all_intr_verts.append(intr_verts)\n all_recv_verts.append(recv_verts)\n \n #idx_verts = torch.cat([intr_verts-1, intr_verts+1, intr_verts, recv_verts-1, recv_verts+1, recv_verts], dim=0)\n idx_verts = torch.cat([intr_verts, recv_verts], dim=0)\n idx_verts = torch.clamp(idx_verts, 0, model.vertices.shape[0]-1)\n \n #model.displace.data[idx_verts,:] = (model.displace.data[idx_verts,:]+displace_prev[idx_verts,:]) / 2.0\n model.displace.data[idx_verts,:] = displace_prev[idx_verts,:]\n # roll-back\n #model.displace.data = model.displace.data - args.lr * model.displace.grad.data\n \n # double check\n vertices = model.vertices + model.displace\n #vertices = model.get_displaced_vertices()\n triangles = vertices[faces].unsqueeze(dim=0)\n \n torch.cuda.synchronize()\n collision_idxs = search_tree(triangles)\n torch.cuda.synchronize()\n\n output = collision_idxs.squeeze()\n collisions = output[output[:, 0] >= 0, :]\n #print(\"@ collisions including self-intersec: \", collisions.shape[0])\n #print(triangles.shape, collision_idxs.shape)\n\n mask_coll_faces = model.labels[collisions[:, 0],:] != model.labels[collisions[:, 1],:]\n #print(mask_coll_faces)\n mask_coll_faces = torch.prod(mask_coll_faces, dim=1)==1\n #print(mask_coll_faces)\n collisions = collisions[mask_coll_faces, :]\n \n print(\"@ collisions: \", collisions.shape[0])\n nc = collisions.shape[0]\n #assert(nc == 0)\n #collisions = collision_idxs\n \n all_intr_verts = torch.unique(torch.cat(all_intr_verts, dim=0))\n all_recv_verts = torch.unique(torch.cat(all_recv_verts, dim=0))\n \n vertices = model.get_displaced_vertices()\n \n return vertices[all_intr_verts,:], vertices[all_recv_verts,:]\n \n\ndef run(model, ds, niter, args, epoch_start=0, use_adam=True):\n use_collision = True\n if args.data[-1]==\"A\" or ds.nmaterials == 2:\n use_collision = False\n\n if use_collision:\n from mesh_intersection.bvh_search_tree import BVH\n search_tree = BVH(max_collisions=16)\n #pen_distance = DistanceFieldPenetrationLoss(sigma=0.5)\n\n print(\"@ model.mus\", model.mus)\n \n params = 
get_params(model)\n \n if use_adam == True:\n opt = torch.optim.Adam(params, args.lr, betas=(0.9, 0.99))\n else:\n opt = torch.optim.SGD(params, lr=args.lr, momentum=0.9, nesterov=True)\n\n# loop = tqdm.tqdm(list(range(0,args.niter)))\n\n if epoch_start == 0:\n f = open(args.dresult+'log.txt', 'w')\n else:\n f = open(args.dresult+'log.txt', 'a')\n \n log = f\"@ statistics of mesh: {model.vertices.shape[0]}, {model.faces.shape[0]}\\n\"\n \n # full-batch case\n if args.b == 0:\n idx_angles_full = torch.LongTensor(np.arange(ds.nangles))\n p_full = ds.p.cuda()\n ds_loader = [ [ idx_angles_full, p_full ] ]\n\n # mini-batch case\n else:\n ds_loader = torch.utils.data.DataLoader(ds, batch_size=args.b, shuffle=True,\n num_workers=0, drop_last=True, pin_memory=True)\n\n mask_bg = ds.p < 1e-5\n mask_bg = mask_bg.cuda()\n use_silhouette = False\n # if use_silhouette:\n # mask_bg = ds.p < 1e-5\n # mask_bg = mask_bg.cuda()\n\n# mask_bg = (p_batch > p_full.min()+0.05)\n #mask_bg = 1\n ledge = 0\n llap = 0.\n lflat = 0.\n\n for epoch in range(epoch_start, niter):\n if epoch+100 == niter and niter > 400:\n args.lr *= 0.5\n print(\"@ args.lr\", args.lr)\n \n # if epoch % 20 == 0 or epoch == niter-1:\n start = time.time()\n \n for idx_angles, p_batch in ds_loader:\n displace_prev = model.displace.data.clone()\n if args.b > 0:\n p_batch = p_batch.cuda()\n\n opt.zero_grad()\n \n phat, mask_valid, edge_loss, lap_loss, flat_loss = model(idx_angles, args.wedge) # full angles\n # phat[~mask_valid] = 0.0\n # mask_valid = mask_valid + mask_bg\n \n if 1:\n # l2 loss\n data_loss = (p_batch - phat)[mask_valid].pow(2).mean()\n\n if use_silhouette:\n idx_bg = (~mask_valid) * mask_bg[idx_angles]\n nbg = torch.sum(idx_bg)\n if nbg:\n print(\"sum(idx_bg), min, max\", nbg, torch.min(phat[idx_bg]).item(), torch.max(phat[idx_bg]).item())\n # print(phat[idx_bg])\n \n data_loss += (phat[idx_bg]).pow(2).mean()\n # data_loss += 10 * torch.abs(phat[idx_bg]).mean()\n\n # add for the invalid pixels\n # data_loss += (p_batch)[~mask_valid].pow(2).mean()\n else:\n # student t misfit\n sigma=1\n data_loss = torch.log(1 + ((p_batch - phat)[mask_valid]/sigma)**2).mean()\n\n loss = data_loss + args.wedge * edge_loss + args.wlap * lap_loss + args.wflat * flat_loss\n \n \n loss.backward()\n opt.step()\n \n loss_now = loss.item()\n model.mus.data.clamp_(min=0.0)\n \n if use_collision == False:\n continue\n\n\n # if epoch % 20 == 0 or epoch == niter-1: \n elpased_time = time.time() - start\n\n if epoch > epoch_start and args.b == 0:\n dloss = (loss_prev - loss_now) # should be positive\n if dloss < 1e-11 and dloss > 0:\n print('! 
converged')\n break\n \n loss_prev = loss_now\n \n\n if args.wedge > 0.:\n ledge = edge_loss.item()\n \n if args.wlap > 0.:\n llap = lap_loss.item()\n \n if args.wflat > 0.:\n lflat = flat_loss.item()\n\n log += f'~ {epoch:03d} l2_loss: {data_loss.item():.8f} edge: {ledge:.6f} lap: {llap:.6f} flat: {lflat:.6f} mus: {str(model.mus.cpu().detach().numpy())} time: {elpased_time:.4f}\\n'\n #log += f'center: {model.center[0,0].item():.6f} {model.center[0,1].item():.6f} {model.center[0,2].item():.6f}'\n # f.write(log+\"\\n\")\n \n if epoch % 60 == 0 or epoch == niter-1:\n if torch.sum(~mask_valid) > 15000 and epoch > 100:\n assert 0, \"consider increasing regularization\"\n\n print(log)\n\n if args.b == 0:\n res_np = ds.p_original - phat.detach().cpu().numpy() \n res_scalar = np.mean(res_np**2)\n f.write(f\"~ res_np: {res_scalar}\\n\")\n util_vis.save_sino_as_img(args.dresult+f'{epoch:04d}_sino_res.png', res_np, cmap='coolwarm')\n \n phat[~mask_valid]=0.\n print(phat.min(), phat.max())\n if args.verbose == 0:\n continue\n\t\t\t\t\n vv = model.vertices.cpu()+model.displace.detach().cpu()\n ff = model.faces.cpu()\n \n labels_v, labels_f = model.labels_v_np, model.labels.cpu().numpy()\n # util_vis.save_vf_as_img_labels(args.dresult+f'{epoch:04d}_render.png', vv, ff, labels_v, labels_f)\n util_vis.save_sino_as_img(args.dresult+f'{epoch:04d}_sino.png', phat.detach().cpu().numpy())\n util_mesh.save_mesh(args.dresult+f'{epoch:04d}.obj', vv.numpy(), ff.numpy(), labels_v, labels_f)\n if args.data == \"3nanoC\":\n import subprocess\n subprocess.Popen([\"python\", \"../plot/compute_volume.py\", args.dresult+f'{epoch:04d}.obj'])\n\n if epoch == niter-1:\n util_mesh.save_mesh(args.dresult+'mesh.obj', vv.numpy(), ff.numpy(), labels_v, labels_f)\n util_vis.save_sino_as_img(args.dresult+f'{epoch:04d}_data.png', ds.p.cuda())\n \n f.write(log+\"\\n\")\n\ndef update_topology(model):\n # update topology information\n mask_coll_faces = model.labels[collisions[:, 0]] != model.labels[collisions[:, 1]]\n mask_coll_faces = torch.sum(mask_coll_faces, dim=1)\n fidx_list_intruder = collisions[mask_coll_faces, 0]\n fidx_list_receiver = collisions[mask_coll_faces, 1]\n #assert len(mask_coll_faces)==1, \n\n ncolls2 = mask_coll_faces.sum().item()\n if ncolls2 == 0:\n return\n\n print(\"number of collisions2:\", ncolls2)\n\n # swap the outside labels for intruder and receiver\n labels_coll1 = model.labels[fidx_list_intruder,1]\n model.labels[fidx_list_intruder,1] = model.labels[fidx_list_receiver,1]\n model.labels[fidx_list_receiver,1] = labels_coll1\n" ]
[ [ "torch.cat", "torch.prod", "torch.unique", "torch.cuda.synchronize", "torch.min", "torch.max", "torch.no_grad", "torch.optim.Adam", "torch.optim.SGD", "torch.clamp", "numpy.mean", "numpy.arange", "torch.utils.data.DataLoader", "torch.log", "torch.sum" ] ]
SteffenMauceri/OWLS-Autonomy
[ "e676282a87e17030887b0174f3b8b38aab170d15" ]
[ "src/helm_dhm/tracker/test/conftest.py" ]
[ "import os\nimport glob\nimport json\nimport pytest\nfrom helm_dhm.tracker import *\n\nimport numpy as np\n\[email protected]\ndef postprocessors_sample_track():\n test_data = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'data', 'sample.track')\n with open(test_data) as f:\n track = json.load(f)\n return track\n\[email protected]\ndef sample_dataset():\n path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'data', '2015.07.02_11-03_sub')\n name = os.path.basename(path)\n holograms = {'Holograms': glob.glob(os.path.join(path, 'Holograms', '*.tif'))}\n\n return name, holograms\n\[email protected]\ndef test_config():\n return os.path.join(os.path.dirname(__file__), 'data', 'test_config.yml')\n\[email protected]\ndef difference_image():\n return np.load(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'data', 'img_diff_00030.npy'))\n\n\[email protected]\ndef project_params():\n return np.array([2, 3]), np.array([[1, -.5], [-.5, 2]]), np.array([1, -1]), np.array([[1, -1], [-1, 1]])\n\n\[email protected]\ndef particle_track_input():\n # time_0, position, position_variance, velocity, velocity_variance, acceleration, acceleration_variance\n return 1, np.array([-2, 3]), np.array([[1, -.5], [-.5, 2]]), np.array([1, 1]), np.array([[1, .5], [.5, 2]]), \\\n np.array([.1, .2]), np.array([[1, .1], [.1, -1]])\n\n\[email protected]\ndef particle_track_output():\n return {'Times': [1], 'Particles_Position': [np.array([-2, 3])], 'Particles_Variance': [np.array([[1., -0.5],\n [-0.5, 2.]])],\n 'Particles_Estimated_Position': [np.array([-2, 3])],\n 'Particles_Estimated_Position_Variance': [np.array([[1., -0.5],\n [-0.5, 2.]])],\n 'Particles_Estimated_Velocity': [np.array([1, 1])],\n 'Particles_Estimated_Velocity_Variance': [np.array([[1., 0.5],\n [0.5, 2.]])],\n 'Particles_Estimated_Acceleration': np.array([0.1, 0.2]),\n 'Particles_Estimated_Acceleration_Variance': np.array([[1., 0.1],\n [0.1, -1.]]), 'Average_Velocity': [None],\n 'Track_ID': None}\n\n\[email protected]\ndef particle_track_for_save():\n particle_track = {'Times': [17, 18, 19],\n 'Particles_Position': [np.array([271.52631579, 777.94736842]), None,\n np.array([277.57142857, 773.21428571])],\n 'Particles_Variance': [np.array([[2.31871345, -1.30409357], [-1.30409357, 4.21929825]]),\n None, np.array([[1.02380952, -0.04761905], [-0.04761905, 1.30952381]])],\n 'Particles_Estimated_Position': [np.array([271.52631579, 777.94736842]),\n np.array([279.17027864, 769.77708978]),\n np.array([277.57142857, 773.21428571])],\n 'Particles_Estimated_Position_Variance': [\n np.array([[2.31871345, -1.30409357], [-1.30409357, 4.21929825]]),\n np.array([[7.12272102, -3.96848125], [-3.96848125, 11.67389061]]),\n np.array([[1.02380952, -0.04761905], [-0.04761905, 1.30952381]])],\n 'Particles_Estimated_Velocity': [np.array([7.64396285, -8.17027864]),\n np.array([7.64396285, -8.17027864]),\n np.array([-1.59885007, 3.43719593])],\n 'Particles_Estimated_Velocity_Variance': [\n np.array([[4.80400757, -2.66438768], [-2.66438768, 7.45459236]]),\n np.array([[5.30400757, -2.66438768], [-2.66438768, 7.95459236]]),\n np.array([[8.14653054, -4.0161003], [-4.0161003, 12.98341442]])],\n 'Particles_Estimated_Acceleration': np.array([0., 0.]),\n 'Particles_Estimated_Acceleration_Variance': np.array([[0.5, 0.], [0., 0.5]]),\n 'Average_Velocity': [np.array([0.12917508, -0.77912188])],\n 'Track_ID': 5}\n return particle_track\n\n\[email protected]\ndef nearest_neighbor_track():\n return {'Times': [17, 18],\n 'Particles_Position': [np.array([271.52631579, 
777.94736842]), None],\n 'Particles_Variance': [np.array([[2.31871345, -1.30409357], [-1.30409357, 4.21929825]]), None],\n 'Particles_Estimated_Position': [np.array([271.52631579, 777.94736842]),\n np.array([279.17027864, 769.77708978])],\n 'Particles_Estimated_Position_Variance': [np.array([[2.31871345, -1.30409357],\n [-1.30409357, 4.21929825]]),\n np.array([[7.12272102, -3.96848125],\n [-3.96848125, 11.67389061]])],\n 'Particles_Estimated_Velocity': [np.array([7.64396285, -8.17027864]),\n np.array([7.64396285, -8.17027864])],\n 'Particles_Estimated_Velocity_Variance': [np.array([[4.80400757, -2.66438768],\n [-2.66438768, 7.45459236]]),\n np.array([[5.30400757, -2.66438768],\n [-2.66438768, 7.95459236]])],\n 'Particles_Estimated_Acceleration': np.array([0., 0.]),\n 'Particles_Estimated_Acceleration_Variance': np.array([[0.5, 0.],\n [0., 0.5]]),\n 'Average_Velocity': [np.array([0.12917508, -0.77912188])],\n 'Current_Memory': 5,\n 'Track_ID': 5\n }\n\n\[email protected]\ndef nearest_neighbor_three_times():\n return {'Times': [17, 18, 19],\n 'Particles_Position':\n [np.array([271.52631579, 777.94736842]), None, np.array([277.57142857, 773.21428571])],\n 'Particles_Variance':\n [np.array([[2.31871345, -1.30409357], [-1.30409357, 4.21929825]]),\n None,\n np.array([[1.02380952, -0.04761905], [-0.04761905, 1.30952381]])],\n 'Particles_Estimated_Position':\n [np.array([271.52631579, 777.94736842]),\n np.array([279.17027864, 769.77708978]),\n np.array([277.57142857, 773.21428571])],\n 'Particles_Estimated_Position_Variance':\n [np.array([[2.31871345, -1.30409357],\n [-1.30409357, 4.21929825]]), np.array([[7.12272102, -3.96848125],\n [-3.96848125, 11.67389061]]),\n np.array([[1.02380952, -0.04761905],\n [-0.04761905, 1.30952381]])],\n 'Particles_Estimated_Velocity':\n [np.array([7.64396285, -8.17027864]),\n np.array([7.64396285, -8.17027864]),\n np.array([-1.59885007, 3.43719593])],\n 'Particles_Estimated_Velocity_Variance':\n [np.array([[4.80400757, -2.66438768], [-2.66438768, 7.45459236]]),\n np.array([[5.30400757, -2.66438768], [-2.66438768, 7.95459236]]),\n np.array([[8.14653054, -4.0161003], [-4.0161003, 12.98341442]])],\n 'Particles_Estimated_Acceleration': np.array([0., 0.]),\n 'Particles_Estimated_Acceleration_Variance': np.array([[0.5, 0.], [0., 0.5]]),\n 'Average_Velocity': [np.array([0.12917508, -0.77912188])],\n 'Current_Memory': 5,\n 'Track_ID': 5}\n\n\[email protected]\ndef random_dbscan_movie():\n np.random.seed(0)\n return np.random.rand(20, 1024, 1024)\n\n\[email protected]()\ndef dbscan_tracker_expected_tracks_list():\n return {'Times': [6, 7, 8, 9, 10, 11],\n 'Cloud': {0: np.array(\n [[380, 247], [380, 248], [380, 249], [381, 247], [381, 248], [381, 249], [381, 250], [382, 247],\n [382, 248], [382, 249], [382, 250]]),\n 1: None,\n 2: np.array([[379, 248], [379, 249], [380, 247], [380, 248], [380, 249], [381, 246], [381, 247],\n [381, 248],\n [381, 249], [382, 245], [382, 246], [382, 247], [382, 248], [382, 249], [382, 250],\n [383, 245],\n [383, 246], [383, 247], [383, 248], [383, 249], [383, 250], [384, 244], [384, 245],\n [384, 246],\n [384, 247], [384, 248], [384, 249], [384, 250], [385, 244], [385, 245], [385, 246],\n [385, 247],\n [385, 248], [385, 249], [385, 250], [386, 244], [386, 245], [386, 246], [386, 247],\n [386, 248],\n [386, 249], [386, 250], [387, 245], [387, 246], [387, 247], [387, 248], [387, 249],\n [387, 250],\n [388, 245], [388, 246], [388, 247], [388, 248], [388, 249], [388, 250], [389, 245],\n [389, 246],\n [389, 247], [389, 248], [389, 249], [390, 246], 
[390, 247], [390, 248]]),\n 3: np.array([[377, 245], [377, 246], [377, 247], [378, 245], [378, 246], [378, 247], [378, 248],\n [379, 244],\n [379, 245], [379, 246], [379, 247], [379, 248], [380, 244], [380, 245], [380, 246],\n [380, 247],\n [380, 248], [381, 244], [381, 245], [381, 246], [381, 247], [382, 244], [382, 245],\n [382, 246]]),\n 4: None,\n 5: np.array([[377, 246], [377, 247], [377, 248], [378, 245], [378, 246], [378, 247], [378, 248],\n [378, 249],\n [378, 250], [379, 245], [379, 246], [379, 247], [379, 248], [379, 249], [379, 250],\n [379, 251],\n [380, 246], [380, 247], [380, 248], [380, 249], [380, 250], [380, 251], [381, 247],\n [381, 248],\n [381, 249], [381, 250], [381, 251]])},\n 'Particles_Position': [np.array([381.09090909, 248.36363636]), None, np.array([384.87096774, 247.32258065]),\n np.array([379.5, 245.875]), None, np.array([379.14814815, 248.07407407])],\n 'Particles_Variance': [np.array([[0.69090909, 0.16363636], [0.16363636, 1.25454545]]), None,\n np.array([[8.76996298, -0.62982549], [-0.62982549, 2.97620307]]),\n np.array([[2.52173913, -0.58695652],\n [-0.58695652, 1.67934783]]), None,\n np.array([[1.66951567, 0.83475783], [0.83475783, 3.3019943]])],\n 'Particles_Estimated_Position': [np.array([381.09090909, 248.36363636]),\n np.array([381.09090909, 248.36363636]),\n np.array([384.87096774, 247.32258065]), np.array([379.5, 245.875]),\n np.array([374.12903226, 244.42741935]),\n np.array([379.14814815, 248.07407407])],\n 'Particles_Estimated_Position_Variance': [np.array([[0.69090909, 0.16363636], [0.16363636, 1.25454545]]),\n np.array([[0.69090909, 0.16363636], [0.16363636, 1.25454545]]),\n np.array([[8.76996298, -0.62982549],\n [-0.62982549, 2.97620307]]),\n np.array([[2.52173913, -0.58695652], [-0.58695652, 1.67934783]]),\n np.array([[13.81344124, -1.80373853], [-1.80373853, 6.33489872]]),\n np.array([[1.66951567, 0.83475783],\n [0.83475783, 3.3019943]])],\n 'Particles_Estimated_Velocity': [None, np.array([0., 0.]), np.array([3.78005865, -1.04105572]),\n np.array([-5.37096774, -1.44758065]), np.array([-5.37096774, -1.44758065]),\n np.array([5.01911589, 3.64665472])],\n 'Particles_Estimated_Velocity_Variance': [None,\n np.array([[1.38181818, 0.32727273], [0.32727273, 2.50909091]]),\n np.array([[9.46087207, -0.46618913], [-0.46618913, 4.23074852]]),\n np.array([[11.29170211, -1.21678201],\n [-1.21678201, 4.65555089]]),\n np.array([[16.33518037, -2.39069505], [-2.39069505, 8.01424655]]),\n np.array([[15.48295691, -0.9689807], [-0.9689807, 9.63689302]])],\n 'Particles_Estimated_Acceleration': [None, None, np.array([3.78005865, -1.04105572]),\n np.array([-9.15102639, -0.40652493]), np.array([0., 0.]),\n np.array([10.39008363, 5.09423536])],\n 'Particles_Estimated_Acceleration_Variance': [None, None, np.array([[10.84269026, -0.1389164],\n [-0.1389164, 6.73983943]]),\n np.array(\n [[20.75257419, -1.68297114], [-1.68297114, 8.88629941]]),\n np.array(\n [[27.62688249, -3.60747707], [-3.60747707, 12.66979744]]),\n np.array([[31.81813729, -3.35967575],\n [-3.35967575, 17.65113957]])],\n 'Average_Velocity': [np.array([-1.60604706, 4.71995019]),\n np.array([-1.40571219, 3.61376749]),\n np.array([2.65312405, -1.80290545]),\n np.array([-2.81725992, 1.50508258]),\n np.array([0.42382931, 0.27260142]),\n np.array([0.05587251, -0.50865689])],\n 'Track_ID': 6}\n\[email protected]\ndef test_tracks():\n return list(glob.glob(os.path.join(os.path.dirname(__file__), 'data', '2019.11.12_09.26.26.655', '*.track')))\n\n" ]
[ [ "numpy.random.seed", "numpy.array", "numpy.random.rand" ] ]
mritools/mrrt.mri
[ "00829032f6d19d078a23d006b73f1028b3ec3902" ]
[ "mrrt/mri/operators/tests/test_mri_noncartesian.py" ]
[ "\"\"\"Tests related to Non-Cartesian MRI reconstruction.\"\"\"\nfrom itertools import product\nimport time\n\nimport numpy as np\nfrom numpy.testing import assert_, assert_equal\nimport pytest\n\nfrom mrrt.operators.LinOp import DiagonalOperatorMulti\nfrom mrrt.mri.operators.tests._generate_testdata import generate_sim_data\nfrom mrrt.nufft import dtft, dtft_adj\nfrom mrrt.utils import embed, have_cupy, profile\n\nimport os\n\nOMIT_CPU = int(os.environ.get(\"OMIT_CPU\", False))\nOMIT_GPU = int(os.environ.get(\"OMIT_GPU\", False))\n\nall_xp = [np]\ncpu_cases = [\"CPU,Tab0\", \"CPU,Tab\", \"CPU,Sp\"] if not OMIT_CPU else []\nall_cases = cpu_cases\nif have_cupy:\n import cupy\n\n if cupy.cuda.runtime.getDeviceCount() > 0 and not OMIT_GPU:\n gpu_cases = [\"GPU,Tab0\", \"GPU,Tab\", \"GPU,Sp\"]\n all_cases += gpu_cases\n all_xp += [cupy]\n\n# To ignore PendingDeprecationWarning related to scipy.sparse we use:\n# @pytest.mark.filterwarnings(\"ignore:the matrix subclass is not\")\n#\n# This class of warnings could also be ignored via the command line, e.g.:\n# pytest -W ignore::PendingDeprecationWarning test_mri_reconstruction.py\n\n\n@profile\ndef _test_mri_multi(\n ndim=3,\n N0=8,\n grid_os_factor=1.5,\n J0=4,\n Ld=4096,\n n_coils=1,\n fieldmap_segments=None,\n precisions=[\"single\", \"double\"],\n phasings=[\"real\", \"complex\"],\n recon_cases=[\"CPU,Tab0\", \"CPU,Tab\", \"CPU,Sp\"],\n rtol=1e-3,\n compare_to_exact=False,\n show_figures=False,\n nufft_kwargs={},\n navg_time=1,\n n_creation=1,\n return_errors=False,\n gpu_memflags=None,\n verbose=False,\n return_operator=False,\n spectral_offsets=None,\n):\n \"\"\"Run a batch of NUFFT tests.\"\"\"\n all_err_forward = np.zeros(\n (len(recon_cases), len(precisions), len(phasings))\n )\n all_err_adj = np.zeros((len(recon_cases), len(precisions), len(phasings)))\n alltimes = {}\n if not np.isscalar(navg_time):\n navg_time_cpu, navg_time_gpu = navg_time\n else:\n navg_time_cpu = navg_time_gpu = navg_time\n for i, recon_case in enumerate(recon_cases):\n if \"CPU\" in recon_case:\n navg_time = navg_time_cpu\n else:\n navg_time = navg_time_gpu\n\n for j, precision in enumerate(precisions):\n for k, phasing in enumerate(phasings):\n if verbose:\n print(\n \"phasing={}, precision={}, type={}\".format(\n phasing, precision, recon_case\n )\n )\n\n if \"Tab\" in recon_case:\n # may want to create twice when benchmarking GPU case\n # because the custom kernels are compiled the first time\n ncr_max = n_creation\n else:\n ncr_max = 1\n # on_gpu = ('GPU' in recon_case)\n for ncr in range(ncr_max):\n (\n Gn,\n wi_full,\n xTrue,\n ig,\n data_true,\n times,\n ) = generate_sim_data(\n recon_case=recon_case,\n ndim=ndim,\n N0=N0,\n J0=J0,\n grid_os_factor=grid_os_factor,\n fieldmap_segments=fieldmap_segments,\n Ld=Ld,\n n_coils=n_coils,\n precision=precision,\n phasing=phasing,\n nufft_kwargs=nufft_kwargs,\n MRI_object_kwargs=dict(gpu_memflags=gpu_memflags),\n spectral_offsets=spectral_offsets,\n )\n\n xp = Gn.xp\n\n # time the forward operator\n sim_data = Gn * xTrue # dry run\n tstart = time.time()\n for nt in range(navg_time):\n sim_data = Gn * xTrue\n sim_data += 0.0\n sim_data = xp.squeeze(sim_data) # TODO: should be 1D already?\n # print(\"type(xTrue) = {}\".format(type(xTrue)))\n # print(\"type(sim_data) = {}\".format(type(sim_data)))\n t_for = (time.time() - tstart) / navg_time\n times[\"MRI: forward\"] = t_for\n\n # time the norm operator\n Gn.norm(xTrue) # dry run\n tstart = time.time()\n for nt in range(navg_time):\n Gn.norm(xTrue)\n t_norm = 
(time.time() - tstart) / navg_time\n\n times[\"MRI: norm\"] = t_norm\n if precision == \"single\":\n dtype_real = np.float32\n dtype_cplx = np.complex64\n else:\n dtype_real = np.float64\n dtype_cplx = np.complex128\n\n if \"Tab\" in recon_case:\n if phasing == \"complex\":\n assert_equal(Gn.Gnufft.h[0].dtype, dtype_cplx)\n else:\n assert_equal(Gn.Gnufft.h[0].dtype, dtype_real)\n else:\n if phasing == \"complex\":\n assert_equal(Gn.Gnufft.p.dtype, dtype_cplx)\n else:\n assert_equal(Gn.Gnufft.p.dtype, dtype_real)\n assert_equal(sim_data.dtype, dtype_cplx)\n\n if compare_to_exact:\n # compare_to_exact only currently for single-coil,\n # no fieldmap case\n if spectral_offsets is not None:\n raise NotImplementedError(\n \"compare_to_exact doesn't currently support \"\n \"spectral offsets\"\n )\n nshift_exact = tuple(s / 2 for s in Gn.Nd)\n sim_data2 = dtft(\n xTrue, Gn.omega, shape=Gn.Nd, n_shift=nshift_exact\n )\n\n sd2_norm = xp.linalg.norm(sim_data2)\n rel_err = xp.linalg.norm(sim_data - sim_data2) / sd2_norm\n if \"GPU\" in recon_case:\n if hasattr(rel_err, \"get\"):\n rel_err = rel_err.get()\n all_err_forward[i, j, k] = rel_err\n print(\n \"{},{},{}: forward error = {}\".format(\n recon_case, precision, phasing, rel_err\n )\n )\n rel_err_mag = (\n xp.linalg.norm(np.abs(sim_data) - np.abs(sim_data2))\n / sd2_norm\n )\n print(\n f\"{recon_case},{precision},{phasing}: \"\n f\"forward mag diff error = {rel_err_mag}\"\n )\n assert rel_err < rtol\n\n # TODO: update DiagonalOperatorMulti to auto-set loc_in,\n # loc_out appropriately\n if xp is np:\n diag_args = dict(loc_in=\"cpu\", loc_out=\"cpu\")\n else:\n diag_args = dict(loc_in=\"gpu\", loc_out=\"gpu\")\n diag_op = DiagonalOperatorMulti(wi_full, **diag_args)\n if n_coils == 1:\n data_dcf = diag_op * data_true\n else:\n data_dcf = diag_op * sim_data\n\n # time the adjoint operation\n im_est = Gn.H * data_dcf # dry run\n tstart = time.time()\n for nt in range(navg_time):\n im_est = Gn.H * data_dcf\n t_adj = (time.time() - tstart) / navg_time\n times[\"MRI: adjoint\"] = t_adj\n\n if hasattr(Gn, \"mask\") and Gn.mask is not None:\n im_est = embed(im_est, Gn.mask)\n else:\n if spectral_offsets is None:\n im_est = im_est.reshape(Gn.Nd, order=Gn.order)\n else:\n im_est = im_est.reshape(\n tuple(Gn.Nd) + (len(spectral_offsets),),\n order=Gn.order,\n )\n\n if compare_to_exact:\n im_est_exact = dtft_adj(\n data_dcf, Gn.omega, shape=Gn.Nd, n_shift=nshift_exact\n )\n ex_norm = xp.linalg.norm(im_est_exact)\n rel_err = xp.linalg.norm(im_est - im_est_exact) / ex_norm\n all_err_adj[i, j, k] = rel_err\n if verbose:\n print(\n \"{},{},{}: adjoint error = {}\".format(\n recon_case, precision, phasing, rel_err\n )\n )\n rel_err_mag = (\n xp.linalg.norm(np.abs(im_est) - np.abs(im_est_exact))\n / ex_norm\n )\n if verbose:\n print(\n \"{},{},{}: adjoint mag diff error = {}\".format(\n recon_case, precision, phasing, rel_err\n )\n )\n assert_(rel_err < rtol)\n\n title = \", \".join([recon_case, precision, phasing])\n if show_figures:\n from matplotlib import pyplot as plt\n from pyvolplot import volshow\n\n if compare_to_exact:\n volshow(\n [\n im_est_exact,\n im_est,\n im_est_exact - im_est,\n xp.abs(im_est_exact) - xp.abs(im_est),\n ]\n )\n else:\n volshow(im_est)\n plt.title(title)\n alltimes[title] = times\n\n if return_operator:\n if return_errors:\n return Gn, alltimes, all_err_forward, all_err_adj\n return Gn, alltimes\n\n if return_errors:\n return alltimes, all_err_forward, all_err_adj\n return alltimes\n\n\[email protected](\n \"recon_case, precision, 
phasing\",\n product(all_cases, [\"single\", \"double\"], [\"complex\", \"real\"]),\n)\[email protected](\"ignore:the matrix subclass is not\")\ndef test_mri_2d_nocoils_nofieldmap_nocompare(\n recon_case, precision, phasing, show_figures=False, verbose=False\n):\n\n _test_mri_multi(\n ndim=2,\n N0=16,\n grid_os_factor=1.5,\n J0=6,\n Ld=4096,\n n_coils=1,\n fieldmap_segments=None,\n precisions=[precision],\n phasings=[phasing],\n recon_cases=[recon_case],\n show_figures=show_figures,\n verbose=verbose,\n compare_to_exact=False,\n nufft_kwargs={},\n rtol=1e-4,\n )\n\n\n# @dec.slow\[email protected](\n \"recon_case, precision, phasing\",\n product(all_cases, [\"single\", \"double\"], [\"complex\", \"real\"]),\n)\[email protected](\"ignore:the matrix subclass is not\")\ndef test_mri_2d_nocoils_nofieldmap(\n recon_case, precision, phasing, show_figures=False, verbose=False\n):\n\n _test_mri_multi(\n ndim=2,\n N0=16,\n grid_os_factor=1.5,\n J0=6,\n Ld=4096,\n n_coils=1,\n fieldmap_segments=None,\n precisions=[precision],\n phasings=[phasing],\n recon_cases=[recon_case],\n show_figures=show_figures,\n verbose=verbose,\n compare_to_exact=True,\n nufft_kwargs={},\n rtol=1e-3,\n )\n\n\n# @dec.slow\[email protected](\n \"recon_case, precision, phasing\",\n product(all_cases, [\"single\", \"double\"], [\"complex\", \"real\"]),\n)\[email protected](\"ignore:the matrix subclass is not\")\ndef test_mri_2d_nocoils_nofieldmap_kernels(\n recon_case, precision, phasing, show_figures=False, verbose=False\n):\n N = 32\n grid_os_factor = 2\n J = 6\n t, ef, ea = _test_mri_multi(\n ndim=2,\n N0=N,\n grid_os_factor=grid_os_factor,\n J0=J,\n Ld=512,\n n_coils=1,\n fieldmap_segments=None,\n precisions=[precision],\n phasings=[phasing],\n recon_cases=[recon_case],\n show_figures=show_figures,\n verbose=verbose,\n compare_to_exact=True,\n nufft_kwargs={},\n rtol=1e-2,\n return_errors=True,\n )\n\n\n# @dec.slow\[email protected](\n \"recon_case, precision, phasing\",\n product(all_cases, [\"single\", \"double\"], [\"complex\", \"real\"]),\n)\[email protected](\"ignore:the matrix subclass is not\")\ndef test_mri_2d_multicoils_nofieldmap(\n recon_case, precision, phasing, show_figures=False, verbose=False\n):\n _test_mri_multi(\n ndim=2,\n N0=16,\n grid_os_factor=1.5,\n J0=6,\n Ld=4096,\n n_coils=4,\n fieldmap_segments=None,\n precisions=[precision],\n phasings=[phasing],\n recon_cases=[recon_case],\n compare_to_exact=False,\n rtol=1e-3,\n show_figures=show_figures,\n verbose=verbose,\n )\n\n\n# @dec.slow\[email protected](\n \"recon_case, precision, phasing\",\n product(all_cases, [\"single\", \"double\"], [\"complex\", \"real\"]),\n)\[email protected](\"ignore:the matrix subclass is not\")\ndef test_mri_2d_multicoils_fieldmap(\n recon_case, precision, phasing, show_figures=False, verbose=False\n):\n _test_mri_multi(\n ndim=2,\n N0=16,\n grid_os_factor=1.5,\n J0=6,\n Ld=4096,\n n_coils=4,\n fieldmap_segments=6,\n precisions=[precision],\n phasings=[phasing],\n recon_cases=[recon_case],\n compare_to_exact=False,\n show_figures=show_figures,\n verbose=verbose,\n )\n\n\n# @dec.slow\[email protected](\n \"recon_case, precision, phasing\",\n product(all_cases, [\"single\", \"double\"], [\"complex\", \"real\"]),\n)\[email protected](\"ignore:the matrix subclass is not\")\ndef test_mri_3d_nocoils_nofieldmap(\n recon_case, precision, phasing, show_figures=False, verbose=False\n):\n if \"Tab0\" in recon_case:\n rtol = 1e-2\n else:\n rtol = 1e-3\n _test_mri_multi(\n ndim=3,\n N0=12,\n grid_os_factor=1.5,\n J0=4,\n Ld=4096,\n 
n_coils=1,\n fieldmap_segments=None,\n precisions=[precision],\n phasings=[phasing],\n recon_cases=[recon_case],\n compare_to_exact=True,\n show_figures=show_figures,\n rtol=rtol,\n verbose=verbose,\n )\n\n\n# @dec.slow\[email protected](\n \"recon_case, precision, phasing\",\n product(all_cases, [\"single\", \"double\"], [\"complex\", \"real\"]),\n)\[email protected](\"ignore:the matrix subclass is not\")\ndef test_mri_3d_multicoils_nofieldmap(\n recon_case, precision, phasing, show_figures=False, verbose=False\n):\n _test_mri_multi(\n ndim=3,\n N0=12,\n grid_os_factor=1.5,\n J0=4,\n Ld=4096,\n n_coils=4,\n fieldmap_segments=None,\n precisions=[precision],\n phasings=[phasing],\n recon_cases=[recon_case],\n compare_to_exact=False,\n show_figures=show_figures,\n verbose=verbose,\n )\n\n\n# @dec.slow\[email protected](\n \"recon_case, precision, phasing\",\n product(all_cases, [\"single\", \"double\"], [\"complex\", \"real\"]),\n)\[email protected](\"ignore:the matrix subclass is not\")\ndef test_mri_3d_multicoils_fieldmap(\n recon_case, precision, phasing, show_figures=False, verbose=False\n):\n _test_mri_multi(\n ndim=3,\n N0=12,\n grid_os_factor=1.5,\n J0=4,\n Ld=4096,\n n_coils=4,\n fieldmap_segments=6,\n precisions=[precision],\n phasings=[phasing],\n recon_cases=[recon_case],\n compare_to_exact=False,\n show_figures=show_figures,\n verbose=verbose,\n )\n" ]
[ [ "numpy.testing.assert_equal", "matplotlib.pyplot.title", "numpy.testing.assert_", "numpy.isscalar", "numpy.abs" ] ]
locuslab/DC3
[ "35437af7f22390e4ed032d9eef90cc525764d26f" ]
[ "baseline_opt.py" ]
[ "try:\n import waitGPU\n waitGPU.wait(utilization=50, memory_ratio=0.5, available_memory=5000, interval=9, nproc=1, ngpu=1)\nexcept ImportError:\n pass\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\ntorch.set_default_dtype(torch.float64)\n\nimport operator\nfrom functools import reduce\nfrom torch.utils.data import TensorDataset, DataLoader\n\nimport numpy as np\nimport pickle\nimport time\nfrom setproctitle import setproctitle\nimport os\nimport argparse\n\nfrom utils import my_hash, str_to_bool\nimport default_args\n\nDEVICE = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\n\ndef main():\n parser = argparse.ArgumentParser(description='baseline_opt')\n parser.add_argument('--probType', type=str, default='acopf57',\n choices=['simple', 'nonconvex', 'acopf57'], help='problem type')\n parser.add_argument('--simpleVar', type=int, \n help='number of decision vars for simple problem')\n parser.add_argument('--simpleIneq', type=int,\n help='number of inequality constraints for simple problem')\n parser.add_argument('--simpleEq', type=int,\n help='number of equality constraints for simple problem')\n parser.add_argument('--simpleEx', type=int,\n help='total number of datapoints for simple problem')\n parser.add_argument('--nonconvexVar', type=int,\n help='number of decision vars for nonconvex problem')\n parser.add_argument('--nonconvexIneq', type=int,\n help='number of inequality constraints for nonconvex problem')\n parser.add_argument('--nonconvexEq', type=int,\n help='number of equality constraints for nonconvex problem')\n parser.add_argument('--nonconvexEx', type=int,\n help='total number of datapoints for nonconvex problem')\n parser.add_argument('--corrEps', type=float,\n help='correction procedure tolerance')\n\n args = parser.parse_args()\n args = vars(args) # change to dictionary\n defaults = default_args.baseline_opt_default_args(args['probType'])\n for key in defaults.keys():\n if args[key] is None:\n args[key] = defaults[key]\n print(args)\n\n setproctitle('baselineOpt-{}'.format(args['probType']))\n\n # Load data, and put on GPU if needed\n prob_type = args['probType']\n if prob_type == 'simple':\n torch.set_default_dtype(torch.float64)\n filepath = os.path.join('datasets', 'simple', \"random_simple_dataset_var{}_ineq{}_eq{}_ex{}\".format(\n args['simpleVar'], args['simpleIneq'], args['simpleEq'], args['simpleEx']))\n elif prob_type == 'nonconvex':\n filepath = os.path.join('datasets', 'nonconvex', \"random_nonconvex_dataset_var{}_ineq{}_eq{}_ex{}\".format(\n args['nonconvexVar'], args['nonconvexIneq'], args['nonconvexEq'], args['nonconvexEx']))\n elif prob_type == 'acopf57':\n filepath = os.path.join('datasets', 'acopf', 'acopf57_dataset')\n else:\n raise NotImplementedError\n\n with open(filepath, 'rb') as f:\n data = pickle.load(f)\n for attr in dir(data):\n var = getattr(data, attr)\n if not callable(var) and not attr.startswith(\"__\") and torch.is_tensor(var):\n try:\n setattr(data, attr, var.to(DEVICE))\n except AttributeError:\n pass\n data._device = DEVICE\n\n\n ## Run pure optimization baselines\n if prob_type == 'simple':\n solvers = ['osqp', 'qpth']\n elif prob_type == 'nonconvex':\n solvers = ['ipopt']\n else:\n solvers = ['pypower']\n\n for solver in solvers:\n save_dir = os.path.join('results', str(data), 'baselineOpt-{}'.format(solver),\n 'run', str(time.time()).replace('.', '-'))\n if not os.path.exists(save_dir):\n os.makedirs(save_dir)\n Yvalid_opt, valid_time_total, valid_time_parallel = 
data.opt_solve(data.validX, solver_type=solver, tol=args['corrEps'])\n Ytest_opt, test_time_total, test_time_parallel = data.opt_solve(data.testX, solver_type=solver, tol=args['corrEps'])\n opt_results = get_opt_results(data, args, torch.tensor(Yvalid_opt).to(DEVICE),\n torch.tensor(Ytest_opt).to(DEVICE))\n opt_results.update(\n dict([('test_time', test_time_parallel), ('valid_time', valid_time_parallel), ('train_time', 0),\n ('test_time_total', test_time_total), ('valid_time_total', valid_time_total), ('train_time_total', 0)]))\n with open(os.path.join(save_dir, 'results.dict'), 'wb') as f:\n pickle.dump(opt_results, f)\n\ndef get_opt_results(data, args, Yvalid, Ytest, Yvalid_precorr=None, Ytest_precorr=None):\n eps_converge = args['corrEps']\n results = {}\n results['valid_eval'] = data.obj_fn(Yvalid).detach().cpu().numpy()\n results['valid_ineq_max'] = torch.max(data.ineq_dist(data.validX, Yvalid), dim=1)[0].detach().cpu().numpy()\n results['valid_ineq_mean'] = torch.mean(data.ineq_dist(data.validX, Yvalid), dim=1).detach().cpu().numpy()\n results['valid_ineq_num_viol_0'] = torch.sum(data.ineq_dist(data.validX, Yvalid) > eps_converge,\n dim=1).detach().cpu().numpy()\n results['valid_ineq_num_viol_1'] = torch.sum(data.ineq_dist(data.validX, Yvalid) > 10 * eps_converge,\n dim=1).detach().cpu().numpy()\n results['valid_ineq_num_viol_2'] = torch.sum(data.ineq_dist(data.validX, Yvalid) > 100 * eps_converge,\n dim=1).detach().cpu().numpy()\n results['valid_eq_max'] = torch.max(torch.abs(data.eq_resid(data.validX, Yvalid)), dim=1)[0].detach().cpu().numpy()\n results['valid_eq_mean'] = torch.mean(torch.abs(data.eq_resid(data.validX, Yvalid)),\n dim=1).detach().cpu().numpy()\n results['valid_eq_num_viol_0'] = torch.sum(torch.abs(data.eq_resid(data.validX, Yvalid)) > eps_converge,\n dim=1).detach().cpu().numpy()\n results['valid_eq_num_viol_1'] = torch.sum(torch.abs(data.eq_resid(data.validX, Yvalid)) > 10 * eps_converge,\n dim=1).detach().cpu().numpy()\n results['valid_eq_num_viol_2'] = torch.sum(torch.abs(data.eq_resid(data.validX, Yvalid)) > 100 * eps_converge,\n dim=1).detach().cpu().numpy()\n\n if Yvalid_precorr is not None:\n results['valid_correction_dist'] = torch.norm(Yvalid - Yvalid_precorr, dim=1).detach().cpu().numpy()\n results['test_eval'] = data.obj_fn(Ytest).detach().cpu().numpy()\n results['test_ineq_max'] = torch.max(data.ineq_dist(data.testX, Ytest), dim=1)[0].detach().cpu().numpy()\n results['test_ineq_mean'] = torch.mean(data.ineq_dist(data.testX, Ytest), dim=1).detach().cpu().numpy()\n results['test_ineq_num_viol_0'] = torch.sum(data.ineq_dist(data.testX, Ytest) > eps_converge,\n dim=1).detach().cpu().numpy()\n results['test_ineq_num_viol_1'] = torch.sum(data.ineq_dist(data.testX, Ytest) > 10 * eps_converge,\n dim=1).detach().cpu().numpy()\n results['test_ineq_num_viol_2'] = torch.sum(data.ineq_dist(data.testX, Ytest) > 100 * eps_converge,\n dim=1).detach().cpu().numpy()\n results['test_eq_max'] = torch.max(torch.abs(data.eq_resid(data.testX, Ytest)), dim=1)[0].detach().cpu().numpy()\n results['test_eq_mean'] = torch.mean(torch.abs(data.eq_resid(data.testX, Ytest)),\n dim=1).detach().cpu().numpy()\n results['test_eq_num_viol_0'] = torch.sum(torch.abs(data.eq_resid(data.testX, Ytest)) > eps_converge,\n dim=1).detach().cpu().numpy()\n results['test_eq_num_viol_1'] = torch.sum(torch.abs(data.eq_resid(data.testX, Ytest)) > 10 * eps_converge,\n dim=1).detach().cpu().numpy()\n results['test_eq_num_viol_2'] = torch.sum(torch.abs(data.eq_resid(data.testX, Ytest)) > 100 * 
eps_converge,\n dim=1).detach().cpu().numpy()\n if Ytest_precorr is not None:\n results['test_correction_dist'] = torch.norm(Ytest - Ytest_precorr, dim=1).detach().cpu().numpy()\n return results\n\n# Modifies stats in place\ndef dict_agg(stats, key, value, op='concat'):\n if key in stats.keys():\n if op == 'sum':\n stats[key] += value\n elif op == 'concat':\n stats[key] = np.concatenate((stats[key], value), axis=0)\n else:\n raise NotImplementedError\n else:\n stats[key] = value\n\nif __name__=='__main__':\n main()" ]
[ [ "numpy.concatenate", "torch.device", "torch.is_tensor", "torch.norm", "torch.cuda.is_available", "torch.tensor", "torch.set_default_dtype" ] ]
aahouzi/FaceFocus
[ "a864a5f4427938572ea7e9630ccf65c26477ae84" ]
[ "Train/TPU_training.py" ]
[ "############################################################################################\n# Author: Anas AHOUZI #\n# File Name: Train/TPU_training.py #\n# Creation Date: December 17, 2019 #\n# Source Language: Python #\n# Repository: https://github.com/aahouzi/FaceFocus.git #\n# --- Code Description --- #\n# Implementation of TPU training process for the SRGAN #\n############################################################################################\n\n\n################################################################################\n# Packages #\n################################################################################\nimport tensorflow as tf\nfrom tensorflow.keras.optimizers import Adam\nfrom Model.srgan import generator, discriminator\nfrom Loss.loss import model_vgg19, content_loss, adversarial_loss, discriminator_loss\nfrom Utils.utils import get_dataset, plot_and_save, show_samples, normalize_tanh\nfrom ast import literal_eval as make_tuple\nfrom collections import defaultdict\nimport numpy as np\nimport argparse\nimport time\nimport os\n\n\n################################################################################\n# Main arguments #\n################################################################################\n\nparser = argparse.ArgumentParser(description='Train the model using Colab TPUs.')\n\nparser.add_argument('--n_epochs', type=int, required=True, help='Number of epochs')\nparser.add_argument('--learning_rate', type=float, required=True, help='Learning rate')\nparser.add_argument('--hr_shape', type=make_tuple, required=True, help='High resolution shape')\nparser.add_argument('--lr_shape', type=make_tuple, required=True, help='Low resolution shape')\nparser.add_argument('--train_hr_path', type=str, required=True, help='Path to training HR tfRecords in GCS')\nparser.add_argument('--val_hr_path', type=str, required=True, help='Path to validation HR tfRecords in GCS')\nparser.add_argument('--drive_path', type=str, required=True, help='Path to a drive location for saving weights/images')\n\nargs = parser.parse_args()\n\n################################################################################\n# Main code #\n################################################################################\n\n\ntry:\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])\nexcept ValueError:\n tpu = None\n gpus = tf.config.experimental.list_logical_devices(\"GPU\")\n\nif tpu:\n tf.config.experimental_connect_to_cluster(tpu)\n tf.tpu.experimental.initialize_tpu_system(tpu)\n strategy = tf.distribute.experimental.TPUStrategy(tpu)\n print('\\n\\n'+'---' * 10 + 'TPU WORKS' + '---' * 10)\n\nelif len(gpus) > 1:\n strategy = tf.distribute.MirroredStrategy([gpu.name for gpu in gpus])\n print('Running on multiple GPUs ', [gpu.name for gpu in gpus])\n\nelif len(gpus) == 1:\n strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU\n print('Running on single GPU ', gpus[0].name)\n\nelse:\n strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU\n print('Running on CPU')\n\nprint(\"\\n Number of accelerators: {}\\n\".format(strategy.num_replicas_in_sync))\n\n\nwith strategy.scope():\n # Get the Generator/Discriminator\n discriminator_model = discriminator(hr_size=256)\n generator_model = generator(lr_size=64)\n\n # Load the pre weights\n generator_model.load_weights(\"FaceFocus/weights/pre_gen_weights.h5\")\n\n # Define the loss function for the 
discriminator, and the optimizer\n VGG = model_vgg19()\n optimizer = Adam(learning_rate=args.learning_rate)\n\n # Instantiate metrics\n adversarial_loss_sum = tf.keras.metrics.Sum()\n discriminator_loss_sum = tf.keras.metrics.Sum()\n content_loss_sum = tf.keras.metrics.Sum()\n perceptual_loss_sum = tf.keras.metrics.Sum()\n\n\n @tf.function\n def train_step(dataset, batch_size):\n \"\"\"\n This function performs one training step on a batch of HR/LR images.\n :param dataset: A batch of HR/LR images.\n :param batch_size: Batch size.\n :return:\n \"\"\"\n # Get HR/LR images\n hr_img, lr_img = dataset\n\n # Get HR images in range [-1, 1]\n hr_img = tf.map_fn(normalize_tanh, hr_img)\n\n with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:\n # Start the training\n generated_images = generator_model(lr_img, training=True)\n fake_output = discriminator_model(generated_images, training=True)\n real_output = discriminator_model(hr_img, training=True)\n\n # Compute generator loss\n cont_loss = content_loss(VGG, generated_images, hr_img, batch_size)\n adv_loss = adversarial_loss(fake_output, batch_size)\n perceptual_loss = cont_loss + 1e-3 * adv_loss\n\n # Compute discriminator loss\n disc_loss = discriminator_loss(real_output, fake_output, batch_size)\n\n # Compute the gradient of the discriminator\n grads_disc = disc_tape.gradient(disc_loss, discriminator_model.trainable_variables)\n optimizer.apply_gradients(zip(grads_disc, discriminator_model.trainable_variables))\n\n # Compute the gradient of the generator\n grads_gen = gen_tape.gradient(perceptual_loss, generator_model.trainable_variables)\n optimizer.apply_gradients(zip(grads_gen, generator_model.trainable_variables))\n\n # update metrics\n discriminator_loss_sum.update_state(disc_loss)\n adversarial_loss_sum.update_state(adv_loss)\n content_loss_sum.update_state(cont_loss)\n perceptual_loss_sum.update_state(perceptual_loss)\n\n\n # Distribute the dataset according to the strategy.\n train_dataset = get_dataset(args.train_hr_path, args.hr_shape, args.lr_shape,\n batch_size=4 * strategy.num_replicas_in_sync)\n\n # Returns a tf.distribute.DistributedDataset from tf.data.Dataset, which represents a dataset\n # distributed among devices and machines.\n train_dist_img = strategy.experimental_distribute_dataset(train_dataset)\n\n # Define the batch size based on number of cores.\n batch_size = tf.constant(4 * strategy.num_replicas_in_sync, dtype=tf.float32)\n\n # Define Number of steps per epoch.\n steps_per_epoch = 800 // (4 * strategy.num_replicas_in_sync)\n\n # Prepare a batch of 4 HR/LR images from validation dataset, to test the generator every 100 epoch\n print(\"\\n[INFO]: Visualizing a random HR/LR batch of images from validation dataset\\n\")\n batch_hr, batch_lr = show_samples(args.val_hr_path, args.hr_shape, args.lr_shape, batch_size=4)\n\n print(\"\\n[INFO]: Steps per epoch: {}\\n\".format(steps_per_epoch))\n Loss = defaultdict(list)\n epoch_start_time = time.time()\n epoch = 0\n # A step is one gradient update over a batch of data, An epoch\n # is usually many steps when we go through the whole training set.\n for step, images in enumerate(train_dist_img):\n\n # Launch the training\n strategy.run(train_step, args=(images, batch_size))\n print('=', end='', flush=True)\n\n # Displaying the results after each epoch\n if ((step + 1) // steps_per_epoch) > epoch:\n print('>', end='', flush=True)\n\n # compute metrics\n Loss['adv_loss'].append(adversarial_loss_sum.result().numpy() / steps_per_epoch)\n 
Loss['disc_loss'].append(discriminator_loss_sum.result().numpy() / steps_per_epoch)\n Loss['perceptual_loss'].append(perceptual_loss_sum.result().numpy() / steps_per_epoch)\n Loss['content_loss'].append(content_loss_sum.result().numpy() / steps_per_epoch)\n\n if epoch % 100 == 0:\n # Test the model over a batch of 4 images (batch_lr), and save the results\n batch_sr = plot_and_save(generator_model, batch_lr, args.drive_path, epoch)\n\n # Compute the PSNR/SSIM metrics average over the batch\n psnr_metric = round(np.sum(tf.image.psnr(batch_sr, batch_hr, max_val=255.0)) / 4, 2)\n ssim_metric = round(np.sum(tf.image.ssim(batch_sr, batch_hr, max_val=255.0)) / 4, 2)\n\n print('\\n PSNR: {} | SSIM: {} \\n'.format(psnr_metric, ssim_metric))\n\n # Save the models every 100 epoch\n generator_model.save(args.drive_path+'/weights/ModelTPU-generator-{}.h5'.format(epoch))\n discriminator_model.save(args.drive_path+'/weights/ModelTPU-discriminator-{}.h5'.format(epoch))\n\n # report metrics\n duration = round(time.time() - epoch_start_time, 2)\n print('\\nEpoch: {}/{}'.format(epoch, args.n_epochs),\n 'Duration: {}s'.format(duration),\n 'disc_loss: {}'.format(round(Loss['disc_loss'][-1], 2)),\n 'adv_loss: {}'.format(round(Loss['adv_loss'][-1], 2)),\n 'content_loss: {}'.format(round(Loss['content_loss'][-1], 2)),\n 'perceptual_loss: {}'.format(round(Loss['perceptual_loss'][-1], 2)),\n flush=True)\n\n # set up next epoch\n epoch = (step + 1) // steps_per_epoch\n epoch_start_time = time.time()\n adversarial_loss_sum.reset_states()\n discriminator_loss_sum.reset_states()\n content_loss_sum.reset_states()\n perceptual_loss_sum.reset_states()\n\n if epoch >= args.n_epochs:\n break\n\n" ]
[ [ "tensorflow.distribute.MirroredStrategy", "tensorflow.GradientTape", "tensorflow.distribute.get_strategy", "tensorflow.image.psnr", "tensorflow.keras.metrics.Sum", "tensorflow.config.experimental_connect_to_cluster", "tensorflow.map_fn", "tensorflow.distribute.cluster_resolver.TPUClusterResolver", "tensorflow.constant", "tensorflow.config.experimental.list_logical_devices", "tensorflow.image.ssim", "tensorflow.tpu.experimental.initialize_tpu_system", "tensorflow.keras.optimizers.Adam", "tensorflow.distribute.experimental.TPUStrategy" ] ]
JTQIN/landlab
[ "08baf1f11eebf99aded071acd786d2c9e44d1f39" ]
[ "landlab/bmi/bmi_bridge.py" ]
[ "\"\"\"\n========================================================================================\nWrap landlab component with the Basic Modeling Interface (:mod:`landlab.bmi.bmi_bridge`)\n========================================================================================\n\n.. sectionauthor:: Eric Hutton\n\nFunction reference\n------------------\n\nThe `wrap_as_bmi` function wraps a landlab component class so that it\nexposes a Basic Modelling Interface.\n\n\"\"\"\nimport inspect\n\nimport numpy as np\n\nfrom bmipy import Bmi\n\nfrom ..core import load_params\nfrom ..core.model_component import Component\nfrom ..framework.decorators import snake_case\nfrom ..grid import HexModelGrid, RasterModelGrid\nfrom ..grid.create import create_grid\n\nBMI_LOCATION = {\n \"node\": \"node\",\n \"link\": \"edge\",\n \"patch\": \"face\",\n \"corner\": \"node\",\n \"face\": \"edge\",\n \"cell\": \"face\",\n \"grid\": \"none\",\n}\n\nBMI_GRID = {\n \"node\": 0,\n \"link\": 0,\n \"patch\": 0,\n \"corner\": 1,\n \"face\": 1,\n \"cell\": 1,\n \"grid\": 2,\n}\n\n\nclass TimeStepper(object):\n\n \"\"\"Step through time.\n\n Parameters\n ----------\n start : float, optional\n Clock start time.\n stop : float, optional\n Stop time.\n step : float, optional\n Time step.\n\n Examples\n --------\n >>> from landlab.bmi import TimeStepper\n >>> time_stepper = TimeStepper()\n >>> time_stepper.start\n 0.0\n >>> time_stepper.stop is None\n True\n >>> time_stepper.step\n 1.0\n >>> time_stepper.time\n 0.0\n >>> for _ in range(10): time_stepper.advance()\n >>> time_stepper.time\n 10.0\n >>> time_stepper = TimeStepper(1., 13., 2.)\n >>> [time for time in time_stepper]\n [1.0, 3.0, 5.0, 7.0, 9.0, 11.0]\n \"\"\"\n\n def __init__(self, start=0.0, stop=None, step=1.0, units=\"s\"):\n self._start = start\n self._stop = stop\n self._step = step\n self._units = units\n\n self._time = start\n\n def __iter__(self):\n if self.stop is None:\n while 1:\n yield self._time\n self._time += self._step\n else:\n while self._time < self._stop:\n yield self._time\n self._time += self._step\n return\n\n @property\n def time(self):\n \"\"\"Current time.\"\"\"\n return self._time\n\n @property\n def start(self):\n \"\"\"Start time.\"\"\"\n return self._start\n\n @property\n def stop(self):\n \"\"\"Stop time.\"\"\"\n return self._stop\n\n @property\n def step(self):\n \"\"\"Time Step.\"\"\"\n return self._step\n\n @step.setter\n def step(self, new_val):\n \"\"\"Change the time step.\"\"\"\n self._step = new_val\n\n @property\n def units(self):\n \"\"\"Time units.\"\"\"\n return self._units\n\n def advance(self):\n \"\"\"Advance the time stepper by one time step.\"\"\"\n self._time += self.step\n if self._stop is not None and self._time > self._stop:\n raise StopIteration()\n\n\ndef wrap_as_bmi(cls):\n \"\"\"Wrap a landlab class so it exposes a BMI.\n\n Give a landlab component a Basic Model Interface (BMI). Since landlab\n components have an interface that is already in the style of BMI,\n this function adds just a light wrapping to landlab components. There\n are a number of differences that may cause some confusion to\n landlab users.\n\n 1. Because BMI doesn't have a concept of a dual grid, it only\n defines *nodes* (points), *edges* (vectors), and *faces*\n (areas). The dual-graph of landlab is considered as two\n separate grids by BMI.\n\n 2. It is important to note that BMI has only three grid elements\n (*node*, *edge*, and *face*) while landlab has 6. 
The names\n       used by landlab and BMI are also different.\n\n       Thus, a BMI-wrapped landlab component will always have two\n       grids with grid identifiers 0, and 1. Grid 0 will contain\n       the landlab *nodes*, *links*, and *patches* while grid 1 will\n       contain *corners*, *faces*, and *cells*. The mapping from\n       landlab to BMI nomenclature is the following:\n\n       Grid 0:\n       * *node*: *node*\n       * *link*: *edge*\n       * *patch*: *face*\n\n       Grid 1:\n       * *corner*: *node*\n       * *face*: *edge*\n       * *cell*: *face*\n\n    3. In BMI, the *initialize* method requires an input file that is\n       used to create and set up the model for time-stepping. landlab\n       components generally do not have anything like this; instead\n       this task is usually done programmatically. Thus, the\n       input file that is used by the BMI *initialize* method is\n       a standard landlab input file as used by the landlab *create_grid*\n       function.\n\n    Parameters\n    ----------\n    cls : class\n        A landlab class that inherits from `Component`.\n\n    Returns\n    -------\n    class\n        A wrapped class that exposes a BMI.\n\n    Examples\n    --------\n    >>> from landlab.bmi import wrap_as_bmi\n    >>> from landlab.components.flexure import Flexure\n\n    >>> BmiFlexure = wrap_as_bmi(Flexure)\n    >>> flexure = BmiFlexure()\n    >>> sorted(flexure.get_input_var_names())\n    ['boundary_condition_flag', 'lithosphere__overlying_pressure_increment']\n    >>> flexure.get_var_units(\"lithosphere__overlying_pressure_increment\")\n    'Pa'\n\n    >>> config = \\\"\\\"\\\"\n    ... flexure:\n    ...     eet: 10.e+3\n    ...     method: flexure\n    ... clock:\n    ...     start: 0.\n    ...     stop: 10.\n    ...     step: 2.\n    ... grid:\n    ...     RasterModelGrid:\n    ...     - [20, 40]\n    ...     - xy_spacing: [2000., 1000.]\n    ...     - fields:\n    ...         node:\n    ...             lithosphere__overlying_pressure_increment:\n    ...                 constant:\n    ...                     - value: 0.0\n    ... 
\\\"\\\"\\\"\n >>> flexure.initialize(config)\n >>> sorted(flexure.get_output_var_names())\n ['boundary_condition_flag', 'lithosphere_surface__elevation_increment']\n >>> flexure.get_var_grid('lithosphere_surface__elevation_increment')\n 0\n >>> flexure.get_grid_shape(0, np.empty(flexure.get_grid_rank(0), dtype=int))\n array([20, 40])\n >>> dz = np.empty(flexure.get_grid_size(0))\n >>> _ = flexure.get_value('lithosphere_surface__elevation_increment', dz)\n\n >>> np.all(dz == 0.)\n True\n >>> flexure.get_current_time()\n 0.0\n\n >>> sorted(flexure.get_input_var_names())\n ['boundary_condition_flag', 'lithosphere__overlying_pressure_increment']\n >>> load = np.zeros((20, 40), dtype=float)\n >>> load[0, 0] = 1.\n >>> flexure.set_value('lithosphere__overlying_pressure_increment', load)\n >>> flexure.update()\n >>> flexure.get_current_time()\n 2.0\n >>> _ = flexure.get_value('lithosphere_surface__elevation_increment', dz)\n >>> np.all(dz == 0.)\n False\n \"\"\"\n if not issubclass(cls, Component):\n raise TypeError(\"class must inherit from Component\")\n\n class BmiWrapper(Bmi):\n __doc__ = \"\"\"\n Basic Modeling Interface for the {name} component.\n \"\"\".format(\n name=cls.__name__\n ).strip()\n\n _cls = cls\n\n def __init__(self):\n self._base = None\n self._clock = None\n super().__init__()\n\n self._input_var_names = tuple(\n set(self._cls.input_var_names) | {\"boundary_condition_flag\"}\n )\n self._output_var_names = tuple(\n set(self._cls.output_var_names) | {\"boundary_condition_flag\"}\n )\n self._info = self._cls._info.copy()\n\n self._info[\"boundary_condition_flag\"] = {\n \"mapping\": \"node\",\n \"units\": \"\",\n \"dtype\": int,\n \"intent\": None,\n \"doc\": \"boundary condition flag of grid nodes\",\n }\n\n def get_component_name(self):\n \"\"\"Name of the component.\"\"\"\n return self._cls.name\n\n def get_input_var_names(self):\n \"\"\"Names of the input exchange items.\"\"\"\n return self._input_var_names\n\n def get_output_var_names(self):\n \"\"\"Names of the output exchange items.\"\"\"\n return self._output_var_names\n\n def get_current_time(self):\n \"\"\"Current component time.\"\"\"\n return self._clock.time\n\n def get_end_time(self):\n \"\"\"Stop time for the component.\"\"\"\n return self._clock.stop\n\n def get_start_time(self):\n \"\"\"Start time of the component.\"\"\"\n return self._clock.start\n\n def get_time_step(self):\n \"\"\"Component time step.\"\"\"\n return self._clock.step\n\n def get_time_units(self):\n \"\"\"Time units used by the component.\"\"\"\n return self._clock.units\n\n def initialize(self, config_file):\n \"\"\"Initialize the component from a file.\n\n BMI-wrapped Landlab components use input files in YAML format.\n Component-specific parameters are listed at the top level,\n followed by grid and then time information. An example input\n file looks like::\n\n flexure:\n eet: 15.e+3\n clock:\n start: 0\n stop: 100.\n step: 2.\n grid:\n type: raster\n shape: [20, 40]\n spacing: [1000., 2000.]\n\n In this case, a `RasterModelGrid` is created (with the given shape\n and spacing) and passed to the underlying landlab component. The\n `eet=15000.` is also given to the component but as a keyword\n parameter. 
The BMI clock is initialized with the given parameters.\n\n Parameters\n ----------\n config_file : str or file_like\n YAML-formatted input file for the component.\n \"\"\"\n grid = create_grid(config_file, section=\"grid\")\n\n if not grid:\n raise ValueError(\"no grid in config file ({0})\".format(config_file))\n elif isinstance(grid, list):\n raise ValueError(\n \"multiple grids in config file ({0})\".format(config_file)\n )\n\n params = load_params(config_file)\n params.pop(\"grid\")\n clock_params = params.pop(\"clock\")\n self._clock = TimeStepper(**clock_params)\n\n self._base = self._cls(grid, **params.pop(snake_case(cls.__name__), {}))\n self._base.grid.at_node[\n \"boundary_condition_flag\"\n ] = self._base.grid.status_at_node\n\n def update(self):\n \"\"\"Update the component one time step.\"\"\"\n if hasattr(self._base, \"update\"):\n self._base.update()\n elif hasattr(self._base, \"run_one_step\"):\n args = []\n for name, arg in inspect.signature(\n self._base.run_one_step\n ).parameters.items():\n if arg.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD:\n args.append(name)\n\n if len(args) == 0 or \"dt\" not in args:\n self._base.run_one_step()\n else:\n self._base.run_one_step(self._clock.step)\n\n self._clock.advance()\n\n def update_frac(self, frac):\n \"\"\"Update the component a fraction of a time step.\"\"\"\n time_step = self.get_time_step()\n self._clock.step = time_step * frac\n self.update()\n self._clock.step = time_step\n\n def update_until(self, then):\n \"\"\"Update the component until a given time.\"\"\"\n n_steps = (then - self.get_current_time()) / self.get_time_step()\n for _ in range(int(n_steps)):\n self.update()\n self.update_frac(n_steps - int(n_steps))\n\n def finalize(self):\n \"\"\"Clean-up the component.\"\"\"\n pass\n\n def get_var_grid(self, name):\n \"\"\"Get the grid id for a variable.\"\"\"\n at = self._info[name][\"mapping\"]\n return BMI_GRID[at]\n\n def get_var_itemsize(self, name):\n \"\"\"Get the size of elements of a variable.\"\"\"\n at = self._info[name][\"mapping\"]\n return self._base.grid[at][name].itemsize\n\n def get_var_nbytes(self, name):\n \"\"\"Get the total number of bytes used by a variable.\"\"\"\n at = self._info[name][\"mapping\"]\n return self._base.grid[at][name].nbytes\n\n def get_var_type(self, name):\n \"\"\"Get the data type for a variable.\"\"\"\n at = self._info[name][\"mapping\"]\n return str(self._base.grid[at][name].dtype)\n\n def get_var_units(self, name):\n \"\"\"Get the unit used by a variable.\"\"\"\n return self._info[name][\"units\"]\n\n def get_value_ref(self, name):\n \"\"\"Get a reference to a variable's data.\"\"\"\n at = self._info[name][\"mapping\"]\n return self._base.grid[at][name]\n\n def get_value(self, name, dest):\n \"\"\"Get a copy of a variable's data.\"\"\"\n at = self._info[name][\"mapping\"]\n dest[:] = self._base.grid[at][name]\n return dest\n\n def set_value(self, name, values):\n \"\"\"Set the values of a variable.\"\"\"\n if name in self.get_input_var_names():\n if name == \"boundary_condition_flag\":\n self._base.grid.status_at_node = values\n else:\n at = self._info[name][\"mapping\"]\n self._base.grid[at][name][:] = values.flat\n else:\n raise KeyError(\"{name} is not an input item\".format(name=name))\n\n def get_grid_origin(self, grid, origin):\n \"\"\"Get the origin for a structured grid.\"\"\"\n if grid == 0:\n origin[:] = (self._base.grid.node_y[0], self._base.grid.node_x[0])\n elif grid == 1:\n origin[:] = (\n self._base.grid.node_y[0] + self._base.grid.dy * 0.5,\n 
self._base.grid.node_x[0] + self._base.grid.dx * 0.5,\n )\n return origin\n\n def get_grid_rank(self, grid):\n \"\"\"Get the number of dimensions of a grid.\"\"\"\n if grid in (0, 1):\n return 2\n else:\n return 0\n\n def get_grid_shape(self, grid, shape):\n \"\"\"Get the shape of a structured grid.\"\"\"\n if grid == 0:\n shape[:] = (\n self._base.grid.number_of_node_rows,\n self._base.grid.number_of_node_columns,\n )\n elif grid == 1:\n shape[:] = (\n self._base.grid.number_of_node_rows - 1,\n self._base.grid.number_of_node_columns - 1,\n )\n return shape\n\n def get_grid_spacing(self, grid, spacing):\n \"\"\"Get the row and column spacing of a structured grid.\"\"\"\n spacing[:] = (self._base.grid.dy, self._base.grid.dx)\n return spacing\n\n def get_grid_type(self, grid):\n \"\"\"Get the type of grid.\"\"\"\n if grid == 2:\n return \"scalar\"\n elif isinstance(self._base.grid, RasterModelGrid):\n return \"uniform_rectilinear\"\n else:\n return \"unstructured\"\n\n def get_grid_edge_count(self, grid):\n if grid == 0:\n return self._base.grid.number_of_links\n elif grid == 1:\n return self._base.grid.number_of_faces\n\n def get_grid_edge_nodes(self, grid, edge_nodes):\n if grid == 0:\n return self._base.grid.nodes_at_link.reshape((-1,))\n elif grid == 1:\n return self._base.grid.corners_at_face.reshape((-1,))\n\n def get_grid_face_count(self, grid):\n if grid == 0:\n return self._base.grid.number_of_patches\n elif grid == 1:\n return self._base.grid.number_of_cells\n\n def get_grid_face_nodes(self, grid, face_nodes):\n if grid == 0:\n return self._base.grid.nodes_at_patch\n elif grid == 1:\n return self._base.grid.corners_at_cell\n\n def get_grid_node_count(self, grid):\n if grid == 0:\n return self._base.grid.number_of_nodes\n elif grid == 1:\n return self._base.grid.number_of_corners\n\n def get_grid_nodes_per_face(self, grid, nodes_per_face):\n if grid == 0:\n return np.full(self._base.grid.number_of_nodes, 3, dtype=int)\n elif grid == 1:\n if isinstance(self._base.grid, HexModelGrid):\n return np.full(self._base.grid.number_of_faces, 6, dtype=int)\n\n def get_grid_size(self, grid):\n if grid == 0:\n return self._base.grid.number_of_nodes\n elif grid == 1:\n return self._base.grid.number_of_corners\n\n def get_grid_x(self, grid, x):\n if grid == 0:\n return self._base.grid.x_of_node\n elif grid == 1:\n return self._base.grid.x_of_corner\n\n def get_grid_y(self, grid, y):\n if grid == 0:\n return self._base.grid.y_of_node\n elif grid == 1:\n return self._base.grid.y_of_corner\n\n def get_grid_z(self, grid, z):\n raise NotImplementedError(\"get_grid_z\")\n # Only should be implemented for presently non-existant 3D grids.\n\n def get_value_at_indices(self, name, dest, inds):\n at = self._info[name][\"mapping\"]\n dest[:] = self._base.grid[at][name][inds]\n return dest\n\n def get_value_ptr(self, name):\n at = self._info[name][\"mapping\"]\n return self._base.grid[at][name]\n\n def get_var_location(self, name):\n return BMI_LOCATION[self._info[name][\"mapping\"]]\n\n def set_value_at_indices(self, name, inds, src):\n at = self._info[name][\"mapping\"]\n self._base.grid[at][name][inds] = src\n\n BmiWrapper.__name__ = cls.__name__\n return BmiWrapper\n" ]
[ [ "numpy.full" ] ]
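For orientation, the wrapped component above can be driven end to end with nothing but BMI calls. The following is a minimal, hedged sketch that reuses the YAML configuration from the doctest and assumes landlab and its Flexure component are importable; it is illustrative, not part of the original file.

from landlab.bmi import wrap_as_bmi
from landlab.components.flexure import Flexure

config = """
flexure:
    eet: 10.e+3
clock:
    start: 0.
    stop: 10.
    step: 2.
grid:
    RasterModelGrid:
    - [20, 40]
    - xy_spacing: [2000., 1000.]
    - fields:
        node:
            lithosphere__overlying_pressure_increment:
                constant:
                    - value: 0.0
"""

# wrap_as_bmi returns a class; instantiate it and drive it through BMI only.
bmi = wrap_as_bmi(Flexure)()
bmi.initialize(config)
while bmi.get_current_time() < bmi.get_end_time():
    bmi.update()  # runs the component once and advances the clock by `step`
bmi.finalize()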
royhzq/sg-headlines-rnn
[ "cef84f1ffe7f18a37386e9602166082c10e9020b" ]
[ "app.py" ]
[ "from flask import Flask, request, jsonify\nfrom utils import generate_headlines\nimport tensorflow as tf\nimport json\ntf.enable_eager_execution()\n\nimport numpy as np\n\napp = Flask(__name__)\n\nchar2idx = json.loads(open(\"./data/st_char2idx.txt\", 'br').read().decode(encoding='utf-8'))\nidx2char = np.array(json.loads(open(\"./data/st_idx2char.txt\", 'br').read().decode(encoding='utf-8')))\nmodel = tf.keras.models.load_model('./models/st_model_20191004.h5')\n\ndef handle_error(status_code, message):\n    \"\"\" Helper function to handle HTTP errors\n        and return a message to users\n    \"\"\"\n    response = jsonify({\n        'status': status_code,\n        'message': message,\n    })\n    response.status_code = status_code\n    return response\n\n\n@app.route('/headlines', methods=['GET', 'POST'])\ndef headlines_generator():\n\n    if request.method == \"GET\":\n        start_string = \"Singapore \"\n        n_headlines = 5\n\n    if request.method == \"POST\":\n        data = request.json\n        if not data:\n            return handle_error(400, \"POST data not found\")\n\n        start_string = data.get('start_string', 'Singapore ')\n        n_headlines = data.get('n_headlines', 5)  # default 5 headlines\n\n    # Validate inputs\n    if not isinstance(start_string, str):\n        return handle_error(400, \"start_string parameter must be string\")\n\n    if not isinstance(n_headlines, int):\n        return handle_error(400, \"n_headlines parameter must be integer\")\n\n    if len(start_string) > 256:\n        # Limit number of start_string characters\n        return handle_error(400, \"start_string character limit exceeded. Max character limit: 256\")\n\n    # limit to max 10 headlines\n    # coerce negative and 0 to at least 1 headline\n    n_headlines = min(max(n_headlines, 1), 10)\n\n    generated = generate_headlines(\n        model,\n        char2idx,\n        idx2char,\n        start_string=start_string,\n        n_headlines=n_headlines\n    )\n\n    return jsonify(generated)" ]
[ [ "tensorflow.keras.models.load_model", "tensorflow.enable_eager_execution" ] ]
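A hedged client-side sketch for the /headlines endpoint defined above; the host and port are Flask's development-server defaults and the `requests` dependency is an assumption, not part of the app.

import requests  # assumption: requests is installed and the app is running locally

resp = requests.post(
    "http://127.0.0.1:5000/headlines",
    json={"start_string": "Singapore ", "n_headlines": 3},
)
resp.raise_for_status()
print(resp.json())  # a list of generated headlines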
jeevan-revaneppa-hirethanad/audio-to-speech-pipeline
[ "a5bd7f0321834507e4157eb1aea8659cd205bf1c" ]
[ "packages/ekstep_data_pipelines/audio_analysis/speaker_analysis/clustering.py" ]
[ "import math\n\nimport hdbscan\nimport numpy as np\nfrom sklearn.metrics.pairwise import cosine_distances\n\n\nclass Clustering:\n def __init__(self):\n pass\n\n def make_partial_sets(self, embeddings, partial_set_size):\n \"\"\"\n Takes all the embeddings and returns a list of partial sets\n :param embeddings: all embeddings\n :param partial_set_size: partial set embedding size\n :return: partial sets each of len(partial_set_size) from embeddings | list of np.array\n \"\"\"\n partial_sets = []\n embeds_shape = embeddings.shape[0]\n num_partial_sets = math.ceil(embeds_shape / partial_set_size)\n for i in range(num_partial_sets):\n start = i * partial_set_size\n end = (i + 1) * partial_set_size\n if i == num_partial_sets - 1 or embeds_shape < partial_set_size:\n end = embeds_shape + 1\n partial_set = embeddings[start:end][:]\n partial_sets.append(partial_set)\n\n return partial_sets\n\n def get_cluster_embeddings(self, embeddings, labels):\n \"\"\"\n Takes embeddings (np.array) and their corresponding labels (list),\n Returns a dictionary with cluster label as key and respective cluster embeddings as value\n \"\"\"\n cluster_vs_embeds = dict({})\n number_of_clusters = max(labels)\n start = 0\n if -1 in labels:\n start = -1\n for cluster in range(start, number_of_clusters + 1):\n cluster_indices = [index for index, _ in enumerate(labels) if _ == cluster]\n cluster_vs_embeds[cluster] = embeddings[cluster_indices]\n return cluster_vs_embeds\n\n def run_hdbscan(\n self,\n embeddings,\n metric,\n min_cluster_size,\n min_samples,\n cluster_selection_method,\n ):\n # because HDBSCAN expects double dtype\n embeddings = embeddings.astype(\"double\")\n distance_matrix = cosine_distances(embeddings)\n\n clusterer = hdbscan.HDBSCAN(\n metric=metric,\n min_cluster_size=min_cluster_size,\n min_samples=min_samples,\n cluster_selection_method=cluster_selection_method,\n )\n clusterer.fit(distance_matrix)\n\n return clusterer\n\n def run_partial_set_clusterings(\n self,\n embeddings,\n min_cluster_size=15,\n partial_set_size=11122,\n min_samples=15,\n cluster_selection_method=\"eom\",\n ):\n \"\"\"\n Runs HDBSCAN on partial sets of orginial data,\n Returns:\n - mean embeddings: np.ndarray -> mean embeds of each cluster found in each partial set\n - flat_noise_embeds: np.ndarray -> an array containing all the noise points\n found over all partial sets.\n - all_cluster_embeds: list of np.arrays ->\n\n \"\"\"\n\n noise = []\n mean_embeddings = []\n all_cluster_embeds = []\n if min_samples is None:\n min_samples = min_cluster_size\n\n partial_sets = self.make_partial_sets(\n embeddings, partial_set_size=partial_set_size\n )\n for ind, partial_set in enumerate(partial_sets):\n\n clusterer = self.run_hdbscan(\n partial_set,\n metric=\"precomputed\",\n min_cluster_size=min_cluster_size,\n min_samples=min_samples,\n cluster_selection_method=cluster_selection_method,\n )\n\n partial_set_labels = clusterer.labels_\n\n noise_point_embeds = [\n partial_set[index]\n for index, label in enumerate(partial_set_labels)\n if label == -1\n ]\n noise.append(noise_point_embeds)\n\n print(\n \"Points classified as noise in partial set {} :{}\".format(\n ind + 1, len(noise_point_embeds)\n )\n )\n\n # mapping contains cluster label as key and cluster embeddings as\n # values\n mapping = self.get_cluster_embeddings(partial_set, partial_set_labels)\n\n # logic for calculating mean embedding of the cluster if\n # cluster-label != -1 (noise)\n for i in mapping.items():\n if i[0] != -1:\n raw_embed = np.mean(i[1], axis=0)\n 
mean_embeddings.append(raw_embed / np.linalg.norm(raw_embed, 2))\n all_cluster_embeds.append(list(i[1]))\n\n # getting flat noise embeds -> noise contains a list of numpy arrays :\n # len(noise) = num_partial_sets\n flat_noise_embeds = [item for sublist in noise for item in sublist]\n\n return (\n np.array(mean_embeddings),\n np.array(flat_noise_embeds),\n all_cluster_embeds,\n )\n" ]
[ [ "numpy.array", "sklearn.metrics.pairwise.cosine_distances", "numpy.linalg.norm", "numpy.mean" ] ]
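A rough usage sketch of the Clustering class above. The import path is inferred from the file layout shown in this entry, and the random array merely stands in for real speaker embeddings; with uniform random data HDBSCAN may label most points as noise.

import numpy as np

# import path inferred from the file path listed above
from ekstep_data_pipelines.audio_analysis.speaker_analysis.clustering import Clustering

embeddings = np.random.rand(500, 256)  # 500 stand-in 256-dimensional embeddings
clusterer = Clustering()
mean_embeds, noise_embeds, cluster_embeds = clusterer.run_partial_set_clusterings(
    embeddings, min_cluster_size=15, partial_set_size=250
)
print(mean_embeds.shape, noise_embeds.shape, len(cluster_embeds))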
chrisdane/ESMValTool
[ "35e0c2b0dbaa0927a2d677f382b43520633a1a0f" ]
[ "esmvaltool/diag_scripts/ocean/diagnostic_tools.py" ]
[ "\"\"\"\nDiagnostic tools\n================\n\nThis module contains several python tools used elsewhere by the ocean\ndiagnostics package.\n\nThis tool is part of the ocean diagnostic tools package in the ESMValTool.\n\nAuthor: Lee de Mora (PML)\n [email protected]\n\"\"\"\nimport logging\nimport os\nimport sys\nimport iris\n\nimport numpy as np\nimport cftime\nimport matplotlib.pyplot as plt\nimport yaml\n\nfrom esmvaltool.diag_scripts.shared._base import _get_input_data_files\n\n# This part sends debug statements to stdout\nlogger = logging.getLogger(os.path.basename(__file__))\nlogging.getLogger().addHandler(logging.StreamHandler(sys.stdout))\n\n\ndef get_obs_projects():\n \"\"\"\n Return a list of strings with the names of observations projects.\n\n Please keep this list up to date, or replace it with something more\n sensible.\n\n Returns\n ---------\n list\n Returns a list of strings of the various types of observational data.\n \"\"\"\n obs_projects = [\n 'obs4mips',\n ]\n return obs_projects\n\n\ndef folder(name):\n \"\"\"\n Make a directory out of a string or list or strings.\n\n Take a string or a list of strings, convert it to a directory style,\n then make the folder and the string.\n Returns folder string and final character is always os.sep. ('/')\n\n Arguments\n ---------\n name: list or string\n A list of nested directories, or a path to a directory.\n\n Returns\n ---------\n str\n Returns a string of a full (potentially new) path of the directory.\n \"\"\"\n sep = os.sep\n if isinstance(name, list):\n name = os.sep.join(name)\n if name[-1] != sep:\n name = name + sep\n if os.path.exists(name) is False:\n os.makedirs(name)\n logger.info('Making new directory:\\t%s', str(name))\n return name\n\n\ndef get_input_files(cfg, index=''):\n \"\"\"\n Load input configuration file as a Dictionairy.\n\n Get a dictionary with input files from the metadata.yml files.\n This is a wrappper for the _get_input_data_files function from\n diag_scripts.shared._base.\n\n Arguments\n ---------\n cfg: dict\n the opened global config dictionairy, passed by ESMValTool.\n index: int\n the index of the file in the cfg file.\n\n Returns\n ---------\n dict\n A dictionairy of the input files and their linked details.\n \"\"\"\n if isinstance(index, int):\n metadata_file = cfg['input_files'][index]\n with open(metadata_file) as input_file:\n metadata = yaml.safe_load(input_file)\n return metadata\n return _get_input_data_files(cfg)\n\n\ndef bgc_units(cube, name):\n \"\"\"\n Convert the cubes into some friendlier units.\n\n This is because many CMIP standard units are not the standard units\n used by the BGC community (ie, Celsius is prefered over Kelvin, etc.)\n\n Parameters\n ----------\n cube: iris.cube.Cube\n the opened dataset as a cube.\n name: str\n The string describing the data field.\n\n Returns\n -------\n iris.cube.Cube\n the cube with the new units.\n \"\"\"\n new_units = ''\n if name in ['tos', 'thetao']:\n new_units = 'celsius'\n\n if name in ['no3', ]:\n new_units = 'mmol m-3'\n\n if name in ['chl', ]:\n new_units = 'mg m-3'\n\n if name in ['intpp', ]:\n new_units = 'mol m-2 d-1'\n\n if name in ['fgco2', ]:\n new_units = 'g m-2 d-1'\n\n if name in ['spco2', 'dpco2', ]:\n new_units = 'uatm'\n\n if name in ['mfo', 'amoc', 'msftmyz']:\n # sverdrup are 1000000 m3.s-1, but mfo is kg s-1.\n new_units = 'Tg s-1'\n\n if new_units != '':\n logger.info(' '.join(\n [\"Changing units from\",\n str(cube.units), 'to', new_units]))\n cube.convert_units(new_units)\n\n return cube\n\n\ndef 
match_model_to_key(\n    model_type,\n    cfg_dict,\n    input_files_dict,\n):\n    \"\"\"\n    Match up model or observations dataset dictionaries from the config file.\n\n    This function checks that the control_model, exper_model and\n    observational_dataset dictionaries from the recipe are matched with the\n    input file dictionary in the cfg metadata.\n\n    Arguments\n    ---------\n    model_type: str\n        The string model_type to match (only used in debugging).\n    cfg_dict: dict\n        the config dictionary item for this model type, parsed directly from\n        the diagnostics/ scripts, part of the recipe.\n    input_files_dict: dict\n        The input file dictionary, loaded directly from the get_input_files()\n        function, in diagnostics_tools.py.\n\n    Returns\n    ---------\n    dict\n        A dictionary of the input files and their linked details.\n    \"\"\"\n    for input_file, input_dict in input_files_dict.items():\n        intersect_keys = input_dict.keys() & cfg_dict.keys()\n        match = True\n        for key in intersect_keys:\n            if input_dict[key] == cfg_dict[key]:\n                continue\n            match = False\n        if match:\n            return input_file\n    logger.warning(\"Unable to match model: %s\", model_type)\n    return ''\n\n\ndef cube_time_to_float(cube):\n    \"\"\"\n    Convert from time coordinate into decimal time.\n\n    Takes an iris time coordinate and returns a list of floats.\n\n    Parameters\n    ----------\n    cube: iris.cube.Cube\n        the opened dataset as a cube.\n\n    Returns\n    -------\n    list\n        List of floats showing the time coordinate in decimal time.\n\n    \"\"\"\n    times = cube.coord('time')\n    datetime = guess_calendar_datetime(cube)\n\n    dtimes = times.units.num2date(times.points)\n    floattimes = []\n    for dtime in dtimes:\n        # TODO: it would be better to have a calendar dependent value\n        # for daysperyear, as this is not accurate for 360 day calendars.\n        daysperyear = 365.25\n\n        try:\n            dayofyr = dtime.dayofyr\n        except AttributeError:\n            time = datetime(dtime.year, dtime.month, dtime.day)\n            time0 = datetime(dtime.year, 1, 1, 0, 0)\n            dayofyr = (time - time0).days\n\n        # Add the hour and minute contributions once, below, rather than\n        # double-counting the hours in this expression.\n        floattime = dtime.year + dayofyr / daysperyear\n        if dtime.hour:\n            floattime += dtime.hour / (24. * daysperyear)\n        if dtime.minute:\n            floattime += dtime.minute / (24. * 60. 
* daysperyear)\n        floattimes.append(floattime)\n    return floattimes\n\n\ndef guess_calendar_datetime(cube):\n    \"\"\"\n    Guess the cftime.datetime form to create datetimes.\n\n    Parameters\n    ----------\n    cube: iris.cube.Cube\n        the opened dataset as a cube.\n\n    Returns\n    -------\n    cftime.datetime\n        A datetime creator function from cftime, based on the cube's calendar.\n    \"\"\"\n    time_coord = cube.coord('time')\n\n    if time_coord.units.calendar in ['360_day', ]:\n        datetime = cftime.Datetime360Day\n    elif time_coord.units.calendar in ['365_day', 'noleap']:\n        datetime = cftime.DatetimeNoLeap\n    elif time_coord.units.calendar in ['julian', ]:\n        datetime = cftime.DatetimeJulian\n    elif time_coord.units.calendar in ['gregorian', ]:\n        datetime = cftime.DatetimeGregorian\n    elif time_coord.units.calendar in ['proleptic_gregorian', ]:\n        datetime = cftime.DatetimeProlepticGregorian\n    else:\n        logger.warning('Calendar set to Gregorian, instead of %s',\n                       time_coord.units.calendar)\n        datetime = cftime.DatetimeGregorian\n    return datetime\n\n\ndef get_decade(coord, value):\n    \"\"\"\n    Determine the decade.\n\n    Called by iris.coord_categorisation.add_categorised_coord.\n    \"\"\"\n    date = coord.units.num2date(value)\n    return date.year - date.year % 10\n\n\ndef decadal_average(cube):\n    \"\"\"\n    Calculate the decadal_average.\n\n    Parameters\n    ----------\n    cube: iris.cube.Cube\n        The input cube\n\n    Returns\n    -------\n    iris.cube.Cube\n        The decadally-averaged cube.\n    \"\"\"\n    iris.coord_categorisation.add_categorised_coord(cube, 'decade', 'time',\n                                                    get_decade)\n    return cube.aggregated_by('decade', iris.analysis.MEAN)\n\n\ndef load_thresholds(cfg, metadata):\n    \"\"\"\n    Load the thresholds for contour plots from the config files.\n\n    Parameters\n    ----------\n    cfg: dict\n        the opened global config dictionary, passed by ESMValTool.\n    metadata: dict\n        the metadata dictionary\n\n    Returns\n    -------\n    list:\n        List of thresholds\n    \"\"\"\n    thresholds = set()\n\n    if 'threshold' in cfg:\n        # set.add, not set.update: a single float is not iterable.\n        thresholds.add(float(cfg['threshold']))\n\n    if 'threshold' in metadata:\n        thresholds.add(float(metadata['threshold']))\n\n    if 'thresholds' in cfg:\n        thresholds.update([float(thres) for thres in cfg['thresholds']])\n\n    if 'thresholds' in metadata:\n        thresholds.update([float(thres) for thres in metadata['thresholds']])\n\n    return sorted(list(thresholds))\n\n\ndef get_colour_from_cmap(number, total, cmap='jet'):\n    \"\"\"\n    Get a colour `number` of `total` from a cmap.\n\n    This function is used when several lines are created evenly along a\n    colour map.\n\n    Parameters\n    ----------\n    number: int, float\n        The index of the colour to fetch.\n    total: int\n        The total number of colours.\n    cmap: string, plt.cm\n        A colour map, either by name (string) or from matplotlib\n    \"\"\"\n    if isinstance(cmap, str):\n        cmap = plt.get_cmap(cmap)\n\n    if number > total:\n        raise ValueError('The number cannot be larger than the total length '\n                         'of the list ie: {} > {}'.format(number, total))\n\n    if total > 1:\n        colour = cmap(float(number) / float(total - 1.))\n    else:\n        colour = cmap(0.)\n    return colour\n\n\ndef add_legend_outside_right(plot_details, ax1, column_width=0.1, loc='right'):\n    \"\"\"\n    Add a legend outside the plot, to the right.\n\n    plot_details is a 2 level dict,\n    where the first level is some key (which is hidden)\n    and the 2nd level contains the keys:\n        'c': color\n        'lw': line width\n        'label': label for the legend.\n    ax1 is the axis where the plot was drawn.\n\n    Parameters\n    ----------\n    plot_details: dict\n        A dictionary of the plot details (color, linestyle, linewidth, label)\n    ax1: matplotlib.pyplot.axes\n        The pyplot axes to add the legend to.\n    column_width: float\n        
The width of the legend column. This is used to adjust for longer words\n        in the legends.\n    loc: string\n        Location of the legend. Options are \"right\" and \"below\".\n\n    \"\"\"\n    # ####\n    # Create dummy axes:\n    legend_size = len(plot_details) + 1\n    box = ax1.get_position()\n    if loc.lower() == 'right':\n        nrows = 25\n        ncols = int(legend_size / nrows) + 1\n        ax1.set_position([\n            box.x0, box.y0, box.width * (1. - column_width * ncols), box.height\n        ])\n\n    if loc.lower() == 'below':\n        ncols = 4\n        nrows = int(legend_size / ncols) + 1\n        ax1.set_position([\n            box.x0, box.y0 + (nrows * column_width), box.width,\n            box.height - (nrows * column_width)\n        ])\n\n    # Add empty plots to the dummy axis.\n    for index in sorted(plot_details):\n        colour = plot_details[index]['c']\n\n        linewidth = plot_details[index].get('lw', 1)\n\n        linestyle = plot_details[index].get('ls', '-')\n\n        label = plot_details[index].get('label', str(index))\n\n        plt.plot([], [], c=colour, lw=linewidth, ls=linestyle, label=label)\n\n    if loc.lower() == 'right':\n        legd = ax1.legend(\n            loc='center left',\n            ncol=ncols,\n            prop={'size': 10},\n            bbox_to_anchor=(1., 0.5))\n    if loc.lower() == 'below':\n        legd = ax1.legend(\n            loc='upper center',\n            ncol=ncols,\n            prop={'size': 10},\n            bbox_to_anchor=(0.5, -2. * column_width))\n    legd.draw_frame(False)\n    legd.get_frame().set_alpha(0.)\n\n\ndef get_image_format(cfg, default='png'):\n    \"\"\"\n    Load the image format from the global config file.\n\n    Current tested options are svg, png.\n\n    The cfg is the opened global config.\n    The default format is used if no specific format is requested.\n    The default is set in the user config.yml\n    Individual diagnostics can set their own format which will\n    supersede the main config.yml.\n\n    Arguments\n    ---------\n    cfg: dict\n        the opened global config dictionary, passed by ESMValTool.\n\n    Returns\n    ---------\n    str\n        The image format extension.\n    \"\"\"\n    image_extention = default\n\n    # Load format from config.yml and set it as default\n    if 'output_file_type' in cfg:\n        image_extention = cfg['output_file_type']\n\n    # A diagnostic-specific format takes precedence over the default\n    if 'image_format' in cfg:\n        image_extention = cfg['image_format']\n\n    matplotlib_image_formats = plt.gcf().canvas.get_supported_filetypes()\n    if image_extention not in matplotlib_image_formats:\n        logger.warning(' '.join([\n            'Image format ', image_extention, 'not in matplot:',\n            ', '.join(matplotlib_image_formats)\n        ]))\n\n    image_extention = '.' + image_extention\n    image_extention = image_extention.replace('..', '.')\n    return image_extention\n\n\ndef get_image_path(\n        cfg,\n        metadata,\n        prefix='diag',\n        suffix='image',\n        metadata_id_list='default',\n):\n    \"\"\"\n    Produce a path to the final location of the image.\n\n    The cfg is the opened global config,\n    metadata is the metadata dictionary (for the individual dataset file)\n\n    Arguments\n    ---------\n    cfg: dict\n        the opened global config dictionary, passed by ESMValTool.\n    metadata: dict\n        The metadata dictionary for a specific model.\n    prefix: str\n        A string to prepend to the image basename.\n    suffix: str\n        A string to append to the image basename\n    metadata_id_list: list\n        A list of strings to add to the file path. 
It loads these from the cfg.\n\n Returns\n ---------\n str\n The ultimate image path\n\n \"\"\"\n #####\n if metadata_id_list == 'default':\n metadata_id_list = [\n 'project',\n 'dataset',\n 'mip',\n 'exp',\n 'ensemble',\n 'field',\n 'short_name',\n 'preprocessor',\n 'diagnostic',\n 'start_year',\n 'end_year',\n ]\n\n path = folder(cfg['plot_dir'])\n if prefix:\n path += prefix + '_'\n # Check that the keys are in the dict.\n intersection = [va for va in metadata_id_list if va in metadata]\n path += '_'.join([str(metadata[b]) for b in intersection])\n if suffix:\n path += '_' + suffix\n\n image_extention = get_image_format(cfg)\n\n if path.find(image_extention) == -1:\n path += image_extention\n\n path = path.replace(' ', '_')\n\n logger.info(\"Image path will be: %s\", path)\n return path\n\n\ndef make_cube_layer_dict(cube):\n \"\"\"\n Take a cube and return a dictionairy layer:cube\n\n Each item in the dict is a layer with a separate cube for each layer.\n ie: cubes[depth] = cube from specific layer\n\n Cubes with no depth component are returned as dict, where the dict key\n is a blank empty string, and the value is the cube.\n\n Parameters\n ----------\n cube: iris.cube.Cube\n the opened dataset as a cube.\n\n Returns\n ---------\n dict\n A dictionairy of layer name : layer cube.\n \"\"\"\n #####\n # Check layering:\n coords = cube.coords()\n layers = []\n for coord in coords:\n if coord.standard_name in ['depth', 'region']:\n layers.append(coord)\n\n cubes = {}\n if layers == []:\n cubes[''] = cube\n return cubes\n\n # if len(layers) > 1:\n # # This field has a strange number of layer dimensions.\n # # depth and regions?\n # print(cube)\n # raise ValueError('This cube has both `depth` & `region` coordinates:'\n # ' %s', layers)\n\n # iris stores coords as a list with one entry:\n layer_dim = layers[0]\n if len(layer_dim.points) in [\n 1,\n ]:\n cubes[''] = cube\n return cubes\n\n if layer_dim.standard_name == 'depth':\n coord_dim = cube.coord_dims('depth')[0]\n for layer_index, layer in enumerate(layer_dim.points):\n slices = [slice(None) for index in cube.shape]\n slices[coord_dim] = layer_index\n cubes[layer] = cube[tuple(slices)]\n\n if layer_dim.standard_name == 'region':\n coord_dim = cube.coord_dims('region')[0]\n for layer_index, layer in enumerate(layer_dim.points):\n slices = [slice(None) for index in cube.shape]\n slices[coord_dim] = layer_index\n layer = layer.replace('_', ' ').title()\n cubes[layer] = cube[tuple(slices)]\n return cubes\n\n\ndef get_cube_range(cubes):\n \"\"\"\n Determinue the minimum and maximum values of a list of cubes.\n\n Parameters\n ----------\n cubes: list of iris.cube.Cube\n A list of cubes.\n\n Returns\n ----------\n list:\n A list of two values: the overall minumum and maximum values of the\n list of cubes.\n\n \"\"\"\n mins = []\n maxs = []\n for cube in cubes:\n mins.append(cube.data.min())\n maxs.append(cube.data.max())\n return [np.min(mins), np.max(maxs), ]\n\n\ndef get_cube_range_diff(cubes):\n \"\"\"\n Determinue the largest deviation from zero in an list of cubes.\n\n Parameters\n ----------\n cubes: list of iris.cube.Cube\n A list of cubes.\n\n Returns\n ----------\n list:\n A list of two values: the maximum deviation from zero and its opposite.\n \"\"\"\n ranges = []\n for cube in cubes:\n ranges.append(np.abs(cube.data.min()))\n ranges.append(np.abs(cube.data.max()))\n return [-1. 
* np.max(ranges), np.max(ranges)]\n\n\ndef get_array_range(arrays):\n \"\"\"\n Determinue the minimum and maximum values of a list of arrays..\n\n Parameters\n ----------\n arrays: list of numpy.array\n A list of numpy.array.\n\n Returns\n ----------\n list:\n A list of two values, the overall minumum and maximum values of the\n list of cubes.\n \"\"\"\n mins = []\n maxs = []\n for arr in arrays:\n mins.append(arr.min())\n maxs.append(arr.max())\n logger.info('get_array_range: %s, %s', np.min(mins), np.max(maxs))\n return [np.min(mins), np.max(maxs), ]\n" ]
[ [ "numpy.max", "matplotlib.pyplot.get_cmap", "matplotlib.pyplot.plot", "numpy.min", "matplotlib.pyplot.gcf" ] ]
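As a small illustration of one helper from the module above, get_colour_from_cmap spaces line colours evenly along a colour map. A hedged sketch (assumes ESMValTool and matplotlib are installed; the output file name is arbitrary):

import matplotlib.pyplot as plt

from esmvaltool.diag_scripts.ocean.diagnostic_tools import get_colour_from_cmap

# Five lines, coloured evenly along the 'viridis' colour map.
for index in range(5):
    colour = get_colour_from_cmap(index, 5, cmap='viridis')
    plt.plot([0., 1.], [index, index + 1.], c=colour)
plt.savefig('colour_from_cmap_example.png')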
potipot/ultra_light_face
[ "4b8489e2e207cf490549f94de1e35212e846f115" ]
[ "train.py" ]
[ "\"\"\"\nThis code is the main training code.\n\"\"\"\nimport argparse\nimport itertools\nimport logging\nimport os\nimport sys\n\nimport torch\nfrom torch import nn\nfrom torch.optim.lr_scheduler import CosineAnnealingLR, MultiStepLR\nfrom torch.utils.data import DataLoader, ConcatDataset\n\nfrom vision.datasets.voc_dataset import VOCDataset\nfrom vision.nn.multibox_loss import MultiboxLoss\nfrom vision.ssd.config.fd_config import define_img_size\nfrom vision.utils.misc import str2bool, Timer, freeze_net_layers, store_labels\n\nparser = argparse.ArgumentParser(\n description='train With Pytorch')\n\nparser.add_argument(\"--dataset_type\", default=\"voc\", type=str,\n help='Specify dataset type. Currently support voc.')\n\nparser.add_argument('--datasets', nargs='+', help='Dataset directory path')\nparser.add_argument('--validation_dataset', default=None, help='Dataset directory path')\nparser.add_argument('--balance_data', action='store_true',\n help=\"Balance training data by down-sampling more frequent labels.\")\n\nparser.add_argument('--net', default=\"RFB\",\n help=\"The network architecture ,optional(RFB , slim)\")\nparser.add_argument('--freeze_base_net', action='store_true',\n help=\"Freeze base net layers.\")\nparser.add_argument('--freeze_net', action='store_true',\n help=\"Freeze all the layers except the prediction head.\")\n\n# Params for SGD\nparser.add_argument('--lr', '--learning-rate', default=1e-2, type=float,\n help='initial learning rate')\nparser.add_argument('--momentum', default=0.9, type=float,\n help='Momentum value for optim')\nparser.add_argument('--weight_decay', default=5e-4, type=float,\n help='Weight decay for SGD')\nparser.add_argument('--gamma', default=0.1, type=float,\n help='Gamma update for SGD')\nparser.add_argument('--base_net_lr', default=None, type=float,\n help='initial learning rate for base net.')\nparser.add_argument('--extra_layers_lr', default=None, type=float,\n help='initial learning rate for the layers not in base net and prediction heads.')\n\n# Params for loading pretrained basenet or checkpoints.\nparser.add_argument('--base_net',\n help='Pretrained base model')\nparser.add_argument('--pretrained_ssd', help='Pre-trained base model')\nparser.add_argument('--resume', default=None, type=str,\n help='Checkpoint state_dict file to resume training from')\n\n# Scheduler\nparser.add_argument('--scheduler', default=\"multi-step\", type=str,\n help=\"Scheduler for SGD. 
It can be one of multi-step or cosine\")\n\n# Params for Multi-step Scheduler\nparser.add_argument('--milestones', default=\"80,100\", type=str,\n                    help=\"milestones for MultiStepLR\")\n\n# Params for Cosine Annealing\nparser.add_argument('--t_max', default=120, type=float,\n                    help='T_max value for Cosine Annealing Scheduler.')\n\n# Train params\nparser.add_argument('--batch_size', default=24, type=int,\n                    help='Batch size for training')\nparser.add_argument('--num_epochs', default=200, type=int,\n                    help='the number of training epochs')\nparser.add_argument('--num_workers', default=4, type=int,\n                    help='Number of workers used in dataloading')\nparser.add_argument('--validation_epochs', default=5, type=int,\n                    help='run validation every this many epochs')\nparser.add_argument('--debug_steps', default=100, type=int,\n                    help='Set the debug log output frequency.')\nparser.add_argument('--use_cuda', default=True, type=str2bool,\n                    help='Use CUDA to train model')\n\nparser.add_argument('--checkpoint_folder', default='models/',\n                    help='Directory for saving checkpoint models')\nparser.add_argument('--log_dir', default='./models/Ultra-Light(1MB)_&_Fast_Face_Detector/logs',\n                    help='log dir')\nparser.add_argument('--cuda_index', default=\"0\", type=str,\n                    help='Choose cuda index. If you have 4 GPUs, you can set it like 0,1,2,3')\nparser.add_argument('--power', default=2, type=int,\n                    help='poly lr pow')\nparser.add_argument('--overlap_threshold', default=0.35, type=float,\n                    help='overlap threshold for matching priors')\nparser.add_argument('--optimizer_type', default=\"SGD\", type=str,\n                    help='optimizer type: SGD or Adam')\nparser.add_argument('--input_size', default=320, type=int,\n                    help='define network input size, optional values: 128/160/320/480/640/1280')\n\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO,\n                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')\nargs = parser.parse_args()\n\ninput_img_size = args.input_size  # define input size, optional values: 128/160/320/480/640/1280\nlogging.info(\"input size: {}\".format(input_img_size))\ndefine_img_size(input_img_size)  # must put define_img_size() before 'import fd_config'\n\nfrom vision.ssd.config import fd_config\nfrom vision.ssd.data_preprocessing import TrainAugmentation, TestTransform\nfrom vision.ssd.mb_tiny_RFB_fd import create_Mb_Tiny_RFB_fd\nfrom vision.ssd.mb_tiny_fd import create_mb_tiny_fd\nfrom vision.ssd.ssd import MatchPrior\n\nDEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() and args.use_cuda else \"cpu\")\n\nif args.use_cuda and torch.cuda.is_available():\n    torch.backends.cudnn.benchmark = True\n    logging.info(\"Use Cuda.\")\n\n\ndef lr_poly(base_lr, iter):\n    return base_lr * ((1 - float(iter) / args.num_epochs) ** (args.power))\n\n\ndef adjust_learning_rate(optimizer, i_iter):\n    \"\"\"Set the learning rate following a polynomial decay of the initial LR.\"\"\"\n    lr = lr_poly(args.lr, i_iter)\n    optimizer.param_groups[0]['lr'] = lr\n\n\ndef train(loader, net, criterion, optimizer, device, debug_steps=100, epoch=-1):\n    net.train(True)\n    running_loss = 0.0\n    running_regression_loss = 0.0\n    running_classification_loss = 0.0\n    for i, data in enumerate(loader):\n        print(\".\", end=\"\", flush=True)\n        images, boxes, labels = data\n        images = images.to(device)\n        boxes = boxes.to(device)\n        labels = labels.to(device)\n\n        optimizer.zero_grad()\n        confidence, locations = net(images)\n        regression_loss, classification_loss = criterion(confidence, locations, labels, boxes)  # TODO CHANGE BOXES\n        loss = regression_loss + classification_loss\n        loss.backward()\n        
optimizer.step()\n\n running_loss += loss.item()\n running_regression_loss += regression_loss.item()\n running_classification_loss += classification_loss.item()\n if i and i % debug_steps == 0:\n print(\".\", flush=True)\n avg_loss = running_loss / debug_steps\n avg_reg_loss = running_regression_loss / debug_steps\n avg_clf_loss = running_classification_loss / debug_steps\n logging.info(\n f\"Epoch: {epoch}, Step: {i}, \" +\n f\"Average Loss: {avg_loss:.4f}, \" +\n f\"Average Regression Loss {avg_reg_loss:.4f}, \" +\n f\"Average Classification Loss: {avg_clf_loss:.4f}\"\n )\n\n running_loss = 0.0\n running_regression_loss = 0.0\n running_classification_loss = 0.0\n\n\ndef test(loader, net, criterion, device):\n net.eval()\n running_loss = 0.0\n running_regression_loss = 0.0\n running_classification_loss = 0.0\n num = 0\n for _, data in enumerate(loader):\n images, boxes, labels = data\n images = images.to(device)\n boxes = boxes.to(device)\n labels = labels.to(device)\n num += 1\n\n with torch.no_grad():\n confidence, locations = net(images)\n regression_loss, classification_loss = criterion(confidence, locations, labels, boxes)\n loss = regression_loss + classification_loss\n\n running_loss += loss.item()\n running_regression_loss += regression_loss.item()\n running_classification_loss += classification_loss.item()\n return running_loss / num, running_regression_loss / num, running_classification_loss / num\n\n\nif __name__ == '__main__':\n timer = Timer()\n\n logging.info(args)\n if args.net == 'slim':\n create_net = create_mb_tiny_fd\n config = fd_config\n elif args.net == 'RFB':\n create_net = create_Mb_Tiny_RFB_fd\n config = fd_config\n else:\n logging.fatal(\"The net type is wrong.\")\n parser.print_help(sys.stderr)\n sys.exit(1)\n\n train_transform = TrainAugmentation(config.image_size, config.image_mean, config.image_std)\n target_transform = MatchPrior(config.priors, config.center_variance,\n config.size_variance, args.overlap_threshold)\n\n test_transform = TestTransform(config.image_size, config.image_mean_test, config.image_std)\n\n if not os.path.exists(args.checkpoint_folder):\n os.makedirs(args.checkpoint_folder)\n logging.info(\"Prepare training datasets.\")\n datasets = []\n for dataset_path in args.datasets:\n if args.dataset_type == 'voc':\n dataset = VOCDataset(dataset_path, transform=train_transform,\n target_transform=target_transform)\n label_file = os.path.join(args.checkpoint_folder, \"voc-model-labels.txt\")\n store_labels(label_file, dataset.class_names)\n num_classes = len(dataset.class_names)\n\n else:\n raise ValueError(f\"Dataset tpye {args.dataset_type} is not supported.\")\n datasets.append(dataset)\n logging.info(f\"Stored labels into file {label_file}.\")\n train_dataset = ConcatDataset(datasets)\n logging.info(\"Train dataset size: {}\".format(len(train_dataset)))\n train_loader = DataLoader(train_dataset, args.batch_size,\n num_workers=args.num_workers,\n shuffle=True)\n logging.info(\"Prepare Validation datasets.\")\n validation_dataset = args.validation_dataset or dataset_path\n if args.dataset_type == \"voc\":\n val_dataset = VOCDataset(validation_dataset, transform=test_transform,\n target_transform=target_transform, is_test=True)\n logging.info(\"validation dataset size: {}\".format(len(val_dataset)))\n\n val_loader = DataLoader(val_dataset, args.batch_size,\n num_workers=args.num_workers,\n shuffle=False)\n logging.info(\"Build network.\")\n net = create_net(num_classes)\n\n timer.start(\"Load Model\")\n if args.resume:\n logging.info(f\"Resume 
from the model {args.resume}\")\n net.load(args.resume)\n elif args.base_net:\n logging.info(f\"Init from base net {args.base_net}\")\n net.init_from_base_net(args.base_net)\n elif args.pretrained_ssd:\n logging.info(f\"Init from pretrained ssd {args.pretrained_ssd}\")\n net.init_from_pretrained_ssd(args.pretrained_ssd)\n logging.info(f'Took {timer.end(\"Load Model\"):.2f} seconds to load the model.')\n\n # add multigpu_train\n if torch.cuda.device_count() >= 1:\n cuda_index_list = [int(v.strip()) for v in args.cuda_index.split(\",\")]\n net = nn.DataParallel(net, device_ids=cuda_index_list)\n logging.info(\"use gpu :{}\".format(cuda_index_list))\n\n min_loss = -10000.0\n last_epoch = -1\n\n base_net_lr = args.base_net_lr if args.base_net_lr is not None else args.lr\n extra_layers_lr = args.extra_layers_lr if args.extra_layers_lr is not None else args.lr\n if args.freeze_base_net:\n logging.info(\"Freeze base net.\")\n freeze_net_layers(net.base_net)\n params = itertools.chain(net.source_layer_add_ons.parameters(), net.extras.parameters(),\n net.regression_headers.parameters(), net.classification_headers.parameters())\n params = [\n {'params': itertools.chain(\n net.source_layer_add_ons.parameters(),\n net.extras.parameters()\n ), 'lr': extra_layers_lr},\n {'params': itertools.chain(\n net.regression_headers.parameters(),\n net.classification_headers.parameters()\n )}\n ]\n elif args.freeze_net:\n freeze_net_layers(net.base_net)\n freeze_net_layers(net.source_layer_add_ons)\n freeze_net_layers(net.extras)\n params = itertools.chain(net.regression_headers.parameters(), net.classification_headers.parameters())\n logging.info(\"Freeze all the layers except prediction heads.\")\n else:\n params = [\n {'params': net.module.base_net.parameters(), 'lr': base_net_lr},\n {'params': itertools.chain(\n net.module.source_layer_add_ons.parameters(),\n net.module.extras.parameters()\n ), 'lr': extra_layers_lr},\n {'params': itertools.chain(\n net.module.regression_headers.parameters(),\n net.module.classification_headers.parameters()\n )}\n ]\n\n\n\n net.to(DEVICE)\n\n criterion = MultiboxLoss(config.priors, neg_pos_ratio=3,\n center_variance=0.1, size_variance=0.2, device=DEVICE)\n if args.optimizer_type == \"SGD\":\n optimizer = torch.optim.SGD(params, lr=args.lr, momentum=args.momentum,\n weight_decay=args.weight_decay)\n elif args.optimizer_type == \"Adam\":\n optimizer = torch.optim.Adam(params, lr=args.lr)\n logging.info(\"use Adam optimizer\")\n else:\n logging.fatal(f\"Unsupported optimizer: {args.scheduler}.\")\n parser.print_help(sys.stderr)\n sys.exit(1)\n logging.info(f\"Learning rate: {args.lr}, Base net learning rate: {base_net_lr}, \"\n + f\"Extra Layers learning rate: {extra_layers_lr}.\")\n if args.optimizer_type != \"Adam\":\n if args.scheduler == 'multi-step':\n logging.info(\"Uses MultiStepLR scheduler.\")\n milestones = [int(v.strip()) for v in args.milestones.split(\",\")]\n scheduler = MultiStepLR(optimizer, milestones=milestones,\n gamma=0.1, last_epoch=last_epoch)\n elif args.scheduler == 'cosine':\n logging.info(\"Uses CosineAnnealingLR scheduler.\")\n scheduler = CosineAnnealingLR(optimizer, args.t_max, last_epoch=last_epoch)\n elif args.scheduler == 'poly':\n logging.info(\"Uses PolyLR scheduler.\")\n else:\n logging.fatal(f\"Unsupported Scheduler: {args.scheduler}.\")\n parser.print_help(sys.stderr)\n sys.exit(1)\n\n logging.info(f\"Start training from epoch {last_epoch + 1}.\")\n for epoch in range(last_epoch + 1, args.num_epochs):\n if args.optimizer_type != \"Adam\":\n if 
args.scheduler != \"poly\":\n if epoch != 0:\n scheduler.step()\n train(train_loader, net, criterion, optimizer,\n device=DEVICE, debug_steps=args.debug_steps, epoch=epoch)\n if args.scheduler == \"poly\":\n adjust_learning_rate(optimizer, epoch)\n logging.info(\"lr rate :{}\".format(optimizer.param_groups[0]['lr']))\n\n if epoch % args.validation_epochs == 0 or epoch == args.num_epochs - 1:\n logging.info(\"lr rate :{}\".format(optimizer.param_groups[0]['lr']))\n val_loss, val_regression_loss, val_classification_loss = test(val_loader, net, criterion, DEVICE)\n logging.info(\n f\"Epoch: {epoch}, \" +\n f\"Validation Loss: {val_loss:.4f}, \" +\n f\"Validation Regression Loss {val_regression_loss:.4f}, \" +\n f\"Validation Classification Loss: {val_classification_loss:.4f}\"\n )\n model_path = os.path.join(args.checkpoint_folder, f\"{args.net}-Epoch-{epoch}-Loss-{val_loss}.pth\")\n net.module.save(model_path)\n logging.info(f\"Saved model {model_path}\")\n" ]
[ [ "torch.utils.data.ConcatDataset", "torch.optim.lr_scheduler.CosineAnnealingLR", "torch.no_grad", "torch.optim.SGD", "torch.optim.Adam", "torch.cuda.device_count", "torch.optim.lr_scheduler.MultiStepLR", "torch.cuda.is_available", "torch.utils.data.DataLoader", "torch.nn.DataParallel" ] ]
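For clarity, the 'poly' scheduler branch above decays the learning rate as base_lr * (1 - epoch / num_epochs) ** power. A standalone sketch of that schedule using the script's own defaults (lr=1e-2, num_epochs=200, power=2):

def lr_poly(base_lr, iteration, num_epochs=200, power=2):
    # mirrors the module-level lr_poly above, with the argparse values made explicit
    return base_lr * ((1 - float(iteration) / num_epochs) ** power)

for epoch in (0, 50, 100, 199):
    print(epoch, lr_poly(1e-2, epoch))  # 1e-2, ~5.6e-3, 2.5e-3, ~2.5e-7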
daniel-izmaylov/DialogBERT
[ "24197b55cb9b2647bf336294f3880ec92bf6eedf" ]
[ "main.py" ]
[ "# DialogBERT\n# Copyright 2021-present NAVER Corp.\n# BSD 3-clause\n\n# coding=utf-8\nimport argparse\nimport glob\nimport logging\nimport os\nimport pickle\nimport random\nimport re\nimport shutil\nfrom tqdm import tqdm\nimport numpy as np\nimport torch\n\nimport models, solvers, data_loader\n\nlogger = logging.getLogger(__name__)\n\n\ndef set_seed(args):\n    random.seed(args.seed)\n    np.random.seed(args.seed)\n    torch.manual_seed(args.seed)\n    if args.n_gpu > 0:\n        torch.cuda.manual_seed_all(args.seed)\n\n\ndef main():\n    parser = argparse.ArgumentParser()\n\n    ## Required parameters\n    parser.add_argument(\"--data_path\", default='./data/', type=str, help=\"The input data path.\")\n    parser.add_argument(\"--dataset\", default='dailydial', type=str, help=\"dataset name\")\n    ## Other parameters\n    parser.add_argument(\"--model\", default=\"DialogBERT\", type=str, help=\"The model architecture to be fine-tuned.\")\n    parser.add_argument(\"--model_size\", default=\"tiny\", type=str, help=\"tiny, small, base, large\")\n    parser.add_argument(\"--language\", default=\"english\", type=str, help=\"language, english or chinese\")\n\n    parser.add_argument('--do_test', action='store_true', help=\"whether test or train\")\n    parser.add_argument(\"--reload_from\", default=-1, type=int, help=\"The global iteration of the optimal checkpoint.\")\n\n    parser.add_argument(\"--per_gpu_train_batch_size\", default=32, type=int, help=\"Batch size per GPU/CPU for training.\")\n    parser.add_argument(\"--per_gpu_eval_batch_size\", default=1, type=int, help=\"Batch size per GPU/CPU for evaluation.\")\n    parser.add_argument('--grad_accum_steps', type=int, default=2,\n                        help=\"Number of update steps to accumulate before performing a backward/update pass.\")\n    parser.add_argument(\"--learning_rate\", default=5e-5, type=float, help=\"The initial learning rate for Adam.\")\n    parser.add_argument(\"--weight_decay\", default=0.01, type=float, help=\"Weight decay if we apply some.\")\n    parser.add_argument(\"--adam_epsilon\", default=1e-8, type=float, help=\"Epsilon for Adam optimizer.\")\n    parser.add_argument(\"--max_grad_norm\", default=1.0, type=float, help=\"Max gradient norm.\")\n    parser.add_argument(\"--n_epochs\", default=1.0, type=float, help=\"Total number of training epochs to perform.\")\n    parser.add_argument(\"--max_steps\", default=200000, type=int, help=\"If > 0: set total number of training steps to perform. 
Override n_epochs.\")\n parser.add_argument(\"--warmup_steps\", default=5000, type=int, help=\"Linear warmup over warmup_steps.\")\n\n parser.add_argument('--logging_steps', type=int, default=200, help=\"Log every X updates steps.\")\n parser.add_argument('--validating_steps', type=int, default = 20, help= \"Validate every X updates steps.\")\n parser.add_argument('--save_steps', type=int, default=5000, help=\"Save checkpoint every X updates steps.\")\n parser.add_argument('--save_total_limit', type=int, default=100,\n help='Limit the total amount of checkpoints, delete the older checkpoints in the output_dir, does not delete by default')\n parser.add_argument('--seed', type=int, default=42, help=\"random seed for initialization\")\n\n parser.add_argument('--fp16', action='store_true', help=\"Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit\")\n parser.add_argument('--fp16_opt_level', type=str, default='O1',\n help=\"For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3'].\"\n \"See details at https://nvidia.github.io/apex/amp.html\")\n parser.add_argument(\"--local_rank\", type=int, default=-1, help=\"For distributed training: local_rank\")\n parser.add_argument('--server_ip', type=str, default='', help=\"For distant debugging.\")\n parser.add_argument('--server_port', type=str, default='', help=\"For distant debugging.\")\n \n args = parser.parse_args()\n \n args.data_path = os.path.join(args.data_path, args.dataset)\n\n # Setup distant debugging if needed\n if args.server_ip and args.server_port:\n # Distant debugging - see https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script\n import ptvsd\n print(\"Waiting for debugger attach\")\n ptvsd.enable_attach(address=(args.server_ip, args.server_port), redirect_output=True)\n ptvsd.wait_for_attach()\n\n # Setup CUDA, GPU & distributed training\n if args.local_rank == -1:\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n args.n_gpu = torch.cuda.device_count()\n print(f\"number of gpus: {args.n_gpu}\")\n else: # Initializes the distributed backend which will take care of sychronizing nodes/GPUs\n torch.cuda.set_device(args.local_rank)\n device = torch.device(\"cuda\", args.local_rank)\n torch.distributed.init_process_group(backend='nccl')\n args.n_gpu = 1\n args.device = device\n\n # Setup logging\n logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s', datefmt = '%m/%d/%Y %H:%M:%S',\n level = logging.INFO if args.local_rank in [-1, 0] else logging.WARN)\n logger.warning(\"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s\",\n args.local_rank, device, args.n_gpu, bool(args.local_rank != -1), args.fp16)\n\n set_seed(args)\n\n solver = getattr(solvers, args.model+'Solver')(args)\n\n logger.info(\"Training/evaluation parameters %s\", args)\n\n # Training\n if not args.do_test:\n global_step, tr_loss = solver.train(args)\n logger.info(\" global_step = %s, average loss = %s\", global_step, tr_loss)\n else:\n # Evaluation\n if args.local_rank in [-1, 0]:\n results = solver.evaluate(args)\n print(results)\n \n\nif __name__ == \"__main__\":\n main()\n" ]
[ [ "torch.device", "torch.cuda.manual_seed_all", "numpy.random.seed", "torch.distributed.init_process_group", "torch.cuda.device_count", "torch.manual_seed", "torch.cuda.set_device", "torch.cuda.is_available" ] ]
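A hedged sketch of launching the trainer programmatically instead of from a shell, by faking sys.argv with flags that the argparse block above actually defines; the data path is a placeholder and a full training run starts when main() is called.

import sys

import main  # the module above

sys.argv = [
    "main.py",
    "--data_path", "./data/",
    "--dataset", "dailydial",
    "--model", "DialogBERT",
    "--model_size", "tiny",
]
main.main()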
simonbatzner/e3nn
[ "9f5e336d5443d26a04d37162c10eb851beb0f5c5" ]
[ "tests/util/test_jit.py" ]
[ "import pytest\nimport warnings\n\nimport torch\n\nfrom e3nn.o3 import Linear, Irreps\nfrom e3nn.util.jit import script, trace_module, compile_mode, compile\nfrom e3nn.util.test import assert_equivariant\n\n\ndef test_submod_tracing():\n \"\"\"Check that tracing actually occurs\"\"\"\n @compile_mode('trace')\n class BadTrace(torch.nn.Module):\n def forward(self, x):\n if x.shape[0] == 7:\n return x.new_ones(8)\n else:\n return x\n\n # This class has no irreps_in, so we need this to allow trace compilation\n def make_tracing_input(self):\n return {\n 'forward': torch.randn(8, 3)\n }\n\n class ParentMod(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.child = BadTrace()\n\n def forward(self, x):\n return torch.as_tensor(0.5585)*self.child(x)\n\n parent = ParentMod()\n\n with pytest.raises(Exception):\n with warnings.catch_warnings():\n warnings.filterwarnings('error', category=torch.jit.TracerWarning)\n script(parent)\n\n\ndef test_submod_scripting():\n \"\"\"Check that scripting actually occurs\"\"\"\n @compile_mode('script')\n class ScriptSubmod(torch.nn.Module):\n def forward(self, x):\n if x.shape[0] == 7:\n return x.new_zeros(8)\n else:\n return x\n\n class ParentMod(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.child = ScriptSubmod()\n\n def forward(self, x):\n return self.child(x)\n\n parent = ParentMod()\n assert parent(torch.randn(7, 4)).shape == (8,)\n\n parent_trace = trace_module(\n parent,\n inputs={\n 'forward': (torch.randn(7, 4),) # get the conditional behaviour\n }\n )\n # Does it get the behaviour it was traced for?\n assert parent_trace(torch.randn(7, 4)).shape == (8,)\n # Does it get the conditional that should have been scripted?\n x = torch.randn(5, 7)\n assert torch.allclose(parent(x), x)\n assert torch.allclose(parent_trace(x), x)\n\n\ndef test_compilation():\n\n class Supermod(torch.nn.Module):\n def forward(self, x):\n return x * 2.\n\n @compile_mode('trace')\n class ChildMod(Supermod):\n def forward(self, x):\n return super().forward(x) * 3.\n\n def _make_tracing_inputs(self, n: int):\n return [{'forward': (torch.randn(2, 3),)} for _ in range(n)]\n\n # This module can't be compiled directly by TorchScript, since ChildMod is a subclass and calls super() in forward()\n class ContainerMod(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.submod = ChildMod()\n self.alpha = torch.randn(1).squeeze()\n\n def forward(self, x):\n return self.submod(x) + self.alpha*self.submod(x)\n\n mod = ContainerMod()\n # Try and xfail with torch.jit.script\n with pytest.raises((RuntimeError, torch.jit.Error)):\n mod_script = torch.jit.script(mod)\n # Compile with our compiler\n mod_script = script(mod)\n\n x = torch.randn(3, 2)\n assert torch.allclose(mod(x), mod_script(x))\n\n\ndef test_equivariant():\n # Confirm that a compiled tensorproduct is still equivariant\n irreps_in = Irreps(\"1e + 2e + 3x3o\")\n irreps_out = Irreps(\"1e + 2e + 3x3o\")\n mod = Linear(irreps_in, irreps_out)\n mod_script = compile(mod)\n assert_equivariant(mod_script)\n\n\ndef test_unsupported():\n @compile_mode('unsupported')\n class ChildMod(torch.nn.Module):\n pass\n\n class Supermod(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.child = ChildMod()\n\n mod = Supermod()\n with pytest.raises(NotImplementedError):\n mod = script(mod)\n" ]
[ [ "torch.as_tensor", "torch.jit.script", "torch.randn" ] ]
HuanYunXuanZi/tensorfow-rbm
[ "b55460e3fdbf83c2a9278c35cf20c6fed1d4666c" ]
[ "tfrbm/rbm.py" ]
[ "from __future__ import print_function\n\nimport tensorflow as tf\nimport numpy as np\nimport sys\nfrom .util import tf_xavier_init\n\n\nclass RBM:\n def __init__(self,\n n_visible,\n n_hidden,\n learning_rate=0.01,\n momentum=0.95,\n xavier_const=1.0,\n err_function='mse',\n use_tqdm=False,\n # DEPRECATED:\n tqdm=None):\n if not 0.0 <= momentum <= 1.0:\n raise ValueError('momentum should be in range [0, 1]')\n\n if err_function not in {'mse', 'cosine'}:\n raise ValueError('err_function should be either \\'mse\\' or \\'cosine\\'')\n\n self._use_tqdm = use_tqdm\n self._tqdm = None\n\n if use_tqdm or tqdm is not None:\n from tqdm import tqdm\n self._tqdm = tqdm\n\n self.n_visible = n_visible\n self.n_hidden = n_hidden\n self.learning_rate = learning_rate\n self.momentum = momentum\n\n self.x = tf.placeholder(tf.float32, [None, self.n_visible])\n self.y = tf.placeholder(tf.float32, [None, self.n_hidden])\n\n self.w = tf.Variable(tf_xavier_init(self.n_visible, self.n_hidden, const=xavier_const), dtype=tf.float32)\n self.visible_bias = tf.Variable(tf.zeros([self.n_visible]), dtype=tf.float32)\n self.hidden_bias = tf.Variable(tf.zeros([self.n_hidden]), dtype=tf.float32)\n\n self.delta_w = tf.Variable(tf.zeros([self.n_visible, self.n_hidden]), dtype=tf.float32)\n self.delta_visible_bias = tf.Variable(tf.zeros([self.n_visible]), dtype=tf.float32)\n self.delta_hidden_bias = tf.Variable(tf.zeros([self.n_hidden]), dtype=tf.float32)\n\n self.update_weights = None\n self.update_deltas = None\n self.compute_hidden = None\n self.compute_visible = None\n self.compute_visible_from_hidden = None\n\n self._initialize_vars()\n\n assert self.update_weights is not None\n assert self.update_deltas is not None\n assert self.compute_hidden is not None\n assert self.compute_visible is not None\n assert self.compute_visible_from_hidden is not None\n\n if err_function == 'cosine':\n x1_norm = tf.nn.l2_normalize(self.x, 1)\n x2_norm = tf.nn.l2_normalize(self.compute_visible, 1)\n cos_val = tf.reduce_mean(tf.reduce_sum(tf.mul(x1_norm, x2_norm), 1))\n self.compute_err = tf.acos(cos_val) / tf.constant(np.pi)\n else:\n self.compute_err = tf.reduce_mean(tf.square(self.x - self.compute_visible))\n\n init = tf.global_variables_initializer()\n self.sess = tf.Session()\n self.sess.run(init)\n\n def _initialize_vars(self):\n pass\n\n def get_err(self, batch_x):\n return self.sess.run(self.compute_err, feed_dict={self.x: batch_x})\n\n def get_free_energy(self):\n pass\n\n def transform(self, batch_x):\n return self.sess.run(self.compute_hidden, feed_dict={self.x: batch_x})\n\n def transform_inv(self, batch_y):\n return self.sess.run(self.compute_visible_from_hidden, feed_dict={self.y: batch_y})\n\n def reconstruct(self, batch_x):\n return self.sess.run(self.compute_visible, feed_dict={self.x: batch_x})\n\n def partial_fit(self, batch_x):\n self.sess.run(self.update_weights + self.update_deltas, feed_dict={self.x: batch_x})\n\n def fit(self,\n data_x,\n n_epoches=10,\n batch_size=10,\n shuffle=True,\n verbose=True):\n assert n_epoches > 0\n\n n_data = data_x.shape[0]\n\n if batch_size > 0:\n n_batches = n_data // batch_size + (0 if n_data % batch_size == 0 else 1)\n else:\n n_batches = 1\n\n if shuffle:\n data_x_cpy = data_x.copy()\n inds = np.arange(n_data)\n else:\n data_x_cpy = data_x\n\n errs = []\n\n for e in range(n_epoches):\n if verbose and not self._use_tqdm:\n print('Epoch: {:d}'.format(e))\n\n epoch_errs = np.zeros((n_batches,))\n epoch_errs_ptr = 0\n\n if shuffle:\n np.random.shuffle(inds)\n data_x_cpy = 
data_x_cpy[inds]\n\n r_batches = range(n_batches)\n\n if verbose and self._use_tqdm:\n r_batches = self._tqdm(r_batches, desc='Epoch: {:d}'.format(e), ascii=True, file=sys.stdout)\n\n for b in r_batches:\n batch_x = data_x_cpy[b * batch_size:(b + 1) * batch_size]\n self.partial_fit(batch_x)\n batch_err = self.get_err(batch_x)\n epoch_errs[epoch_errs_ptr] = batch_err\n epoch_errs_ptr += 1\n\n if verbose:\n err_mean = epoch_errs.mean()\n if self._use_tqdm:\n self._tqdm.write('Train error: {:.4f}'.format(err_mean))\n self._tqdm.write('')\n else:\n print('Train error: {:.4f}'.format(err_mean))\n print('')\n sys.stdout.flush()\n\n errs = np.hstack([errs, epoch_errs])\n\n return errs\n\n def get_weights(self):\n return self.sess.run(self.w),\\\n self.sess.run(self.visible_bias),\\\n self.sess.run(self.hidden_bias)\n\n def save_weights(self, filename, name):\n saver = tf.train.Saver({name + '_w': self.w,\n name + '_v': self.visible_bias,\n name + '_h': self.hidden_bias})\n return saver.save(self.sess, filename)\n\n def set_weights(self, w, visible_bias, hidden_bias):\n self.sess.run(self.w.assign(w))\n self.sess.run(self.visible_bias.assign(visible_bias))\n self.sess.run(self.hidden_bias.assign(hidden_bias))\n\n def load_weights(self, filename, name):\n saver = tf.train.Saver({name + '_w': self.w,\n name + '_v': self.visible_bias,\n name + '_h': self.hidden_bias})\n saver.restore(self.sess, filename)\n" ]
[ [ "tensorflow.zeros", "tensorflow.acos", "numpy.zeros", "tensorflow.Session", "tensorflow.train.Saver", "numpy.random.shuffle", "tensorflow.mul", "tensorflow.constant", "tensorflow.placeholder", "numpy.arange", "numpy.hstack", "tensorflow.global_variables_initializer", "tensorflow.square", "tensorflow.nn.l2_normalize" ] ]
siddeshpillai/neural-graph
[ "ff4ff603284cad82af5df0dea98807bb7c1b54ff" ]
[ "neural_graph.py" ]
[ "import os\n\nimport numpy as np\nimport scipy.misc\n\nfrom stylize import stylize\n\nimport math\nfrom argparse import ArgumentParser\n\nfrom PIL import Image\n\n# default arguments\nCONTENT_WEIGHT = 5e0\nCONTENT_WEIGHT_BLEND = 1\nSTYLE_WEIGHT = 5e2\nTV_WEIGHT = 1e2\nSTYLE_LAYER_WEIGHT_EXP = 1\nLEARNING_RATE = 1e1\nBETA1 = 0.9\nBETA2 = 0.999\nEPSILON = 1e-08\nSTYLE_SCALE = 1.0\nITERATIONS = 1000\nVGG_PATH = 'imagenet-vgg-verydeep-19.mat'\nPOOLING = 'max'\n\ndef build_parser():\n parser = ArgumentParser()\n\n parser.add_argument('--content',\n dest='content', help='content image',\n metavar='CONTENT', required=True)\n\n parser.add_argument('--styles',\n dest='styles',\n nargs='+', help='one or more style images',\n metavar='STYLE', required=True)\n\n parser.add_argument('--output',\n dest='output', help='output path',\n metavar='OUTPUT', required=True)\n\n parser.add_argument('--iterations', type=int,\n dest='iterations', help='iterations (default %(default)s)',\n metavar='ITERATIONS', default=ITERATIONS)\n\n parser.add_argument('--print-iterations', type=int,\n dest='print_iterations', help='statistics printing frequency',\n metavar='PRINT_ITERATIONS')\n\n parser.add_argument('--checkpoint-output',\n dest='checkpoint_output', help='checkpoint output format, e.g. output%%s.jpg',\n metavar='OUTPUT')\n\n parser.add_argument('--checkpoint-iterations', type=int,\n dest='checkpoint_iterations', help='checkpoint frequency',\n metavar='CHECKPOINT_ITERATIONS')\n\n parser.add_argument('--width', type=int,\n dest='width', help='output width',\n metavar='WIDTH')\n\n parser.add_argument('--style-scales', type=float,\n dest='style_scales',\n nargs='+', help='one or more style scales',\n metavar='STYLE_SCALE')\n parser.add_argument('--network',\n dest='network', help='path to network parameters (default %(default)s)',\n metavar='VGG_PATH', default=VGG_PATH)\n parser.add_argument('--content-weight-blend', type=float,\n dest='content_weight_blend', help='content weight blend, conv4_2 * blend + conv5_2 * (1-blend) (default %(default)s)',\n metavar='CONTENT_WEIGHT_BLEND', default=CONTENT_WEIGHT_BLEND)\n parser.add_argument('--content-weight', type=float,\n dest='content_weight', help='content weight (default %(default)s)',\n metavar='CONTENT_WEIGHT', default=CONTENT_WEIGHT)\n parser.add_argument('--style-weight', type=float,\n dest='style_weight', help='style weight (default %(default)s)',\n metavar='STYLE_WEIGHT', default=STYLE_WEIGHT)\n parser.add_argument('--style-layer-weight-exp', type=float,\n dest='style_layer_weight_exp', help='style layer weight exponentional increase - weight(layer<n+1>) = weight_exp*weight(layer<n>) (default %(default)s)',\n metavar='STYLE_LAYER_WEIGHT_EXP', default=STYLE_LAYER_WEIGHT_EXP)\n parser.add_argument('--style-blend-weights', type=float,\n dest='style_blend_weights', help='style blending weights',\n nargs='+', metavar='STYLE_BLEND_WEIGHT')\n parser.add_argument('--tv-weight', type=float,\n dest='tv_weight', help='total variation regularization weight (default %(default)s)',\n metavar='TV_WEIGHT', default=TV_WEIGHT)\n parser.add_argument('--learning-rate', type=float,\n dest='learning_rate', help='learning rate (default %(default)s)',\n metavar='LEARNING_RATE', default=LEARNING_RATE)\n parser.add_argument('--beta1', type=float,\n dest='beta1', help='Adam: beta1 parameter (default %(default)s)',\n metavar='BETA1', default=BETA1)\n parser.add_argument('--beta2', type=float,\n dest='beta2', help='Adam: beta2 parameter (default %(default)s)',\n metavar='BETA2', 
default=BETA2)\n    parser.add_argument('--eps', type=float,\n            dest='epsilon', help='Adam: epsilon parameter (default %(default)s)',\n            metavar='EPSILON', default=EPSILON)\n    parser.add_argument('--initial',\n            dest='initial', help='initial image',\n            metavar='INITIAL')\n    parser.add_argument('--initial-noiseblend', type=float,\n            dest='initial_noiseblend', help='ratio of blending initial image with normalized noise (if no initial image specified, content image is used) (default %(default)s)',\n            metavar='INITIAL_NOISEBLEND')\n    parser.add_argument('--preserve-colors', action='store_true',\n            dest='preserve_colors', help='style-only transfer (preserving colors) - if color transfer is not needed')\n    parser.add_argument('--pooling',\n            dest='pooling', help='pooling layer configuration: max or avg (default %(default)s)',\n            metavar='POOLING', default=POOLING)\n    return parser\n\n\ndef main():\n    parser = build_parser()\n    options = parser.parse_args()\n\n    if not os.path.isfile(options.network):\n        parser.error(\"Network %s does not exist. (Did you forget to download it?)\" % options.network)\n\n    content_image = imread(options.content)\n    style_images = [imread(style) for style in options.styles]\n\n    width = options.width\n    if width is not None:\n        new_shape = (int(math.floor(float(content_image.shape[0]) /\n                content_image.shape[1] * width)), width)\n        content_image = scipy.misc.imresize(content_image, new_shape)\n    target_shape = content_image.shape\n    for i in range(len(style_images)):\n        style_scale = STYLE_SCALE\n        if options.style_scales is not None:\n            style_scale = options.style_scales[i]\n        style_images[i] = scipy.misc.imresize(style_images[i], style_scale *\n                target_shape[1] / style_images[i].shape[1])\n\n    style_blend_weights = options.style_blend_weights\n    if style_blend_weights is None:\n        # default is equal weights\n        style_blend_weights = [1.0/len(style_images) for _ in style_images]\n    else:\n        total_blend_weight = sum(style_blend_weights)\n        style_blend_weights = [weight/total_blend_weight\n                               for weight in style_blend_weights]\n\n    initial = options.initial\n    if initial is not None:\n        initial = scipy.misc.imresize(imread(initial), content_image.shape[:2])\n        # Initial guess is specified, but not noiseblend - no noise should be blended\n        if options.initial_noiseblend is None:\n            options.initial_noiseblend = 0.0\n    else:\n        # Neither initial, nor noiseblend is provided, falling back to random generated initial guess\n        if options.initial_noiseblend is None:\n            options.initial_noiseblend = 1.0\n        if options.initial_noiseblend < 1.0:\n            initial = content_image\n\n    if options.checkpoint_output and \"%s\" not in options.checkpoint_output:\n        parser.error(\"To save intermediate images, the checkpoint output \"\n                     \"parameter must contain `%s` (e.g. `foo%s.jpg`)\")\n\n    # try saving a dummy image to the output path to make sure that it's writable\n    try:\n        imsave(options.output, np.zeros((500, 500, 3)))\n    except Exception:\n        raise IOError('%s is not writable or does not have a valid file extension for an image file' % options.output)\n\n    for iteration, image in stylize(\n        network=options.network,\n        initial=initial,\n        initial_noiseblend=options.initial_noiseblend,\n        content=content_image,\n        styles=style_images,\n        preserve_colors=options.preserve_colors,\n        iterations=options.iterations,\n        content_weight=options.content_weight,\n        content_weight_blend=options.content_weight_blend,\n        style_weight=options.style_weight,\n        style_layer_weight_exp=options.style_layer_weight_exp,\n        style_blend_weights=style_blend_weights,\n        tv_weight=options.tv_weight,\n        learning_rate=options.learning_rate,\n        beta1=options.beta1,\n        beta2=options.beta2,\n        epsilon=options.epsilon,\n        pooling=options.pooling,\n        print_iterations=options.print_iterations,\n        checkpoint_iterations=options.checkpoint_iterations\n    ):\n        output_file = None\n        combined_rgb = image\n        if iteration is not None:\n            if options.checkpoint_output:\n                output_file = options.checkpoint_output % iteration\n        else:\n            output_file = options.output\n        if output_file:\n            imsave(output_file, combined_rgb)\n\n\ndef imread(path):\n    img = scipy.misc.imread(path).astype(np.float)\n    if len(img.shape) == 2:\n        # grayscale\n        img = np.dstack((img,img,img))\n    elif img.shape[2] == 4:\n        # PNG with alpha channel\n        img = img[:,:,:3]\n    return img\n\n\ndef imsave(path, img):\n    img = np.clip(img, 0, 255).astype(np.uint8)\n    Image.fromarray(img).save(path, quality=95)\n\nif __name__ == '__main__':\n    main()\n" ]
[ [ "numpy.dstack", "numpy.zeros", "numpy.clip" ] ]
cabustillo13/Jugando-con-Python
[ "a02f67531426a8ca819f91317445c567e7094546" ]
[ "juego2.py" ]
[ "import cv2\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Imagen a analizar\nimagenColor = cv2.imread('./Recursos/juego2/photo0.png')\nimagenGris = cv2.cvtColor(imagenColor, cv2.COLOR_BGR2GRAY)\n\ncont=0 #Cantidad de rectangulos\ntol=10 #Tolerancia para evitar duplicado en el conteo de rectangulos\n\n# Plantillas a detectar\nfor i in range(3):\n path = './Recursos/juego2/ref' + str(i) +'.png'\n template = cv2.imread(path,0)\n w, h = template.shape[::-1]\n\n # Plantilla de comparacion\n res = cv2.matchTemplate(imagenGris,template,cv2.TM_CCOEFF_NORMED)\n threshold = 0.8\n loc = np.where( res >= threshold)\n\n listPt=list()\n #Detectar y dibujar rectangulos\n for pt in zip(*loc[::-1]):\n cv2.rectangle(imagenColor, pt, (pt[0] + w, pt[1] + h), (0,0,255), 2)\n listPt.append(([pt[0],pt[1]]))\n \n #Contar rectangulos\n listPt.sort()\n for i in range (len(listPt)-1):\n valor = abs(listPt[i+1][1] - listPt[i][1]) + abs(listPt[i+1][0] - listPt[i][0])\n if (valor > tol) or (i==0):\n cont+=1\n \n#Mostrar y guardar resultados \ncv2.imwrite(\"./Recursos/juego2/ejemplo.png\",imagenColor)\nprint(\"Cantidad de rectangulos detectados: \",cont)\n\n#Bibliografia: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html \n" ]
[ [ "numpy.where" ] ]
smoosavioon/saeedconnect4
[ "8fa002119317e7d684a1c959b4e5c0cd906d060a" ]
[ "agents/common.py" ]
[ "import numpy as np\nfrom enum import Enum\nfrom typing import Optional\nfrom typing import Callable, Tuple\nfrom numba import njit\n\n\nBoardPiece = np.int8 # The data type (dtype) of the board\nNO_PLAYER = BoardPiece(0) # board[i, j] == NO_PLAYER where the position is empty\nPLAYER1 = BoardPiece(1) # board[i, j] == PLAYER1 where player 1 has a piece\nPLAYER2 = BoardPiece(2) # board[i, j] == PLAYER2 where player 2 has a piece\n\nPlayerAction = np.int8 # The column to be played\n\nclass SavedState:\n pass\n\n\nGenMove = Callable[\n [np.ndarray, BoardPiece, Optional[SavedState]], # Arguments for the generate_move function\n Tuple[PlayerAction, Optional[SavedState]] # Return type of the generate_move function\n]\n\nclass GameState(Enum):\n IS_WIN = 1\n IS_DRAW = -1\n STILL_PLAYING = 0\n\ndef initialize_game_state() -> np.ndarray:\n \"\"\"\n Returns an ndarray, shape (6, 7) and data type (dtype) BoardPiece, initialized to 0 (NO_PLAYER).\n \"\"\"\n board = np.zeros((6,7), dtype=BoardPiece)\n return board\n\ndef pretty_print_board(board: np.ndarray):\n \"\"\"\n Should return `board` converted to a human readable string representation,\n to be used when playing or printing diagnostics to the console (stdout). The piece in\n board[0, 0] should appear in the lower-left. Here's an example output:\n |==============|\n | |\n | |\n | X X |\n | O X X |\n | O X O O |\n | O O X X |\n |==============|\n |0 1 2 3 4 5 6 |\n \"\"\"\n print('==============')\n for i in range(6):\n for j in range(7):\n if np.flipud(board)[i][j] == 0:\n print('.', end=\" \")\n elif np.flipud(board)[i][j] == 1:\n print('X', end=\" \")\n else:\n print('O', end=\" \")\n\n print()\n print('==============', '\\n0 1 2 3 4 5 6')\n\n\ndef string_to_board(pp_board: str) -> np.ndarray:\n \"\"\"\n Takes the output of pretty_print_board and turns it back into an ndarray.\n This is quite useful for debugging, when the agent crashed and you have the last\n board state as a string.\n \"\"\"\n\n return np.array(pretty_print_board(pp_board))\n\ndef apply_player_action(\n board: np.ndarray, action: PlayerAction, player: BoardPiece, copy: bool\n) -> np.ndarray:\n \"\"\"\n Sets board[i, action] = player, where i is the lowest open row. The modified\n board is returned. If copy is True, makes a copy of the board before modifying it.\n \"\"\"\n i = 0\n while board[i, action] != 0:\n i += 1\n\n if copy:\n board = board.copy()\n\n board[i, action] = player\n return board\n\n@njit()\ndef connected_four(\n board: np.ndarray, player: BoardPiece, _last_action: Optional[PlayerAction] = None\n) -> bool:\n CONNECT_N = 4\n rows, cols = board.shape\n rows_edge = rows - CONNECT_N + 1\n cols_edge = cols - CONNECT_N + 1\n for i in range(rows):\n for j in range(cols_edge):\n if np.all(board[i, j:j+CONNECT_N] == player):\n return True\n for i in range(rows_edge):\n for j in range(cols):\n if np.all(board[i:i+CONNECT_N, j] == player):\n return True\n for i in range(rows_edge):\n for j in range(cols_edge):\n block = board[i:i+CONNECT_N, j:j+CONNECT_N]\n if np.all(np.diag(block) == player):\n return True\n if np.all(np.diag(block[::-1, :]) == player):\n return True\n return False\n\n\ndef check_end_state(\n board: np.ndarray, player: BoardPiece, last_action: Optional[PlayerAction]=None,\n) -> GameState:\n \"\"\"\n Returns the current game state for the current `player`, i.e. 
has their last\n action won (GameState.IS_WIN) or drawn (GameState.IS_DRAW) the game,\n or is play still on-going (GameState.STILL_PLAYING)?\n \"\"\"\n is_con = connected_four(board, player, last_action)\n if is_con:\n return GameState.IS_WIN\n elif np.sum(np.where(board==0)) == 0:\n return GameState.IS_DRAW\n else:\n return GameState.STILL_PLAYING\n\n\n" ]
[ [ "numpy.zeros", "numpy.flipud", "numpy.where", "numpy.all", "numpy.diag" ] ]
TahaEntezari/ramstk
[ "f82e5b31ef5c4e33cc02252263247b99a9abe129" ]
[ "tests/analyses/statistics/weibull_unit_test.py" ]
[ "# pylint: skip-file\n# type: ignore\n# -*- coding: utf-8 -*-\n#\n# tests.analyses.statistics.weibull_unit_test.py is part of The RAMSTK Project\n#\n# All rights reserved.\n# Copyright since 2007 Doyle \"weibullguy\" Rowland doyle.rowland <AT> reliaqual <DOT> com\n\"\"\"Test class for the Exponential distribution module.\"\"\"\n\n# Standard Library Imports\nimport math\n\n# Third Party Imports\nimport numpy as np\nimport pytest\nimport scipy\n\n# RAMSTK Package Imports\nfrom ramstk.analyses.statistics import weibull\n\n\[email protected](scope=\"function\")\ndef test_data():\n \"\"\"Data set of Weibull distributed points.\n\n Data is the same as that used in the ReliaSoft wiki examples.\n The table can be found at the following URL, for example.\n http://reliawiki.org/index.php/The_Weibull_Distribution#Rank_Regression_on_Y\n\n eta = 76.318 and beta = 1.4301 when fit to the WEI.\n \"\"\"\n yield np.array(\n [\n 16.0,\n 34.0,\n 53.0,\n 75.0,\n 93.0,\n 120.0,\n ]\n )\n\n\[email protected]\ndef test_get_hazard_rate_defaults():\n \"\"\"should calculate the (WEI) hazard rate when using default confidence level.\"\"\"\n assert weibull.get_hazard_rate(0.8, 525.0, 105.0) == pytest.approx(0.006616111)\n assert weibull.get_hazard_rate(1.0, 525.0, 105.0) == pytest.approx(0.008603153)\n assert weibull.get_hazard_rate(2.5, 525.0, 105.0) == pytest.approx(0.0235972)\n\n\[email protected]\ndef test_get_hazard_rate_specified_location():\n \"\"\"should calculate the (WEI) hazard rate when specifying the location.\"\"\"\n assert weibull.get_hazard_rate(0.8, 525.0, 105.0, location=18.5) == pytest.approx(\n 0.008198783\n )\n assert weibull.get_hazard_rate(1.0, 525.0, 105.0, location=18.5) == pytest.approx(\n 0.01063445\n )\n assert weibull.get_hazard_rate(2.5, 525.0, 105.0, location=18.5) == pytest.approx(\n 0.02874279\n )\n\n\[email protected]\ndef test_get_hazard_rate_zero_shape():\n \"\"\"should return nan when passed a shape=0.0.\"\"\"\n assert math.isnan(weibull.get_hazard_rate(0.0, 525.0, 105.0))\n\n\[email protected]\ndef test_get_hazard_rate_zero_scale():\n \"\"\"should return nan when passed a scale=0.0.\"\"\"\n assert math.isnan(weibull.get_hazard_rate(2.5, 0.0, 105.0))\n\n\[email protected]\ndef test_get_hazard_rate_zero_time():\n \"\"\"should return zero when passed a time=0.0.\"\"\"\n assert weibull.get_hazard_rate(2.5, 525.0, 0.0) == 0.0\n\n\[email protected]\ndef test_get_mtbf_defaults():\n \"\"\"should calculate the WEI MTBF when using default confidence level.\"\"\"\n assert weibull.get_mtbf(2.5, 525.0) == pytest.approx(465.8135042)\n\n\[email protected]\ndef test_get_mtbf_specified_location():\n \"\"\"should calculate the WEI MTBF when specifying the location.\"\"\"\n assert weibull.get_mtbf(2.5, 525.0, location=18.5) == pytest.approx(484.3135042)\n\n\[email protected]\ndef test_get_mtbf_zero_shape():\n \"\"\"should return nan when passed a shape=0.0.\"\"\"\n assert math.isnan(weibull.get_mtbf(0.0, 525.0))\n\n\[email protected]\ndef test_get_mtbf_zero_scale():\n \"\"\"should return nan when passed a shape=0.0.\"\"\"\n assert math.isnan(weibull.get_mtbf(2.5, 0.0))\n\n\[email protected]\ndef test_get_survival_defaults():\n \"\"\"should calculate the value of the survival function at time T.\"\"\"\n assert weibull.get_survival(2.5, 525.0, 105.0) == pytest.approx(0.9822705)\n\n\[email protected]\ndef test_get_survival_specified_location():\n \"\"\"should calculate the value of the survival when specifying the location.\"\"\"\n assert weibull.get_survival(2.5, 525.0, 105.0, location=18.5) == 
pytest.approx(\n 0.9890415\n )\n\n\[email protected]\ndef test_get_survival_zero_shape():\n \"\"\"should return 1.0 when passed a shape=0.0.\"\"\"\n assert math.isnan(weibull.get_survival(0.0, 525.0, 105.0))\n\n\[email protected]\ndef test_get_survival_zero_scale():\n \"\"\"should return nan when passed a scale=0.0.\"\"\"\n assert math.isnan(weibull.get_survival(2.5, 0.0, 105.0))\n\n\[email protected]\ndef test_get_survival_zero_time():\n \"\"\"should return nan when passed a time=0.0.\"\"\"\n assert weibull.get_survival(2.5, 525.0, 0.0) == 1.0\n\n\[email protected]\ndef test_do_fit_defaults(test_data):\n \"\"\"should estimate the scale, shape, and location parameters for the data.\"\"\"\n _shape, _location, _scale = weibull.do_fit(test_data)\n\n assert _shape == pytest.approx(0.3982693)\n assert _location == pytest.approx(16.0)\n assert _scale == pytest.approx(9.7800320)\n\n\[email protected]\ndef test_do_fit_no_floc(test_data):\n \"\"\"should estimate the scale and shape parameters for the data.\"\"\"\n _shape, _location, _scale = weibull.do_fit(test_data, floc=0.0)\n\n assert _shape == pytest.approx(1.9326793)\n assert _location == 0.0\n assert _scale == pytest.approx(73.5260822)\n\n\[email protected]\[email protected](scipy.__version__ < \"1.7.1\", reason=\"requires scipy>=1.7.1\")\ndef test_do_fit_mm_method(test_data):\n \"\"\"should estimate the scale, shape, and location parameters using the MM\n method.\"\"\"\n _shape, _location, _scale = weibull.do_fit(test_data, method=\"MM\")\n\n assert _shape == pytest.approx(0.1891056)\n assert _location == pytest.approx(0.002784309)\n assert _scale == pytest.approx(0.003346282)\n\n\[email protected]\[email protected](scipy.__version__ < \"1.7.1\", reason=\"requires scipy>=1.7.1\")\ndef test_do_fit_mm_method_no_floc(test_data):\n \"\"\"should estimate the scale and shape parameters using the MM method.\"\"\"\n _shape, _location, _scale = weibull.do_fit(test_data, method=\"MM\", floc=0.0)\n\n assert _shape == pytest.approx(0.166558)\n assert _location == 0.0\n assert _scale == pytest.approx(0.00337509)\n" ]
[ [ "numpy.array" ] ]
micbru/montepython_public
[ "d8a90553f327bc67a45eaad21338681d0c34c29f" ]
[ "montepython/nested_sampling.py" ]
[ "\"\"\"\n.. module:: nested_sampling\n :synopsis: Interface the MultiNest program with Monte Python\n\nThis implementation relies heavily on the existing Python wrapper for\nMultiNest, called PyMultinest, written by Johannes Buchner, and available `at\nthis address <https://github.com/JohannesBuchner/PyMultiNest>`_ .\n\nThe main routine, :func:`run`, truly interfaces the two codes. It takes for\ninput the cosmological module, data and command line. It then defines\ninternally two functions, :func:`prior() <nested_sampling.prior>` and\n:func:`loglike` that will serve as input for the run function of PyMultiNest.\n\n.. moduleauthor:: Jesus Torrado <[email protected]>\n.. moduleauthor:: Benjamin Audren <[email protected]>\n\"\"\"\nfrom pymultinest import run as nested_run\nimport numpy as np\nimport os\nfrom copy import copy\nimport warnings\nimport io_mp\nimport sampler\n\n# Data on file names and MultiNest options, that may be called by other modules\n\n# MultiNest subfolder and name separator\nNS_subfolder = 'NS'\nNS_separator = '-'\n# MultiNest file names ending, i.e. after the defined 'base_name'\nname_rejected = NS_separator + 'ev.dat' # rejected points\nname_post = NS_separator + '.txt' # accepted points\nname_post_sep = NS_separator + 'post_separate.dat' # accepted points separated by '\\n\\n'\nname_post_equal = NS_separator + 'post_equal_weights.dat' # some acc. points, same sample prob.\nname_stats = NS_separator + 'stats.dat' # summarized information, explained\nname_summary = NS_separator + 'summary.txt' # summarized information\n# New files\nname_paramnames = '.paramnames' # in the NS/ subfolder\nname_arguments = '.arguments' # in the NS/ subfolder\nname_chain_acc = 'chain_NS__accepted.txt' # in the chain root folder\nname_chain_rej = 'chain_NS__rejected.txt' # in the chain root folder\n# Log.param name (ideally, we should import this one from somewhere else)\nname_logparam = 'log.param'\n\n# Multinest option prefix\nNS_prefix = 'NS_'\n# User-defined arguments of PyMultiNest, and 'argparse' keywords\n# First: basic string -> bool type conversion:\nstr2bool = lambda s: True if s.lower() == 'true' else False\nNS_user_arguments = {\n # General sampling options\n 'n_live_points':\n {'help': 'Number of live samples',\n 'type': int},\n 'importance_nested_sampling':\n {'help': 'True or False',\n 'type': str2bool},\n 'sampling_efficiency':\n {'help': 'Sampling efficiency',\n 'type': float},\n 'const_efficiency_mode':\n {'help': 'True or False',\n 'type': str2bool},\n 'seed':\n {'help': 'Random seed',\n 'type': int},\n 'log_zero':\n {'help': 'Min. log-evidence to consider',\n 'type': float},\n 'n_iter_before_update':\n {'help': 'Number of iterations between updates',\n 'type': int},\n # Ending conditions\n 'evidence_tolerance':\n {'help': 'Evidence tolerance',\n 'type': float},\n 'max_iter':\n {'help': 'Max. number of iterations',\n 'type': int},\n # Multimodal sampling\n 'multimodal':\n {'help': 'True or False',\n 'type': str2bool},\n 'max_modes':\n {'help': 'Max. number of modes to consider',\n 'type': int},\n 'mode_tolerance':\n {'help': 'Min. 
value of the log-evidence for a mode to be considered',\n 'type': float},\n 'clustering_params':\n {'help': 'Parameters to be used for mode separation',\n 'type': str,\n 'nargs': '+'}\n }\n# Automatically-defined arguments of PyMultiNest, type specified\nNS_auto_arguments = {\n 'n_dims': {'type': int},\n 'n_params': {'type': int},\n 'verbose': {'type': str2bool},\n 'outputfiles_basename': {'type': str},\n 'init_MPI': {'type': str2bool}\n }\n\n\ndef initialise(cosmo, data, command_line):\n \"\"\"\n Main call to prepare the information for the MultiNest run.\n \"\"\"\n\n # Convenience variables\n varying_param_names = data.get_mcmc_parameters(['varying'])\n derived_param_names = data.get_mcmc_parameters(['derived'])\n\n # Check that all the priors are flat and that all the parameters are bound\n is_flat, is_bound = sampler.check_flat_bound_priors(\n data.mcmc_parameters, varying_param_names)\n if not is_flat:\n raise io_mp.ConfigurationError(\n 'Nested Sampling with MultiNest is only possible with flat ' +\n 'priors. Sorry!')\n if not is_bound:\n raise io_mp.ConfigurationError(\n 'Nested Sampling with MultiNest is only possible for bound ' +\n 'parameters. Set reasonable bounds for them in the \".param\"' +\n 'file.')\n\n # If absent, create the sub-folder NS\n NS_folder = os.path.join(command_line.folder, NS_subfolder)\n if not os.path.exists(NS_folder):\n os.makedirs(NS_folder)\n\n # Use chain name as a base name for MultiNest files\n chain_name = [a for a in command_line.folder.split(os.path.sep) if a][-1]\n base_name = os.path.join(NS_folder, chain_name)\n\n # Prepare arguments for PyMultiNest\n # -- Automatic arguments\n data.NS_arguments['n_dims'] = len(varying_param_names)\n data.NS_arguments['n_params'] = (len(varying_param_names) +\n len(derived_param_names))\n data.NS_arguments['verbose'] = True\n data.NS_arguments['outputfiles_basename'] = base_name + NS_separator\n # -- User-defined arguments\n for arg in NS_user_arguments:\n value = getattr(command_line, NS_prefix+arg)\n # Special case: clustering parameters\n if arg == 'clustering_params':\n clustering_param_names = value if value != -1 else []\n continue\n # Rest of the cases\n if value != -1:\n data.NS_arguments[arg] = value\n # else: don't define them -> use PyMultiNest default value\n\n # Clustering parameters -- reordering to put them first\n NS_param_names = []\n if clustering_param_names:\n data.NS_arguments['n_clustering_params'] = len(clustering_param_names)\n for param in clustering_param_names:\n if not param in varying_param_names:\n raise io_mp.ConfigurationError(\n 'The requested clustering parameter \"%s\"' % param +\n ' was not found in your \".param\" file. 
Pick a valid one.')\n NS_param_names.append(param)\n for param in varying_param_names:\n if not param in NS_param_names:\n NS_param_names.append(param)\n data.NS_param_names = NS_param_names\n \n # Caveat: multi-modal sampling OFF by default; if requested, INS disabled\n try:\n if data.NS_arguments['multimodal']:\n data.NS_arguments['importance_nested_sampling'] = False\n warnings.warn('Multi-modal sampling has been requested, ' +\n 'so Importance Nested Sampling has been disabled')\n except KeyError:\n data.NS_arguments['multimodal'] = False\n\n # MPI: don't initialise it inside MultiNest.\n # Rather, it is either initialised by Monte Python (if MPI used) or ignored\n data.NS_arguments['init_MPI']=False\n\n # Write the MultiNest arguments and parameter ordering\n with open(base_name+name_arguments, 'w') as afile:\n for arg in data.NS_arguments:\n if arg != 'n_clustering_params':\n afile.write(' = '.join(\n [str(arg), str(data.NS_arguments[arg])]))\n else:\n afile.write('clustering_params = ' +\n ' '.join(clustering_param_names))\n afile.write('\\n')\n with open(base_name+name_paramnames, 'w') as pfile:\n pfile.write('\\n'.join(NS_param_names+derived_param_names))\n\n\ndef run(cosmo, data, command_line):\n \"\"\"\n Main call to run the MultiNest sampler.\n\n Note the unusual set-up here, with the two following functions, `prior` and\n `loglike` having their docstrings written in the encompassing function.\n This trick was necessary as MultiNest required these two functions to be\n defined with a given number of parameters, so we could not add `data`. By\n defining them inside the run function, this problem was by-passed.\n\n .. function:: prior\n\n Generate the prior function for MultiNest\n\n It should transform the input unit cube into the parameter cube. This\n function actually wraps the method :func:`map_from_unit_interval()\n <prior.Prior.map_from_unit_interval>` of the class :class:`Prior\n <prior.Prior>`.\n\n Parameters\n ----------\n cube : array\n Contains the current point in unit parameter space that has been\n selected within the MultiNest part.\n ndim : int\n Number of varying parameters\n nparams : int\n Total number of parameters, including the derived ones (not used,\n so hidden in `*args`)\n\n\n .. 
function:: loglike\n\n Generate the Likelihood function for MultiNest\n\n Parameters\n ----------\n cube : array\n Contains the current point in the correct parameter space after\n transformation from :func:`prior`.\n ndim : int\n Number of varying parameters\n nparams : int\n Total number of parameters, including the derived ones (not used,\n so hidden in `*args`)\n\n \"\"\"\n # Convenience variables\n derived_param_names = data.get_mcmc_parameters(['derived'])\n NS_param_names = data.NS_param_names\n\n # Function giving the prior probability\n def prior(cube, ndim, *args):\n \"\"\"\n Please see the encompassing function docstring\n\n \"\"\"\n for i, name in zip(range(ndim), NS_param_names):\n cube[i] = data.mcmc_parameters[name]['prior']\\\n .map_from_unit_interval(cube[i])\n\n # Function giving the likelihood probability\n def loglike(cube, ndim, *args):\n \"\"\"\n Please see the encompassing function docstring\n\n \"\"\"\n # Updates values: cube --> data\n for i, name in zip(range(ndim), NS_param_names):\n data.mcmc_parameters[name]['current'] = cube[i]\n # Propagate the information towards the cosmo arguments\n data.update_cosmo_arguments()\n lkl = sampler.compute_lkl(cosmo, data)\n for i, name in enumerate(derived_param_names):\n cube[ndim+i] = data.mcmc_parameters[name]['current']\n return lkl\n\n # Launch MultiNest, and recover the output code\n output = nested_run(loglike, prior, **data.NS_arguments)\n\n # Assuming this worked, i.e. if output is `None`,\n # state it and suggest the user to analyse the output.\n if output is None:\n warnings.warn('The sampling with MultiNest is done.\\n' +\n 'You can now analyse the output calling Monte Python ' +\n ' with the -info flag in the chain_name/NS subfolder,' +\n 'or, if you used multimodal sampling, in the ' +\n 'chain_name/mode_# subfolders.')\n\n\ndef from_NS_output_to_chains(folder):\n \"\"\"\n Translate the output of MultiNest into readable output for Monte Python\n\n This routine will be called by the module :mod:`analyze`.\n\n If mode separation has been performed (i.e., multimodal=True), it creates\n 'mode_#' subfolders containing a chain file with the corresponding samples\n and a 'log.param' file in which the starting point is the best fit of the\n nested sampling, and the same for the sigma. 
The minimum and maximum value\n are cropped to the extent of the modes in the case of the parameters used\n for the mode separation, and preserved in the rest.\n\n The mono-modal case is treated as a special case of the multi-modal one.\n\n \"\"\"\n chain_name = [a for a in folder.split(os.path.sep) if a][-2]\n base_name = os.path.join(folder, chain_name)\n\n # Read the arguments of the NS run\n # This file is intended to be machine generated: no \"#\" ignored or tests\n # done\n NS_arguments = {}\n with open(base_name+name_arguments, 'r') as afile:\n for line in afile:\n arg = line.split('=')[0].strip()\n value = line.split('=')[1].strip()\n arg_type = (NS_user_arguments[arg]['type']\n if arg in NS_user_arguments else\n NS_auto_arguments[arg]['type'])\n value = arg_type(value)\n if arg == 'clustering_params':\n value = [a.strip() for a in value.split()]\n NS_arguments[arg] = value\n multimodal = NS_arguments.get('multimodal')\n # Read parameters order\n NS_param_names = np.loadtxt(base_name+name_paramnames, dtype='str').tolist()\n # In multimodal case, if there were no clustering params specified, ALL are\n if multimodal and not NS_arguments.get('clustering_params'):\n NS_arguments['clustering_params'] = NS_param_names\n\n # Extract the necessary information from the log.param file\n # Including line numbers of the parameters\n with open(os.path.join(folder, '..', name_logparam), 'r') as log_file:\n log_lines = log_file.readlines()\n # Number of the lines to be changed\n param_names = []\n param_lines = {}\n param_data = {}\n pre, pos = 'data.parameters[', ']'\n for i, line in enumerate(log_lines):\n if pre in line:\n if line.strip()[0] == '#':\n continue\n\n # These lines allow MultiNest to deal with fixed nuisance parameters \n sigma = float(line.split(',')[3].strip())\n if sigma == 0.0:\n #If derived parameter, keep it, else discard it: \n paramtype = line.split(',')[5].strip()[1:-2]\n if paramtype != 'derived':\n continue\n\n param_name = line.split('=')[0][line.find(pre)+len(pre):\n line.find(pos)]\n param_name = param_name.replace('\"','').replace(\"'\",'').strip()\n param_names.append(param_name)\n param_data[param_name] = [a.strip() for a in\n line.split('=')[1].strip('[]').split(',')]\n param_lines[param_name] = i\n\n # Create the mapping from NS ordering to log.param ordering\n columns_reorder = [NS_param_names.index(param) for param in param_names]\n\n # Open the 'stats.dat' file to see what happened and retrieve some info\n stats_file = open(base_name+name_stats, 'r')\n lines = stats_file.readlines()\n stats_file.close()\n # Mode-separated info\n i = 0\n n_modes = 0\n stats_mode_lines = {0: []}\n for line in lines:\n if 'Nested Sampling Global Log-Evidence' in line:\n global_logZ, global_logZ_err = [float(a.strip()) for a in\n line.split(':')[1].split('+/-')]\n if 'Total Modes Found' in line:\n n_modes = int(line.split(':')[1].strip())\n if line[:4] == 'Mode':\n i += 1\n stats_mode_lines[i] = []\n # This stores the info of each mode i>1 in stats_mode_lines[i] and in\n # i=0 the lines previous to the modes, in the multi-modal case or the\n # info of the only mode, in the mono-modal case\n stats_mode_lines[i].append(line)\n assert n_modes == max(stats_mode_lines.keys()), (\n 'Something is wrong... 
(strange error n.1)')\n\n    # Prepare the accepted-points file -- modes are separated by 2 line breaks\n    accepted_name = base_name + (name_post_sep if multimodal else name_post)\n    with open(accepted_name, 'r') as accepted_file:\n        mode_lines = [a for a in ''.join(accepted_file.readlines()).split('\\n\\n')\n                      if a != '']\n    if multimodal:\n        mode_lines = [[]] + mode_lines\n    assert len(mode_lines) == 1+n_modes, 'Something is wrong... (strange error n.2)'\n\n# TODO: prepare total and rejected chain\n\n    # Process each mode:\n    ini = 1 if multimodal else 0\n    for i in range(ini, 1+n_modes):\n        # Create subfolder\n        if multimodal:\n            mode_subfolder = 'mode_'+str(i).zfill(len(str(n_modes)))\n        else:\n            mode_subfolder = ''\n        mode_subfolder = os.path.join(folder, '..', mode_subfolder)\n        if not os.path.exists(mode_subfolder):\n            os.makedirs(mode_subfolder)\n\n        # Add ACCEPTED points\n        mode_data = np.array(mode_lines[i].split(), dtype='float64')\n        columns = 2+NS_arguments['n_params']\n        mode_data = mode_data.reshape([mode_data.shape[0]//columns, columns])\n        # Rearrange: sample-prob | -2*loglik | params (clustering first)\n        # ---> sample-prob | -loglik | params (log.param order)\n        mode_data[:, 1] = mode_data[:, 1] / 2.\n        mode_data[:, 2:] = mode_data[:, [2+j for j in columns_reorder]]\n        np.savetxt(os.path.join(mode_subfolder, name_chain_acc),\n                   mode_data, fmt='%.6e')\n\n        # If we are not in the multimodal case, we are done!\n        if not multimodal:\n            break\n        # In the multimodal case, we want to write a log.param for each mode\n        this_log_lines = copy(log_lines)\n\n        # Get the necessary info of the parameters:\n        # -- max_posterior (MAP), sigma <--- stats.dat file\n        for j, line in enumerate(stats_mode_lines[i]):\n            if 'Sigma' in line:\n                line_sigma = j+1\n            if 'MAP' in line:\n                line_MAP = j+2\n        MAPs = {}\n        sigmas = {}\n        for j, param in enumerate(NS_param_names):\n            n, MAP = stats_mode_lines[i][line_MAP+j].split()\n            assert int(n) == j+1, 'Something is wrong... (strange error n.3)'\n            MAPs[param] = MAP\n            n, mean, sigma = stats_mode_lines[i][line_sigma+j].split()\n            assert int(n) == j+1, 'Something is wrong... (strange error n.4)'\n            sigmas[param] = sigma\n        # -- minimum rectangle containing the mode (only clustering params)\n        mins = {}\n        maxs = {}\n        for param in NS_arguments['clustering_params']:\n            # Notice that in the next line we use param_names and not\n            # NS_param_names: the chain lines have already been reordered\n            values = mode_data[:, 2+param_names.index(param)]\n            mins[param] = min(values)\n            maxs[param] = max(values)\n        # Create the log.param file\n        for param in param_names:\n            if param in NS_arguments['clustering_params']:\n                mini, maxi = '%.6e'%mins[param], '%.6e'%maxs[param]\n            else:\n                mini, maxi = param_data[param][1], param_data[param][2]\n            scaling = param_data[param][4]\n            ptype = param_data[param][5]\n            line = pre+\"'\"+param+\"'\"+pos\n            values = [MAPs[param], mini, maxi, sigmas[param], scaling, ptype]\n            line += ' = [' + ', '.join(values) + ']\\n'\n            this_log_lines[param_lines[param]] = line\n        # Write it!\n        with open(os.path.join(mode_subfolder, 'log.param'), 'w') as log_file:\n            log_file.writelines(this_log_lines)\n" ]
[ [ "numpy.loadtxt" ] ]
SamyTahar/CarND-Capstone
[ "c92238c0e36501729402d7a8644c12cc3859988f" ]
[ "ros/src/tl_detector/light_classification/tl_classifier.py" ]
[ "from styx_msgs.msg import TrafficLight\nimport tensorflow as tf\nimport numpy as np\nimport datetime\nimport rospy\n\n\nclass TLClassifier(object):\n def __init__(self, is_sim):\n #TODO load classifier\n\n if is_sim is False:\n PATH_TO_GRAPH = r'light_classification/models/sim/frozen_inference_graph.pb'\n rospy.logwarn('PATH_TO_GRAPH: %s', PATH_TO_GRAPH)\n rospy.logwarn('simulator')\n else:\n PATH_TO_GRAPH = r'light_classification/models/site/frozen_inference_graph.pb'\n rospy.logwarn('PATH_TO_GRAPH: %s', PATH_TO_GRAPH)\n rospy.logwarn('on site')\n\n self.graph = tf.Graph()\n self.threshold = .5\n\n with self.graph.as_default():\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(PATH_TO_GRAPH, 'rb') as fid:\n od_graph_def.ParseFromString(fid.read())\n tf.import_graph_def(od_graph_def, name='')\n\n self.image_tensor = self.graph.get_tensor_by_name('image_tensor:0')\n self.boxes = self.graph.get_tensor_by_name('detection_boxes:0')\n self.scores = self.graph.get_tensor_by_name('detection_scores:0')\n self.classes = self.graph.get_tensor_by_name('detection_classes:0')\n self.num_detections = self.graph.get_tensor_by_name('num_detections:0')\n\n self.sess = tf.Session(graph=self.graph)\n\n def get_classification(self, image):\n \"\"\"Determines the color of the traffic light in the image\n Args:\n image (cv::Mat): image containing the traffic light\n Returns:\n int: ID of traffic light color (specified in styx_msgs/TrafficLight)\n \"\"\"\n #TODO implement light color prediction\n\n with self.graph.as_default():\n img_expand = np.expand_dims(np.asarray(image, dtype=np.uint8), 0)\n (boxes, scores, classes, num_detections) = self.sess.run(\n [self.boxes, self.scores, self.classes, self.num_detections],\n feed_dict={self.image_tensor: img_expand})\n\n boxes = np.squeeze(boxes)\n scores = np.squeeze(scores)\n classes = np.squeeze(classes).astype(np.int32)\n\n #confidence_cutoff = .1\n\n #boxes, scores, classes = self.filter_boxes(confidence_cutoff, boxes, scores, classes)\n\n #if scores[0] is not None:\n # print('SCORES: ', scores[0])\n # print('CLASSES: ', classes[0])\n\n if scores[0] > self.threshold:\n if classes[0] == 1:\n print('GREEN')\n return TrafficLight.GREEN\n elif classes[0] == 2:\n print('RED')\n return TrafficLight.RED\n elif classes[0] == 3:\n print('YELLOW')\n return TrafficLight.YELLOW\n elif classes[0] == 4:\n print('UNKNOWN')\n return TrafficLight.UNKNOWN\n\n return TrafficLight.UNKNOWN\n\n def filter_boxes(self, min_score, boxes, scores, classes):\n \"\"\"Return boxes with a confidence >= `min_score`\"\"\"\n n = len(classes)\n idxs = []\n for i in range(n):\n if scores[i] >= min_score:\n idxs.append(i)\n\n filtered_boxes = boxes[idxs, ...]\n filtered_scores = scores[idxs, ...]\n filtered_classes = classes[idxs, ...]\n return filtered_boxes, filtered_scores, filtered_classes\n" ]
[ [ "numpy.asarray", "tensorflow.Graph", "tensorflow.Session", "tensorflow.GraphDef", "tensorflow.import_graph_def", "tensorflow.gfile.GFile", "numpy.squeeze" ] ]
mkkb/vispy
[ "8540f8d96fe3af84ba80bde6d6bf55484eaa8e3a" ]
[ "vispy/visuals/image.py" ]
[ "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\nfrom __future__ import division\n\nimport numpy as np\n\nfrom ..gloo import Texture2D, VertexBuffer\nfrom ..color import get_colormap\nfrom .shaders import Function, FunctionChain\nfrom .transforms import NullTransform\nfrom .visual import Visual\nfrom ..ext.six import string_types\nfrom ..io import load_spatial_filters\n\nVERT_SHADER = \"\"\"\nuniform int method; // 0=subdivide, 1=impostor\nattribute vec2 a_position;\nattribute vec2 a_texcoord;\nvarying vec2 v_texcoord;\n\nvoid main() {\n v_texcoord = a_texcoord;\n gl_Position = $transform(vec4(a_position, 0., 1.));\n}\n\"\"\"\n\nFRAG_SHADER = \"\"\"\nuniform vec2 image_size;\nuniform int method; // 0=subdivide, 1=impostor\nuniform sampler2D u_texture;\nvarying vec2 v_texcoord;\n\nvec4 map_local_to_tex(vec4 x) {\n // Cast ray from 3D viewport to surface of image\n // (if $transform does not affect z values, then this\n // can be optimized as simply $transform.map(x) )\n vec4 p1 = $transform(x);\n vec4 p2 = $transform(x + vec4(0, 0, 0.5, 0));\n p1 /= p1.w;\n p2 /= p2.w;\n vec4 d = p2 - p1;\n float f = p2.z / d.z;\n vec4 p3 = p2 - d * f;\n\n // finally map local to texture coords\n return vec4(p3.xy / image_size, 0, 1);\n}\n\n\nvoid main()\n{\n vec2 texcoord;\n if( method == 0 ) {\n texcoord = v_texcoord;\n }\n else {\n // vertex shader ouptuts clip coordinates;\n // fragment shader maps to texture coordinates\n texcoord = map_local_to_tex(vec4(v_texcoord, 0, 1)).xy;\n }\n\n gl_FragColor = $color_transform($get_data(texcoord));\n}\n\"\"\" # noqa\n\n_interpolation_template = \"\"\"\n #include \"misc/spatial-filters.frag\"\n vec4 texture_lookup_filtered(vec2 texcoord) {\n if(texcoord.x < 0.0 || texcoord.x > 1.0 ||\n texcoord.y < 0.0 || texcoord.y > 1.0) {\n discard;\n }\n return %s($texture, $shape, texcoord);\n }\"\"\"\n\n_texture_lookup = \"\"\"\n vec4 texture_lookup(vec2 texcoord) {\n if(texcoord.x < 0.0 || texcoord.x > 1.0 ||\n texcoord.y < 0.0 || texcoord.y > 1.0) {\n discard;\n }\n return texture2D($texture, texcoord);\n }\"\"\"\n\n\n_null_color_transform = 'vec4 pass(vec4 color) { return color; }'\n_c2l = 'float cmap(vec4 color) { return (color.r + color.g + color.b) / 3.; }'\n\n\ndef _build_color_transform(data, cmap):\n if data.ndim == 2 or data.shape[2] == 1:\n fun = FunctionChain(None, [Function(_c2l), Function(cmap.glsl_map)])\n else:\n fun = Function(_null_color_transform)\n return fun\n\n\nclass ImageVisual(Visual):\n \"\"\"Visual subclass displaying an image.\n\n Parameters\n ----------\n data : ndarray\n ImageVisual data. Can be shape (M, N), (M, N, 3), or (M, N, 4).\n method : str\n Selects method of rendering image in case of non-linear transforms.\n Each method produces similar results, but may trade efficiency\n and accuracy. If the transform is linear, this parameter is ignored\n and a single quad is drawn around the area of the image.\n\n * 'auto': Automatically select 'impostor' if the image is drawn\n with a nonlinear transform; otherwise select 'subdivide'.\n * 'subdivide': ImageVisual is represented as a grid of triangles\n with texture coordinates linearly mapped.\n * 'impostor': ImageVisual is represented as a quad covering the\n entire view, with texture coordinates determined by the\n transform. 
This produces the best transformation results, but may\n be slow.\n\n grid: tuple (rows, cols)\n If method='subdivide', this tuple determines the number of rows and\n columns in the image grid.\n cmap : str | ColorMap\n Colormap to use for luminance images.\n clim : str | tuple\n Limits to use for the colormap. Can be 'auto' to auto-set bounds to\n the min and max of the data.\n interpolation : str\n Selects method of image interpolation. Makes use of the two Texture2D\n interpolation methods and the available interpolation methods defined\n in vispy/gloo/glsl/misc/spatial_filters.frag\n\n * 'nearest': Default, uses 'nearest' with Texture2D interpolation.\n * 'bilinear': uses 'linear' with Texture2D interpolation.\n * 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', 'bicubic',\n 'catrom', 'mitchell', 'spline16', 'spline36', 'gaussian',\n 'bessel', 'sinc', 'lanczos', 'blackman'\n\n **kwargs : dict\n Keyword arguments to pass to `Visual`.\n\n Notes\n -----\n The colormap functionality through ``cmap`` and ``clim`` are only used\n if the data are 2D.\n \"\"\"\n def __init__(self, data=None, method='auto', grid=(1, 1),\n cmap='viridis', clim='auto',\n interpolation='nearest', **kwargs):\n self._data = None\n\n # load 'float packed rgba8' interpolation kernel\n # to load float interpolation kernel use\n # `load_spatial_filters(packed=False)`\n kernel, self._interpolation_names = load_spatial_filters()\n\n self._kerneltex = Texture2D(kernel, interpolation='nearest')\n # The unpacking can be debugged by changing \"spatial-filters.frag\"\n # to have the \"unpack\" function just return the .r component. That\n # combined with using the below as the _kerneltex allows debugging\n # of the pipeline\n # self._kerneltex = Texture2D(kernel, interpolation='linear',\n # internalformat='r32f')\n\n # create interpolation shader functions for available\n # interpolations\n fun = [Function(_interpolation_template % n)\n for n in self._interpolation_names]\n self._interpolation_names = [n.lower()\n for n in self._interpolation_names]\n\n self._interpolation_fun = dict(zip(self._interpolation_names, fun))\n self._interpolation_names.sort()\n self._interpolation_names = tuple(self._interpolation_names)\n\n # overwrite \"nearest\" and \"bilinear\" spatial-filters\n # with \"hardware\" interpolation _data_lookup_fn\n self._interpolation_fun['nearest'] = Function(_texture_lookup)\n self._interpolation_fun['bilinear'] = Function(_texture_lookup)\n\n if interpolation not in self._interpolation_names:\n raise ValueError(\"interpolation must be one of %s\" %\n ', '.join(self._interpolation_names))\n\n self._interpolation = interpolation\n\n # check texture interpolation\n if self._interpolation == 'bilinear':\n texture_interpolation = 'linear'\n else:\n texture_interpolation = 'nearest'\n\n self._method = method\n self._grid = grid\n self._need_texture_upload = True\n self._need_vertex_update = True\n self._need_colortransform_update = True\n self._need_interpolation_update = True\n self._texture = Texture2D(np.zeros((1, 1, 4)),\n interpolation=texture_interpolation)\n self._subdiv_position = VertexBuffer()\n self._subdiv_texcoord = VertexBuffer()\n\n # impostor quad covers entire viewport\n vertices = np.array([[-1, -1], [1, -1], [1, 1],\n [-1, -1], [1, 1], [-1, 1]],\n dtype=np.float32)\n self._impostor_coords = VertexBuffer(vertices)\n self._null_tr = NullTransform()\n\n self._init_view(self)\n super(ImageVisual, self).__init__(vcode=VERT_SHADER, fcode=FRAG_SHADER)\n self.set_gl_state('translucent', 
cull_face=False)\n self._draw_mode = 'triangles'\n\n # define _data_lookup_fn as None, will be setup in\n # self._build_interpolation()\n self._data_lookup_fn = None\n\n self.clim = clim\n self.cmap = cmap\n if data is not None:\n self.set_data(data)\n self.freeze()\n\n def set_data(self, image):\n \"\"\"Set the data\n\n Parameters\n ----------\n image : array-like\n The image data.\n \"\"\"\n data = np.asarray(image)\n if self._data is None or self._data.shape != data.shape:\n self._need_vertex_update = True\n self._data = data\n self._need_texture_upload = True\n\n def view(self):\n v = Visual.view(self)\n self._init_view(v)\n return v\n\n def _init_view(self, view):\n # Store some extra variables per-view\n view._need_method_update = True\n view._method_used = None\n\n @property\n def clim(self):\n return (self._clim if isinstance(self._clim, string_types) else\n tuple(self._clim))\n\n @clim.setter\n def clim(self, clim):\n if isinstance(clim, string_types):\n if clim != 'auto':\n raise ValueError('clim must be \"auto\" if a string')\n else:\n clim = np.array(clim, float)\n if clim.shape != (2,):\n raise ValueError('clim must have two elements')\n self._clim = clim\n self._need_texture_upload = True\n self.update()\n\n @property\n def cmap(self):\n return self._cmap\n\n @cmap.setter\n def cmap(self, cmap):\n self._cmap = get_colormap(cmap)\n self._need_colortransform_update = True\n self.update()\n\n @property\n def method(self):\n return self._method\n\n @method.setter\n def method(self, m):\n if self._method != m:\n self._method = m\n self._need_vertex_update = True\n self.update()\n\n @property\n def size(self):\n return self._data.shape[:2][::-1]\n\n @property\n def interpolation(self):\n return self._interpolation\n\n @interpolation.setter\n def interpolation(self, i):\n if i not in self._interpolation_names:\n raise ValueError(\"interpolation must be one of %s\" %\n ', '.join(self._interpolation_names))\n if self._interpolation != i:\n self._interpolation = i\n self._need_interpolation_update = True\n self.update()\n\n @property\n def interpolation_functions(self):\n return self._interpolation_names\n\n # The interpolation code could be transferred to a dedicated filter\n # function in visuals/filters as discussed in #1051\n def _build_interpolation(self):\n \"\"\"Rebuild the _data_lookup_fn using different interpolations within\n the shader\n \"\"\"\n interpolation = self._interpolation\n self._data_lookup_fn = self._interpolation_fun[interpolation]\n self.shared_program.frag['get_data'] = self._data_lookup_fn\n\n # only 'bilinear' uses 'linear' texture interpolation\n if interpolation == 'bilinear':\n texture_interpolation = 'linear'\n else:\n # 'nearest' (and also 'bilinear') doesn't use spatial_filters.frag\n # so u_kernel and shape setting is skipped\n texture_interpolation = 'nearest'\n if interpolation != 'nearest':\n self.shared_program['u_kernel'] = self._kerneltex\n self._data_lookup_fn['shape'] = self._data.shape[:2][::-1]\n\n if self._texture.interpolation != texture_interpolation:\n self._texture.interpolation = texture_interpolation\n\n self._data_lookup_fn['texture'] = self._texture\n\n self._need_interpolation_update = False\n\n def _build_vertex_data(self):\n \"\"\"Rebuild the vertex buffers used for rendering the image when using\n the subdivide method.\n \"\"\"\n grid = self._grid\n w = 1.0 / grid[1]\n h = 1.0 / grid[0]\n\n quad = np.array([[0, 0, 0], [w, 0, 0], [w, h, 0],\n [0, 0, 0], [w, h, 0], [0, h, 0]],\n dtype=np.float32)\n quads = np.empty((grid[1], 
grid[0], 6, 3), dtype=np.float32)\n quads[:] = quad\n\n mgrid = np.mgrid[0.:grid[1], 0.:grid[0]].transpose(1, 2, 0)\n mgrid = mgrid[:, :, np.newaxis, :]\n mgrid[..., 0] *= w\n mgrid[..., 1] *= h\n\n quads[..., :2] += mgrid\n tex_coords = quads.reshape(grid[1]*grid[0]*6, 3)\n tex_coords = np.ascontiguousarray(tex_coords[:, :2])\n vertices = tex_coords * self.size\n\n self._subdiv_position.set_data(vertices.astype('float32'))\n self._subdiv_texcoord.set_data(tex_coords.astype('float32'))\n\n def _update_method(self, view):\n \"\"\"Decide which method to use for *view* and configure it accordingly.\n \"\"\"\n method = self._method\n if method == 'auto':\n if view.transforms.get_transform().Linear:\n method = 'subdivide'\n else:\n method = 'impostor'\n view._method_used = method\n\n if method == 'subdivide':\n view.view_program['method'] = 0\n view.view_program['a_position'] = self._subdiv_position\n view.view_program['a_texcoord'] = self._subdiv_texcoord\n elif method == 'impostor':\n view.view_program['method'] = 1\n view.view_program['a_position'] = self._impostor_coords\n view.view_program['a_texcoord'] = self._impostor_coords\n else:\n raise ValueError(\"Unknown image draw method '%s'\" % method)\n\n self.shared_program['image_size'] = self.size\n view._need_method_update = False\n self._prepare_transforms(view)\n\n def _build_texture(self):\n data = self._data\n if data.dtype == np.float64:\n data = data.astype(np.float32)\n\n if data.ndim == 2 or data.shape[2] == 1:\n # deal with clim on CPU b/c of texture depth limits :(\n # can eventually do this by simulating 32-bit float... maybe\n clim = self._clim\n if isinstance(clim, string_types) and clim == 'auto':\n clim = np.min(data), np.max(data)\n clim = np.asarray(clim, dtype=np.float32)\n data = data - clim[0] # not inplace so we don't modify orig data\n if clim[1] - clim[0] > 0:\n data /= clim[1] - clim[0]\n else:\n data[:] = 1 if data[0, 0] != 0 else 0\n self._clim = np.array(clim)\n\n self._texture.set_data(data)\n self._need_texture_upload = False\n\n def _compute_bounds(self, axis, view):\n if axis > 1:\n return (0, 0)\n else:\n return (0, self.size[axis])\n\n def _prepare_transforms(self, view):\n trs = view.transforms\n prg = view.view_program\n method = view._method_used\n if method == 'subdivide':\n prg.vert['transform'] = trs.get_transform()\n prg.frag['transform'] = self._null_tr\n else:\n prg.vert['transform'] = self._null_tr\n prg.frag['transform'] = trs.get_transform().inverse\n\n def _prepare_draw(self, view):\n if self._data is None:\n return False\n\n if self._need_interpolation_update:\n self._build_interpolation()\n\n if self._need_texture_upload:\n self._build_texture()\n\n if self._need_colortransform_update:\n self.shared_program.frag['color_transform'] = \\\n _build_color_transform(self._data, self.cmap)\n self._need_colortransform_update = False\n\n if self._need_vertex_update:\n self._build_vertex_data()\n\n if view._need_method_update:\n self._update_method(view)\n" ]
[ [ "numpy.max", "numpy.array", "numpy.empty", "numpy.asarray", "numpy.zeros", "numpy.ascontiguousarray", "numpy.min" ] ]
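The `_build_texture` method in the record above scales image data by its contrast limits ('clim') on the CPU before uploading to the texture. Below is a minimal NumPy sketch of that normalization as a standalone helper, purely for illustration; the name `normalize_clim` is ours and is not part of the original module.

import numpy as np

def normalize_clim(data, clim='auto'):
    # 'auto' derives the limits from the data itself, mirroring the record's logic
    data = np.asarray(data, dtype=np.float32)
    if isinstance(clim, str):
        if clim != 'auto':
            raise ValueError('clim must be "auto" if a string')
        clim = (np.min(data), np.max(data))
    out = data - float(clim[0])  # copy, so the caller's array is untouched
    if clim[1] - clim[0] > 0:
        out /= float(clim[1]) - float(clim[0])
    return out

print(normalize_clim([[0.0, 5.0], [10.0, 2.5]]))  # -> values scaled into [0, 1]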
Yuriowindiatmoko2401/Xpersona
[ "3f5ccb183e3cfa755dea2dd2afd9abbf1a0f93b8" ]
[ "multilingual/transformers/file_utils.py" ]
[ "\"\"\"\nUtilities for working with the local dataset cache.\nThis file is adapted from the AllenNLP library at https://github.com/allenai/allennlp\nCopyright by the AllenNLP authors.\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function, unicode_literals)\n\nimport sys\nimport json\nimport logging\nimport os\nimport six\nimport shutil\nimport tempfile\nimport fnmatch\nfrom functools import wraps\nfrom hashlib import sha256\nfrom io import open\n\nimport boto3\nfrom botocore.config import Config\nfrom botocore.exceptions import ClientError\nimport requests\nfrom tqdm.auto import tqdm\nfrom contextlib import contextmanager\n\nlogger = logging.getLogger(__name__) # pylint: disable=invalid-name\n\ntry:\n import tensorflow as tf\n assert hasattr(tf, '__version__') and int(tf.__version__[0]) >= 2\n _tf_available = True # pylint: disable=invalid-name\n logger.info(\"TensorFlow version {} available.\".format(tf.__version__))\nexcept (ImportError, AssertionError):\n _tf_available = False # pylint: disable=invalid-name\n\ntry:\n import torch\n _torch_available = True # pylint: disable=invalid-name\n logger.info(\"PyTorch version {} available.\".format(torch.__version__))\nexcept ImportError:\n _torch_available = False # pylint: disable=invalid-name\n\n\ntry:\n from torch.hub import _get_torch_home\n torch_cache_home = _get_torch_home()\nexcept ImportError:\n torch_cache_home = os.path.expanduser(\n os.getenv('TORCH_HOME', os.path.join(\n os.getenv('XDG_CACHE_HOME', '~/.cache'), 'torch')))\ndefault_cache_path = os.path.join(torch_cache_home, 'transformers')\n\ntry:\n from urllib.parse import urlparse\nexcept ImportError:\n from urlparse import urlparse\n\ntry:\n from pathlib import Path\n PYTORCH_PRETRAINED_BERT_CACHE = Path(\n os.getenv('PYTORCH_TRANSFORMERS_CACHE', os.getenv('PYTORCH_PRETRAINED_BERT_CACHE', default_cache_path)))\nexcept (AttributeError, ImportError):\n PYTORCH_PRETRAINED_BERT_CACHE = os.getenv('PYTORCH_TRANSFORMERS_CACHE',\n os.getenv('PYTORCH_PRETRAINED_BERT_CACHE',\n default_cache_path))\n\nPYTORCH_TRANSFORMERS_CACHE = PYTORCH_PRETRAINED_BERT_CACHE # Kept for backward compatibility\nTRANSFORMERS_CACHE = PYTORCH_PRETRAINED_BERT_CACHE # Kept for backward compatibility\n\nWEIGHTS_NAME = \"pytorch_model.bin\"\nTF2_WEIGHTS_NAME = 'tf_model.h5'\nTF_WEIGHTS_NAME = 'model.ckpt'\nCONFIG_NAME = \"config.json\"\n\n\nDUMMY_INPUTS = [[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]\nDUMMY_MASK = [[1, 1, 1, 1, 1], [1, 1, 1, 0, 0], [0, 0, 0, 1, 1]]\n\nS3_BUCKET_PREFIX = \"https://s3.amazonaws.com/models.huggingface.co/bert\"\n\n\ndef is_torch_available():\n return _torch_available\n\ndef is_tf_available():\n return _tf_available\n\nif not six.PY2:\n def add_start_docstrings(*docstr):\n def docstring_decorator(fn):\n fn.__doc__ = ''.join(docstr) + fn.__doc__\n return fn\n return docstring_decorator\n\n def add_end_docstrings(*docstr):\n def docstring_decorator(fn):\n fn.__doc__ = fn.__doc__ + ''.join(docstr)\n return fn\n return docstring_decorator\nelse:\n # Not possible to update class docstrings on python2\n def add_start_docstrings(*docstr):\n def docstring_decorator(fn):\n return fn\n return docstring_decorator\n\n def add_end_docstrings(*docstr):\n def docstring_decorator(fn):\n return fn\n return docstring_decorator\n\n\ndef is_remote_url(url_or_filename):\n parsed = urlparse(url_or_filename)\n return parsed.scheme in ('http', 'https', 's3')\n\ndef hf_bucket_url(identifier, postfix=None):\n if postfix is None:\n return \"/\".join((S3_BUCKET_PREFIX, identifier))\n 
else:\n        return \"/\".join((S3_BUCKET_PREFIX, identifier, postfix))\n\n\ndef url_to_filename(url, etag=None):\n    \"\"\"\n    Convert `url` into a hashed filename in a repeatable way.\n    If `etag` is specified, append its hash to the url's, delimited\n    by a period.\n    If the url ends with .h5 (Keras HDF5 weights) adds '.h5' to the name\n    so that TF 2.0 can identify it as a HDF5 file\n    (see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1380)\n    \"\"\"\n    url_bytes = url.encode('utf-8')\n    url_hash = sha256(url_bytes)\n    filename = url_hash.hexdigest()\n\n    if etag:\n        etag_bytes = etag.encode('utf-8')\n        etag_hash = sha256(etag_bytes)\n        filename += '.' + etag_hash.hexdigest()\n\n    if url.endswith('.h5'):\n        filename += '.h5'\n\n    return filename\n\n\ndef filename_to_url(filename, cache_dir=None):\n    \"\"\"\n    Return the url and etag (which may be ``None``) stored for `filename`.\n    Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist.\n    \"\"\"\n    if cache_dir is None:\n        cache_dir = TRANSFORMERS_CACHE\n    if sys.version_info[0] == 3 and isinstance(cache_dir, Path):\n        cache_dir = str(cache_dir)\n\n    cache_path = os.path.join(cache_dir, filename)\n    if not os.path.exists(cache_path):\n        raise EnvironmentError(\"file {} not found\".format(cache_path))\n\n    meta_path = cache_path + '.json'\n    if not os.path.exists(meta_path):\n        raise EnvironmentError(\"file {} not found\".format(meta_path))\n\n    with open(meta_path, encoding=\"utf-8\") as meta_file:\n        metadata = json.load(meta_file)\n    url = metadata['url']\n    etag = metadata['etag']\n\n    return url, etag\n\n\ndef cached_path(url_or_filename, cache_dir=None, force_download=False, proxies=None, resume_download=False):\n    \"\"\"\n    Given something that might be a URL (or might be a local path),\n    determine which. If it's a URL, download the file and cache it, and\n    return the path to the cached file. 
If it's already a local path,\n    make sure the file exists and then return the path.\n    Args:\n        cache_dir: specify a cache directory to save the file to (overwrite the default cache dir).\n        force_download: if True, re-download the file even if it's already cached in the cache dir.\n        resume_download: if True, resume the download if an incompletely received file is found.\n    \"\"\"\n    if cache_dir is None:\n        cache_dir = TRANSFORMERS_CACHE\n    if sys.version_info[0] == 3 and isinstance(url_or_filename, Path):\n        url_or_filename = str(url_or_filename)\n    if sys.version_info[0] == 3 and isinstance(cache_dir, Path):\n        cache_dir = str(cache_dir)\n\n    if is_remote_url(url_or_filename):\n        # URL, so get it from the cache (downloading if necessary)\n        return get_from_cache(url_or_filename, cache_dir=cache_dir,\n                              force_download=force_download, proxies=proxies,\n                              resume_download=resume_download)\n    elif os.path.exists(url_or_filename):\n        # File, and it exists.\n        return url_or_filename\n    elif urlparse(url_or_filename).scheme == '':\n        # File, but it doesn't exist.\n        raise EnvironmentError(\"file {} not found\".format(url_or_filename))\n    else:\n        # Something unknown\n        raise ValueError(\"unable to parse {} as a URL or as a local path\".format(url_or_filename))\n\n\ndef split_s3_path(url):\n    \"\"\"Split a full s3 path into the bucket name and path.\"\"\"\n    parsed = urlparse(url)\n    if not parsed.netloc or not parsed.path:\n        raise ValueError(\"bad s3 path {}\".format(url))\n    bucket_name = parsed.netloc\n    s3_path = parsed.path\n    # Remove '/' at beginning of path.\n    if s3_path.startswith(\"/\"):\n        s3_path = s3_path[1:]\n    return bucket_name, s3_path\n\n\ndef s3_request(func):\n    \"\"\"\n    Wrapper function for s3 requests in order to create more helpful error\n    messages.\n    \"\"\"\n\n    @wraps(func)\n    def wrapper(url, *args, **kwargs):\n        try:\n            return func(url, *args, **kwargs)\n        except ClientError as exc:\n            if int(exc.response[\"Error\"][\"Code\"]) == 404:\n                raise EnvironmentError(\"file {} not found\".format(url))\n            else:\n                raise\n\n    return wrapper\n\n\n@s3_request\ndef s3_etag(url, proxies=None):\n    \"\"\"Check ETag on S3 object.\"\"\"\n    s3_resource = boto3.resource(\"s3\", config=Config(proxies=proxies))\n    bucket_name, s3_path = split_s3_path(url)\n    s3_object = s3_resource.Object(bucket_name, s3_path)\n    return s3_object.e_tag\n\n\n@s3_request\ndef s3_get(url, temp_file, proxies=None):\n    \"\"\"Pull a file directly from S3.\"\"\"\n    s3_resource = boto3.resource(\"s3\", config=Config(proxies=proxies))\n    bucket_name, s3_path = split_s3_path(url)\n    s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file)\n\n\ndef http_get(url, temp_file, proxies=None, resume_size=0):\n    headers={'Range':'bytes=%d-'%(resume_size,)} if resume_size > 0 else None\n    response = requests.get(url, stream=True, proxies=proxies, headers=headers)\n    if response.status_code == 416: # Range not satisfiable\n        return\n    content_length = response.headers.get('Content-Length')\n    total = resume_size + int(content_length) if content_length is not None else None\n    progress = tqdm(unit=\"B\", unit_scale=True, total=total, initial=resume_size, desc=\"Downloading\")\n    for chunk in response.iter_content(chunk_size=1024):\n        if chunk: # filter out keep-alive new chunks\n            progress.update(len(chunk))\n            temp_file.write(chunk)\n    progress.close()\n\n\ndef get_from_cache(url, cache_dir=None, force_download=False, proxies=None, etag_timeout=10, resume_download=False):\n    \"\"\"\n    Given a URL, look for the corresponding dataset in the local cache.\n    If it's not there, download 
it. Then return the path to the cached file.\n \"\"\"\n if cache_dir is None:\n cache_dir = TRANSFORMERS_CACHE\n if sys.version_info[0] == 3 and isinstance(cache_dir, Path):\n cache_dir = str(cache_dir)\n if sys.version_info[0] == 2 and not isinstance(cache_dir, str):\n cache_dir = str(cache_dir)\n\n if not os.path.exists(cache_dir):\n os.makedirs(cache_dir)\n\n # Get eTag to add to filename, if it exists.\n if url.startswith(\"s3://\"):\n etag = s3_etag(url, proxies=proxies)\n else:\n try:\n response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)\n if response.status_code != 200:\n etag = None\n else:\n etag = response.headers.get(\"ETag\")\n except (EnvironmentError, requests.exceptions.Timeout):\n etag = None\n\n if sys.version_info[0] == 2 and etag is not None:\n etag = etag.decode('utf-8')\n filename = url_to_filename(url, etag)\n\n # get cache path to put the file\n cache_path = os.path.join(cache_dir, filename)\n\n # If we don't have a connection (etag is None) and can't identify the file\n # try to get the last downloaded one\n if not os.path.exists(cache_path) and etag is None:\n matching_files = fnmatch.filter(os.listdir(cache_dir), filename + '.*')\n matching_files = list(filter(lambda s: not s.endswith('.json'), matching_files))\n if matching_files:\n cache_path = os.path.join(cache_dir, matching_files[-1])\n\n if resume_download:\n incomplete_path = cache_path + '.incomplete'\n @contextmanager\n def _resumable_file_manager():\n with open(incomplete_path,'a+b') as f:\n yield f\n os.remove(incomplete_path)\n temp_file_manager = _resumable_file_manager\n if os.path.exists(incomplete_path):\n resume_size = os.stat(incomplete_path).st_size\n else:\n resume_size = 0\n else:\n temp_file_manager = tempfile.NamedTemporaryFile\n resume_size = 0\n\n if not os.path.exists(cache_path) or force_download:\n # Download to temporary file, then copy to cache dir once finished.\n # Otherwise you get corrupt cache entries if the download gets interrupted.\n with temp_file_manager() as temp_file:\n logger.info(\"%s not found in cache or force_download set to True, downloading to %s\", url, temp_file.name)\n\n # GET file object\n if url.startswith(\"s3://\"):\n if resume_download:\n logger.warn('Warning: resumable downloads are not implemented for \"s3://\" urls')\n s3_get(url, temp_file, proxies=proxies)\n else:\n http_get(url, temp_file, proxies=proxies, resume_size=resume_size)\n\n # we are copying the file before closing it, so flush to avoid truncation\n temp_file.flush()\n # shutil.copyfileobj() starts at the current position, so go to the start\n temp_file.seek(0)\n\n logger.info(\"copying %s to cache at %s\", temp_file.name, cache_path)\n with open(cache_path, 'wb') as cache_file:\n shutil.copyfileobj(temp_file, cache_file)\n\n logger.info(\"creating metadata file for %s\", cache_path)\n meta = {'url': url, 'etag': etag}\n meta_path = cache_path + '.json'\n with open(meta_path, 'w') as meta_file:\n output_string = json.dumps(meta)\n if sys.version_info[0] == 2 and isinstance(output_string, str):\n output_string = unicode(output_string, 'utf-8') # The beauty of python 2\n meta_file.write(output_string)\n\n logger.info(\"removing temp file %s\", temp_file.name)\n\n return cache_path\n" ]
[ [ "torch.hub._get_torch_home" ] ]
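The `url_to_filename` helper in the record above derives a cache filename by SHA-256 hashing the URL, with a period-delimited hash of the ETag appended when one is available. A self-contained sketch of the same scheme; the demo URL and ETag are illustrative only.

from hashlib import sha256

def hashed_cache_name(url, etag=None):
    # sha256 of the URL, plus a period-delimited sha256 of the ETag if given,
    # mirroring url_to_filename in the record above
    name = sha256(url.encode('utf-8')).hexdigest()
    if etag:
        name += '.' + sha256(etag.encode('utf-8')).hexdigest()
    if url.endswith('.h5'):
        name += '.h5'  # keep the HDF5 suffix so TF 2.0 can recognize the file
    return name

print(hashed_cache_name('https://example.com/model.h5', etag='"abc123"'))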
rezaghoddoosian/Hierarchical-Task-Modeling
[ "b338b21547de11880d4345cb2bfdc5b44a37299d" ]
[ "train.py" ]
[ "import torch\r\nimport torch.nn.functional as F\r\nimport torch.optim as optim\r\nfrom video_dataset import Dataset\r\nfrom tensorboard_logger import log_value\r\nimport utils\r\nimport numpy as np\r\nfrom torch.autograd import Variable\r\nimport time\r\ntorch.set_default_tensor_type('torch.cuda.FloatTensor')\r\n\r\n\r\ndef compute_loss(itr,W,SHS_dish_logits_list,THS_dish_logits_list,final_logits,labels,dish_labels):\r\n ################\r\n dish_logits1,dish_logits2,dish_logits3,dish_logits_total=THS_dish_logits_list\r\n psi_a_logits,psi_c_logits=SHS_dish_logits_list\r\n attribute_loss=-torch.mean(torch.sum(torch.Tensor(W) * (Variable(labels) * F.log_softmax(psi_a_logits, dim=1)), dim=1),dim=0)\r\n dish_loss=-torch.mean(torch.sum(Variable(dish_labels) * F.log_softmax(psi_c_logits, dim=1), dim=1), dim=0)\r\n L_sh=(0.9*attribute_loss+0.1*dish_loss)\r\n ################\r\n dish_stage1 = -torch.mean(torch.sum(Variable(dish_labels) * F.log_softmax(dish_logits1, dim=1), dim=1), dim=0)\r\n dish_stage2 = -torch.mean(torch.sum(Variable(dish_labels) * F.log_softmax(dish_logits2, dim=1), dim=1), dim=0)\r\n dish_stage3 = -torch.mean(torch.sum(Variable(dish_labels) * F.log_softmax(dish_logits3, dim=1), dim=1), dim=0)\r\n dish_milloss_total = -torch.mean(torch.sum(Variable(dish_labels) * F.log_softmax(dish_logits_total, dim=1), dim=1), dim=0)\r\n L_th=dish_stage1 + dish_stage2 + dish_stage3 + dish_milloss_total\r\n ################\r\n L_fused = -torch.mean(torch.sum(Variable(dish_labels) * F.log_softmax(final_logits, dim=1), dim=1),dim=0)\r\n\r\n return L_sh+0.25*L_th+L_fused\r\n\r\n\r\n\r\n\r\ndef train(itr, dataset, args, model, optimizer, device,s=8):\r\n\r\n features, labels,dish_labels,W_tfidf,filenames = dataset.load_data(is_training=True,Wiftdf=model.W)\r\n seq_len = np.sum(np.max(np.abs(features), axis=2) > 0, axis=1)\r\n features = features[:,:np.max(seq_len),:]\r\n features = torch.from_numpy(features).float().to(device)\r\n labels = torch.from_numpy(labels).float().to(device)\r\n dish_labels = torch.from_numpy(dish_labels).float().to(device)\r\n SHS_logits,THS_logits,final_logits = model(itr,filenames,Variable(features),device,s,seq_len)\r\n total_loss = compute_loss(itr, W_tfidf, SHS_logits,THS_logits,final_logits, labels,dish_labels)\r\n if itr%100==0:\r\n print('Iteration: %d, Loss: %.3f' %(itr, total_loss.data.cpu().numpy()))\r\n optimizer.zero_grad()\r\n total_loss.backward()\r\n optimizer.step()\r\n\r\n\r\n\r\n\r\n" ]
[ [ "numpy.max", "torch.autograd.Variable", "torch.set_default_tensor_type", "torch.nn.functional.log_softmax", "torch.from_numpy", "numpy.abs", "torch.Tensor" ] ]
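Every term in `compute_loss` above follows a single pattern: cross-entropy between `log_softmax` outputs and soft (optionally weighted) target distributions. A minimal standalone sketch of that pattern; the function name and the random demo tensors are ours, not part of the original repository.

import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, targets, weights=None):
    # -mean over the batch of sum_c targets_c * log p_c; 'weights' plays the
    # role of the W term applied to the attribute loss in the record above
    logp = F.log_softmax(logits, dim=1)
    if weights is not None:
        logp = weights * logp
    return -torch.mean(torch.sum(targets * logp, dim=1), dim=0)

logits = torch.randn(4, 10)
targets = torch.zeros(4, 10)
targets[torch.arange(4), torch.randint(0, 10, (4,))] = 1.0  # one-hot demo targets
print(soft_cross_entropy(logits, targets))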
v-liuwei/USTC-2021Spring-Introduction_to_Deep_Learning_labs-
[ "92815244520001f01deed334d066a4e087f7a959" ]
[ "exp3/src/utils.py" ]
[ "import torch\nimport random\nimport os\nimport numpy as np\n\n\ndef binary_acc(preds, label):\n preds = torch.round(preds)\n correct = torch.eq(preds, label).float()\n acc = correct.sum() / correct.shape[0]\n return acc\n\n\ndef set_seed(seed=123):\n random.seed(seed)\n np.random.seed(seed)\n os.environ[\"PYTHONHASHSEED\"] = str(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n # torch.use_deterministic_algorithms(True)\n # torch.backends.cudnn.enabled = False\n torch.backends.cudnn.benchmark = False\n torch.backends.cudnn.deterministic = True\n os.environ[\"CUBLAS_WORKSPACE_CONFIG\"] = \":4096:2\"\n" ]
[ [ "torch.round", "torch.cuda.manual_seed_all", "torch.eq", "numpy.random.seed", "torch.manual_seed" ] ]
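A short, self-contained demo of the accuracy helper above (redefined inline so the snippet runs on its own); the sample logits and labels are arbitrary.

import torch

def binary_acc(preds, label):  # same logic as in the record above
    preds = torch.round(preds)
    correct = torch.eq(preds, label).float()
    return correct.sum() / correct.shape[0]

preds = torch.sigmoid(torch.tensor([2.0, -1.0, 0.3, -0.2]))
label = torch.tensor([1.0, 0.0, 0.0, 0.0])
print(binary_acc(preds, label))  # sigmoid(0.3) rounds to 1 vs label 0 -> 0.75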
Atharva-Phatak/Is-That-A-Melanoma
[ "2327f05d0268cacab513890a8d7c4e6219bf54c7" ]
[ "dataset.py" ]
[ "import torch\nimport torch.nn as nn\n\nfrom PIL import Image\n\nclass MelanomaDataset(torch.utils.data.Dataset):\n\n    '''This class retrieves the image and corresponding label for the image.\n    Args:\n        df : The dataframe which contains the image name and the label.\n        path : The path where all the images are saved.\n        transformations : The augmentations which you want to perform on the Image.\n        is_train : Whether the data is training data or testing data \n\n    Returns: An image tensor and the corresponding label tensor. \n    '''\n\n    def __init__(self , df , path , transformations = None , is_train = False):\n\n        super(MelanomaDataset , self).__init__()\n\n        self.df = df\n        self.path = path\n        self.transformations = transformations\n        self.is_train = is_train\n\n    def __len__(self):\n\n        return self.df.shape[0]  # shape[0] is already the row count; len() of an int would raise\n\n    def __getitem__(self , idx):\n\n        im_path = self.path + '/' + self.df['image_name'][idx] + '.png'\n        img = Image.open(im_path)\n\n        if self.transformations is not None:\n\n            img = self.transformations(img)\n        \n        if self.is_train:\n\n            target = self.df.iloc[idx]['target']\n            return img , torch.tensor(target , dtype = torch.long)\n        \n        else:\n\n            return img" ]
[ [ "torch.tensor" ] ]
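A hypothetical wiring sketch for the `MelanomaDataset` above. The CSV path, image directory, and transform choices are placeholders, and the dataframe is assumed to carry 'image_name' and 'target' columns with a default integer index (which `__getitem__` relies on).

import pandas as pd
import torch
from torchvision import transforms

df = pd.read_csv('train.csv').reset_index(drop=True)  # placeholder path
tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
ds = MelanomaDataset(df, path='./images', transformations=tfms, is_train=True)
loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)
imgs, targets = next(iter(loader))  # imgs: (32, 3, 224, 224), targets: (32,)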
MCW9/ImageForensics
[ "0e1fc7b992832b222a57ecfb2446fc6cf159baad" ]
[ "train.py" ]
[ "# Marcelo Cicconet, Oct 2017\n\nfrom scipy import misc\nimport numpy as np\nimport tensorflow as tf\nimport random\nimport matplotlib.pyplot as plt\nimport sys\nfrom image_distortions import *\nimport time\n\nwith tf.device('/gpu:0'):\n # --------------------------------------------------\n print('setup parameters')\n # --------------------------------------------------\n\n ntrain = 4000\n nvalid = 500\n ntest = 715\n imsize = 256\n imcropsize = 128\n nchannels = 1\n vsize = imsize*imsize # vector size\n batchsize = 256 # ntrain should be >= 2*batchsize\n\n nsteps = 1000 # n training steps; set to 0 and restorevariables (below) to true if you just want to perform test\n restorevariables = True # set to false to train from scratch; true to pickup training from last run\n\n impath = '/home/mc457/Images/SynthExamplesSelectedNoPlotsShuffle' # where's the data\n modelpath = '/home/mc457/Workspace/TFModel/SiamRelease.ckpt' # where to save model\n trainlogpath = '/home/mc457/Workspace/TFLog/SiamRelease/Train' # where to save train log for tensorboard\n validlogpath = '/home/mc457/Workspace/TFLog/SiamRelease/Valid' # where to save validation log for tensorboard\n testimoutpath = '/home/mc457/Workspace/Test' # where to save test images output results\n testplotoutpath = '/home/mc457/Workspace' # where to save plot with summary of test results\n\n # see Models.py for model hyperparameters setup\n\n # --------------------------------------------------\n print('setup data')\n # --------------------------------------------------\n\n # random similarity transform parameters\n rotrange = [-45,45]\n sclrange = [75,125] # in percent\n tlxrange = [-20,20]\n tlyrange = [-20,20]\n # random perspective transform parameter\n drange = 20\n # random histogram transform parameter\n gammarange = [75,125] # actual gamma is this/100\n\n row0 = int((imsize-imcropsize)/2)\n col0 = row0\n\n def imcrop(im):\n return im[row0:row0+imcropsize,col0:col0+imcropsize]\n\n def imdeformandcrop(im):\n # imout = imrandtform(imrandpptf(imrandreflection(im),drange),rotrange,sclrange,tlxrange,tlyrange)\n # imoutcrop = imout[row0:row0+imcropsize,col0:col0+imcropsize]\n # return imrandlocaledit(imrandgammaadj(imoutcrop,gammarange))\n\n r = np.random.rand()\n\n if r < 0.9:\n im1 = imrandreflection(im) if np.random.rand() < 0.5 else im\n im2 = imrandpptf(im1,drange) if np.random.rand() < 0.5 else im1\n im3 = imrandtform(im2,rotrange,sclrange,tlxrange,tlyrange) if np.random.rand() < 0.5 else im2\n else:\n im3 = im\n im4 = im3[row0:row0+imcropsize,col0:col0+imcropsize]\n im4 = im4-np.min(im4)\n if r < 0.9:\n im5 = imrandgammaadj(im4,gammarange) if np.random.rand() < 0.5 else im4\n im6 = imrandlocaledit(im5) if np.random.rand() < 0.5 else im5\n else:\n im6 = im4\n return im6\n \n Train = np.zeros((ntrain,imsize,imsize,nchannels))\n Valid = np.zeros((nvalid,imsize,imsize,nchannels))\n Test = np.zeros((ntest,imsize,imsize,nchannels))\n\n itrain = -1\n ivalid = -1\n itest = -1\n perm = np.arange(ntrain+nvalid+ntest)\n np.random.shuffle(perm)\n for isample in range(0, ntrain):\n path = '%s/I%05d.png' % (impath,perm[isample]+1)\n im = misc.imread(path).astype(float)/255\n itrain += 1\n Train[itrain,:,:,0] = im\n for isample in range(ntrain, ntrain+nvalid):\n path = '%s/I%05d.png' % (impath,perm[isample]+1)\n im = misc.imread(path).astype(float)/255\n ivalid += 1\n Valid[ivalid,:,:,0] = im\n for isample in range(ntrain+nvalid, ntrain+nvalid+ntest):\n path = '%s/I%05d.png' % (impath,perm[isample]+1)\n im = misc.imread(path).astype(float)/255\n 
itest += 1\n        Test[itest,:,:,0] = im\n\n    # --------------------------------------------------\n    print('model')\n    # --------------------------------------------------\n\n    from Models import Model\n    model = Model(nchannels, imcropsize)\n\n    # --------------------------------------------------\n    print('train')\n    # -------------------------------------------------- \n\n    saver = tf.train.Saver()\n    sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) # config parameter needed to save variables when using GPU\n\n    train_writer = tf.summary.FileWriter(trainlogpath, sess.graph)\n    valid_writer = tf.summary.FileWriter(validlogpath, sess.graph)\n\n    if restorevariables:\n        saver.restore(sess, modelpath)\n        print(\"Model restored.\")\n    else:\n        sess.run(tf.global_variables_initializer()) # do not use when restoring variables\n\n    batch_data1 = np.zeros((batchsize,imcropsize,imcropsize,nchannels))\n    batch_data2 = np.zeros((batchsize,imcropsize,imcropsize,nchannels))\n    batch_data3 = np.zeros((batchsize,imcropsize,imcropsize,nchannels))\n\n    batch_data1_Valid = np.zeros((nvalid//2,imcropsize,imcropsize,nchannels))\n    batch_data2_Valid = np.zeros((nvalid//2,imcropsize,imcropsize,nchannels))\n    batch_data3_Valid = np.zeros((nvalid//2,imcropsize,imcropsize,nchannels))\n\n    # validation set (// keeps the counts integral under Python 3)\n    for i in range(nvalid//2):\n        batch_data1_Valid[i,:,:,0] = imrandclutter(imcrop(Valid[i,:,:,0]))\n        batch_data2_Valid[i,:,:,0] = imrandclutter(imdeformandcrop(Valid[i,:,:,0]))\n        batch_data3_Valid[i,:,:,0] = imrandclutter(imdeformandcrop(Valid[-i-1,:,:,0]))\n\n    # train\n    best_triplet_loss = np.inf\n    count_no_improv = 0\n    for i in range(nsteps+1):\n        print('step %d' % i)\n\n        perm = np.arange(ntrain)\n        np.random.shuffle(perm)\n\n        for j in range(batchsize):\n            batch_data1[j,:,:,0] = imrandclutter(imcrop(Train[perm[j],:,:,0]))\n            batch_data2[j,:,:,0] = imrandclutter(imdeformandcrop(Train[perm[j],:,:,0]))\n            batch_data3[j,:,:,0] = imrandclutter(imdeformandcrop(Train[perm[-j-1],:,:,0]))\n\n        if i % 10 == 0:\n            model.assign_running_averages(sess)\n\n            summary, tl, acrc_valid = model.valid_step_with_summary(sess,batch_data1_Valid,batch_data2_Valid,batch_data3_Valid,nvalid)\n            valid_writer.add_summary(summary, i)\n            print('\\ttl: %f' % tl)\n\n            summary, acrc_batch, _ = model.train_step_with_summary(sess,batch_data1,batch_data2,batch_data3,2*batchsize)\n            train_writer.add_summary(summary, i)\n\n            print('\\tacrc_batch: %f' % acrc_batch)\n            print('\\tacrc_valid: %f' % acrc_valid)\n\n            if tl < best_triplet_loss:\n                print(\"Saving model. 
Unsafe to CTRL+C.\")\n                save_path = saver.save(sess, modelpath)\n                print(\"Model saved in file: %s\" % save_path)\n\n                best_triplet_loss = tl\n                count_no_improv = 0\n            else:\n                count_no_improv += 1\n\n            if count_no_improv == 1000:\n                break # done with learning\n        else:\n            model.train_step_without_summary(sess,batch_data1,batch_data2,batch_data3)\n\n    train_writer.close()\n    valid_writer.close()\n\n    # --------------------------------------------------\n    print('test')\n    # --------------------------------------------------\n\n    hts = int(ntest/2)\n\n    batch_data1 = np.zeros((1,imcropsize,imcropsize,nchannels))\n    batch_data2 = np.zeros((1,imcropsize,imcropsize,nchannels))\n\n    acc = 0\n    plt.clf()\n    for i in range(hts):\n        im1 = imrandclutter(imcrop(Test[i,:,:,0]))\n        im2 = imrandclutter(imdeformandcrop(Test[i,:,:,0]))\n        im3 = imrandclutter(imdeformandcrop(Test[-i-1,:,:,0]))\n\n        batch_data1[0,:,:,0] = im1\n        test_label = 0\n        if np.random.rand() < 0.5:\n            batch_data2[0,:,:,0] = im2\n        else:\n            test_label = 1\n            batch_data2[0,:,:,0] = im3\n\n        sms = model.test(sess,batch_data1, batch_data2)\n\n        plt.plot(i,0.5,'.k')\n        if test_label == 0:\n            plt.plot(i,sms,'og')\n        else:\n            plt.plot(i,sms,'or')\n\n        lbl = 0\n        if sms > 0.5:\n            s = 'Same'\n        else:\n            s = 'Diff'\n            lbl = 1\n\n        concat1 = np.concatenate((batch_data1[0,:,:,0],0.5*np.ones((imcropsize,5))),axis=1)\n        concat2 = np.concatenate((concat1,batch_data2[0,:,:,0]),axis=1)\n        \n        if lbl == test_label:\n            acc += 1\n            path = '%s/T%05d_%s.png' % (testimoutpath,i,s)\n        else:\n            path = '%s/T%05d_%s_ERROR.png' % (testimoutpath,i,s)\n\n        misc.imsave(path,concat2)\n\n    acrc = (float(acc)/float(hts))\n    print('accuracy: %f' % acrc)\n    plt.draw()\n    plt.savefig('%s/SgmdTest_Acrc%dPcnt.png' % (testplotoutpath,int(100*acrc)), bbox_inches='tight')\n\n    # --------------------------------------------------\n    print('clean up')\n    # --------------------------------------------------\n\n    sess.close()" ]
[ [ "numpy.concatenate", "numpy.random.rand", "numpy.zeros", "matplotlib.pyplot.clf", "matplotlib.pyplot.plot", "tensorflow.train.Saver", "numpy.min", "numpy.random.shuffle", "numpy.ones", "scipy.misc.imread", "tensorflow.ConfigProto", "numpy.arange", "matplotlib.pyplot.draw", "tensorflow.device", "tensorflow.summary.FileWriter", "tensorflow.global_variables_initializer", "scipy.misc.imsave" ] ]
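The training loop above forms triplets by pairing a shuffled permutation from both ends: `perm[j]` supplies the anchor and its distorted positive, `perm[-j-1]` the distorted negative, which is why the source notes that ntrain should be at least 2*batchsize. A small index-only sketch of that pairing:

import numpy as np

ntrain, batchsize = 10, 4  # batchsize <= ntrain // 2 keeps anchor != negative
perm = np.arange(ntrain)
np.random.shuffle(perm)
triplets = [(perm[j], perm[j], perm[-j - 1]) for j in range(batchsize)]
print(triplets)  # (anchor_idx, positive_idx, negative_idx) per batch entry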
Afellman/hospital-chargemaster
[ "b3473c798fd2f343f7f02c1e32496f9eea9fa94d" ]
[ "data/oshpd-ca/parse.py" ]
[ "#!/usr/bin/env python\n\nimport os\nimport sys  # needed for the sys.exit calls below\nfrom glob import glob\nimport json\nimport pandas\nimport datetime\n\n# This is a challenging dataset because it's so big!\n# We will do our best to break the data into pieces\n\nhere = os.path.dirname(os.path.abspath(__file__))\nfolder = os.path.basename(here)\nlatest = '%s/latest' % here\nyear = datetime.datetime.today().year\n\noutput_data = os.path.join(here, 'data-latest-1.tsv')\noutput_year = os.path.join(here, 'data-%s-1.tsv' % year)\n\n# Don't continue if we don't have latest folder\nif not os.path.exists(latest):\n    print('%s does not have parsed data.' % folder)\n    sys.exit(0)\n\n# Don't continue if we don't have results.json\nresults_json = os.path.join(latest, 'records.json')\nif not os.path.exists(results_json):\n    print('%s does not have results.json' % folder)\n    sys.exit(1)\n\nwith open(results_json, 'r') as filey:\n    results = json.loads(filey.read())\n\ncolumns = ['charge_code', \n           'price', \n           'description', \n           'hospital_id', \n           'filename', \n           'charge_type']\n\ndf = pandas.DataFrame(columns=columns)\n\nseen = []\nfor r in range(0, len(results)):\n    result = results[r]\n    filename = os.path.join(latest, result['filename'])\n    if not os.path.exists(filename):\n        print('%s is not found in latest folder.' % filename)\n        continue\n\n    if os.stat(filename).st_size == 0:\n        print('%s is empty, skipping.' % filename)\n        continue\n\n    charge_type = 'standard'\n    if \"drg\" in filename.lower():\n        charge_type = \"drg\"\n\n    if result['filename'] in seen:\n        continue\n\n    seen.append(result['filename'])\n\n    print(\"Parsing %s\" % filename)\n\n    if filename.endswith('xlsx'):\n\n        # has counts, description, procedure type (not costs)\n        if \"common25\" in filename.lower():\n            continue\n\n        # Unfortunately cdm_all files are inconsistent, would need custom parsing (for sheets) each\n        elif \"cdm_all\" in filename.lower():\n            continue\n\n        # ['CDM', 'Description', 'Standard Charge', 'Surgery Center/Procedure Room Charge', 'L&D Charge', 'Supply Charge Range']\n        elif \"106190930_CDM\" in filename or \"106190687_CDM\" in filename:\n            content = pandas.read_excel(filename)\n            description_key = 'Description'\n            price_key = 'Standard Charge'\n            code_key = 'CDM'\n\n        # ['NDC ID', 'NDC_CODE', 'RAW_11_DIGIT_NDC', 'RAW_NDC_CODE',\n        #  'Medication ID', 'Medication Name', 'PACKAGE_SIZE', 'MED Units',\n        #  'PACKAGE_QUANTITY', 'PRICE_CODE_C', 'Price COde', 'PRICE',\n        #  'PRICE_PER_UNIT_YN']\n        elif \"106370782_CDM_Rx\" in filename:\n            content = pandas.read_excel(filename)\n\n            # This distinguishes between inpatient / outpatient price code\n            # ['Inpt/Contract Costs', 'Outpatient Cost']\n\n            for row in content.iterrows():\n                \n                charge_type = 'outpatient'\n                if row[1]['Price COde'] == 'Inpt/Contract Costs':\n                    charge_type = 'inpatient'\n\n                idx = df.shape[0] + 1\n                entry = [row[1]['PRICE_CODE_C'], # charge code\n                         row[1]['PRICE'], # price\n                         row[1]['Medication Name'], # description\n                         result[\"hospital_id\"], # hospital_id\n                         result['filename'],\n                         charge_type] \n                df.loc[idx,:] = entry\n\n            continue\n\n\n        #['CHG CODE', 'BILLING DESC', 'DEPT',\n        #' 2017 Price Per Unit for Infussion and Pharmacy',\n        #'2018 Price Per Unit for Infussion and Pharmacy',\n        #'Infussion and Pharmacy Price Change', '2018 Current PRICE',\n        # '2017 PRICE', 'Price Difference', 'CPT']\n        elif \"106130760_CDM\" in filename:\n            content = pandas.read_excel(filename)\n            description_key = 'BILLING DESC'\n            price_key = '2018 Current PRICE'\n            code_key = 'CHG CODE'\n\n        # ['MNEMONIC', 'DESCRIPTION', 'PRICE']\n        elif \"106540798_CDM\" in filename:\n            content = 
pandas.read_excel(filename, skiprows=4)\n description_key = 'DESCRIPTION'\n price_key = 'PRICE' \n code_key = 'MNEMONIC'\n\n # ['Procedure Code', 'Procedure Name', 'Revenue Code', 'Price']\n elif \"106370673_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'Procedure Name'\n price_key = 'Price' \n code_key = 'Revenue Code'\n\n # ['Count', 'Description', 'Procedure Type']\n # Doesn't have prices\n elif \"106560508_CDM\" in filename or \"106560529_CDM\" in filename:\n continue\n\n # ['CHARGE CODE', 'CHARGE DESCRIPTION', 'CHARGE AMOUNT ']\n elif \"106010967_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=1) \n description_key = 'CHARGE DESCRIPTION'\n price_key = 'CHARGE AMOUNT ' \n code_key = 'CHARGE CODE'\n\n\n # ['Description', 'FY -17 Facility Rate', 'FY-17 Professional Rate']\n elif \"106560481_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=3) \n description_key = 'Description'\n price_key = 'FY -17 Facility Rate'\n code_key = None\n\n # ['Charge Code', 'Description', 'Std Charge', 'Outpt Charge', 'Comment']\n elif \"106190680_CDM\" in filename or \"106190756_CDM\" in filename or \"106190758_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=3) \n description_key = 'Description'\n price_key = 'Std Charge' \n code_key = 'Charge Code'\n\n # ['Charge Code', 'Charge Description', 'Charge Amount', 'Comments']\n elif \"106071018_CDM\" in filename or \"106070988_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'Charge Description'\n price_key = 'Charge Amount' \n code_key = 'Charge Code'\n\n # ['Item ID', 'Item Name', 'External IDs', 'Alias', 'Active?',\n # 'Effective Date', 'Description', 'Supply or Drug?', 'Type of Item',\n # 'LDA Type', 'Reusable?', 'OR Location', 'Inv Location', 'Supplier',\n # 'Supplier Catalog No.', 'Supplier Price', 'Supplier Pkg Type',\n # 'Supplier Pkg to Shelf Ratio', 'Last Supplier', 'Last Catalog No.',\n # 'Last Purchase Price', 'Manufacturer', 'Manuf Catalog No.',\n # 'Manuf Pkg to Shelf Ratio', 'Order Ratio', 'Order Pack Type', 'GTINs',\n # 'Used by Areas', 'Chargeable', 'Charge Code EAP', 'Charge Code FT',\n # 'Charge Per Unit', 'Cost Per Unit', 'Unnamed: 33']\n elif \"106370782_CDM(2)\" in filename:\n content = pandas.read_excel(filename, skiprows=3)\n description_key = 'Item Name'\n price_key = 'Cost Per Unit'\n code_key = 'Item ID'\n\n # ['CDM#', 'CDM Description', 'Facility', 'gl_account_id', 'gl_sub_acct',\n #'chg_type_int_id', 'cod_dtl_ds', 'active_dt', 'Rev Code', 'Price',\n # 'CPT/HCPCS', 'Unnamed: 11']\n elif \"106301258_CDM\" in filename or \"106361370_CDM_\" in filename:\n content = pandas.read_excel(filename, skiprows=1)\n description_key = 'CDM Description'\n price_key = 'Price'\n code_key = 'CDM#'\n\n # ['IVNUM', 'IVDESC', 'CPT', 'Price', 'Clinic']\n elif \"106321016_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'IVDESC'\n price_key = 'Price'\n code_key = 'CPT'\n\n # ['GL', 'Bill Item Activity Type', 'PS:Price Sched Price', 'Description', 'SJGH_CDM']\n elif \"106391010_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'Description'\n price_key = 'PS:Price Sched Price'\n code_key = 'SJGH_CDM'\n\n # ['procedure status', 'procedure master #', 'procedure name', 'Default',\n #'Trauma variants', 'OP Rx (not in Pharmacy file)',\n #'OP Rx (not in Pharmacy file, MediCal only)', 'Note']\n elif \"106370782_CDM(1)\" in filename:\n content = pandas.read_excel(filename)\n description_key 
= 'procedure name'\n price_key = 'Default'\n code_key = 'procedure master #'\n\n # ['Charge Code', 'Charge Description', 'Price', 'Comments']\n elif \"106074039_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'Charge Description'\n price_key = 'Price' \n code_key = 'Charge Code'\n\n # ['Chrg Dept', 'Chrg Code', 'Chrg Bill Desc', 'Chrg Rev Code',\n # 'Chrg HCPCS Proc', 'Chrg CPT Code', 'Chrg Amt IP', 'Chrg Amt OP',\n # 'Chrg Type']\n elif \"106560492_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=6)\n description_key = 'Chrg Bill Desc'\n code_key = 'Chrg Code'\n\n for row in content.iterrows():\n \n if not pandas.isnull(row[1]['Chrg Amt OP']):\n # Outpatient\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['Chrg Amt OP'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'outpatient'] \n df.loc[idx,:] = entry\n\n if not pandas.isnull(row[1]['Chrg Amt IP']):\n # Inpatient\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['Chrg Amt IP'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'inpatient'] \n df.loc[idx,:] = entry\n continue\n\n # Writing over row of dashes ----\n elif \"106420491_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=2)\n content.columns = ['Description', \n 'Code', \n 'Unnamed: 2', 'Unnamed: 3', \n 'Price', \n 'Tier Code', \n 'Dept', \n 'Subd', \n 'Elem', \n 'Stat']\n description_key = 'Description'\n price_key = 'Price' \n code_key = 'Code'\n\n # ['Charge Code', 'Charge Description', 'Charge Code and Description', 'org_nm', 'gl_account_id', 'gl_sub_acct', 'NRV', 'Charge Amount', 'CPT']\n elif \"106450936_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'Charge Description'\n price_key = 'Charge Amount' \n code_key = 'Charge Code'\n\n # ['CDM', 'Description', 'Standard Charge', 'Surgery Center/Procedure Room Charge', 'L&D Charge', 'Supply Charge Range']\n elif \"106190796_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'Description'\n price_key = 'Standard Charge' \n code_key = 'CDM'\n\n\n # ['Charge Code ', 'Description', 'Primary Price', 'Comments']\n elif \"106434040_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n description_key = 'Description'\n price_key = 'Primary Price' \n code_key = 'Charge Code '\n \n # ['Fac', 'Charge #', 'Description', 'Price', 'GL Key']\n elif \"106301357_CDM\" in filename or \"106190570_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n description_key = 'Description'\n price_key = 'Price' \n code_key = 'Charge #'\n\n # ['CDM/SPSI', 'SERVICE DESCRIPTION', 'RATE TYPE', 'PRICE PER UNIT',\n # 'MIN UNIT', 'START VALUE1', 'STOP VALUE1', 'MIN PER UNIT1',\n # 'UNIT PRICE1', 'START VALUE2', 'STOP VALUE2', 'MIN PER UNIT2',\n # 'UNIT PRICE2', 'START VALUE3', 'STOP VALUE3', 'MIN PER UNIT3',\n # 'UNIT PRICE3']\n elif \"106190630_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=3)\n description_key = 'SERVICE DESCRIPTION'\n price_key = 'PRICE PER UNIT' \n code_key = 'CDM/SPSI'\n\n # ['BILL ITEM ID', 'LONG DESCRIPTION', 'CPT-4', 'HCPCS', 'REVENUE',\n # 'TMMC Standard Price Schedule', 'TMMC OP Lab Price Schedule']\n elif \"106190422_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'LONG DESCRIPTION'\n code_key = 'BILL ITEM ID'\n price_key = 'TMMC 
Standard Price Schedule'\n\n # ['Chg Code', 'Chrg Description', 'ref_eff_ts', 'row_sta_cd',\n # 'table_int_id', 'ref_int_id', 'org_int_id', 'lst_mod_id', 'lst_mod_ts',\n # 'compute_0010', 'org_nm', 'gl_account_id', 'gl_sub_acct',\n # 'chg_type_int_id', 'cod_dtl_ds', 'active_dt', 'Rev Code', 'Price',\n # 'CPT/HCPCS', 'gl_stat']\n elif \"106334018_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=1)\n description_key = 'Chrg Description'\n code_key = 'Chg Code'\n price_key = 'Price'\n\n # ['Chrg Code', 'Chrg Desc', 'Chrg Amt IP', 'Chrg Amt OP']\n elif \"106430779_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n description_key = 'Chrg Desc'\n code_key = 'Chrg Code'\n for row in content.iterrows():\n\n # Outpatient\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['Chrg Amt OP'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'outpatient'] \n df.loc[idx,:] = entry\n\n # Inpatient\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['Chrg Amt IP'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'inpatient'] \n df.loc[idx,:] = entry\n\n # ['Charge code', 'Charge Description', 'Charge Cat', 'Previous Price', 'Current Price', 'UOS', 'Charges', 'Change', '% Change', 'Note'],\n elif \"106190449_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=1)\n description_key = 'Charge Description'\n price_key = 'Current Price' \n code_key = 'Charge code'\n\n # ['Group', 'ChargeCode', 'ChargeCode Description', 'Fee Schedule Charge 1']\n elif \"106370875_CDM\" in filename or \"106370689_CDM\" in filename or \"106370714_CDM\" in filename or \"106370694_CDM\" in filename or \"106370745_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'ChargeCode Description'\n price_key = 'Fee Schedule Charge 1'\n code_key = 'ChargeCode'\n\n\n # ['CDM NO', 'DISPENSED DESCRIPTION', 'PRICE ', 'TYPE', 'NOTE'\n elif \"106331152_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'DISPENSED DESCRIPTION'\n price_key = 'PRICE ' \n code_key = 'CDM NO'\n\n # ['Charge Description', 'Price', 'Comment']\n elif \"106430763_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'Charge Description'\n price_key = 'Price' \n code_key = None\n\n # ['Fac', 'Charge #', 'Description', 'Price', 'GL Key'],\n elif \"106190380_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n description_key = 'Description'\n price_key = 'Price' \n code_key = 'Charge #'\n\n # ['CDM NO', 'DISPENSED DESCRIPTION', 'PRICE', 'TYPE', 'NOTE']\n elif \"106334068_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'DISPENSED DESCRIPTION'\n price_key = 'PRICE' \n code_key = 'CDM NO'\n\n # ['Charge Code', 'Charge Description', 'CPT-4', 'Amount']\n elif \"106190110_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n description_key = 'Charge Description'\n price_key = 'Amount' \n code_key = 'Charge Code'\n\n # ['ITEM #', 'DESCRIPTION', 'PRICE', 'CPT CODE']\n elif \"106040937_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=2)\n description_key = 'DESCRIPTION'\n price_key = 'PRICE' \n code_key = 'ITEM #'\n\n # ['Item # ', 'Description', 'Unit of Measure', 'Patient Charge']\n elif \"106331168_CDM(2)\" in filename:\n content = pandas.read_excel(filename)\n description_key = 
'Description'\n price_key = 'Patient Charge' \n code_key = 'Item # '\n\n # ['Fac', 'Charge #', 'Description', 'Price', 'GL Key']\n elif \"106190198_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n description_key = 'Description'\n price_key = 'Price'\n code_key = 'Charge #'\n\n # ['Procedure Code', 'Description', 'Unit Price']\n elif \"106331168_CDM(1)\" in filename:\n content = pandas.read_excel(filename, skiprows=3)\n description_key = 'Description'\n price_key = 'Unit Price'\n code_key = 'Procedure Code'\n\n # ['Item\\nNumber', '\\nDescription', '\\nBegin Date ', '\\nEnd Date', '\\nUnits', '\\nCharge By', '\\nCost', 'Base Price/\\nMarkup']\n elif \"106196404_CDM(1)\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n description_key = '\\nDescription'\n price_key = '\\nCost'\n code_key = None\n\n elif \"106196404_CDM(2)\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n description_key = '\\nDescription'\n price_key = '\\nCost'\n code_key = None\n\n # ['CDMCHRG#', 'CDMDSC', 'NEWPRICE', 'STAT']\n elif \"106440755_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'CDMDSC'\n price_key = 'NEWPRICE'\n code_key = 'CDMCHRG#'\n\n # ['Code', 'Description', 'Code.1', ' Amount ']\n elif \"106190256_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=4)\n description_key = 'Description'\n price_key = ' Amount '\n code_key = 'Code'\n \n # ['PROC (CDM)', 'DRUG DESCRIPTION', 'CHARGE']\n elif \"106364231_CDM_RX\" in filename:\n content = pandas.read_excel(filename, skiprows=7)\n description_key = \"DRUG DESCRIPTION\"\n price_key = \"CHARGE\" \n code_key = 'PROC (CDM)'\n\n # 'Charge Code', 'Description', 'CPT Code', 'Rate'\n elif \"106070924_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=3)\n description_key = \"Description\"\n price_key = \"Rate\" \n code_key = \"Charge Code\"\n\n # Code Description Code.1 Amount\n elif \"106190766_CDM\" in filename or \"106190197_CDM\" in filename or \"106190521_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=4)\n content.columns = ['Code', 'Description', 'Code.1', 'Amount']\n description_key = \"Description\"\n price_key = \"Amount\" \n code_key = \"Code\"\n\n # ['PROC_CODE', 'PROC_NAME', 'INPAT_FEE', 'OUTPAT_FEE']\n elif \"106090933_CDM\" in filename:\n content = pandas.read_excel(filename)\n\n for row in content.iterrows():\n # Outpatient\n idx = df.shape[0] + 1\n price_key = 'OP/Default Price'\n entry = [row[1]['PROC_CODE'], # charge code\n row[1]['OUTPAT_FEE'], # price\n row[1]['PROC_NAME'], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'outpatient'] \n df.loc[idx,:] = entry\n\n # Inpatient\n idx = df.shape[0] + 1\n entry = [row[1]['PROC_CODE'], # charge code\n row[1]['INPAT_FEE'], # price\n row[1]['PROC_NAME'], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'inpatient'] \n df.loc[idx,:] = entry\n\n continue\n\n\n # ['PROC_NAME', 'CHARGE_AMOUNT', 'COMMENT', 'Unnamed: 3']\n elif \"106304409_CDM\" in filename or \"106196035_CDM\" in filename or \"106196403_CDM\" in filename or \"106361223_CDM\" in filename or \"106014132_CDM\" in filename or \"106104062_CDM\" in filename or \"106190429_CDM\" in filename or \"106394009_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = \"PROC_NAME\"\n price_key = \"CHARGE_AMOUNT\" \n code_key = None\n\n # ['CDM #', 'CDM Description', 'gl_account_id', 'Account Name',\n # 'gl_sub_acct', 
'chg_type_int_id', 'Charge Type', 'active_dt',\n        #  'Rev Code', 'Price', 'CPT or HCPCS CD', 'Price.1']\n        elif \"106301566_CDM\" in filename:\n            content = pandas.read_excel(filename, skiprows=1)\n            description_key = \"CDM Description\"\n            price_key = \"Price\" \n            code_key = \"CDM #\"\n\n\n        # ['CDM NO', 'DISPENSED DESCRIPTION', 'PRICE', 'TYPE', 'NOTE']\n        elif \"106334564_CDM\" in filename:\n            content = pandas.read_excel(filename)\n            description_key = \"DISPENSED DESCRIPTION\"\n            price_key = \"PRICE\" \n            code_key = \"CDM NO\"\n\n\n        # ['CDM#', 'CDM Description', 'Facility', 'gl_account_id', 'Rev Code', 'Price', 'CPT/HCPCS']\n        elif \"106301188_CDM\" in filename or \"106301140_CDM\" in filename:\n            content = pandas.read_excel(filename, skiprows=1)\n            description_key = \"CDM Description\"\n            price_key = \"Price\" \n            code_key = \"CDM#\"\n\n        # ['CDM #', 'Description', 'Price']\n        elif \"106190298_CDM\" in filename:\n            content = pandas.read_excel(filename, skiprows=4)\n            description_key = 'Description'\n            price_key = 'Price'  # these two keys were swapped in the source\n            code_key = 'CDM #'\n\n        # ['CHARGE #', 'DESC', 'REV', 'CPT', 'PRICE', 'Unnamed: 5']\n        elif \"106220733_CDM\" in filename:\n            content = pandas.read_excel(filename, skiprows=4)\n            description_key = 'DESC'\n            price_key = 'PRICE'\n            code_key = 'CHARGE #'\n\n        # ['PROC (CDM)', 'CHG CAT', 'ARMC REV DEPT', 'PROCEDURE (CDM) DESCRIPTION', 'CHARGE', 'CPT-4', 'MCLcde']\n        elif \"106364231_CDM\" in filename:\n            content = pandas.read_excel(filename, skiprows=7)\n            description_key = 'PROCEDURE (CDM) DESCRIPTION'\n            price_key = \"CHARGE\"\n            code_key = 'PROC (CDM)'\n\n        #['PROCEDURE', 'DESCRIPTION', 'Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4', 'Unnamed: 5', 'STD AMOUNT']\n        elif \"106010887_CDM\" in filename:\n            content = pandas.read_excel(filename, skiprows=3)\n            description_key = \"DESCRIPTION\"\n            price_key = \"STD AMOUNT\"\n            code_key = \"PROCEDURE\"\n\n        # ['CDM NO', 'DISPENSED DESCRIPTION', 'PRICE ', 'TYPE', 'NOTE']\n        elif \"106196405_CDM\" in filename:\n            content = pandas.read_excel(filename)\n            description_key = \"DISPENSED DESCRIPTION\"\n            price_key = \"PRICE \"\n            code_key = \"CDM NO\"\n\n        # ['PROC_NAME', 'CHARGE_AMOUNT', 'COMMENT']\n        elif \"106074097_CDM\" in filename:\n            content = pandas.read_excel(filename, skiprows=1)\n            description_key = \"PROC_NAME\"\n            price_key = \"CHARGE_AMOUNT\"\n            code_key = None\n\n        # ['Charge Code', 'Description', 'Std Charge', 'Outpt Charge', 'Comment']\n        elif \"106190385_CDM\" in filename or \"106190470_CDM\" in filename:\n            content = pandas.read_excel(filename, skiprows=3)\n            description_key = \"Description\"\n            price_key = \"Std Charge\"\n            code_key = 'Charge Code'\n\n        # Service ID', 'User Gen. 
Service ID', 'Service Name', 'Effective Date', 'Price ($)'\n elif \"106474007_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=1)\n description_key = \"Service Name\"\n price_key = \"Price ($)\"\n code_key = 'Service ID'\n\n elif \"106201281_CDM\" in filename:\n content = pandas.read_excel(filename)\n additional_row = content.columns.tolist()\n idx = content.shape[0] + 1\n content.loc[idx] = additional_row\n content.columns = [\"CODE\", 'DESCRIPTION', \"PRICE\"]\n code_key = \"CODE\"\n description_key = \"DESCRIPTION\"\n price_key = 'PRICE'\n\n # ['Charge#', 'Description', 'Charge Price', 'Unnamed: 3']\n elif \"106260011_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=6)\n description_key = \"Description\"\n price_key = \"Charge Price\"\n code_key = 'Charge#'\n\n # ['Code ', 'Procedure_Name', 'Cost']\n elif \"106304159_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=2)\n description_key = \"Procedure_Name\"\n price_key = \"Cost\"\n code_key = 'Code '\n\n # ['PROCEDURE', 'DESCRIPTION', 'DEPARTMENT', 'CHG CAT', 'COST', 'STD AMOUNT CPT HCPC', 'A']\n elif \"106301127_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = \"DESCRIPTION\"\n price_key = 'STD AMOUNT CPT HCPC'\n code_key = \"PROCEDURE\"\n\n elif \"106370749_\" in filename:\n content = pandas.read_excel(filename)\n description_key = \"Description\"\n price_key = \"Patient Price\"\n code_key = None \n\n # ['Level of Care', 'Begin Date', 'End Date', 'Charge By', 'Base Price ', 'Unnamed: 5']\n elif \"106196404_CDM(4)\" in filename:\n content = pandas.read_excel(filename, skiprows=6)\n description_key = 'Level of Care'\n price_key = 'Base Price '\n code_key = None\n\n # ['CHARGE CODE', 'BILLING DESCRIPTION', 'CHARGE PRICE TIER 1', 'CHARGE PRICE TIER 2']\n elif \"106211006_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = 'BILLING DESCRIPTION'\n price_key = 'CHARGE PRICE TIER 1'\n code_key = 'CHARGE CODE'\n\n elif \"106196404_CDM(3)\" in filename:\n content = pandas.read_excel(filename, skiprows=7)\n description_key = 'Level of Care'\n price_key = 'Base Price '\n code_key = None\n\n # ['CDM', 'CDM DESCRIPTION', 'CPT/HCPC', 'OP', 'IP', 'FQHC', 'LAB OP',\n # 'LAB IP', 'RAD OP', 'RAD IP']\n elif \"106301279_CDM\" in filename:\n content = pandas.read_excel(filename)\n description_key = \"CDM DESCRIPTION\"\n code_key = 'CDM' \n\n for row in content.iterrows():\n\n # Outpatient\n if row[1]['OP'] != 0:\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['OP'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'outpatient'] \n df.loc[idx,:] = entry\n\n if row[1]['IP'] != 0:\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['IP'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'inpatient'] \n df.loc[idx,:] = entry\n\n continue\n\n # ['PROC (CDM)', 'DRUG DESCRIPTION', 'CHARGE']\n elif \"106364231_CDM_RX\" in filename:\n content = pandas.read_excel(filename, skiprows=7)\n description_key = \"DRUG DESCRIPTION\"\n price_key = \"CHARGE\"\n code_key = 'PROC (CDM)' \n\n # ['CHARGE #', 'CHARGE DESCRIPTION', 'CPT-4', 'PT CHG $', 'INS CD']\n elif \"106301155_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=4)\n code_key = 'CHARGE #'\n description_key = 'CHARGE DESCRIPTION'\n price_key = 'PT CHG $'\n\n # ['Reference ID', 'Description', 'Price']\n elif 
\"106364014_CDM\" in filename or \"106334589_CDM\" in filename or \"106361246_CDM\" in filename or \"106364502_CDM\" in filename:\n content = pandas.read_excel(filename)\n code_key = 'Reference ID'\n description_key = 'Description'\n price_key = 'Price'\n\n\n # ['Reference ID', 'Description', 'Price']\n elif \"106364014_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=4)\n code_key = 'Reference ID'\n description_key = 'Description'\n price_key = 'Price'\n\n elif \"106090793_CDM_RX\" in filename:\n content = pandas.read_excel(filename)\n additional_row = content.columns.tolist()\n idx = content.shape[0] + 1\n content.loc[idx] = additional_row\n content.columns = ['DESCRIPTION', \"CODE\", \"PRICE\"]\n code_key = \"CODE\"\n description_key = \"DESCRIPTION\"\n price_key = 'PRICE'\n \n # ['Unnamed: 0', 'CHARGE CODE', 'CHARGE DESCRIPTION', 'PRICE', 'Unnamed: 4']\n elif \"106130699_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=6)\n code_key = \"CHARGE CODE\"\n description_key = \"CHARGE DESCRIPTION\"\n price_key = 'PRICE'\n\n elif \"106090793_CDM\" in filename:\n # ['EAP PROC CODE', 'CODE DESCRIPTION', 'CPT', 'REV CODE', 'UNIT CHARGE AMOUNT']\n content = pandas.read_excel(filename)\n code_key = \"CPT\"\n description_key = \"CODE DESCRIPTION\"\n price_key = 'UNIT CHARGE AMOUNT'\n\n # ['Charge Code', 'Charge Description', 'CPT/HCPCS Code',\n # 'Inpatient \\nPrice', 'Outpatient \\nPrice']\n elif \"106341006_CDM\" in filename:\n content = pandas.read_excel(filename, skiprows=5)\n code_key = 'Charge Code'\n description_key = \"Charge Description\"\n\n for row in content.iterrows():\n\n # Outpatient\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['Outpatient \\nPrice'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'outpatient'] \n df.loc[idx,:] = entry\n\n # Inpatient\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['Inpatient \\nPrice'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'inpatient'] \n df.loc[idx,:] = entry\n\n continue\n\n\n\n elif \"106190555_CDM\" in filename or \"106190500_CDM\" in filename:\n # 'Charge\\nCode', 'Description', 'CPT/ HCPCS\\nCode', 'OP/ Default Price', 'IP/ER\\nPrice'\n content = pandas.read_excel(filename, skiprows=4)\n code_key = 'Charge\\nCode'\n description_key = \"Description\"\n for row in content.iterrows():\n\n # Outpatient\n idx = df.shape[0] + 1\n price_key = 'OP/Default Price'\n if price_key not in content.columns.tolist():\n price_key = 'OP/ Default Price'\n entry = [row[1][code_key], # charge code\n row[1][price_key], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'outpatient'] \n df.loc[idx,:] = entry\n\n # Inpatient\n idx = df.shape[0] + 1\n entry = [row[1][code_key], # charge code\n row[1]['IP/ER\\nPrice'], # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n 'inpatient'] \n df.loc[idx,:] = entry\n\n continue\n\n elif \"cdm_\" in filename.lower():\n # ['2018 Charge Codes', 'Charge Codes Description', 'HCPCS Codes', 'June 2018 Prices']\n content = pandas.read_excel(filename, skiprows=4)\n code_key = '2018 Charge Codes'\n description_key = 'Charge Codes Description'\n price_key = 'June 2018 Prices'\n\n if code_key not in content.columns.tolist():\n code_key = None\n description_key = 'Description'\n 
price_key = 'Patient Price'\n\n if description_key not in content.columns.tolist():\n content = pandas.read_excel(filename)\n description_key = \"PROC_NAME\"\n price_key = \"CHARGE_AMOUNT\" \n code_key = None\n\n if description_key not in content.columns.tolist():\n description_key = \"DESCRIPTION\"\n price_key = \"STD AMOUNT\" \n code_key = \"PROCEDURE\"\n\n if code_key not in content.columns.tolist():\n content = pandas.read_excel(filename, skiprows=3) \n description_key = 'Description'\n price_key = 'Std Charge' \n code_key = 'Charge Code'\n\n\n elif \"cdm(\" in filename.lower():\n # CDM # Description Price\n content = pandas.read_excel(filename, skiprows=4)\n code_key = \"CDM #\"\n description_key = \"Description\"\n price_key = \"Price\"\n\n elif \"pct_chg\" in filename.lower():\n continue\n\n elif \"drg\" in filename.lower():\n #['Hospital', 'MS DRG', 'MS DRG Desc', 'Rank', 'Cases', 'Total Charges', 'Avg Chrg / Case']\n content = pandas.read_excel(filename, skiprows=8)\n code_key = \"MS DRG\"\n description_key = \"MS DRG Desc\"\n price_key = 'Avg Chrg / Case'\n\n else:\n continue\n\n for row in content.iterrows():\n if code_key != None:\n code = row[1][code_key]\n else:\n code = None\n price = row[1][price_key]\n if pandas.isnull(price):\n continue\n idx = df.shape[0] + 1\n entry = [code, # charge code\n price, # price\n row[1][description_key], # description\n result[\"hospital_id\"], # hospital_id\n result['filename'],\n charge_type] \n df.loc[idx,:] = entry\n\n # When we get to index 350 (hospital_id 'kaiser-foundation-hospital---walnut-creek')\n # It's time to save and start a new data file, we just hit the max Github file size\n if result['hospital_id'] == 'kaiser-foundation-hospital---walnut-creek':\n\n # Remove empty rows\n df = df.dropna(how='all')\n\n # Save data!\n print(df.shape) # 757\n df.to_csv(output_data, sep='\\t', index=False)\n df.to_csv(output_year, sep='\\t', index=False)\n output_data = os.path.join(here, 'data-latest-2.tsv')\n output_year = os.path.join(here, 'data-%s-2.tsv' % year)\n df = pandas.DataFrame(columns=columns)\n\n\n# One final save\nprint(df.shape)\ndf.to_csv(output_data, sep='\\t', index=False)\ndf.to_csv(output_year, sep='\\t', index=False)\n" ]
[ [ "pandas.isnull", "pandas.DataFrame", "pandas.read_excel" ] ]
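Every branch of the parser above repeats one pattern: read a sheet with file-specific skiprows, pick that file's (code, description, price) columns, and append rows to the unified six-column schema. A condensed sketch of that pattern as a reusable helper; the helper name, column names, and demo record are illustrative only.

import pandas as pd

def extract_charges(frame, code_key, description_key, price_key,
                    hospital_id, filename, charge_type='standard'):
    # map one sheet's columns into the unified schema, skipping null prices
    rows = []
    for _, row in frame.iterrows():
        price = row[price_key]
        if pd.isnull(price):
            continue
        rows.append({
            'charge_code': row[code_key] if code_key else None,
            'price': price,
            'description': row[description_key],
            'hospital_id': hospital_id,
            'filename': filename,
            'charge_type': charge_type,
        })
    return pd.DataFrame(rows)

demo = pd.DataFrame({'Code': [101], 'Description': ['X-RAY'], 'Price': [250.0]})
print(extract_charges(demo, 'Code', 'Description', 'Price', 'demo-hospital', 'demo.xlsx'))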
lobachevzky/hsr
[ "b8c40b82c0d39fedc8f64cb50734620a0b2d84ab" ]
[ "baselines/ddpg/main.py" ]
[ "import argparse\nimport time\nimport os\nimport logging\nfrom baselines import logger, bench\nfrom baselines.common.misc_util import (\n    set_global_seeds,\n    boolean_flag,\n)\nimport baselines.ddpg.training as training\nfrom baselines.ddpg.models import Actor, Critic\nfrom baselines.ddpg.memory import Memory\nfrom baselines.ddpg.noise import *\n#from environment.arm2pos import Arm2PosEnv\n#from environment.pick_and_place import PickAndPlaceEnv\nfrom environment.arm2pos import Arm2PosEnv\nfrom environment.navigate_old import NavigateEnv\nfrom environment.pick_and_place import PickAndPlaceEnv\nfrom toy_environment import gridworld\nimport gym\nimport tensorflow as tf\n#from environment.navigate import NavigateEnv\nfrom mpi4py import MPI\n\n\ndef run(env_id, seed, noise_type, layer_norm, evaluation, **kwargs):\n    # Configure things.\n    rank = MPI.COMM_WORLD.Get_rank()\n    if rank != 0:\n        logger.set_level(logger.DISABLED)\n\n    # Create envs.\n    if env_id == 'navigate':\n        env = NavigateEnv(use_camera=False, continuous_actions=True, neg_reward=False, max_steps=500)\n    elif env_id == 'toy':\n        #env = continuous_gridworld.ContinuousGridworld('', max_steps=1000, obstacle_mode=continuous_gridworld.NO_OBJECTS)\n        from toy_environment import room_obstacle_list\n        env = gridworld.Gridworld(room_obstacle_list.obstacle_list, step_size=0.2)\n    elif env_id == 'arm2pos':\n        env = Arm2PosEnv(continuous=True, max_steps=500, neg_reward=False)\n    elif env_id == 'pick-and-place':\n        env = PickAndPlaceEnv(max_steps=500)\n    else:\n        env = gym.make(env_id)\n    env = bench.Monitor(env, logger.get_dir() and os.path.join(logger.get_dir(), str(rank)))\n    # env = gym.wrappers.Monitor(env, '/tmp/ddpg/', force=True)\n    gym.logger.setLevel(logging.WARN)\n\n    if evaluation and rank == 0:\n        eval_env = gym.make(env_id)\n        eval_env = bench.Monitor(eval_env, os.path.join(logger.get_dir(), 'gym_eval'))\n        env = bench.Monitor(env, None)\n    else:\n        eval_env = None\n\n    # Parse noise_type\n    action_noise = None\n    param_noise = None\n\n    nb_actions = env.action_space.shape[-1]\n    for current_noise_type in noise_type.split(','):\n        current_noise_type = current_noise_type.strip()\n        if current_noise_type == 'none':\n            pass\n        elif 'adaptive-param' in current_noise_type:\n            _, stddev = current_noise_type.split('_')\n            param_noise = AdaptiveParamNoiseSpec(initial_stddev=float(stddev), desired_action_stddev=float(stddev))\n        elif 'normal' in current_noise_type:\n            _, stddev = current_noise_type.split('_')\n            action_noise = NormalActionNoise(mu=np.zeros(nb_actions), sigma=float(stddev) * np.ones(nb_actions))\n        elif 'ou' in current_noise_type:\n            _, stddev = current_noise_type.split('_')\n            action_noise = OrnsteinUhlenbeckActionNoise(mu=np.zeros(nb_actions),\n                                                        sigma=float(stddev) * np.ones(nb_actions))\n        else:\n            raise RuntimeError('unknown noise type \"{}\"'.format(current_noise_type))\n\n    # Configure components.\n    memory = Memory(limit=int(1e6), action_shape=env.action_space.shape, observation_shape=env.observation_space.shape)\n    critic = Critic(layer_norm=layer_norm)\n    actor = Actor(nb_actions, layer_norm=layer_norm)\n\n\n\n    # Seed everything to make things reproducible.\n    seed = seed + 1000000 * rank\n    logger.info('rank {}: seed={}, logdir={}'.format(rank, seed, logger.get_dir()))\n    tf.reset_default_graph()\n    set_global_seeds(seed)\n    env.seed(seed)\n    if eval_env is not None:\n        eval_env.seed(seed)\n\n    # Disable logging for rank != 0 to avoid noise.\n    if rank == 0:\n        start_time = time.time()\n    del kwargs['tb_dir']\n    del kwargs['save_path']\n    hindsight_mode = 
kwargs['hindsight_mode']\n del kwargs['hindsight_mode']\n training.train(env=env, eval_env=eval_env, param_noise=param_noise,\n action_noise=action_noise, actor=actor, critic=critic, memory=memory,\n hindsight_mode=hindsight_mode, **kwargs)\n env.close()\n if eval_env is not None:\n eval_env.close()\n if rank == 0:\n logger.info('total runtime: {}s'.format(time.time() - start_time))\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\n parser.add_argument('--env-id', type=str, default='HalfCheetah-v1')\n boolean_flag(parser, 'render-eval', default=False)\n boolean_flag(parser, 'layer-norm', default=True)\n boolean_flag(parser, 'render', default=False)\n boolean_flag(parser, 'normalize-returns', default=False)\n boolean_flag(parser, 'normalize-observations', default=True)\n parser.add_argument('--seed', help='RNG seed', type=int, default=0)\n parser.add_argument('--critic-l2-reg', type=float, default=1e-2)\n parser.add_argument('--batch-size', type=int, default=64) # per MPI worker\n parser.add_argument('--actor-lr', type=float, default=1e-4)\n parser.add_argument('--critic-lr', type=float, default=1e-3)\n boolean_flag(parser, 'popart', default=False)\n parser.add_argument('--gamma', type=float, default=0.99)\n parser.add_argument('--reward-scale', type=float, default=1.)\n parser.add_argument('--clip-norm', type=float, default=None)\n parser.add_argument('--nb-epochs', type=int, default=5000) # with default settings, perform 10M steps total (5000 * 20 * 100)\n parser.add_argument('--nb-epoch-cycles', type=int, default=20)\n parser.add_argument('--nb-train-steps', type=int, default=50) # per epoch cycle and MPI worker\n parser.add_argument('--nb-eval-steps', type=int, default=100) # per epoch cycle and MPI worker\n parser.add_argument('--nb-rollout-steps', type=int, default=100) # per epoch cycle and MPI worker\n parser.add_argument('--noise-type', type=str, default='normal_0.05') # choices are adaptive-param_xx, ou_xx, normal_xx, none\n parser.add_argument('--tb-dir', type=str, default=None)\n parser.add_argument('--num-timesteps', type=int, default=None)\n parser.add_argument('--restore-path', type=str, default=None)\n parser.add_argument('--save-path', type=str, default=None)\n parser.add_argument('--hindsight-mode', type=str, default=None)\n boolean_flag(parser, 'evaluation', default=False)\n args = parser.parse_args()\n # we don't directly specify timesteps for this script, so make sure that if we do specify them\n # they agree with the other parameters\n if args.save_path is not None:\n print('Warning: saving is not implemented yet')\n if args.num_timesteps is not None:\n assert (args.num_timesteps == args.nb_epochs * args.nb_epoch_cycles * args.nb_rollout_steps)\n dict_args = vars(args)\n del dict_args['num_timesteps']\n return dict_args\n\n\nif __name__ == '__main__':\n args = parse_args()\n if MPI.COMM_WORLD.Get_rank() == 0:\n logger.configure()\n if args['tb_dir'] is not None:\n logger.configure(dir=args['tb_dir'], format_strs=['stdout', 'tensorboard'])\n # Run actual script.\n run(**args)\n" ]
[ [ "tensorflow.reset_default_graph" ] ]
cap-michaili/Detectron.pytorch
[ "f49cd51c12ff5c8c3dd3264352116266aa3d7d85" ]
[ "lib/roi_data/retinanet.py" ]
[ "# Copyright (c) 2017-present, Facebook, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n##############################################################################\n\n\"\"\"Compute minibatch blobs for training a RetinaNet network.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport numpy as np\nimport logging\n\nimport utils.boxes as box_utils\nimport roi_data.data_utils as data_utils\nfrom core.config import cfg\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_retinanet_blob_names(is_training=True):\n \"\"\"\n Returns blob names in the order in which they are read by the data\n loader.\n \"\"\"\n # im_info: (height, width, image scale)\n blob_names = ['im_info']\n assert cfg.FPN.FPN_ON, \"RetinaNet uses FPN for dense detection\"\n # Same format as RPN blobs, but one per FPN level\n if is_training:\n blob_names += ['roidb', 'retnet_fg_num', 'retnet_bg_num']\n for lvl in range(cfg.FPN.RPN_MIN_LEVEL, cfg.FPN.RPN_MAX_LEVEL + 1):\n suffix = 'fpn{}'.format(lvl)\n blob_names += [\n 'retnet_cls_labels_' + suffix,\n 'retnet_roi_bbox_targets_' + suffix,\n 'retnet_bbox_inside_weights_wide_' + suffix,\n ]\n return blob_names\n\n\ndef add_retinanet_blobs(blobs, im_scales, roidb, image_width, image_height):\n \"\"\"Add RetinaNet blobs.\"\"\"\n # RetinaNet is applied to many feature levels, as in the FPN paper\n k_max, k_min = cfg.FPN.RPN_MAX_LEVEL, cfg.FPN.RPN_MIN_LEVEL\n scales_per_octave = cfg.RETINANET.SCALES_PER_OCTAVE\n num_aspect_ratios = len(cfg.RETINANET.ASPECT_RATIOS)\n aspect_ratios = cfg.RETINANET.ASPECT_RATIOS\n anchor_scale = cfg.RETINANET.ANCHOR_SCALE\n\n # get anchors from all levels for all scales/aspect ratios\n foas = []\n for lvl in range(k_min, k_max + 1):\n stride = 2. ** lvl\n for octave in range(scales_per_octave):\n octave_scale = 2 ** (octave / float(scales_per_octave))\n for idx in range(num_aspect_ratios):\n anchor_sizes = (stride * octave_scale * anchor_scale,)\n anchor_aspect_ratios = (aspect_ratios[idx],)\n foa = data_utils.get_field_of_anchors(\n stride, anchor_sizes, anchor_aspect_ratios, octave, idx)\n foas.append(foa)\n all_anchors = np.concatenate([f.field_of_anchors for f in foas])\n\n blobs['retnet_fg_num'], blobs['retnet_bg_num'] = 0.0, 0.0\n for im_i, entry in enumerate(roidb):\n scale = im_scales[im_i]\n im_height = np.round(entry['height'] * scale)\n im_width = np.round(entry['width'] * scale)\n gt_inds = np.where(\n (entry['gt_classes'] > 0) & (entry['is_crowd'] == 0))[0]\n assert len(gt_inds) > 0, \\\n 'Empty ground truth empty for image is not allowed. 
Please check.'\n\n gt_rois = entry['boxes'][gt_inds, :] * scale\n gt_classes = entry['gt_classes'][gt_inds]\n\n im_info = np.array([[im_height, im_width, scale]], dtype=np.float32)\n blobs['im_info'].append(im_info)\n\n retinanet_blobs, fg_num, bg_num = _get_retinanet_blobs(\n foas, all_anchors, gt_rois, gt_classes, image_width, image_height)\n for i, foa in enumerate(foas):\n for k, v in retinanet_blobs[i].items():\n level = int(np.log2(foa.stride))\n key = '{}_fpn{}'.format(k, level)\n blobs[key].append(v)\n blobs['retnet_fg_num'] += fg_num\n blobs['retnet_bg_num'] += bg_num\n\n blobs['retnet_fg_num'] = blobs['retnet_fg_num'].astype(np.float32)\n blobs['retnet_bg_num'] = blobs['retnet_bg_num'].astype(np.float32)\n\n N = len(roidb)\n for k, v in blobs.items():\n if isinstance(v, list) and len(v) > 0:\n # compute number of anchors\n A = int(len(v) / N)\n # for the cls branch labels [per fpn level],\n # we have blobs['retnet_cls_labels_fpn{}'] as a list until this step\n # and length of this list is N x A where\n # N = num_images, A = num_anchors for example, N = 2, A = 9\n # Each element of the list has the shape 1 x 1 x H x W where H, W are\n # spatial dimensions of the current fpn lvl. Let a{i} denote the element\n # corresponding to anchor i [9 anchors total] in the list.\n # The elements in the list are in order [[a0, ..., a9], [a0, ..., a9]]\n # however the network will make predictions like 2 x (9 * 80) x H x W\n # so we first concatenate the elements of each image to a numpy array\n # and then concatenate the two images to get the 2 x 9 x H x W\n\n if k.find('retnet_cls_labels') >= 0 \\\n or k.find('retnet_roi_bbox_targets') >= 0:\n tmp = []\n # concat anchors within an image\n for i in range(0, len(v), A):\n tmp.append(np.concatenate(v[i: i + A], axis=1))\n # concat images\n blobs[k] = np.concatenate(tmp, axis=0)\n else:\n # for the bbox branch elements [per FPN level],\n # we have the targets and the fg boxes locations\n # in the shape: M x 4 where M is the number of fg locations in a\n # given image at the current FPN level. 
The elements in the list are in\n # order [[a0, ..., a9], [a0, ..., a9]]\n # Concatenate them to form M x 4\n blobs[k] = np.expand_dims(np.concatenate(v, axis=0), axis=0)\n\n valid_keys = [\n 'has_visible_keypoints', 'boxes', 'segms', 'seg_areas', 'gt_classes',\n 'gt_overlaps', 'is_crowd', 'box_to_gt_ind_map', 'gt_keypoints'\n ]\n minimal_roidb = [{} for _ in range(len(roidb))]\n for i, e in enumerate(roidb):\n for k in valid_keys:\n if k in e:\n minimal_roidb[i][k] = e[k]\n # blobs['roidb'] = blob_utils.serialize(minimal_roidb)\n blobs['roidb'] = minimal_roidb\n\n return True\n\n\ndef _get_retinanet_blobs(\n foas, all_anchors, gt_boxes, gt_classes, im_width, im_height):\n total_anchors = all_anchors.shape[0]\n logger.debug('Getting mad blobs: im_height {} im_width: {}'.format(\n im_height, im_width))\n\n inds_inside = np.arange(all_anchors.shape[0])\n anchors = all_anchors\n num_inside = len(inds_inside)\n\n logger.debug('total_anchors: {}'.format(total_anchors))\n logger.debug('inds_inside: {}'.format(num_inside))\n logger.debug('anchors.shape: {}'.format(anchors.shape))\n\n # Compute anchor labels:\n # label=1 is positive, 0 is negative, -1 is don't care (ignore)\n labels = np.empty((num_inside,), dtype=np.float32)\n labels.fill(-1)\n if len(gt_boxes) > 0:\n # Compute overlaps between the anchors and the gt boxes overlaps\n anchor_by_gt_overlap = box_utils.bbox_overlaps(anchors, gt_boxes)\n # Map from anchor to gt box that has highest overlap\n anchor_to_gt_argmax = anchor_by_gt_overlap.argmax(axis=1)\n # For each anchor, amount of overlap with most overlapping gt box\n anchor_to_gt_max = anchor_by_gt_overlap[\n np.arange(num_inside), anchor_to_gt_argmax]\n\n # Map from gt box to an anchor that has highest overlap\n gt_to_anchor_argmax = anchor_by_gt_overlap.argmax(axis=0)\n # For each gt box, amount of overlap with most overlapping anchor\n gt_to_anchor_max = anchor_by_gt_overlap[\n gt_to_anchor_argmax, np.arange(anchor_by_gt_overlap.shape[1])]\n # Find all anchors that share the max overlap amount\n # (this includes many ties)\n anchors_with_max_overlap = np.where(\n anchor_by_gt_overlap == gt_to_anchor_max)[0]\n\n # Fg label: for each gt use anchors with highest overlap\n # (including ties)\n gt_inds = anchor_to_gt_argmax[anchors_with_max_overlap]\n labels[anchors_with_max_overlap] = gt_classes[gt_inds]\n # Fg label: above threshold IOU\n inds = anchor_to_gt_max >= cfg.RETINANET.POSITIVE_OVERLAP\n gt_inds = anchor_to_gt_argmax[inds]\n labels[inds] = gt_classes[gt_inds]\n\n fg_inds = np.where(labels >= 1)[0]\n bg_inds = np.where(anchor_to_gt_max < cfg.RETINANET.NEGATIVE_OVERLAP)[0]\n labels[bg_inds] = 0\n num_fg, num_bg = len(fg_inds), len(bg_inds)\n\n bbox_targets = np.zeros((num_inside, 4), dtype=np.float32)\n bbox_targets[fg_inds, :] = data_utils.compute_targets(\n anchors[fg_inds, :], gt_boxes[anchor_to_gt_argmax[fg_inds], :])\n\n # Bbox regression loss has the form:\n # loss(x) = weight_outside * L(weight_inside * x)\n # Inside weights allow us to set zero loss on an element-wise basis\n # Bbox regression is only trained on positive examples so we set their\n # weights to 1.0 (or otherwise if config is different) and 0 otherwise\n bbox_inside_weights = np.zeros((num_inside, 4), dtype=np.float32)\n bbox_inside_weights[labels >= 1, :] = (1.0, 1.0, 1.0, 1.0)\n\n # Map up to original set of anchors\n labels = data_utils.unmap(labels, total_anchors, inds_inside, fill=-1)\n bbox_inside_weights = data_utils.unmap(\n bbox_inside_weights, total_anchors, inds_inside, fill=0\n )\n 
bbox_targets = data_utils.unmap(\n bbox_targets, total_anchors, inds_inside, fill=0)\n\n # Split the generated labels, etc. into labels per each field of anchors\n blobs_out = []\n start_idx = 0\n for i, foa in enumerate(foas):\n H = foa.field_size\n W = foa.field_size\n end_idx = start_idx + H * W\n _labels = labels[start_idx:end_idx]\n _bbox_targets = bbox_targets[start_idx:end_idx, :]\n _bbox_inside_weights = bbox_inside_weights[start_idx:end_idx, :]\n start_idx = end_idx\n # labels output with shape (1, height, width)\n _labels = _labels.reshape((1, 1, H, W))\n # bbox_targets output with shape (1, 4 * A, height, width)\n _bbox_targets = _bbox_targets.reshape(\n (1, 1, H, W, 4)).transpose(0, 1, 4, 2, 3)\n\n # bbox_inside_weights output with shape (1, 4 * A, height, width)\n _bbox_inside_weights = _bbox_inside_weights.reshape(\n (1, H, W, 4)).transpose(0, 3, 1, 2)\n\n blobs_out.append(\n dict(\n retnet_cls_labels=_labels.astype(np.int32),\n retnet_roi_bbox_targets=_bbox_targets.astype(np.float32),\n retnet_bbox_inside_weights_wide=_bbox_inside_weights\n ))\n out_num_fg = np.array([num_fg + 1.0], dtype=np.float32)\n out_num_bg = (\n np.array([num_bg + 1.0]) * (cfg.MODEL.NUM_CLASSES - 1) +\n out_num_fg * (cfg.MODEL.NUM_CLASSES - 2))\n return blobs_out, out_num_fg, out_num_bg\n" ]
[ [ "numpy.concatenate", "numpy.array", "numpy.empty", "numpy.zeros", "numpy.round", "numpy.where", "numpy.arange", "numpy.log2" ] ]
ianhi/ipympl-interactions
[ "9c48292d5c5931cd136da25961729858b7846abb" ]
[ "mpl_interactions/pyplot.py" ]
[ "\"\"\"Control the output of standard plotting functions such as :func:`~matplotlib.pyplot.plot` and\n:func:`~matplotlib.pyplot.hist` using sliders and other widgets. When using the ``ipympl`` backend\nthese functions will leverage ipywidgets for the controls, otherwise they will use the built-in\nMatplotlib widgets.\"\"\"\n\n\nfrom collections.abc import Callable\nfrom numbers import Number\n\nimport matplotlib.markers as mmarkers\nimport numpy as np\nfrom matplotlib.collections import PatchCollection\nfrom matplotlib.colors import to_rgba_array\nfrom matplotlib.patches import Rectangle\n\nfrom .controller import gogogo_controls, prep_scalars\nfrom .helpers import (\n callable_else_value,\n callable_else_value_no_cast,\n create_slider_format_dict,\n eval_xy,\n gogogo_figure,\n notebook_backend,\n sca,\n update_datalim_from_bbox,\n)\nfrom .mpl_kwargs import (\n Line2D_kwargs_list,\n Text_kwargs_list,\n collection_kwargs_list,\n imshow_kwargs_list,\n kwarg_popper,\n)\n\n__all__ = [\n \"interactive_plot\",\n \"interactive_hist\",\n \"interactive_scatter\",\n \"interactive_imshow\",\n \"interactive_axhline\",\n \"interactive_axvline\",\n \"interactive_title\",\n \"interactive_xlabel\",\n \"interactive_ylabel\",\n]\n\n\ndef interactive_plot(\n *args,\n parametric=False,\n ax=None,\n slider_formats=None,\n xlim=\"stretch\",\n ylim=\"stretch\",\n force_ipywidgets=False,\n play_buttons=None,\n controls=None,\n display_controls=True,\n **kwargs,\n):\n \"\"\"\n Control a plot using widgets\n\n interactive_plot([x], y, [fmt])\n\n where x/y is are either arraylike or a function that returns arrays. Any kwargs accepted by\n `matplotlib.pyplot.plot` will be passed through, other kwargs will be intrepreted as controls\n\n Parameters\n ----------\n x, y : array-like or scalar or function\n The horizontal / vertical coordinates of the data points.\n *x* values are optional and default to ``range(len(y))``. If both *x* and *y* are\n provided and *y* is a function then it will be called as ``y(x, **params)``. If\n *x* is a function it will be called as ``x(**params)``\n fmt : str, optional\n A format string, e.g. 'ro' for red circles. See matplotlib.pyplot.plot\n for full documentation.\n as xlim\n parametric : boolean\n If True then the function expects to have only received a value for y and that that function will\n return an array for both x and y, or will return an array with shape (N, 2)\n ax : matplotlib axis, optional\n The axis on which to plot. If none the current axis will be used.\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. Uses the new {} style formatting\n xlim : string or tuple of floats, optional\n If a tuple it will be passed to ax.set_xlim. Other options are:\n 'auto': rescale the x axis for every redraw\n 'stretch': only ever expand the xlims.\n ylim : string or tuple of floats, optional\n If a tuple it will be passed to ax.set_ylim. 
Other options are same as xlim\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n display_controls : boolean\n Whether the controls should display on creation. Ignored if controls is specified.\n\n Returns\n -------\n controls\n\n Examples\n --------\n With numpy arrays::\n\n x = np.linspace(0,2*np.pi)\n tau = np.linspace(0, np.pi)\n def f(tau):\n return np.sin(x*tau)\n interactive_plot(f, tau=tau)\n\n with tuples::\n\n x = np.linspace(0,2*np.pi)\n def f(x, tau):\n return np.sin(x+tau)\n interactive_plot(x, f, tau=(0, np.pi, 1000))\n\n \"\"\"\n kwargs, plot_kwargs = kwarg_popper(kwargs, Line2D_kwargs_list)\n x_and_y = False\n x = None\n fmt = None\n if len(args) == 0:\n # wot...\n return\n elif len(args) == 1:\n y = args[0]\n elif len(args) == 2:\n # either (y, fmt) or (x, y)\n # hard to know for sure though bc fmt can be a function\n # or maybe just requirement that fmt is a function\n if isinstance(args[1], str):\n y, fmt = args\n else:\n x_and_y = True\n x, y = args\n elif len(args) == 3:\n x_and_y = True\n x, y, fmt = args\n else:\n raise ValueError(f\"You passed in {len(args)} args, but no more than 3 is supported.\")\n\n ipympl = notebook_backend()\n ipympl or force_ipywidgets\n fig, ax = gogogo_figure(ipympl, ax=ax)\n slider_formats = create_slider_format_dict(slider_formats)\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons\n )\n\n def update(params, indices, cache):\n if x_and_y:\n x_, y_ = eval_xy(x, y, params, cache)\n # broadcast so that we can always index\n if x_.ndim == 1:\n x_ = np.broadcast_to(x_[:, None], (x_.shape[0], len(lines)))\n if y_.ndim == 1:\n y_ = np.broadcast_to(y_[:, None], (y_.shape[0], len(lines)))\n for i, line in enumerate(lines):\n line.set_data(x_[:, i], y_[:, i])\n elif parametric:\n # transpose to splat bc matplotlib considers columns of arrays to be\n # the datasets\n # I don't think it's possible to have multiple lines here\n # assert len(lines) == 1\n out = callable_else_value_no_cast(y, params, cache)\n if isinstance(out, tuple):\n pass\n elif isinstance(out, np.ndarray):\n # transpose bc set_data expects a different shape than plot\n out = np.asanyarray(out).T\n # else hope for the best lol\n lines[0].set_data(*out)\n else:\n y_ = callable_else_value(y, params, cache)\n if y_.ndim == 1:\n y_ = np.broadcast_to(y_[:, None], (y_.shape[0], len(lines)))\n for i, line in enumerate(lines):\n line.set_ydata(y_[:, i])\n\n cur_xlims = ax.get_xlim()\n cur_ylims = ax.get_ylim()\n ax.relim() # this may be expensive? 
don't do if not necessary?\n if ylim == \"auto\":\n ax.autoscale_view(scalex=False)\n elif ylim == \"stretch\":\n new_lims = [ax.dataLim.y0, ax.dataLim.y0 + ax.dataLim.height]\n new_lims = [\n new_lims[0] if new_lims[0] < cur_ylims[0] else cur_ylims[0],\n new_lims[1] if new_lims[1] > cur_ylims[1] else cur_ylims[1],\n ]\n ax.set_ylim(new_lims)\n if xlim == \"auto\":\n ax.autoscale_view(scaley=False)\n elif xlim == \"stretch\":\n new_lims = [ax.dataLim.x0, ax.dataLim.x0 + ax.dataLim.width]\n new_lims = [\n new_lims[0] if new_lims[0] < cur_xlims[0] else cur_xlims[0],\n new_lims[1] if new_lims[1] > cur_xlims[1] else cur_xlims[1],\n ]\n ax.set_xlim(new_lims)\n\n controls._register_function(update, fig, params.keys())\n\n if x_and_y:\n x_, y_ = eval_xy(x, y, params)\n if fmt:\n lines = ax.plot(x_, y_, fmt, **plot_kwargs)\n else:\n lines = ax.plot(x_, y_, **plot_kwargs)\n else:\n y_ = callable_else_value_no_cast(y, params)\n # set up to ensure that splatting works well\n if parametric and not isinstance(y_, tuple):\n y_ = np.asanyarray(y_).T\n else:\n # make a tuple so we can splat it\n # reduces the number of if statements necessary to plot\n # parametric functions\n y_ = (y_,)\n\n if fmt:\n lines = ax.plot(*y_, fmt, **plot_kwargs)\n else:\n lines = ax.plot(*y_, **plot_kwargs)\n\n try:\n # hack in the way it feels like matplotlib should behave\n # this is a necessary change to support ODEs which is a reasonable use case for\n # this library - lesser of two evils situation. (the evil here is deviating from matplotlib)\n labels = plot_kwargs[\"label\"]\n if (\n len(lines) > 1\n and (isinstance(labels, list) or isinstance(labels, tuple))\n and len(labels) == len(lines)\n ):\n for label, line in zip(labels, lines):\n line.set_label(label)\n except KeyError:\n pass\n\n if not isinstance(xlim, str):\n ax.set_xlim(xlim)\n if not isinstance(ylim, str):\n ax.set_ylim(ylim)\n\n # make sure the home button will work\n if hasattr(fig.canvas, \"toolbar\") and fig.canvas.toolbar is not None:\n fig.canvas.toolbar.push_current()\n # set current axis to be pyplot-like\n sca(ax)\n\n return controls\n\n\ndef simple_hist(arr, bins=\"auto\", density=None, weights=None):\n heights, bins = np.histogram(arr, bins=bins, density=density, weights=weights)\n width = bins[1] - bins[0]\n new_patches = []\n for i in range(len(heights)):\n new_patches.append(Rectangle((bins[i], 0), width=width, height=heights[i]))\n xlims = (bins.min(), bins.max())\n ylims = (0, heights.max() * 1.05)\n\n return xlims, ylims, new_patches\n\n\ndef stretch(ax, xlims, ylims):\n cur_xlims = ax.get_xlim()\n cur_ylims = ax.get_ylim()\n new_lims = ylims\n new_lims = [\n new_lims[0] if new_lims[0] < cur_ylims[0] else cur_ylims[0],\n new_lims[1] if new_lims[1] > cur_ylims[1] else cur_ylims[1],\n ]\n ax.set_ylim(new_lims)\n new_lims = xlims\n new_lims = [\n new_lims[0] if new_lims[0] < cur_xlims[0] else cur_xlims[0],\n new_lims[1] if new_lims[1] > cur_xlims[1] else cur_xlims[1],\n ]\n ax.set_xlim(new_lims)\n\n\ndef interactive_hist(\n arr,\n density=False,\n bins=\"auto\",\n weights=None,\n ax=None,\n slider_formats=None,\n force_ipywidgets=False,\n play_buttons=False,\n controls=None,\n display_controls=True,\n **kwargs,\n):\n \"\"\"\n Control the contents of a histogram using widgets.\n\n See https://github.com/ianhi/mpl-interactions/pull/73#issue-470638134 for a discussion\n of the limitations of this function. 
These limitations will be improved once\n https://github.com/matplotlib/matplotlib/pull/18275 has been merged.\n\n Parameters\n ----------\n arr : arraylike or function\n The array or the function that returns an array that is to be histogrammed\n density : bool, optional\n whether to plot as a probability density. Passed to `numpy.histogram`\n bins : int or sequence of scalars or str, optional\n bins argument to `numpy.histogram`\n weights : array_like, optional\n passed to `numpy.histogram`\n ax : matplotlib axis, optional\n The axis on which to plot. If none the current axis will be used.\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. Uses the new {} style formatting\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n display_controls : boolean\n Whether the controls should display on creation. Ignored if controls is specified.\n\n Returns\n -------\n controls\n\n Examples\n --------\n With numpy arrays::\n\n loc = np.linspace(-5, 5, 500)\n scale = np.linspace(1, 10, 100)\n def f(loc, scale):\n return np.random.randn(1000)*scale + loc\n interactive_hist(f, loc=loc, scale=scale)\n\n with tuples::\n\n def f(loc, scale):\n return np.random.randn(1000)*scale + loc\n interactive_hist(f, loc=(-5, 5, 500), scale=(1, 10, 100))\n \"\"\"\n\n ipympl = notebook_backend()\n fig, ax = gogogo_figure(ipympl, ax=ax)\n ipympl or force_ipywidgets\n slider_formats = create_slider_format_dict(slider_formats)\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons\n )\n pc = PatchCollection([])\n ax.add_collection(pc, autolim=True)\n\n def update(params, indices, cache):\n arr_ = callable_else_value(arr, params, cache)\n new_x, new_y, new_patches = simple_hist(arr_, density=density, bins=bins, weights=weights)\n stretch(ax, new_x, new_y)\n pc.set_paths(new_patches)\n ax.autoscale_view()\n\n controls._register_function(update, fig, params.keys())\n\n new_x, new_y, new_patches = simple_hist(\n callable_else_value(arr, params), density=density, bins=bins, weights=weights\n )\n sca(ax)\n pc.set_paths(new_patches)\n ax.set_xlim(new_x)\n ax.set_ylim(new_y)\n\n return controls\n\n\ndef interactive_scatter(\n x,\n y=None,\n s=None,\n c=None,\n cmap=None,\n vmin=None,\n vmax=None,\n alpha=None,\n marker=None,\n edgecolors=None,\n facecolors=None,\n label=None,\n parametric=False,\n ax=None,\n slider_formats=None,\n xlim=\"stretch\",\n ylim=\"stretch\",\n force_ipywidgets=False,\n play_buttons=False,\n controls=None,\n display_controls=True,\n **kwargs,\n):\n \"\"\"\n Control a scatter plot using widgets.\n\n Parameters\n ----------\n x, y : function or float or array-like\n shape (n, ) for array-like. Functions must return the correct shape as well. 
If y is None\n then parametric must be True and the function for x must return x, y\n c : array-like or list of colors or color or Callable\n Valid input to plt.scatter or a function\n s : float, array-like, function, or index controls object\n valid input to plt.scatter, or a function\n alpha : float, None, or function(s), broadcastable\n Affects all scatter points. This will compound with any alpha introduced by\n the ``c`` argument\n marker : MarkerStyle, or Callable, optional\n The marker style or a function returning marker styles.\n edgecolor[s] : callable or valid argument to scatter\n passed through to scatter.\n facecolor[s] : callable or valid argument to scatter\n Valid input to plt.scatter, or a function\n label : string\n Passed through to Matplotlib\n parametric : boolean\n If True then the function expects to have only received a value for y and that the function will\n return an array for both x and y, or will return an array with shape (N, 2)\n ax : matplotlib axis, optional\n The axis on which to plot. If none the current axis will be used.\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. Uses the new {} style formatting\n xlim : string or tuple of floats, optional\n If a tuple it will be passed to ax.set_xlim. Other options are:\n 'auto': rescale the x axis for every redraw\n 'stretch': only ever expand the xlims.\n ylim : string or tuple of floats, optional\n If a tuple it will be passed to ax.set_ylim. Other options are same as xlim\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n display_controls : boolean\n Whether the controls should display on creation. 
Ignored if controls is specified.\n\n Returns\n -------\n controls\n \"\"\"\n\n if isinstance(xlim, str):\n stretch_x = xlim == \"stretch\"\n else:\n stretch_x = False\n\n if isinstance(ylim, str) and ylim.lower() == \"stretch\":\n stretch_y = True\n else:\n stretch_y = False\n\n # yanked from https://github.com/matplotlib/matplotlib/blob/bcc1ce8461f5b6e874baaaa02ef776d0243a4abe/lib/matplotlib/axes/_axes.py#L4271-L4273\n facecolors = kwargs.pop(\"facecolor\", facecolors)\n edgecolors = kwargs.pop(\"edgecolor\", edgecolors)\n\n kwargs, collection_kwargs = kwarg_popper(kwargs, collection_kwargs_list)\n\n ipympl = notebook_backend()\n fig, ax = gogogo_figure(ipympl, ax)\n ipympl or force_ipywidgets\n slider_formats = create_slider_format_dict(slider_formats)\n\n extra_ctrls = []\n funcs, extra_ctrls, param_excluder = prep_scalars(kwargs, s=s, alpha=alpha, marker=marker)\n s = funcs[\"s\"]\n alpha = funcs[\"alpha\"]\n marker = funcs[\"marker\"]\n\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons, extra_ctrls\n )\n\n def update(params, indices, cache):\n if parametric:\n out = callable_else_value_no_cast(x, param_excluder(params))\n if not isinstance(out, tuple):\n out = np.asanyarray(out).T\n x_, y_ = out\n else:\n x_, y_ = eval_xy(x, y, param_excluder(params), cache)\n scatter.set_offsets(np.column_stack([x_, y_]))\n c_ = check_callable_xy(c, x_, y_, param_excluder(params), cache)\n s_ = check_callable_xy(s, x_, y_, param_excluder(params, \"s\"), cache)\n ec_ = check_callable_xy(edgecolors, x_, y_, param_excluder(params), cache)\n fc_ = check_callable_xy(facecolors, x_, y_, param_excluder(params), cache)\n a_ = check_callable_alpha(alpha, param_excluder(params, \"alpha\"), cache)\n marker_ = callable_else_value_no_cast(marker, param_excluder(params), cache)\n\n if marker_ is not None:\n if not isinstance(marker_, mmarkers.MarkerStyle):\n marker_ = mmarkers.MarkerStyle(marker_)\n path = marker_.get_path().transformed(marker_.get_transform())\n scatter.set_paths((path,))\n\n if c_ is not None:\n try:\n c_ = to_rgba_array(c_)\n except ValueError:\n try:\n c_ = scatter.cmap(c_)\n except TypeError:\n raise ValueError(\n \"If c is a function it must return either an RGB(A) array \"\n \"or a 1D array of valid color names or values to be colormapped\"\n )\n scatter.set_facecolor(c_)\n if ec_ is not None:\n scatter.set_edgecolor(ec_)\n if fc_ is not None:\n scatter.set_facecolor(fc_)\n if s_ is not None:\n if isinstance(s_, Number):\n s_ = np.broadcast_to(s_, (len(x_),))\n scatter.set_sizes(s_)\n if a_ is not None:\n scatter.set_alpha(a_)\n\n update_datalim_from_bbox(\n ax, scatter.get_datalim(ax.transData), stretch_x=stretch_x, stretch_y=stretch_y\n )\n ax.autoscale_view()\n\n controls._register_function(update, fig, params.keys())\n\n def check_callable_xy(arg, x, y, params, cache):\n if isinstance(arg, Callable):\n if arg not in cache:\n cache[arg] = arg(x, y, **params)\n return cache[arg]\n else:\n return arg\n\n def check_callable_alpha(alpha_, params, cache):\n if isinstance(alpha_, Callable):\n if alpha_ not in cache:\n cache[alpha_] = alpha_(**param_excluder(params, \"alpha\"))\n return cache[alpha_]\n else:\n return alpha_\n\n p = param_excluder(params)\n if parametric:\n out = callable_else_value_no_cast(x, p)\n if not isinstance(out, tuple):\n out = np.asanyarray(out).T\n x_, y_ = out\n else:\n x_, y_ = eval_xy(x, y, p)\n c_ = check_callable_xy(c, x_, y_, p, {})\n s_ = check_callable_xy(s, x_, y_, param_excluder(params, \"s\"), {})\n ec_ 
= check_callable_xy(edgecolors, x_, y_, p, {})\n fc_ = check_callable_xy(facecolors, x_, y_, p, {})\n a_ = check_callable_alpha(alpha, params, {})\n marker_ = callable_else_value_no_cast(marker, p, {})\n scatter = ax.scatter(\n x_,\n y_,\n c=c_,\n s=s_,\n vmin=vmin,\n vmax=vmax,\n cmap=cmap,\n marker=marker_,\n alpha=a_,\n edgecolors=ec_,\n facecolors=fc_,\n label=label,\n **collection_kwargs,\n )\n # this is necessary to make calls to plt.colorbar behave as expected\n sca(ax)\n ax._sci(scatter)\n\n return controls\n\n\n# portions of this docstring were copied directly from the docstring\n# of `matplotlib.pyplot.imshow`\ndef interactive_imshow(\n X,\n cmap=None,\n norm=None,\n aspect=None,\n interpolation=None,\n alpha=None,\n vmin=None,\n vmax=None,\n vmin_vmax=None,\n origin=None,\n extent=None,\n autoscale_cmap=True,\n filternorm=True,\n filterrad=4.0,\n resample=None,\n url=None,\n ax=None,\n slider_formats=None,\n force_ipywidgets=False,\n play_buttons=False,\n controls=None,\n display_controls=True,\n **kwargs,\n):\n \"\"\"\n Control an image using widgets.\n\n Parameters\n ----------\n X : function or image like\n If a function it must return an image-like object. See matplotlib.pyplot.imshow for the\n full set of valid options.\n cmap : str or `~matplotlib.colors.Colormap`\n The Colormap instance or registered colormap name used to map\n scalar data to colors. This parameter is ignored for RGB(A) data.\n forwarded to matplotlib\n norm : `~matplotlib.colors.Normalize`, optional\n The `~matplotlib.colors.Normalize` instance used to scale scalar data to the [0, 1]\n range before mapping to colors using *cmap*. By default, a linear\n scaling mapping the lowest value to 0 and the highest to 1 is used.\n This parameter is ignored for RGB(A) data.\n forwarded to matplotlib\n autoscale_cmap : bool\n If True rescale the colormap for every function update. Will not update\n if vmin and vmax are provided or if the returned image is RGB(A) like.\n aspect : {'equal', 'auto'} or float\n forwarded to matplotlib\n interpolation : str\n forwarded to matplotlib\n alpha : float, callable, shorthand for slider or indexed controls\n The alpha value of the image. Can accept a float for a fixed value,\n or any slider shorthand to control with a slider, or an indexed controls\n object to use an existing slider, or an arbitrary function of the other\n parameters.\n ax : matplotlib axis, optional\n The axis on which to plot. If none the current axis will be used.\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. Uses the new {} style formatting\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n display_controls : boolean\n Whether the controls should display on creation. 
Ignored if controls is specified.\n\n Returns\n -------\n controls\n \"\"\"\n ipympl = notebook_backend()\n fig, ax = gogogo_figure(ipympl, ax)\n ipympl or force_ipywidgets\n slider_formats = create_slider_format_dict(slider_formats)\n kwargs, imshow_kwargs = kwarg_popper(kwargs, imshow_kwargs_list)\n\n funcs, extra_ctrls, param_excluder = prep_scalars(kwargs, vmin=vmin, vmax=vmax, alpha=alpha)\n vmin = funcs[\"vmin\"]\n vmax = funcs[\"vmax\"]\n alpha = funcs[\"alpha\"]\n\n if vmin_vmax is not None:\n if isinstance(vmin_vmax, tuple) and not isinstance(vmin_vmax[0], str):\n vmin_vmax = (\"r\", *vmin_vmax)\n kwargs[\"vmin_vmax\"] = vmin_vmax\n\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons, extra_ctrls\n )\n if vmin_vmax is not None:\n params.pop(\"vmin_vmax\")\n params[\"vmin\"] = controls.params[\"vmin\"]\n params[\"vmax\"] = controls.params[\"vmax\"]\n\n def vmin(**kwargs):\n return kwargs[\"vmin\"]\n\n def vmax(**kwargs):\n return kwargs[\"vmax\"]\n\n def update(params, indices, cache):\n if isinstance(X, Callable):\n # ignore anything that we added directly to kwargs in prep_scalar\n # if we don't do this then we might pass the user a kwarg their function\n # didn't expect and things may break\n # check this here to avoid setting the data if we don't need to\n # use the callable_else_value fxn to make use of easy caching\n new_data = callable_else_value(X, param_excluder(params), cache)\n im.set_data(new_data)\n if autoscale_cmap and (new_data.ndim != 3) and vmin is None and vmax is None:\n im.norm.autoscale(new_data)\n # caching for these?\n if isinstance(vmin, Callable):\n im.norm.vmin = callable_else_value(vmin, param_excluder(params, \"vmin\"), cache)\n if isinstance(vmax, Callable):\n im.norm.vmax = callable_else_value(vmax, param_excluder(params, \"vmax\"), cache)\n # Don't use callable_else_value to avoid unnecessary updates\n # Seems as though set_alpha doesn't short circuit if the value\n # hasn't been changed\n if isinstance(alpha, Callable):\n im.set_alpha(callable_else_value_no_cast(alpha, param_excluder(params, \"alpha\"), cache))\n\n controls._register_function(update, fig, params.keys())\n\n # make it once here so we can use the dims in update\n # see explanation for excluded_params in the update function\n new_data = callable_else_value(X, param_excluder(params))\n sca(ax)\n im = ax.imshow(\n new_data,\n cmap=cmap,\n norm=norm,\n aspect=aspect,\n interpolation=interpolation,\n alpha=callable_else_value_no_cast(alpha, param_excluder(params, \"alpha\")),\n vmin=callable_else_value(vmin, param_excluder(params, \"vmin\")),\n vmax=callable_else_value(vmax, param_excluder(params, \"vmax\")),\n origin=origin,\n extent=extent,\n filternorm=filternorm,\n filterrad=filterrad,\n resample=resample,\n url=url,\n **imshow_kwargs,\n )\n\n # i know it's bad news to use private methods :(\n # but idk how else to accomplish being a pseudo-pyplot\n ax._sci(im)\n return controls\n\n\ndef interactive_axhline(\n y=0,\n xmin=0,\n xmax=1,\n ax=None,\n slider_formats=None,\n force_ipywidgets=False,\n play_buttons=False,\n controls=None,\n display_controls=True,\n **kwargs,\n):\n \"\"\"\n Control a horizontal line using widgets.\n\n Parameters\n ----------\n y : float or function\n y position in data coordinates of the horizontal line.\n xmin : float or function\n Should be between 0 and 1, 0 being the far left of the plot, 1 the\n far right of the plot.\n xmax : float or function\n Should be between 0 and 1, 0 being the far left of the 
plot, 1 the\n far right of the plot.\n ax : matplotlib axis, optional\n If None a new figure and axis will be created\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. Uses the new {} style formatting\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n display_controls : boolean\n Whether the controls should display on creation. Ignored if controls is specified.\n **kwargs\n Kwargs will be used to create control widgets. Except kwargs that are valid for Line2D are\n extracted and passed through to the creation of the line.\n\n Returns\n -------\n controls\n \"\"\"\n ipympl = notebook_backend()\n fig, ax = gogogo_figure(ipympl, ax)\n ipympl or force_ipywidgets\n slider_formats = create_slider_format_dict(slider_formats)\n\n kwargs, line_kwargs = kwarg_popper(kwargs, Line2D_kwargs_list)\n line_kwargs.pop(\"transform\", None) # transform is not a valid kwarg for ax{v,h}line\n\n extra_ctrls = []\n funcs, extra_ctrls, param_excluder = prep_scalars(kwargs, y=y, xmin=xmin, xmax=xmax)\n y = funcs[\"y\"]\n xmin = funcs[\"xmin\"]\n xmax = funcs[\"xmax\"]\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons, extra_ctrls\n )\n\n def update(params, indices, cache):\n y_ = callable_else_value(y, param_excluder(params, \"y\"), cache).item()\n line.set_ydata([y_, y_])\n xmin_ = callable_else_value(xmin, param_excluder(params, \"xmin\"), cache).item()\n xmax_ = callable_else_value(xmax, param_excluder(params, \"xmax\"), cache).item()\n line.set_xdata([xmin_, xmax_])\n # TODO consider updating just the ydatalim here\n\n controls._register_function(update, fig, params)\n sca(ax)\n line = ax.axhline(\n callable_else_value(y, param_excluder(params, \"y\")).item(),\n callable_else_value(xmin, param_excluder(params, \"xmin\")).item(),\n callable_else_value(xmax, param_excluder(params, \"xmax\")).item(),\n **line_kwargs,\n )\n return controls\n\n\ndef interactive_axvline(\n x=0,\n ymin=0,\n ymax=1,\n ax=None,\n slider_formats=None,\n force_ipywidgets=False,\n play_buttons=False,\n controls=None,\n display_controls=True,\n **kwargs,\n):\n \"\"\"\n Control a vertical line using widgets.\n\n Parameters\n ----------\n x : float or function\n x position in data coordinates of the vertical line.\n ymin : float or function\n Should be between 0 and 1, 0 being the bottom of the plot, 1 the\n top of the plot.\n ymax : float or function\n Should be between 0 and 1, 0 being the bottom of the plot, 1 the\n top of the plot.\n ax : matplotlib axis, optional\n If None a new figure and axis will be created\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. 
Uses the new {} style formatting\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n display_controls : boolean\n Whether the controls should display on creation. Ignored if controls is specified.\n **kwargs\n Kwargs will be used to create control widgets. Except kwargs that are valid for Line2D are\n extracted and passed through to the creation of the line.\n\n Returns\n -------\n controls\n \"\"\"\n ipympl = notebook_backend()\n fig, ax = gogogo_figure(ipympl, ax)\n ipympl or force_ipywidgets\n slider_formats = create_slider_format_dict(slider_formats)\n\n kwargs, line_kwargs = kwarg_popper(kwargs, Line2D_kwargs_list)\n line_kwargs.pop(\"transform\", None) # transform is not a valid kwarg for ax{v,h}line\n\n extra_ctrls = []\n funcs, extra_ctrls, param_excluder = prep_scalars(kwargs, x=x, ymin=ymin, ymax=ymax)\n x = funcs[\"x\"]\n ymin = funcs[\"ymin\"]\n ymax = funcs[\"ymax\"]\n\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons, extra_ctrls\n )\n\n def update(params, indices, cache):\n x_ = callable_else_value(x, param_excluder(params, \"x\"), cache).item()\n line.set_xdata([x_, x_])\n ymin_ = callable_else_value(ymin, param_excluder(params, \"ymin\"), cache).item()\n ymax_ = callable_else_value(ymax, param_excluder(params, \"ymax\"), cache).item()\n line.set_ydata([ymin_, ymax_])\n # TODO consider updating just the ydatalim here\n\n controls._register_function(update, fig, params)\n sca(ax)\n line = ax.axvline(\n callable_else_value(x, param_excluder(params, \"x\")).item(),\n callable_else_value(ymin, param_excluder(params, \"ymin\")).item(),\n callable_else_value(ymax, param_excluder(params, \"ymax\")).item(),\n **line_kwargs,\n )\n return controls\n\n\ndef interactive_title(\n title,\n controls=None,\n ax=None,\n *,\n fontdict=None,\n loc=None,\n y=None,\n pad=None,\n slider_formats=None,\n display_controls=True,\n play_buttons=False,\n force_ipywidgets=False,\n **kwargs,\n):\n \"\"\"\n Set a title that will update interactively. kwargs for `matplotlib.text.Text` will be passed through,\n other kwargs will be used to create interactive controls.\n\n Parameters\n ----------\n title : str or function\n The title text. Can include {} style formatting. e.g. 'The voltage is {volts:.2f} mV'\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n ax : `matplotlib.axes.Axes`, optional\n The axis on which to plot. If none the current axis will be used.\n loc : {'center', 'left', 'right'}, default: `axes.titlelocation <matplotlib.rcParams>`\n Which title to set.\n y : float, default: `axes.titley <matplotlib.rcParams>`\n Vertical axes location for the title (1.0 is the top). 
If\n None (the default), y is determined automatically to avoid\n decorators on the axes.\n pad : float, default: `axes.titlepad <matplotlib.rcParams>`\n The offset of the title from the top of the axes, in points.\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. Uses {} style formatting\n display_controls : boolean\n Whether the controls should display on creation. Ignored if controls is specified.\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n\n Returns\n -------\n controls\n \"\"\"\n ipympl = notebook_backend()\n fig, ax = gogogo_figure(ipympl, ax)\n ipympl or force_ipywidgets\n slider_formats = create_slider_format_dict(slider_formats)\n\n kwargs, text_kwargs = kwarg_popper(kwargs, Text_kwargs_list)\n\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons\n )\n\n def update(params, indices, cache):\n ax.set_title(\n callable_else_value_no_cast(title, params, cache).format(**params),\n fontdict=fontdict,\n loc=loc,\n pad=pad,\n y=y,\n **text_kwargs,\n )\n\n controls._register_function(update, fig, params)\n ax.set_title(\n callable_else_value_no_cast(title, params, None).format(**params),\n fontdict=fontdict,\n loc=loc,\n pad=pad,\n y=y,\n **text_kwargs,\n )\n return controls\n\n\ndef interactive_xlabel(\n xlabel,\n controls=None,\n ax=None,\n *,\n fontdict=None,\n labelpad=None,\n loc=None,\n slider_formats=None,\n display_controls=True,\n play_buttons=False,\n force_ipywidgets=False,\n **kwargs,\n):\n \"\"\"\n Set an xlabel that will update interactively. kwargs for `matplotlib.text.Text` will be passed through,\n other kwargs will be used to create interactive controls.\n\n Parameters\n ----------\n xlabel : str or function\n The label text. Can include {} style formatting. e.g. 'The voltage is {volts:.2f} mV'\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n ax : matplotlib axis, optional\n The axis on which to plot. If none the current axis will be used.\n labelpad : float, default: None\n Spacing in points from the axes bounding box including ticks\n and tick labels.\n loc : {'left', 'center', 'right'}, default: `xaxis.labellocation <matplotlib.rcParams>`\n The label position. This is a high-level alternative for passing\n parameters *x* and *horizontalalignment*.\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. Uses {} style formatting\n display_controls : boolean\n Whether the controls should display on creation. 
Ignored if controls is specified.\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n\n Returns\n -------\n controls\n \"\"\"\n ipympl = notebook_backend()\n fig, ax = gogogo_figure(ipympl, ax)\n ipympl or force_ipywidgets\n slider_formats = create_slider_format_dict(slider_formats)\n\n kwargs, text_kwargs = kwarg_popper(kwargs, Text_kwargs_list)\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons\n )\n\n def update(params, indices, cache):\n ax.set_xlabel(\n callable_else_value_no_cast(xlabel, params, cache).format(**params),\n fontdict=fontdict,\n labelpad=labelpad,\n loc=loc,\n **text_kwargs,\n )\n\n controls._register_function(update, fig, params)\n ax.set_xlabel(\n callable_else_value_no_cast(xlabel, params, None).format(**params),\n fontdict=fontdict,\n labelpad=labelpad,\n loc=loc,\n **text_kwargs,\n )\n return controls\n\n\ndef interactive_ylabel(\n ylabel,\n controls=None,\n ax=None,\n *,\n fontdict=None,\n labelpad=None,\n loc=None,\n slider_formats=None,\n display_controls=True,\n play_buttons=False,\n force_ipywidgets=False,\n **kwargs,\n):\n \"\"\"\n Set a ylabel that will update interactively. kwargs for `matplotlib.text.Text` will be passed through,\n other kwargs will be used to create interactive controls.\n\n Parameters\n ----------\n ylabel : str or function\n The label text. Can include {} style formatting. e.g. 'The voltage is {volts:.2f}'\n controls : mpl_interactions.controller.Controls\n An existing controls object if you want to tie multiple plot elements to the same set of\n controls\n ax : matplotlib axis, optional\n The axis on which to plot. If none the current axis will be used.\n labelpad : float, default: None\n Spacing in points from the axes bounding box including ticks\n and tick labels.\n loc : {'bottom', 'center', 'top'}, default: `yaxis.labellocation <matplotlib.rcParams>`\n The label position. This is a high-level alternative for passing\n parameters *y* and *horizontalalignment*.\n slider_formats : None, string, or dict\n If None a default value of decimal points will be used. Uses {} style formatting\n display_controls : boolean\n Whether the controls should display on creation. 
Ignored if controls is specified.\n play_buttons : bool or str or dict, optional\n Whether to attach an ipywidgets.Play widget to any sliders that get created.\n If a boolean it will apply to all kwargs, if a dictionary you choose which sliders you\n want to attach play buttons to.\n\n - None: no play buttons\n - True: play buttons on the left\n - False: no play buttons\n - 'left': play buttons on the left\n - 'right': play buttons on the right\n\n force_ipywidgets : boolean\n If True ipywidgets will always be used, even if not using the ipympl backend.\n If False the function will try to detect if it is ok to use ipywidgets\n If ipywidgets are not used the function will fall back on matplotlib widgets\n\n Returns\n -------\n controls\n \"\"\"\n ipympl = notebook_backend()\n fig, ax = gogogo_figure(ipympl, ax)\n ipympl or force_ipywidgets\n slider_formats = create_slider_format_dict(slider_formats)\n\n kwargs, text_kwargs = kwarg_popper(kwargs, Text_kwargs_list)\n controls, params = gogogo_controls(\n kwargs, controls, display_controls, slider_formats, play_buttons\n )\n\n def update(params, indices, cache):\n ax.set_ylabel(\n callable_else_value_no_cast(ylabel, params, cache).format(**params),\n fontdict=fontdict,\n labelpad=labelpad,\n loc=loc,\n **text_kwargs,\n )\n\n controls._register_function(update, fig, params)\n ax.set_ylabel(\n callable_else_value_no_cast(ylabel, params, None).format(**params),\n fontdict=fontdict,\n labelpad=labelpad,\n loc=loc,\n **text_kwargs,\n )\n return controls\n" ]
[ [ "numpy.histogram", "numpy.column_stack", "matplotlib.colors.to_rgba_array", "matplotlib.patches.Rectangle", "matplotlib.markers.MarkerStyle", "matplotlib.collections.PatchCollection", "numpy.asanyarray" ] ]