Dataset schema (six columns per record):

    index      int64    values 0 - 731k
    package    string   lengths 2 - 98
    name       string   lengths 1 - 76
    docstring  string   lengths 0 - 281k
    code       string   lengths 4 - 1.07M
    signature  string   lengths 2 - 42.8k
index: 725,940
package: validator_collection.checkers
name: is_url
docstring:

    Indicate whether ``value`` is a URL.

    .. note::

      URL validation is...complicated. The methodology that we have adopted here is *generally* compliant with `RFC 1738 <https://tools.ietf.org/html/rfc1738>`_, `RFC 6761 <https://tools.ietf.org/html/rfc6761>`_, and `RFC 2181 <https://tools.ietf.org/html/rfc2181>`_, and uses a combination of string parsing and regular expressions. This approach ensures more complete coverage for unusual edge cases, while still letting us use regular expressions that perform quickly.

    :param value: The value to evaluate.
    :param allow_special_ips: If ``True``, will succeed when validating special IP addresses, such as loopback IPs like ``127.0.0.1`` or ``0.0.0.0``. If ``False``, will fail if ``value`` is a special IP address. Defaults to ``False``.
    :type allow_special_ips: :class:`bool <python:bool>`
    :returns: ``True`` if ``value`` is valid, ``False`` if it is not.
    :rtype: :class:`bool <python:bool>`
    :raises SyntaxError: if ``kwargs`` contains duplicate keyword parameters or duplicates keyword parameters passed to the underlying validator
code:

    def is_type(obj, type_, **kwargs):
        """Indicate if ``obj`` is a type in ``type_``.

        .. hint::

          This checker is particularly useful when you want to evaluate whether
          ``obj`` is of a particular type, but importing that type directly to use
          in :func:`isinstance() <python:isinstance>` would cause a circular import
          error.

          To use this checker in that kind of situation, you can instead pass the
          *name* of the type you want to check as a string in ``type_``. The
          checker will evaluate it and see whether ``obj`` is of a type or inherits
          from a type whose name matches the string you passed.

        :param obj: The object whose type should be checked.
        :type obj: :class:`object <python:object>`

        :param type_: The type(s) to check against.
        :type type_: :class:`type <python:type>` / iterable of
          :class:`type <python:type>` / :class:`str <python:str>` with type name /
          iterable of :class:`str <python:str>` with type name

        :returns: ``True`` if ``obj`` is a type in ``type_``. Otherwise, ``False``.
        :rtype: :class:`bool <python:bool>`

        :raises SyntaxError: if ``kwargs`` contains duplicate keyword parameters
          or duplicates keyword parameters passed to the underlying validator

        """
        if not is_iterable(type_):
            type_ = [type_]

        return_value = False
        for check_for_type in type_:
            if isinstance(check_for_type, type):
                return_value = isinstance(obj, check_for_type)
                if not return_value:
                    try:
                        return_value = issubclass(obj, check_for_type)
                    except TypeError:
                        pass
            elif obj.__class__.__name__ == check_for_type:
                return_value = True
            else:
                return_value = _check_base_classes(obj.__class__.__bases__,
                                                   check_for_type)
                if not return_value:
                    try:
                        return_value = issubclass(obj, check_for_type)
                    except TypeError:
                        pass

            if return_value is True:
                break

        return return_value
signature: (value, **kwargs)
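A short usage sketch, consistent with the docstring above and assuming the published ``validator-collection`` package is installed (the sample URLs are hypothetical; expected results follow the documented ``allow_special_ips`` behavior):

    from validator_collection import checkers

    # Ordinary URLs pass the RFC-oriented checks described above.
    checkers.is_url('https://www.example.com/resource')            # True
    checkers.is_url('not a url')                                   # False

    # Special IPs such as loopback addresses fail unless explicitly allowed.
    checkers.is_url('http://127.0.0.1')                            # False
    checkers.is_url('http://127.0.0.1', allow_special_ips=True)    # True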
index: 725,941
package: validator_collection.checkers
name: is_uuid
docstring:

    Indicate whether ``value`` contains a :class:`UUID <python:uuid.UUID>`.

    :param value: The value to evaluate.
    :returns: ``True`` if ``value`` is valid, ``False`` if it is not.
    :rtype: :class:`bool <python:bool>`
    :raises SyntaxError: if ``kwargs`` contains duplicate keyword parameters or duplicates keyword parameters passed to the underlying validator
code: (verbatim duplicate of the ``is_type`` source shown in the is_url record above)
signature: (value, **kwargs)
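A hedged example of the checker, assuming the same package layout; both coercible strings and actual UUID objects should evaluate as valid per the docstring:

    import uuid
    from validator_collection import checkers

    checkers.is_uuid(uuid.uuid4())                              # True
    checkers.is_uuid('123e4567-e89b-12d3-a456-426655440000')    # True
    checkers.is_uuid('not-a-uuid')                              # False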
index: 725,942
package: validator_collection.checkers
name: is_variable_name
docstring:

    Indicate whether ``value`` is a valid Python variable name.

    .. caution::

      This function does **NOT** check whether the variable exists. It only checks that ``value`` would work as a Python variable (or class, or function, etc.) name.

    :param value: The value to evaluate.
    :returns: ``True`` if ``value`` is valid, ``False`` if it is not.
    :rtype: :class:`bool <python:bool>`
    :raises SyntaxError: if ``kwargs`` contains duplicate keyword parameters or duplicates keyword parameters passed to the underlying validator
code: (verbatim duplicate of the ``is_type`` source shown in the is_url record above)
signature: (value, **kwargs)
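For illustration, the checker applied to a few hypothetical names (a sketch; per the caution above it tests syntactic validity only):

    from validator_collection import checkers

    checkers.is_variable_name('my_variable')   # True
    checkers.is_variable_name('_private')      # True
    checkers.is_variable_name('1st_value')     # False: leading digit
    checkers.is_variable_name('with space')    # False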
index: 725,943
package: validator_collection.validators
name: iterable
docstring:

    Validate that ``value`` is a valid iterable.

    .. hint::

      This validator checks to ensure that ``value`` supports iteration using any of Python's three iteration protocols: the ``__getitem__`` protocol, the ``__iter__`` / ``next()`` protocol, or inheritance from Python's `Iterable` abstract base class. If ``value`` supports any of these three iteration protocols, it will be validated. However, if iteration across ``value`` raises an unexpected exception, this function will raise an :exc:`IterationFailedError <validator_collection.errors.IterationFailedError>`.

    :param value: The value to validate.
    :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises an :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``.
    :type allow_empty: :class:`bool <python:bool>`
    :param forbid_literals: A collection of literals that will be considered invalid even if they are (actually) iterable. Defaults to :class:`str <python:str>` and :class:`bytes <python:bytes>`.
    :type forbid_literals: iterable
    :param minimum_length: If supplied, indicates the minimum number of members needed to be valid.
    :type minimum_length: :class:`int <python:int>`
    :param maximum_length: If supplied, indicates the maximum number of members allowed to be valid.
    :type maximum_length: :class:`int <python:int>`
    :returns: ``value`` / :obj:`None <python:None>`
    :rtype: iterable / :obj:`None <python:None>`
    :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False``
    :raises NotAnIterableError: if ``value`` is not a valid iterable or :obj:`None <python:None>`
    :raises IterationFailedError: if ``value`` is a valid iterable, but iteration fails with some unexpected exception
    :raises MinimumLengthError: if ``minimum_length`` is supplied and the length of ``value`` is less than ``minimum_length``
    :raises MaximumLengthError: if ``maximum_length`` is supplied and the length of ``value`` is more than the ``maximum_length``
code:

    # -*- coding: utf-8 -*-

    # The lack of a module docstring for this module is **INTENTIONAL**.
    # The module is imported into the documentation using Sphinx's autodoc
    # extension, and its member function documentation is automatically
    # incorporated there as needed.

    import decimal as decimal_
    import fractions
    import io
    import math
    import os
    import uuid as uuid_
    import datetime as datetime_
    import string as string_
    import sys

    from ast import parse

    import jsonschema

    from validator_collection._compat import numeric_types, integer_types, datetime_types,\
        date_types, time_types, timestamp_types, tzinfo_types, POSITIVE_INFINITY, \
        NEGATIVE_INFINITY, TimeZone, json_, is_py2, is_py3, dict_, float_, basestring, re
    from validator_collection._decorators import disable_on_env
    from validator_collection import errors

    URL_UNSAFE_CHARACTERS = ('[', ']', '{', '}', '|', '^', '%', '~')

    URL_REGEX = re.compile(
        r"^"
        # protocol identifier
        r"(?:(?:https?|ftp)://)"
        # user:pass authentication
        r"(?:\S+(?::\S*)?@)?"
        r"(?:"
        # IP address exclusion
        # private & local networks
        r"(?!(?:10|127)(?:\.\d{1,3}){3})"
        r"(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})"
        r"(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})"
        # IP address dotted notation octets
        # excludes loopback network 0.0.0.0
        # excludes reserved space >= 224.0.0.0
        # excludes network & broadcast addresses
        # (first & last IP address of each class)
        r"(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])"
        r"(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}"
        r"(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))"
        r"|"
        r"(?:"
        r"(?:localhost|invalid|test|example)|("
        # host name
        r"(?:(?:[A-z\u00a1-\uffff0-9]-*_*)*[A-z\u00a1-\uffff0-9]+)"
        # domain name
        r"(?:\.(?:[A-z\u00a1-\uffff0-9]-*)*[A-z\u00a1-\uffff0-9]+)*"
        # TLD identifier
        r"(?:\.(?:[A-z\u00a1-\uffff]{2,}))"
        r")))"
        # port number
        r"(?::\d{2,5})?"
        # resource path
        r"(?:/\S*)?"
        r"$",
        re.UNICODE)

    URL_SPECIAL_IP_REGEX = re.compile(
        r"^"
        # protocol identifier
        r"(?:(?:https?|ftp)://)"
        # user:pass authentication
        r"(?:\S+(?::\S*)?@)?"
        r"(?:"
        # IP address dotted notation octets
        # excludes loopback network 0.0.0.0
        # excludes reserved space >= 224.0.0.0
        # excludes network & broadcast addresses
        # (first & last IP address of each class)
        r"(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])"
        r"(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}"
        r"(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))"
        r"|"
        # host name
        r"(?:(?:[a-z\u00a1-\uffff0-9]-*)*[a-z\u00a1-\uffff0-9]+)"
        # domain name
        r"(?:\.(?:[a-z\u00a1-\uffff0-9]-*)*[a-z\u00a1-\uffff0-9]+)*"
        # TLD identifier
        r"(?:\.(?:[a-z\u00a1-\uffff]{2,}))"
        r")"
        # port number
        r"(?::\d{2,5})?"
        # resource path
        r"(?:/\S*)?"
        r"$",
        re.UNICODE)

    DOMAIN_REGEX = re.compile(
        r"\b((?=[a-z\u00a1-\uffff0-9-]{1,63}\.)(xn--)?[a-z\u00a1-\uffff0-9]+"
        r"(-[a-z\u00a1-\uffff0-9]+)*\.)+[a-z]{2,63}\b",
        re.UNICODE|re.IGNORECASE
    )

    URL_PROTOCOLS = ('http://', 'https://', 'ftp://')

    SPECIAL_USE_DOMAIN_NAMES = ('localhost', 'invalid', 'test', 'example')

    EMAIL_REGEX = re.compile(
        r"(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|\""
        r"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*\")"
        r"@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])"
        r"?|\[(?:(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9]))\.){3}"
        r"(?:(2(5[0-5]|[0-4][0-9])|1[0-9][0-9]|[1-9]?[0-9])|[a-z0-9-]*[a-z0-9]:"
        r"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])"
    )

    VARIABLE_NAME_REGEX = re.compile(
        r"(^[a-zA-Z_])([a-zA-Z0-9_]*)"
    )

    MAC_ADDRESS_REGEX = re.compile(r'^(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$')

    IPV6_REGEX = re.compile(
        '^(?:(?:[0-9A-Fa-f]{1,4}:){6}(?:[0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{1,4}|(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|::(?:[0-9A-Fa-f]{1,4}:){5}(?:[0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{1,4}|(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(?:[0-9A-Fa-f]{1,4})?::(?:[0-9A-Fa-f]{1,4}:){4}(?:[0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{1,4}|(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(?:[0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{1,4})?::(?:[0-9A-Fa-f]{1,4}:){3}(?:[0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{1,4}|(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(?:(?:[0-9A-Fa-f]{1,4}:){,2}[0-9A-Fa-f]{1,4})?::(?:[0-9A-Fa-f]{1,4}:){2}(?:[0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{1,4}|(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(?:(?:[0-9A-Fa-f]{1,4}:){,3}[0-9A-Fa-f]{1,4})?::[0-9A-Fa-f]{1,4}:(?:[0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{1,4}|(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(?:(?:[0-9A-Fa-f]{1,4}:){,4}[0-9A-Fa-f]{1,4})?::(?:[0-9A-Fa-f]{1,4}:[0-9A-Fa-f]{1,4}|(?:(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}(?:[0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(?:(?:[0-9A-Fa-f]{1,4}:){,5}[0-9A-Fa-f]{1,4})?::[0-9A-Fa-f]{1,4}|(?:(?:[0-9A-Fa-f]{1,4}:){,6}[0-9A-Fa-f]{1,4})?::)(?:%25(?:[A-Za-z0-9\\-._~]|%[0-9A-Fa-f]{2})+)?$'
    )

    TIMEDELTA_REGEX = re.compile(r'((?P<days>\d+) days?, )?(?P<hours>\d+):'
                                 r'(?P<minutes>\d+):(?P<seconds>\d+(\.\d+)?)')

    MIME_TYPE_REGEX = re.compile(r"^multipart|[-\w.]+/[-\w.\+]+$")

    # pylint: disable=W0613

    ## CORE

    @disable_on_env
    def uuid(value, allow_empty = False, **kwargs):
        """Validate that ``value`` is a valid :class:`UUID <python:uuid.UUID>`.

        :param value: The value to validate.

        :param allow_empty: If ``True``, returns :obj:`None <python:None>` if
          ``value`` is empty. If ``False``, raises an
          :class:`EmptyValueError <validator_collection.errors.EmptyValueError>`
          if ``value`` is empty. Defaults to ``False``.
        :type allow_empty: :class:`bool <python:bool>`

        :returns: ``value`` coerced to a :class:`UUID <python:uuid.UUID>` object /
          :obj:`None <python:None>`
        :rtype: :class:`UUID <python:uuid.UUID>` / :obj:`None <python:None>`

        :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False``
        :raises CannotCoerceError: if ``value`` cannot be coerced to a
          :class:`UUID <python:uuid.UUID>`

        """
        if not value and not allow_empty:
            raise errors.EmptyValueError('value (%s) was empty' % value)
        elif not value:
            return None

        if isinstance(value, uuid_.UUID):
            return value

        try:
            value = uuid_.UUID(value)
        except ValueError:
            raise errors.CannotCoerceError(
                'value (%s) cannot be coerced to a valid UUID' % value)

        return value
signature: (value, allow_empty=False, forbid_literals=(<class 'str'>, <class 'bytes'>), minimum_length=None, maximum_length=None, **kwargs)
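A sketch of the validator's documented contract (the sample values are hypothetical; the raised exception follows the :raises: list above):

    from validator_collection import validators, errors

    value = validators.iterable([1, 2, 3], minimum_length=2)   # returns [1, 2, 3]

    try:
        validators.iterable([1], minimum_length=2)
    except errors.MinimumLengthError:
        pass   # too few members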
index: 725,944
package: validator_collection.validators
name: mac_address
docstring:

    Validate that ``value`` is a valid MAC address.

    :param value: The value to validate.
    :type value: :class:`str <python:str>` / :obj:`None <python:None>`
    :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises an :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``.
    :type allow_empty: :class:`bool <python:bool>`
    :returns: ``value`` / :obj:`None <python:None>`
    :rtype: :class:`str <python:str>` / :obj:`None <python:None>`
    :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False``
    :raises CannotCoerceError: if ``value`` is not a valid :class:`str <python:str>` or string-like object
    :raises InvalidMACAddressError: if ``value`` is not a valid MAC address
code: (verbatim duplicate of the validator_collection.validators module source shown in the iterable record above)
signature: (value, allow_empty=False, **kwargs)
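Illustrative only, following the docstring's contract; the sample addresses are hypothetical:

    from validator_collection import validators, errors

    value = validators.mac_address('01:23:45:67:89:ab')   # returns the validated string

    try:
        validators.mac_address('not-a-mac')
    except errors.InvalidMACAddressError:
        pass   # does not match the colon-separated MAC pattern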
index: 725,945
package: validator_collection.validators
name: none
docstring:

    Validate that ``value`` is :obj:`None <python:None>`.

    :param value: The value to validate.
    :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty but **not** :obj:`None <python:None>`. If ``False``, raises a :class:`NotNoneError` if ``value`` is empty but **not** :obj:`None <python:None>`. Defaults to ``False``.
    :type allow_empty: :class:`bool <python:bool>`
    :returns: :obj:`None <python:None>`
    :raises NotNoneError: if ``allow_empty`` is ``False`` and ``value`` is empty but **not** :obj:`None <python:None>`
code: (verbatim duplicate of the validator_collection.validators module source shown in the iterable record above)
signature: (value, allow_empty=False, **kwargs)
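A sketch that exercises the documented distinction between "empty" and :obj:`None`:

    from validator_collection import validators, errors

    validators.none(None)                    # returns None
    validators.none('', allow_empty=True)    # '' is empty but not None -> returns None

    try:
        validators.none('')                  # empty but not None, allow_empty=False
    except errors.NotNoneError:
        pass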
index: 725,946
package: validator_collection.validators
name: not_empty
docstring:

    Validate that ``value`` is not empty.

    :param value: The value to validate.
    :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises an :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``.
    :type allow_empty: :class:`bool <python:bool>`
    :returns: ``value`` / :obj:`None <python:None>`
    :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False``
code: (verbatim duplicate of the validator_collection.validators module source shown in the iterable record above)
signature: (value, allow_empty=False, **kwargs)
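A minimal sketch of the documented behavior:

    from validator_collection import validators, errors

    value = validators.not_empty([1, 2, 3])              # returns [1, 2, 3]
    value = validators.not_empty('', allow_empty=True)   # returns None

    try:
        validators.not_empty('')
    except errors.EmptyValueError:
        pass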
index: 725,947
package: validator_collection.validators
name: numeric
docstring:

    Validate that ``value`` is a numeric value.

    :param value: The value to validate.
    :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is :obj:`None <python:None>`. If ``False``, raises an :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is :obj:`None <python:None>`. Defaults to ``False``.
    :type allow_empty: :class:`bool <python:bool>`
    :param minimum: If supplied, will make sure that ``value`` is greater than or equal to this value.
    :type minimum: numeric
    :param maximum: If supplied, will make sure that ``value`` is less than or equal to this value.
    :type maximum: numeric
    :returns: ``value`` / :obj:`None <python:None>`
    :raises EmptyValueError: if ``value`` is :obj:`None <python:None>` and ``allow_empty`` is ``False``
    :raises MinimumValueError: if ``minimum`` is supplied and ``value`` is less than the ``minimum``
    :raises MaximumValueError: if ``maximum`` is supplied and ``value`` is more than the ``maximum``
    :raises CannotCoerceError: if ``value`` cannot be coerced to a numeric form
code: (verbatim duplicate of the validator_collection.validators module source shown in the iterable record above)
signature: (value, allow_empty=False, minimum=None, maximum=None, **kwargs)
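A sketch of the bounds checks; the coercion of a numeric string is an assumption inferred from the CannotCoerceError entry above:

    from validator_collection import validators, errors

    validators.numeric(3.14)                # returns 3.14
    validators.numeric('42', minimum=0)     # assumed to be coerced to a numeric form

    try:
        validators.numeric(150, maximum=100)
    except errors.MaximumValueError:
        pass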
index: 725,949
package: validator_collection.validators
name: path
docstring:

    Validate that ``value`` is a valid path-like object.

    :param value: The value to validate.
    :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises an :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``.
    :type allow_empty: :class:`bool <python:bool>`
    :returns: The path represented by ``value``.
    :rtype: Path-like object / :obj:`None <python:None>`
    :raises EmptyValueError: if ``allow_empty`` is ``False`` and ``value`` is empty
    :raises NotPathlikeError: if ``value`` is not a valid path-like object
code: (verbatim duplicate of the validator_collection.validators module source shown in the iterable record above)
signature: (value, allow_empty=False, **kwargs)
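Illustrative use; it assumes both strings and ``pathlib.Path`` objects count as path-like, and that an arbitrary object does not:

    import pathlib
    from validator_collection import validators, errors

    validators.path('/tmp/example.txt')      # returns the path
    validators.path(pathlib.Path('/tmp'))    # assumed to pass through unchanged

    try:
        validators.path(object())            # not path-like
    except errors.NotPathlikeError:
        pass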
index: 725,950
package: validator_collection.validators
name: path_exists
docstring:

    Validate that ``value`` is a path-like object that exists on the local filesystem.

    :param value: The value to validate.
    :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises an :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``.
    :type allow_empty: :class:`bool <python:bool>`
    :returns: The file name represented by ``value``.
    :rtype: Path-like object / :obj:`None <python:None>`
    :raises EmptyValueError: if ``allow_empty`` is ``False`` and ``value`` is empty
    :raises NotPathlikeError: if ``value`` is not a path-like object
    :raises PathExistsError: if ``value`` does not exist
code: (verbatim duplicate of the validator_collection.validators module source shown in the iterable record above)
(value, allow_empty=False, **kwargs)
725,951
validator_collection.validators
string
Validate that ``value`` is a valid string. :param value: The value to validate. :type value: :class:`str <python:str>` / :obj:`None <python:None>` :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises a :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``. :type allow_empty: :class:`bool <python:bool>` :param coerce_value: If ``True``, will attempt to coerce ``value`` to a string if it is not already. If ``False``, will raise a :class:`CannotCoerceError <validator_collection.errors.CannotCoerceError>` if ``value`` is not a string. Defaults to ``False``. :type coerce_value: :class:`bool <python:bool>` :param minimum_length: If supplied, indicates the minimum number of characters needed to be valid. :type minimum_length: :class:`int <python:int>` :param maximum_length: If supplied, indicates the maximum number of characters allowed for ``value`` to be valid. :type maximum_length: :class:`int <python:int>` :param whitespace_padding: If ``True`` and the value is below the ``minimum_length``, pad the value with spaces. Defaults to ``False``. :type whitespace_padding: :class:`bool <python:bool>` :returns: ``value`` / :obj:`None <python:None>` :rtype: :class:`str <python:str>` / :obj:`None <python:None>` :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False`` :raises CannotCoerceError: if ``value`` is not a valid string and ``coerce_value`` is ``False`` :raises MinimumLengthError: if ``minimum_length`` is supplied and the length of ``value`` is less than ``minimum_length`` and ``whitespace_padding`` is ``False`` :raises MaximumLengthError: if ``maximum_length`` is supplied and the length of ``value`` is more than the ``maximum_length``
(value, allow_empty=False, coerce_value=False, minimum_length=None, maximum_length=None, whitespace_padding=False, **kwargs)
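A minimal usage sketch of the validator documented above; the function and exception names are taken from the docstring, and the sample values are illustrative:

from validator_collection import validators, errors

validators.string('hello', minimum_length=3, maximum_length=10)     # returns 'hello'
validators.string('ab', minimum_length=5, whitespace_padding=True)  # returns 'ab' padded with spaces to 5 characters
validators.string(123, coerce_value=True)                           # returns '123'
try:
    validators.string('far too long for the limit', maximum_length=5)
except errors.MaximumLengthError:
    pass  # raised because len(value) > maximum_length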
725,952
validator_collection.validators
stringIO
Validate that ``value`` is a :class:`StringIO <python:io.StringIO>` object. :param value: The value to validate. :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises a :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``. :type allow_empty: :class:`bool <python:bool>` :returns: ``value`` / :obj:`None <python:None>` :rtype: :class:`StringIO <python:io.StringIO>` / :obj:`None <python:None>` :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False`` :raises NotStringIOError: if ``value`` is not a :class:`StringIO <python:io.StringIO>` object
(value, allow_empty=False, **kwargs)
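A brief, hedged example of the stringIO validator described above (behavior and error name per the docstring):

from io import StringIO
from validator_collection import validators, errors

buffer = validators.stringIO(StringIO('some text'))    # a StringIO instance passes through
nothing = validators.stringIO(None, allow_empty=True)  # returns None instead of raising
try:
    validators.stringIO('plain string')                # not a StringIO object
except errors.NotStringIOError:
    pass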
725,953
validator_collection.validators
time
Validate that ``value`` is a valid :class:`time <python:datetime.time>`. .. caution:: This validator will **always** return the time as timezone naive (effectively UTC). If ``value`` has a timezone / UTC offset applied, the validator will coerce the value returned back to UTC. :param value: The value to validate. :type value: :func:`datetime <validator_collection.validators.datetime>` or :func:`time <validator_collection.validators.time>`-compliant :class:`str <python:str>` / :class:`datetime <python:datetime.datetime>` / :class:`time <python:datetime.time>` / numeric / :obj:`None <python:None>` :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises a :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``. :type allow_empty: :class:`bool <python:bool>` :param minimum: If supplied, will make sure that ``value`` is on or after this value. :type minimum: :func:`datetime <validator_collection.validators.datetime>` or :func:`time <validator_collection.validators.time>`-compliant :class:`str <python:str>` / :class:`datetime <python:datetime.datetime>` / :class:`time <python:datetime.time>` / numeric / :obj:`None <python:None>` :param maximum: If supplied, will make sure that ``value`` is on or before this value. :type maximum: :func:`datetime <validator_collection.validators.datetime>` or :func:`time <validator_collection.validators.time>`-compliant :class:`str <python:str>` / :class:`datetime <python:datetime.datetime>` / :class:`time <python:datetime.time>` / numeric / :obj:`None <python:None>` :param coerce_value: If ``True``, will attempt to coerce/extract a :class:`time <python:datetime.time>` from ``value``. If ``False``, will only respect direct representations of time. Defaults to ``True``. :type coerce_value: :class:`bool <python:bool>` :returns: ``value`` in UTC time / :obj:`None <python:None>` :rtype: :class:`time <python:datetime.time>` / :obj:`None <python:None>` :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False`` :raises CannotCoerceError: if ``value`` cannot be coerced to a :class:`time <python:datetime.time>` and is not :obj:`None <python:None>` :raises MinimumValueError: if ``minimum`` is supplied but ``value`` occurs before ``minimum`` :raises MaximumValueError: if ``maximum`` is supplied but ``value`` occurs after ``maximum``
(value, allow_empty=False, minimum=None, maximum=None, coerce_value=True, **kwargs)
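A short usage sketch of the time validator documented above; the ISO-8601-style string input is an assumption based on the docstring's "time-compliant str" wording:

import datetime
from validator_collection import validators

t1 = validators.time(datetime.datetime(2018, 1, 1, 9, 30))  # coerced to datetime.time(9, 30)
t2 = validators.time('09:30:00')                            # assumed ISO-style time string
t3 = validators.time(t2,
                     minimum=datetime.time(9, 0),
                     maximum=datetime.time(17, 0))          # bounds enforced via minimum / maximum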
725,954
validator_collection.validators
timezone
Validate that ``value`` is a valid :class:`tzinfo <python:datetime.tzinfo>`. .. caution:: This does **not** verify whether the value is a timezone that actually exists, nor can it resolve timezone names (e.g. ``'Eastern'`` or ``'CET'``). For that kind of functionality, we recommend you utilize `pytz <https://pypi.python.org/pypi/pytz>`_ :param value: The value to validate. :type value: :class:`str <python:str>` / :class:`tzinfo <python:datetime.tzinfo>` / numeric / :obj:`None <python:None>` :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises a :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``. :type allow_empty: :class:`bool <python:bool>` :param positive: Indicates whether the UTC offset indicated by ``value`` is positive or negative (only has meaning if ``value`` is a string). Defaults to ``True``. :type positive: :class:`bool <python:bool>` :returns: ``value`` / :obj:`None <python:None>` :rtype: :class:`tzinfo <python:datetime.tzinfo>` / :obj:`None <python:None>` :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False`` :raises CannotCoerceError: if ``value`` cannot be coerced to :class:`tzinfo <python:datetime.tzinfo>` and is not :obj:`None <python:None>` :raises PositiveOffsetMismatchError: if ``positive`` is ``True``, but the offset indicated by ``value`` is actually negative :raises NegativeOffsetMismatchError: if ``positive`` is ``False``, but the offset indicated by ``value`` is actually positive
(value, allow_empty=False, positive=True, **kwargs)
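A minimal sketch of the timezone validator documented above, limited to behavior the docstring guarantees:

import datetime
from validator_collection import validators

tz = validators.timezone(datetime.timezone.utc)        # a tzinfo instance passes through
nothing = validators.timezone(None, allow_empty=True)  # returns None instead of raising
# For string inputs, the ``positive`` flag disambiguates the sign of the UTC offset.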
725,955
validator_collection.validators
url
Validate that ``value`` is a valid URL. .. note:: URL validation is...complicated. The methodology that we have adopted here is *generally* compliant with `RFC 1738 <https://tools.ietf.org/html/rfc1738>`_, `RFC 6761 <https://tools.ietf.org/html/rfc6761>`_, `RFC 2181 <https://tools.ietf.org/html/rfc2181>`_ and uses a combination of string parsing and regular expressions. This approach ensures more complete coverage for unusual edge cases, while still letting us use regular expressions that perform quickly. :param value: The value to validate. :type value: :class:`str <python:str>` / :obj:`None <python:None>` :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises a :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``. :type allow_empty: :class:`bool <python:bool>` :param allow_special_ips: If ``True``, will succeed when validating special IP addresses, such as loopback IPs like ``127.0.0.1`` or ``0.0.0.0``. If ``False``, will raise a :class:`InvalidURLError` if ``value`` is a special IP address. Defaults to ``False``. :type allow_special_ips: :class:`bool <python:bool>` :returns: ``value`` / :obj:`None <python:None>` :rtype: :class:`str <python:str>` / :obj:`None <python:None>` :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False`` :raises CannotCoerceError: if ``value`` is not a :class:`str <python:str>` or :obj:`None <python:None>` :raises InvalidURLError: if ``value`` is not a valid URL
(value, allow_empty=False, allow_special_ips=False, **kwargs)
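A hedged usage sketch of the url validator documented above (example URLs are illustrative):

from validator_collection import validators, errors

url = validators.url('https://www.example.com/some/path?query=1')  # returned unchanged
try:
    validators.url('http://127.0.0.1')  # special IPs are rejected by default
except errors.InvalidURLError:
    pass
loopback = validators.url('http://127.0.0.1', allow_special_ips=True)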
725,956
validator_collection.validators
uuid
Validate that ``value`` is a valid :class:`UUID <python:uuid.UUID>`. :param value: The value to validate. :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises a :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``. :type allow_empty: :class:`bool <python:bool>` :returns: ``value`` coerced to a :class:`UUID <python:uuid.UUID>` object / :obj:`None <python:None>` :rtype: :class:`UUID <python:uuid.UUID>` / :obj:`None <python:None>` :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False`` :raises CannotCoerceError: if ``value`` cannot be coerced to a :class:`UUID <python:uuid.UUID>`
@disable_on_env
def uuid(value, allow_empty = False, **kwargs):
    """Validate that ``value`` is a valid :class:`UUID <python:uuid.UUID>`.

    :param value: The value to validate.

    :param allow_empty: If ``True``, returns :obj:`None <python:None>` if
      ``value`` is empty. If ``False``, raises a
      :class:`EmptyValueError <validator_collection.errors.EmptyValueError>`
      if ``value`` is empty. Defaults to ``False``.
    :type allow_empty: :class:`bool <python:bool>`

    :returns: ``value`` coerced to a :class:`UUID <python:uuid.UUID>` object /
      :obj:`None <python:None>`
    :rtype: :class:`UUID <python:uuid.UUID>` / :obj:`None <python:None>`

    :raises EmptyValueError: if ``value`` is empty and ``allow_empty`` is ``False``
    :raises CannotCoerceError: if ``value`` cannot be coerced to a
      :class:`UUID <python:uuid.UUID>`

    """
    if not value and not allow_empty:
        raise errors.EmptyValueError('value (%s) was empty' % value)
    elif not value:
        return None

    if isinstance(value, uuid_.UUID):
        return value

    try:
        value = uuid_.UUID(value)
    except ValueError:
        raise errors.CannotCoerceError('value (%s) cannot be coerced to a valid UUID' % value)

    return value
(value, allow_empty=False, **kwargs)
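A minimal usage sketch of the uuid validator shown above:

import uuid
from validator_collection import validators, errors

parsed = validators.uuid('123e4567-e89b-12d3-a456-426655440000')  # -> uuid.UUID instance
same = validators.uuid(parsed)                                    # UUID instances pass through
try:
    validators.uuid('not-a-uuid')
except errors.CannotCoerceError:
    pass  # the string cannot be coerced to a UUID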
725,958
validator_collection.validators
variable_name
Validate that the value is a valid Python variable name. .. caution:: This function does **NOT** check whether the variable exists. It only checks that the ``value`` would work as a Python variable (or class, or function, etc.) name. :param value: The value to validate. :param allow_empty: If ``True``, returns :obj:`None <python:None>` if ``value`` is empty. If ``False``, raises a :class:`EmptyValueError <validator_collection.errors.EmptyValueError>` if ``value`` is empty. Defaults to ``False``. :type allow_empty: :class:`bool <python:bool>` :returns: ``value`` / :obj:`None <python:None>` :rtype: :class:`str <python:str>` or :obj:`None <python:None>` :raises EmptyValueError: if ``allow_empty`` is ``False`` and ``value`` is empty
(value, allow_empty=False, **kwargs)
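A brief example of the variable_name validator documented above, limited to behavior the docstring states:

from validator_collection import validators

name = validators.variable_name('my_variable_1')            # a legal identifier passes through
nothing = validators.variable_name(None, allow_empty=True)  # returns None instead of raising
# Note: only the *name* is checked; the variable does not need to exist.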
725,960
pymediainfo
MediaInfo
An object containing information about a media file. :class:`MediaInfo` objects can be created by directly calling code from libmediainfo (in this case, the library must be present on the system): >>> pymediainfo.MediaInfo.parse("/path/to/file.mp4") Alternatively, objects may be created from MediaInfo's XML output. Such output can be obtained using the ``XML`` output format on versions older than v17.10 and the ``OLDXML`` format on newer versions. Using such an XML file, we can create a :class:`MediaInfo` object: >>> with open("output.xml") as f: ... mi = pymediainfo.MediaInfo(f.read()) :param str xml: XML output obtained from MediaInfo. :param str encoding_errors: option to pass to :func:`str.encode`'s `errors` parameter before parsing `xml`. :raises xml.etree.ElementTree.ParseError: if passed invalid XML. :var tracks: A list of :py:class:`Track` objects which the media file contains. For instance: >>> mi = pymediainfo.MediaInfo.parse("/path/to/file.mp4") >>> for t in mi.tracks: ... print(t) <Track track_id='None', track_type='General'> <Track track_id='1', track_type='Text'>
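A short usage sketch based on the docstring above (the media file path is hypothetical, and libmediainfo must be present on the system for MediaInfo.parse to work):

import pymediainfo

media_info = pymediainfo.MediaInfo.parse("/path/to/file.mp4")
for track in media_info.tracks:
    print(track.track_type, track.track_id)
# Convenience filters such as video_tracks are defined as properties in the code below.
video = media_info.video_tracks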
class MediaInfo: """ An object containing information about a media file. :class:`MediaInfo` objects can be created by directly calling code from libmediainfo (in this case, the library must be present on the system): >>> pymediainfo.MediaInfo.parse("/path/to/file.mp4") Alternatively, objects may be created from MediaInfo's XML output. Such output can be obtained using the ``XML`` output format on versions older than v17.10 and the ``OLDXML`` format on newer versions. Using such an XML file, we can create a :class:`MediaInfo` object: >>> with open("output.xml") as f: ... mi = pymediainfo.MediaInfo(f.read()) :param str xml: XML output obtained from MediaInfo. :param str encoding_errors: option to pass to :func:`str.encode`'s `errors` parameter before parsing `xml`. :raises xml.etree.ElementTree.ParseError: if passed invalid XML. :var tracks: A list of :py:class:`Track` objects which the media file contains. For instance: >>> mi = pymediainfo.MediaInfo.parse("/path/to/file.mp4") >>> for t in mi.tracks: ... print(t) <Track track_id='None', track_type='General'> <Track track_id='1', track_type='Text'> """ def __eq__(self, other): # type: ignore return self.tracks == other.tracks def __init__(self, xml: str, encoding_errors: str = "strict"): xml_dom = ET.fromstring(xml.encode("utf-8", encoding_errors)) self.tracks = [] # This is the case for libmediainfo < 18.03 # https://github.com/sbraz/pymediainfo/issues/57 # https://github.com/MediaArea/MediaInfoLib/commit/575a9a32e6960ea34adb3bc982c64edfa06e95eb if xml_dom.tag == "File": xpath = "track" else: xpath = "File/track" for xml_track in xml_dom.iterfind(xpath): self.tracks.append(Track(xml_track)) def _tracks(self, track_type: str) -> List[Track]: return [track for track in self.tracks if track.track_type == track_type] @property def general_tracks(self) -> List[Track]: """ :return: All :class:`Track`\\s of type ``General``. :rtype: list of :class:`Track`\\s """ return self._tracks("General") @property def video_tracks(self) -> List[Track]: """ :return: All :class:`Track`\\s of type ``Video``. :rtype: list of :class:`Track`\\s """ return self._tracks("Video") @property def audio_tracks(self) -> List[Track]: """ :return: All :class:`Track`\\s of type ``Audio``. :rtype: list of :class:`Track`\\s """ return self._tracks("Audio") @property def text_tracks(self) -> List[Track]: """ :return: All :class:`Track`\\s of type ``Text``. :rtype: list of :class:`Track`\\s """ return self._tracks("Text") @property def other_tracks(self) -> List[Track]: """ :return: All :class:`Track`\\s of type ``Other``. :rtype: list of :class:`Track`\\s """ return self._tracks("Other") @property def image_tracks(self) -> List[Track]: """ :return: All :class:`Track`\\s of type ``Image``. :rtype: list of :class:`Track`\\s """ return self._tracks("Image") @property def menu_tracks(self) -> List[Track]: """ :return: All :class:`Track`\\s of type ``Menu``. 
:rtype: list of :class:`Track`\\s """ return self._tracks("Menu") @staticmethod def _normalize_filename(filename: Any) -> Any: if hasattr(os, "PathLike") and isinstance(filename, os.PathLike): return os.fspath(filename) if pathlib is not None and isinstance(filename, pathlib.PurePath): return str(filename) return filename @classmethod def _define_library_prototypes(cls, lib: Any) -> Any: lib.MediaInfo_Inform.restype = ctypes.c_wchar_p lib.MediaInfo_New.argtypes = [] lib.MediaInfo_New.restype = ctypes.c_void_p lib.MediaInfo_Option.argtypes = [ ctypes.c_void_p, ctypes.c_wchar_p, ctypes.c_wchar_p, ] lib.MediaInfo_Option.restype = ctypes.c_wchar_p lib.MediaInfo_Inform.argtypes = [ctypes.c_void_p, ctypes.c_size_t] lib.MediaInfo_Inform.restype = ctypes.c_wchar_p lib.MediaInfo_Open.argtypes = [ctypes.c_void_p, ctypes.c_wchar_p] lib.MediaInfo_Open.restype = ctypes.c_size_t lib.MediaInfo_Open_Buffer_Init.argtypes = [ ctypes.c_void_p, ctypes.c_uint64, ctypes.c_uint64, ] lib.MediaInfo_Open_Buffer_Init.restype = ctypes.c_size_t lib.MediaInfo_Open_Buffer_Continue.argtypes = [ ctypes.c_void_p, ctypes.c_char_p, ctypes.c_size_t, ] lib.MediaInfo_Open_Buffer_Continue.restype = ctypes.c_size_t lib.MediaInfo_Open_Buffer_Continue_GoTo_Get.argtypes = [ctypes.c_void_p] lib.MediaInfo_Open_Buffer_Continue_GoTo_Get.restype = ctypes.c_uint64 lib.MediaInfo_Open_Buffer_Finalize.argtypes = [ctypes.c_void_p] lib.MediaInfo_Open_Buffer_Finalize.restype = ctypes.c_size_t lib.MediaInfo_Delete.argtypes = [ctypes.c_void_p] lib.MediaInfo_Delete.restype = None lib.MediaInfo_Close.argtypes = [ctypes.c_void_p] lib.MediaInfo_Close.restype = None @staticmethod def _get_library_paths(os_is_nt: bool) -> Tuple[str]: if os_is_nt: library_paths = ("MediaInfo.dll",) elif sys.platform == "darwin": library_paths = ("libmediainfo.0.dylib", "libmediainfo.dylib") else: library_paths = ("libmediainfo.so.0",) script_dir = os.path.dirname(__file__) # Look for the library file in the script folder for library in library_paths: absolute_library_path = os.path.join(script_dir, library) if os.path.isfile(absolute_library_path): # If we find it, don't try any other filename library_paths = (absolute_library_path,) break return library_paths @classmethod def _get_library( cls, library_file: Optional[str] = None, ) -> Tuple[Any, Any, str, Tuple[int, ...]]: os_is_nt = os.name in ("nt", "dos", "os2", "ce") if os_is_nt: lib_type = ctypes.WinDLL # type: ignore else: lib_type = ctypes.CDLL if library_file is None: library_paths = cls._get_library_paths(os_is_nt) else: library_paths = (library_file,) exceptions = [] for library_path in library_paths: try: lib = lib_type(library_path) cls._define_library_prototypes(lib) # Without a handle, there might be problems when using concurrent threads # https://github.com/sbraz/pymediainfo/issues/76#issuecomment-574759621 handle = lib.MediaInfo_New() version = lib.MediaInfo_Option(handle, "Info_Version", "") match = re.search(r"^MediaInfoLib - v(\S+)", version) if match: lib_version_str = match.group(1) lib_version = tuple(int(_) for _ in lib_version_str.split(".")) else: raise RuntimeError("Could not determine library version") return (lib, handle, lib_version_str, lib_version) except OSError as exc: exceptions.append(str(exc)) raise OSError( "Failed to load library from {} - {}".format( ", ".join(library_paths), ", ".join(exceptions) ) ) @classmethod def can_parse(cls, library_file: Optional[str] = None) -> bool: """ Checks whether media files can be analyzed using libmediainfo. 
:param str library_file: path to the libmediainfo library, this should only be used if the library cannot be auto-detected. :rtype: bool """ try: lib, handle = cls._get_library(library_file)[:2] lib.MediaInfo_Close(handle) lib.MediaInfo_Delete(handle) return True except Exception: # pylint: disable=broad-except return False @classmethod def parse( # pylint: disable=too-many-statements # pylint: disable=too-many-branches, too-many-locals, too-many-arguments cls, filename: Any, library_file: Optional[str] = None, cover_data: bool = False, encoding_errors: str = "strict", parse_speed: float = 0.5, full: bool = True, legacy_stream_display: bool = False, mediainfo_options: Optional[Dict[str, str]] = None, output: Optional[str] = None, buffer_size: Optional[int] = 64 * 1024, ) -> Union[str, "MediaInfo"]: """ Analyze a media file using libmediainfo. .. note:: Because of the way the underlying library works, this method should not be called simultaneously from multiple threads *with different arguments*. Doing so will cause inconsistencies or failures by changing library options that are shared across threads. :param filename: path to the media file or file-like object which will be analyzed. A URL can also be used if libmediainfo was compiled with CURL support. :param str library_file: path to the libmediainfo library, this should only be used if the library cannot be auto-detected. :param bool cover_data: whether to retrieve cover data as base64. :param str encoding_errors: option to pass to :func:`str.encode`'s `errors` parameter before parsing MediaInfo's XML output. :param float parse_speed: passed to the library as `ParseSpeed`, this option takes values between 0 and 1. A higher value will yield more precise results in some cases but will also increase parsing time. :param bool full: display additional tags, including computer-readable values for sizes and durations, corresponds to the CLI's ``--Full``/``-f`` parameter. :param bool legacy_stream_display: display additional information about streams. :param dict mediainfo_options: additional options that will be passed to the `MediaInfo_Option` function, for example: ``{"Language": "raw"}``. Do not use this parameter when running the method simultaneously from multiple threads, it will trigger a reset of all options which will cause inconsistencies or failures. :param str output: custom output format for MediaInfo, corresponds to the CLI's ``--Output`` parameter. Setting this causes the method to return a `str` instead of a :class:`MediaInfo` object. Useful values include: * the empty `str` ``""`` (corresponds to the default text output, obtained when running ``mediainfo`` with no additional parameters) * ``"XML"`` * ``"JSON"`` * ``%``-delimited templates (see ``mediainfo --Info-Parameters``) :param int buffer_size: size of the buffer used to read the file, in bytes. This is only used when `filename` is a file-like object. :type filename: str or pathlib.Path or os.PathLike or file-like object. :rtype: str if `output` is set. :rtype: :class:`MediaInfo` otherwise. :raises FileNotFoundError: if passed a non-existent file. :raises ValueError: if passed a file-like object opened in text mode. :raises OSError: if the library file could not be loaded. :raises RuntimeError: if parsing fails, this should not happen unless libmediainfo itself fails. Examples: >>> pymediainfo.MediaInfo.parse("tests/data/sample.mkv") <pymediainfo.MediaInfo object at 0x7fa83a3db240> >>> import json >>> mi = pymediainfo.MediaInfo.parse("tests/data/sample.mkv", ... 
output="JSON") >>> json.loads(mi)["media"]["track"][0] {'@type': 'General', 'TextCount': '1', 'FileExtension': 'mkv', 'FileSize': '5904', … } """ lib, handle, lib_version_str, lib_version = cls._get_library(library_file) # The XML option was renamed starting with version 17.10 if lib_version >= (17, 10): xml_option = "OLDXML" else: xml_option = "XML" # Cover_Data is not extracted by default since version 18.03 # See https://github.com/MediaArea/MediaInfoLib/commit/d8fd88a1 if lib_version >= (18, 3): lib.MediaInfo_Option(handle, "Cover_Data", "base64" if cover_data else "") lib.MediaInfo_Option(handle, "CharSet", "UTF-8") lib.MediaInfo_Option(handle, "Inform", xml_option if output is None else output) lib.MediaInfo_Option(handle, "Complete", "1" if full else "") lib.MediaInfo_Option(handle, "ParseSpeed", str(parse_speed)) lib.MediaInfo_Option(handle, "LegacyStreamDisplay", "1" if legacy_stream_display else "") if mediainfo_options is not None: if lib_version < (19, 9): warnings.warn( "This version of MediaInfo (v{}) does not support resetting all " "options to their default values, passing it custom options is not recommended " "and may result in unpredictable behavior, see " "https://github.com/MediaArea/MediaInfoLib/issues/1128".format(lib_version_str), RuntimeWarning, ) for option_name, option_value in mediainfo_options.items(): lib.MediaInfo_Option(handle, option_name, option_value) try: filename.seek(0, 2) file_size = filename.tell() filename.seek(0) except AttributeError: # filename is not a file-like object file_size = None if file_size is not None: # We have a file-like object, use the buffer protocol: # Some file-like objects do not have a mode if "b" not in getattr(filename, "mode", "b"): raise ValueError("File should be opened in binary mode") lib.MediaInfo_Open_Buffer_Init(handle, file_size, 0) while True: buffer = filename.read(buffer_size) if buffer: # https://github.com/MediaArea/MediaInfoLib/blob/v20.09/Source/MediaInfo/File__Analyze.h#L1429 # 4th bit = finished if lib.MediaInfo_Open_Buffer_Continue(handle, buffer, len(buffer)) & 0x08: break # Ask MediaInfo if we need to seek seek = lib.MediaInfo_Open_Buffer_Continue_GoTo_Get(handle) # https://github.com/MediaArea/MediaInfoLib/blob/v20.09/Source/MediaInfoDLL/MediaInfoJNI.cpp#L127 if seek != ctypes.c_uint64(-1).value: filename.seek(seek) # Inform MediaInfo we have sought lib.MediaInfo_Open_Buffer_Init(handle, file_size, filename.tell()) else: break lib.MediaInfo_Open_Buffer_Finalize(handle) else: # We have a filename, simply pass it: filename = cls._normalize_filename(filename) # If an error occured if lib.MediaInfo_Open(handle, filename) == 0: lib.MediaInfo_Close(handle) lib.MediaInfo_Delete(handle) # If filename doesn't look like a URL and doesn't exist if "://" not in filename and not os.path.exists(filename): raise FileNotFoundError(filename) # We ran into another kind of error raise RuntimeError( "An error occured while opening {}" " with libmediainfo".format(filename) ) info: str = lib.MediaInfo_Inform(handle, 0) # Reset all options to their defaults so that they aren't # retained when the parse method is called several times # https://github.com/MediaArea/MediaInfoLib/issues/1128 # Do not call it when it is not required because it breaks threads # https://github.com/sbraz/pymediainfo/issues/76#issuecomment-575245093 if mediainfo_options is not None and lib_version >= (19, 9): lib.MediaInfo_Option(handle, "Reset", "") # Delete the handle lib.MediaInfo_Close(handle) lib.MediaInfo_Delete(handle) if output is None: return 
cls(info, encoding_errors) return info def to_data(self) -> Dict[str, Any]: """ Returns a dict representation of the object's :py:class:`Tracks <Track>`. :rtype: dict """ return {"tracks": [_.to_data() for _ in self.tracks]} def to_json(self) -> str: """ Returns a JSON representation of the object's :py:class:`Tracks <Track>`. :rtype: str """ return json.dumps(self.to_data())
(xml: str, encoding_errors: str = 'strict')
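A short usage sketch for the class above; the file paths are hypothetical and libmediainfo must be installed for `parse` to succeed:

from pymediainfo import MediaInfo

# Parse a file by path (path is hypothetical).
media_info = MediaInfo.parse("/path/to/file.mp4")
for track in media_info.video_tracks:
    print(track.track_type, track.bit_rate)

# File-like objects work too, but must be opened in binary mode,
# otherwise parse() raises ValueError.
with open("/path/to/file.mp4", "rb") as f:
    media_info = MediaInfo.parse(f)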
725,961
pymediainfo
__eq__
null
def __eq__(self, other): # type: ignore return self.tracks == other.tracks
(self, other)
725,962
pymediainfo
__init__
null
def __init__(self, xml: str, encoding_errors: str = "strict"): xml_dom = ET.fromstring(xml.encode("utf-8", encoding_errors)) self.tracks = [] # This is the case for libmediainfo < 18.03 # https://github.com/sbraz/pymediainfo/issues/57 # https://github.com/MediaArea/MediaInfoLib/commit/575a9a32e6960ea34adb3bc982c64edfa06e95eb if xml_dom.tag == "File": xpath = "track" else: xpath = "File/track" for xml_track in xml_dom.iterfind(xpath): self.tracks.append(Track(xml_track))
(self, xml: str, encoding_errors: str = 'strict')
725,963
pymediainfo
_get_library_paths
null
@staticmethod def _get_library_paths(os_is_nt: bool) -> Tuple[str]: if os_is_nt: library_paths = ("MediaInfo.dll",) elif sys.platform == "darwin": library_paths = ("libmediainfo.0.dylib", "libmediainfo.dylib") else: library_paths = ("libmediainfo.so.0",) script_dir = os.path.dirname(__file__) # Look for the library file in the script folder for library in library_paths: absolute_library_path = os.path.join(script_dir, library) if os.path.isfile(absolute_library_path): # If we find it, don't try any other filename library_paths = (absolute_library_path,) break return library_paths
(os_is_nt: bool) -> Tuple[str]
725,964
pymediainfo
_normalize_filename
null
@staticmethod def _normalize_filename(filename: Any) -> Any: if hasattr(os, "PathLike") and isinstance(filename, os.PathLike): return os.fspath(filename) if pathlib is not None and isinstance(filename, pathlib.PurePath): return str(filename) return filename
(filename: Any) -> Any
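A quick illustration of the normalization above (illustration only, since `_normalize_filename` is private API; values shown assume a POSIX system):

import pathlib
from pymediainfo import MediaInfo

# os.PathLike objects (e.g. pathlib.Path) are converted to str.
assert MediaInfo._normalize_filename(pathlib.Path("/tmp/video.mkv")) == "/tmp/video.mkv"
# Plain strings pass through unchanged.
assert MediaInfo._normalize_filename("video.mkv") == "video.mkv"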
725,965
pymediainfo
_tracks
null
def _tracks(self, track_type: str) -> List[Track]: return [track for track in self.tracks if track.track_type == track_type]
(self, track_type: str) -> List[pymediainfo.Track]
725,966
pymediainfo
to_data
Returns a dict representation of the object's :py:class:`Tracks <Track>`. :rtype: dict
def to_data(self) -> Dict[str, Any]: """ Returns a dict representation of the object's :py:class:`Tracks <Track>`. :rtype: dict """ return {"tracks": [_.to_data() for _ in self.tracks]}
(self) -> Dict[str, Any]
725,967
pymediainfo
to_json
Returns a JSON representation of the object's :py:class:`Tracks <Track>`. :rtype: str
def to_json(self) -> str: """ Returns a JSON representation of the object's :py:class:`Tracks <Track>`. :rtype: str """ return json.dumps(self.to_data())
(self) -> str
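A brief sketch of the two serialization helpers above (file path hypothetical):

from pymediainfo import MediaInfo

mi = MediaInfo.parse("/path/to/file.mp4")
data = mi.to_data()          # {'tracks': [{...}, ...]}
assert isinstance(data["tracks"], list)
json_str = mi.to_json()      # same structure, serialized via json.dumps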
725,968
pymediainfo
Track
An object associated with a media file track. Each :class:`Track` attribute corresponds to attributes parsed from MediaInfo's output. All attributes are lower case. Attributes that are present several times such as `Duration` yield a second attribute starting with `other_` which is a list of all alternative attribute values. When a non-existing attribute is accessed, `None` is returned. Example: >>> t = mi.tracks[0] >>> t <Track track_id='None', track_type='General'> >>> t.duration 3000 >>> t.other_duration ['3 s 0 ms', '3 s 0 ms', '3 s 0 ms', '00:00:03.000', '00:00:03.000'] >>> type(t.non_existing) NoneType All available attributes can be obtained by calling :func:`to_data`.
class Track: """ An object associated with a media file track. Each :class:`Track` attribute corresponds to attributes parsed from MediaInfo's output. All attributes are lower case. Attributes that are present several times such as `Duration` yield a second attribute starting with `other_` which is a list of all alternative attribute values. When a non-existing attribute is accessed, `None` is returned. Example: >>> t = mi.tracks[0] >>> t <Track track_id='None', track_type='General'> >>> t.duration 3000 >>> t.other_duration ['3 s 0 ms', '3 s 0 ms', '3 s 0 ms', '00:00:03.000', '00:00:03.000'] >>> type(t.non_existing) NoneType All available attributes can be obtained by calling :func:`to_data`. """ def __eq__(self, other): # type: ignore return self.__dict__ == other.__dict__ def __getattribute__(self, name): # type: ignore try: return object.__getattribute__(self, name) except AttributeError: pass return None def __getstate__(self): # type: ignore return self.__dict__ def __setstate__(self, state): # type: ignore self.__dict__ = state def __init__(self, xml_dom_fragment: ET.Element): self.track_type = xml_dom_fragment.attrib["type"] repeated_attributes = [] for elem in xml_dom_fragment: node_name = elem.tag.lower().strip().strip("_") if node_name == "id": node_name = "track_id" node_value = elem.text if getattr(self, node_name) is None: setattr(self, node_name, node_value) else: other_node_name = f"other_{node_name}" repeated_attributes.append((node_name, other_node_name)) if getattr(self, other_node_name) is None: setattr(self, other_node_name, [node_value]) else: getattr(self, other_node_name).append(node_value) for primary_key, other_key in repeated_attributes: try: # Attempt to convert the main value to int # Usually, if an attribute is repeated, one of its values # is an int and others are human-readable formats setattr(self, primary_key, int(getattr(self, primary_key))) except ValueError: # If it fails, try to find a secondary value # that is an int and swap it with the main value for other_value in getattr(self, other_key): try: current = getattr(self, primary_key) # Set the main value to an int setattr(self, primary_key, int(other_value)) # Append its previous value to other values getattr(self, other_key).append(current) break except ValueError: pass def __repr__(self): # type: ignore return "<Track track_id='{}', track_type='{}'>".format(self.track_id, self.track_type) def to_data(self) -> Dict[str, Any]: """ Returns a dict representation of the track attributes. Example: >>> sorted(track.to_data().keys())[:3] ['codec', 'codec_extensions_usually_used', 'codec_url'] >>> t.to_data()["file_size"] 5988 :rtype: dict """ return self.__dict__
(xml_dom_fragment: xml.etree.ElementTree.Element)
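A sketch of the attribute-access behavior described above (file path hypothetical; concrete values depend on the media file):

from pymediainfo import MediaInfo

mi = MediaInfo.parse("/path/to/file.mp4")
t = mi.tracks[0]
print(t.track_type)      # e.g. 'General'
print(t.duration)        # int when available, e.g. 3000
print(t.other_duration)  # list of human-readable variants, or None
print(t.non_existing)    # None -- missing attributes never raise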
725,969
pymediainfo
__eq__
null
def __eq__(self, other): # type: ignore return self.__dict__ == other.__dict__
(self, other)
725,970
pymediainfo
__getattribute__
null
def __getattribute__(self, name): # type: ignore try: return object.__getattribute__(self, name) except AttributeError: pass return None
(self, name)
725,971
pymediainfo
__getstate__
null
def __getstate__(self): # type: ignore return self.__dict__
(self)
725,972
pymediainfo
__init__
null
def __init__(self, xml_dom_fragment: ET.Element): self.track_type = xml_dom_fragment.attrib["type"] repeated_attributes = [] for elem in xml_dom_fragment: node_name = elem.tag.lower().strip().strip("_") if node_name == "id": node_name = "track_id" node_value = elem.text if getattr(self, node_name) is None: setattr(self, node_name, node_value) else: other_node_name = f"other_{node_name}" repeated_attributes.append((node_name, other_node_name)) if getattr(self, other_node_name) is None: setattr(self, other_node_name, [node_value]) else: getattr(self, other_node_name).append(node_value) for primary_key, other_key in repeated_attributes: try: # Attempt to convert the main value to int # Usually, if an attribute is repeated, one of its values # is an int and others are human-readable formats setattr(self, primary_key, int(getattr(self, primary_key))) except ValueError: # If it fails, try to find a secondary value # that is an int and swap it with the main value for other_value in getattr(self, other_key): try: current = getattr(self, primary_key) # Set the main value to an int setattr(self, primary_key, int(other_value)) # Append its previous value to other values getattr(self, other_key).append(current) break except ValueError: pass
(self, xml_dom_fragment: xml.etree.ElementTree.Element)
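A small sketch showing how repeated elements are handled by the constructor above, using a hand-built XML fragment:

import xml.etree.ElementTree as ET
from pymediainfo import Track

fragment = ET.fromstring(
    '<track type="General">'
    '<Duration>3000</Duration>'
    '<Duration>3 s 0 ms</Duration>'
    '</track>'
)
t = Track(fragment)
assert t.duration == 3000                # the int value becomes the main value
assert t.other_duration == ['3 s 0 ms']  # the rest land in other_duration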
725,973
pymediainfo
__repr__
null
def __repr__(self): # type: ignore return "<Track track_id='{}', track_type='{}'>".format(self.track_id, self.track_type)
(self)
725,974
pymediainfo
__setstate__
null
def __setstate__(self, state): # type: ignore self.__dict__ = state
(self, state)
725,975
pymediainfo
to_data
Returns a dict representation of the track attributes. Example: >>> sorted(track.to_data().keys())[:3] ['codec', 'codec_extensions_usually_used', 'codec_url'] >>> t.to_data()["file_size"] 5988 :rtype: dict
def to_data(self) -> Dict[str, Any]: """ Returns a dict representation of the track attributes. Example: >>> sorted(track.to_data().keys())[:3] ['codec', 'codec_extensions_usually_used', 'codec_url'] >>> t.to_data()["file_size"] 5988 :rtype: dict """ return self.__dict__
(self) -> Dict[str, Any]
725,984
penne.delegates
Attribute
Attribute for a geometry patch Each attribute is a view into a buffer that corresponds to a specific element of the mesh (e.g. position, normal, etc.). Attributes allow information for the vertices to be extracted from buffers Attributes: view (BufferViewID): View of the buffer storing the data semantic (AttributeSemantic): String describing the type of attribute channel (Optional[int]): Channel of attribute, if applicable offset (Optional[int]): Offset into buffer stride (Optional[int]): Distance, in bytes, between data for two vertices in the buffer format (Format): How many bytes per element, how to decode the bytes minimum_value (Optional[List[float]]): Minimum value for attribute data maximum_value (Optional[List[float]]): Maximum value for attribute data normalized (Optional[bool]): Whether to normalize the attribute data
class Attribute(NoodleObject): """Attribute for a geometry patch Each attribute is a view into a buffer that corresponds to a specific element of the mesh (e.g. position, normal, etc.). Attributes allow information for the vertices to be extracted from buffers Attributes: view (BufferViewID): View of the buffer storing the data semantic (AttributeSemantic): String describing the type of attribute channel (Optional[int]): Channel of attribute, if applicable offset (Optional[int]): Offset into buffer stride (Optional[int]): Distance, in bytes, between data for two vertices in the buffer format (Format): How many bytes per element, how to decode the bytes minimum_value (Optional[List[float]]): Minimum value for attribute data maximum_value (Optional[List[float]]): Maximum value for attribute data normalized (Optional[bool]): Whether to normalize the attribute data """ view: BufferViewID semantic: AttributeSemantic channel: Optional[int] = None offset: Optional[int] = 0 stride: Optional[int] = 0 format: Format minimum_value: Optional[List[float]] = None maximum_value: Optional[List[float]] = None normalized: Optional[bool] = False
(*, view: penne.delegates.BufferViewID, semantic: penne.delegates.AttributeSemantic, channel: Optional[int] = None, offset: Optional[int] = 0, stride: Optional[int] = 0, format: penne.delegates.Format, minimum_value: Optional[List[float]] = None, maximum_value: Optional[List[float]] = None, normalized: Optional[bool] = False, **extra_data: Any) -> None
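A hedged construction sketch for the model above; the concrete values and the `BufferViewID`/`Format` objects used here are assumptions, not verified penne API:

from penne.delegates import Attribute, AttributeSemantic

# Hypothetical: a position attribute reading packed 3-float vertices.
# buffer_view_id and vec3_format are assumed to be obtained elsewhere
# (a BufferViewID and a Format enum member respectively).
attr = Attribute(
    view=buffer_view_id,
    semantic=AttributeSemantic.position,
    format=vec3_format,
    stride=12,  # 3 floats * 4 bytes per vertex
)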
725,987
pydantic.main
__delattr__
null
def __delattr__(self, item: str) -> Any: if item in self.__private_attributes__: attribute = self.__private_attributes__[item] if hasattr(attribute, '__delete__'): attribute.__delete__(self) # type: ignore return try: # Note: self.__pydantic_private__ cannot be None if self.__private_attributes__ has items del self.__pydantic_private__[item] # type: ignore except KeyError as exc: raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') from exc elif item in self.model_fields: object.__delattr__(self, item) elif self.__pydantic_extra__ is not None and item in self.__pydantic_extra__: del self.__pydantic_extra__[item] else: try: object.__delattr__(self, item) except AttributeError: raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
(self, item: str) -> Any
725,988
pydantic.main
__eq__
null
def __eq__(self, other: Any) -> bool: if isinstance(other, BaseModel): # When comparing instances of generic types for equality, as long as all field values are equal, # only require their generic origin types to be equal, rather than exact type equality. # This prevents headaches like MyGeneric(x=1) != MyGeneric[Any](x=1). self_type = self.__pydantic_generic_metadata__['origin'] or self.__class__ other_type = other.__pydantic_generic_metadata__['origin'] or other.__class__ return ( self_type == other_type and self.__dict__ == other.__dict__ and self.__pydantic_private__ == other.__pydantic_private__ and self.__pydantic_extra__ == other.__pydantic_extra__ ) else: return NotImplemented # delegate to the other item in the comparison
(self, other: Any) -> bool
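The comment in the method above calls out generic-origin equality; a minimal sketch:

from typing import Any, Generic, TypeVar
from pydantic import BaseModel

T = TypeVar("T")

class MyGeneric(BaseModel, Generic[T]):
    x: T

# Only the generic origin types must match, so these compare equal.
assert MyGeneric(x=1) == MyGeneric[Any](x=1)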
725,989
pydantic.main
__getattr__
null
def __getattr__(self, item: str) -> Any: private_attributes = object.__getattribute__(self, '__private_attributes__') if item in private_attributes: attribute = private_attributes[item] if hasattr(attribute, '__get__'): return attribute.__get__(self, type(self)) # type: ignore try: # Note: self.__pydantic_private__ cannot be None if self.__private_attributes__ has items return self.__pydantic_private__[item] # type: ignore except KeyError as exc: raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') from exc else: pydantic_extra = object.__getattribute__(self, '__pydantic_extra__') if pydantic_extra is not None: try: return pydantic_extra[item] except KeyError as exc: raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') from exc else: if hasattr(self.__class__, item): return super().__getattribute__(item) # Raises AttributeError if appropriate else: # this is the current error raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
(self, item: str) -> Any
725,991
pydantic.main
__init__
Create a new model by parsing and validating input data from keyword arguments. Raises [`ValidationError`][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. `__init__` uses `__pydantic_self__` instead of the more common `self` for the first arg to allow `self` as a field name.
def __init__(__pydantic_self__, **data: Any) -> None: # type: ignore """Create a new model by parsing and validating input data from keyword arguments. Raises [`ValidationError`][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. `__init__` uses `__pydantic_self__` instead of the more common `self` for the first arg to allow `self` as a field name. """ # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks __tracebackhide__ = True __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
(__pydantic_self__, **data: Any) -> NoneType
725,992
pydantic.main
__iter__
So `dict(model)` works.
def __iter__(self) -> TupleGenerator: """So `dict(model)` works.""" yield from self.__dict__.items() extra = self.__pydantic_extra__ if extra: yield from extra.items()
(self) -> 'TupleGenerator'
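A one-liner demonstrating the behavior the docstring above promises:

from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int

p = Point(x=1, y=2)
assert dict(p) == {"x": 1, "y": 2}  # __iter__ yields (field, value) pairs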
725,995
pydantic.main
__repr_args__
null
def __repr_args__(self) -> _repr.ReprArgs: for k, v in self.__dict__.items(): field = self.model_fields.get(k) if field and field.repr: yield k, v pydantic_extra = self.__pydantic_extra__ if pydantic_extra is not None: yield from ((k, v) for k, v in pydantic_extra.items()) yield from ((k, getattr(self, k)) for k, v in self.model_computed_fields.items() if v.repr)
(self) -> '_repr.ReprArgs'
725,999
pydantic.main
__setattr__
null
def __setattr__(self, name: str, value: Any) -> None: if name in self.__class_vars__: raise AttributeError( f'{name!r} is a ClassVar of `{self.__class__.__name__}` and cannot be set on an instance. ' f'If you want to set a value on the class, use `{self.__class__.__name__}.{name} = value`.' ) elif not _fields.is_valid_field_name(name): if self.__pydantic_private__ is None or name not in self.__private_attributes__: _object_setattr(self, name, value) else: attribute = self.__private_attributes__[name] if hasattr(attribute, '__set__'): attribute.__set__(self, value) # type: ignore else: self.__pydantic_private__[name] = value return elif self.model_config.get('frozen', None): error: pydantic_core.InitErrorDetails = { 'type': 'frozen_instance', 'loc': (name,), 'input': value, } raise pydantic_core.ValidationError.from_exception_data(self.__class__.__name__, [error]) attr = getattr(self.__class__, name, None) if isinstance(attr, property): attr.__set__(self, value) elif self.model_config.get('validate_assignment', None): self.__pydantic_validator__.validate_assignment(self, name, value) elif self.model_config.get('extra') != 'allow' and name not in self.model_fields: # TODO - matching error raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"') elif self.model_config.get('extra') == 'allow' and name not in self.model_fields: # SAFETY: __pydantic_extra__ is not None when extra = 'allow' self.__pydantic_extra__[name] = value # type: ignore else: self.__dict__[name] = value self.__pydantic_fields_set__.add(name)
(self, name: str, value: Any) -> NoneType
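A short sketch of two assignment paths in the method above, with and without extra='allow':

from pydantic import BaseModel, ConfigDict

class Strict(BaseModel):
    x: int

s = Strict(x=1)
try:
    s.unknown = 2          # extra != 'allow': no such field
except ValueError:
    pass

class Loose(BaseModel):
    model_config = ConfigDict(extra="allow")
    x: int

obj = Loose(x=1)
obj.unknown = 2            # stored in __pydantic_extra__
assert obj.unknown == 2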
726,008
pydantic.main
model_copy
Returns a copy of the model. Args: update: Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data. deep: Set to `True` to make a deep copy of the model. Returns: New model instance.
def model_copy(self: Model, *, update: dict[str, Any] | None = None, deep: bool = False) -> Model: """Returns a copy of the model. Args: update: Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data. deep: Set to `True` to make a deep copy of the model. Returns: New model instance. """ copied = self.__deepcopy__() if deep else self.__copy__() if update: if self.model_config.get('extra') == 'allow': for k, v in update.items(): if k in self.model_fields: copied.__dict__[k] = v else: if copied.__pydantic_extra__ is None: copied.__pydantic_extra__ = {} copied.__pydantic_extra__[k] = v else: copied.__dict__.update(update) copied.__pydantic_fields_set__.update(update.keys()) return copied
(self: 'Model', *, update: 'dict[str, Any] | None' = None, deep: 'bool' = False) -> 'Model'
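A usage sketch for the copy semantics above; note that `update` values bypass validation, as the docstring warns:

from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

u = User(name="Ada", age=36)
v = u.model_copy(update={"age": 37})  # not re-validated; trust this data
assert v.age == 37 and v.name == "Ada"
w = u.model_copy(deep=True)           # fully independent deep copy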
726,009
pydantic.main
model_dump
Usage docs: https://docs.pydantic.dev/dev-v2/usage/serialization/#modelmodel_dump Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Args: mode: The mode in which `to_python` should run. If mode is 'json', the dictionary will only contain JSON serializable types. If mode is 'python', the dictionary may contain any Python objects. include: A list of fields to include in the output. exclude: A list of fields to exclude from the output. by_alias: Whether to use the field's alias in the dictionary key if defined. exclude_unset: Whether to exclude fields that are unset or None from the output. exclude_defaults: Whether to exclude fields that are set to their default value from the output. exclude_none: Whether to exclude fields that have a value of `None` from the output. round_trip: Whether to enable serialization and deserialization round-trip support. warnings: Whether to log warnings when invalid fields are encountered. Returns: A dictionary representation of the model.
def model_dump( self, *, mode: Literal['json', 'python'] | str = 'python', include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True, ) -> dict[str, Any]: """Usage docs: https://docs.pydantic.dev/dev-v2/usage/serialization/#modelmodel_dump Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Args: mode: The mode in which `to_python` should run. If mode is 'json', the dictionary will only contain JSON serializable types. If mode is 'python', the dictionary may contain any Python objects. include: A list of fields to include in the output. exclude: A list of fields to exclude from the output. by_alias: Whether to use the field's alias in the dictionary key if defined. exclude_unset: Whether to exclude fields that are unset or None from the output. exclude_defaults: Whether to exclude fields that are set to their default value from the output. exclude_none: Whether to exclude fields that have a value of `None` from the output. round_trip: Whether to enable serialization and deserialization round-trip support. warnings: Whether to log warnings when invalid fields are encountered. Returns: A dictionary representation of the model. """ return self.__pydantic_serializer__.to_python( self, mode=mode, by_alias=by_alias, include=include, exclude=exclude, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults, exclude_none=exclude_none, round_trip=round_trip, warnings=warnings, )
(self, *, mode: "Literal['json', 'python'] | str" = 'python', include: 'IncEx' = None, exclude: 'IncEx' = None, by_alias: 'bool' = False, exclude_unset: 'bool' = False, exclude_defaults: 'bool' = False, exclude_none: 'bool' = False, round_trip: 'bool' = False, warnings: 'bool' = True) -> 'dict[str, Any]'
726,010
pydantic.main
model_dump_json
Usage docs: https://docs.pydantic.dev/dev-v2/usage/serialization/#modelmodel_dump_json Generates a JSON representation of the model using Pydantic's `to_json` method. Args: indent: Indentation to use in the JSON output. If None is passed, the output will be compact. include: Field(s) to include in the JSON output. Can take either a string or set of strings. exclude: Field(s) to exclude from the JSON output. Can take either a string or set of strings. by_alias: Whether to serialize using field aliases. exclude_unset: Whether to exclude fields that have not been explicitly set. exclude_defaults: Whether to exclude fields that have the default value. exclude_none: Whether to exclude fields that have a value of `None`. round_trip: Whether to use serialization/deserialization between JSON and class instance. warnings: Whether to show any warnings that occurred during serialization. Returns: A JSON string representation of the model.
def model_dump_json( self, *, indent: int | None = None, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True, ) -> str: """Usage docs: https://docs.pydantic.dev/dev-v2/usage/serialization/#modelmodel_dump_json Generates a JSON representation of the model using Pydantic's `to_json` method. Args: indent: Indentation to use in the JSON output. If None is passed, the output will be compact. include: Field(s) to include in the JSON output. Can take either a string or set of strings. exclude: Field(s) to exclude from the JSON output. Can take either a string or set of strings. by_alias: Whether to serialize using field aliases. exclude_unset: Whether to exclude fields that have not been explicitly set. exclude_defaults: Whether to exclude fields that have the default value. exclude_none: Whether to exclude fields that have a value of `None`. round_trip: Whether to use serialization/deserialization between JSON and class instance. warnings: Whether to show any warnings that occurred during serialization. Returns: A JSON string representation of the model. """ return self.__pydantic_serializer__.to_json( self, indent=indent, include=include, exclude=exclude, by_alias=by_alias, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults, exclude_none=exclude_none, round_trip=round_trip, warnings=warnings, ).decode()
(self, *, indent: 'int | None' = None, include: 'IncEx' = None, exclude: 'IncEx' = None, by_alias: 'bool' = False, exclude_unset: 'bool' = False, exclude_defaults: 'bool' = False, exclude_none: 'bool' = False, round_trip: 'bool' = False, warnings: 'bool' = True) -> 'str'
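A combined sketch of the two dump methods above:

from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int = 0

u = User(name="Ada")
assert u.model_dump() == {"name": "Ada", "age": 0}
assert u.model_dump(exclude_defaults=True) == {"name": "Ada"}
print(u.model_dump_json(indent=2))  # pretty-printed JSON string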
726,012
penne.delegates
AttributeSemantic
String indicating type of attribute, used in Attribute inside of geometry patch Takes value of either POSITION, NORMAL, TANGENT, TEXTURE, or COLOR
class AttributeSemantic(Enum): """String indicating type of attribute, used in Attribute inside of geometry patch Takes value of either POSITION, NORMAL, TANGENT, TEXTURE, or COLOR """ position = "POSITION" normal = "NORMAL" tangent = "TANGENT" texture = "TEXTURE" color = "COLOR"
(value, names=None, *, module=None, qualname=None, type=None, start=1)
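Standard Enum semantics apply to the class above; a quick sketch:

from penne.delegates import AttributeSemantic

assert AttributeSemantic.position.value == "POSITION"
assert AttributeSemantic("NORMAL") is AttributeSemantic.normal  # lookup by value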
726,013
pydantic.main
BaseModel
usage docs: https://docs.pydantic.dev/2.0/usage/models/ A base class for creating Pydantic models. Attributes: __class_vars__: The names of classvars defined on the model. __private_attributes__: Metadata about the private attributes of the model. __signature__: The signature for instantiating the model. __pydantic_complete__: Whether model building is completed, or if there are still undefined fields. __pydantic_core_schema__: The pydantic-core schema used to build the SchemaValidator and SchemaSerializer. __pydantic_custom_init__: Whether the model has a custom `__init__` function. __pydantic_decorators__: Metadata containing the decorators defined on the model. This replaces `Model.__validators__` and `Model.__root_validators__` from Pydantic V1. __pydantic_generic_metadata__: Metadata for generic models; contains data used for a similar purpose to __args__, __origin__, __parameters__ in typing-module generics. May eventually be replaced by these. __pydantic_parent_namespace__: Parent namespace of the model, used for automatic rebuilding of models. __pydantic_post_init__: The name of the post-init method for the model, if defined. __pydantic_root_model__: Whether the model is a `RootModel`. __pydantic_serializer__: The pydantic-core SchemaSerializer used to dump instances of the model. __pydantic_validator__: The pydantic-core SchemaValidator used to validate instances of the model. __pydantic_extra__: An instance attribute with the values of extra fields from validation when `model_config['extra'] == 'allow'`. __pydantic_fields_set__: An instance attribute with the names of fields explicitly specified during validation. __pydantic_private__: Instance attribute with the values of private attributes set on the model instance.
class BaseModel(metaclass=_model_construction.ModelMetaclass): """usage docs: https://docs.pydantic.dev/2.0/usage/models/ A base class for creating Pydantic models. Attributes: __class_vars__: The names of classvars defined on the model. __private_attributes__: Metadata about the private attributes of the model. __signature__: The signature for instantiating the model. __pydantic_complete__: Whether model building is completed, or if there are still undefined fields. __pydantic_core_schema__: The pydantic-core schema used to build the SchemaValidator and SchemaSerializer. __pydantic_custom_init__: Whether the model has a custom `__init__` function. __pydantic_decorators__: Metadata containing the decorators defined on the model. This replaces `Model.__validators__` and `Model.__root_validators__` from Pydantic V1. __pydantic_generic_metadata__: Metadata for generic models; contains data used for a similar purpose to __args__, __origin__, __parameters__ in typing-module generics. May eventually be replaced by these. __pydantic_parent_namespace__: Parent namespace of the model, used for automatic rebuilding of models. __pydantic_post_init__: The name of the post-init method for the model, if defined. __pydantic_root_model__: Whether the model is a `RootModel`. __pydantic_serializer__: The pydantic-core SchemaSerializer used to dump instances of the model. __pydantic_validator__: The pydantic-core SchemaValidator used to validate instances of the model. __pydantic_extra__: An instance attribute with the values of extra fields from validation when `model_config['extra'] == 'allow'`. __pydantic_fields_set__: An instance attribute with the names of fields explicitly specified during validation. __pydantic_private__: Instance attribute with the values of private attributes set on the model instance. """ if typing.TYPE_CHECKING: # Here we provide annotations for the attributes of BaseModel. # Many of these are populated by the metaclass, which is why this section is in a `TYPE_CHECKING` block. # However, for the sake of easy review, we have included type annotations of all class and instance attributes # of `BaseModel` here: # Class attributes model_config: ClassVar[ConfigDict] """ Configuration for the model, should be a dictionary conforming to [`ConfigDict`][pydantic.config.ConfigDict]. """ model_fields: ClassVar[dict[str, FieldInfo]] """ Metadata about the fields defined on the model, mapping of field names to [`FieldInfo`][pydantic.fields.FieldInfo]. This replaces `Model.__fields__` from Pydantic V1. """ __class_vars__: ClassVar[set[str]] __private_attributes__: ClassVar[dict[str, ModelPrivateAttr]] __signature__: ClassVar[Signature] __pydantic_complete__: ClassVar[bool] __pydantic_core_schema__: ClassVar[CoreSchema] __pydantic_custom_init__: ClassVar[bool] __pydantic_decorators__: ClassVar[_decorators.DecoratorInfos] __pydantic_generic_metadata__: ClassVar[_generics.PydanticGenericMetadata] __pydantic_parent_namespace__: ClassVar[dict[str, Any] | None] __pydantic_post_init__: ClassVar[None | Literal['model_post_init']] __pydantic_root_model__: ClassVar[bool] __pydantic_serializer__: ClassVar[SchemaSerializer] __pydantic_validator__: ClassVar[SchemaValidator] # Instance attributes # Note: we use the non-existent kwarg `init=False` in pydantic.fields.Field below so that @dataclass_transform # doesn't think these are valid as keyword arguments to the class initializer. 
__pydantic_extra__: dict[str, Any] | None = _Field(init=False) # type: ignore __pydantic_fields_set__: set[str] = _Field(init=False) # type: ignore __pydantic_private__: dict[str, Any] | None = _Field(init=False) # type: ignore else: # `model_fields` and `__pydantic_decorators__` must be set for # pydantic._internal._generate_schema.GenerateSchema.model_schema to work for a plain BaseModel annotation model_fields = {} __pydantic_decorators__ = _decorators.DecoratorInfos() # Prevent `BaseModel` from being instantiated directly: __pydantic_validator__ = _mock_validator.MockValidator( 'Pydantic models should inherit from BaseModel, BaseModel cannot be instantiated directly', code='base-model-instantiated', ) __slots__ = '__dict__', '__pydantic_fields_set__', '__pydantic_extra__', '__pydantic_private__' model_config = ConfigDict() __pydantic_complete__ = False __pydantic_root_model__ = False def __init__(__pydantic_self__, **data: Any) -> None: # type: ignore """Create a new model by parsing and validating input data from keyword arguments. Raises [`ValidationError`][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model. `__init__` uses `__pydantic_self__` instead of the more common `self` for the first arg to allow `self` as a field name. """ # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks __tracebackhide__ = True __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__) # The following line sets a flag that we use to determine when `__init__` gets overridden by the user __init__.__pydantic_base_init__ = True # type: ignore @property def model_computed_fields(self) -> dict[str, ComputedFieldInfo]: """Get the computed fields of this model instance. Returns: A dictionary of computed field names and their corresponding `ComputedFieldInfo` objects. """ return {k: v.info for k, v in self.__pydantic_decorators__.computed_fields.items()} @property def model_extra(self) -> dict[str, Any] | None: """Get extra fields set during validation. Returns: A dictionary of extra fields, or `None` if `config.extra` is not set to `"allow"`. """ return self.__pydantic_extra__ @property def model_fields_set(self) -> set[str]: """Returns the set of fields that have been set on this model instance. Returns: A set of strings representing the fields that have been set, i.e. that were not filled from defaults. """ return self.__pydantic_fields_set__ @classmethod def model_construct(cls: type[Model], _fields_set: set[str] | None = None, **values: Any) -> Model: """Creates a new instance of the `Model` class with validated data. Creates a new model setting `__dict__` and `__pydantic_fields_set__` from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if `Config.extra = 'allow'` was set since it adds all passed values Args: _fields_set: The set of field names accepted for the Model instance. values: Trusted or pre-validated data dictionary. Returns: A new instance of the `Model` class with validated data. 
""" m = cls.__new__(cls) fields_values: dict[str, Any] = {} defaults: dict[str, Any] = {} # keeping this separate from `fields_values` helps us compute `_fields_set` for name, field in cls.model_fields.items(): if field.alias and field.alias in values: fields_values[name] = values.pop(field.alias) elif name in values: fields_values[name] = values.pop(name) elif not field.is_required(): defaults[name] = field.get_default(call_default_factory=True) if _fields_set is None: _fields_set = set(fields_values.keys()) fields_values.update(defaults) _extra: dict[str, Any] | None = None if cls.model_config.get('extra') == 'allow': _extra = {} for k, v in values.items(): _extra[k] = v else: fields_values.update(values) _object_setattr(m, '__dict__', fields_values) _object_setattr(m, '__pydantic_fields_set__', _fields_set) if not cls.__pydantic_root_model__: _object_setattr(m, '__pydantic_extra__', _extra) if cls.__pydantic_post_init__: m.model_post_init(None) elif not cls.__pydantic_root_model__: # Note: if there are any private attributes, cls.__pydantic_post_init__ would exist # Since it doesn't, that means that `__pydantic_private__` should be set to None _object_setattr(m, '__pydantic_private__', None) return m def model_copy(self: Model, *, update: dict[str, Any] | None = None, deep: bool = False) -> Model: """Returns a copy of the model. Args: update: Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data. deep: Set to `True` to make a deep copy of the model. Returns: New model instance. """ copied = self.__deepcopy__() if deep else self.__copy__() if update: if self.model_config.get('extra') == 'allow': for k, v in update.items(): if k in self.model_fields: copied.__dict__[k] = v else: if copied.__pydantic_extra__ is None: copied.__pydantic_extra__ = {} copied.__pydantic_extra__[k] = v else: copied.__dict__.update(update) copied.__pydantic_fields_set__.update(update.keys()) return copied def model_dump( self, *, mode: Literal['json', 'python'] | str = 'python', include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True, ) -> dict[str, Any]: """Usage docs: https://docs.pydantic.dev/dev-v2/usage/serialization/#modelmodel_dump Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. Args: mode: The mode in which `to_python` should run. If mode is 'json', the dictionary will only contain JSON serializable types. If mode is 'python', the dictionary may contain any Python objects. include: A list of fields to include in the output. exclude: A list of fields to exclude from the output. by_alias: Whether to use the field's alias in the dictionary key if defined. exclude_unset: Whether to exclude fields that are unset or None from the output. exclude_defaults: Whether to exclude fields that are set to their default value from the output. exclude_none: Whether to exclude fields that have a value of `None` from the output. round_trip: Whether to enable serialization and deserialization round-trip support. warnings: Whether to log warnings when invalid fields are encountered. Returns: A dictionary representation of the model. 
""" return self.__pydantic_serializer__.to_python( self, mode=mode, by_alias=by_alias, include=include, exclude=exclude, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults, exclude_none=exclude_none, round_trip=round_trip, warnings=warnings, ) def model_dump_json( self, *, indent: int | None = None, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True, ) -> str: """Usage docs: https://docs.pydantic.dev/dev-v2/usage/serialization/#modelmodel_dump_json Generates a JSON representation of the model using Pydantic's `to_json` method. Args: indent: Indentation to use in the JSON output. If None is passed, the output will be compact. include: Field(s) to include in the JSON output. Can take either a string or set of strings. exclude: Field(s) to exclude from the JSON output. Can take either a string or set of strings. by_alias: Whether to serialize using field aliases. exclude_unset: Whether to exclude fields that have not been explicitly set. exclude_defaults: Whether to exclude fields that have the default value. exclude_none: Whether to exclude fields that have a value of `None`. round_trip: Whether to use serialization/deserialization between JSON and class instance. warnings: Whether to show any warnings that occurred during serialization. Returns: A JSON string representation of the model. """ return self.__pydantic_serializer__.to_json( self, indent=indent, include=include, exclude=exclude, by_alias=by_alias, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults, exclude_none=exclude_none, round_trip=round_trip, warnings=warnings, ).decode() @classmethod def model_json_schema( cls, by_alias: bool = True, ref_template: str = DEFAULT_REF_TEMPLATE, schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: JsonSchemaMode = 'validation', ) -> dict[str, Any]: """Generates a JSON schema for a model class. Args: by_alias: Whether to use attribute aliases or not. ref_template: The reference template. schema_generator: To override the logic used to generate the JSON schema, ass a subclass of `GenerateJsonSchema` with your desired modifications mode: The mode in which to generate the schema. Returns: The JSON schema for the given model class. """ return model_json_schema( cls, by_alias=by_alias, ref_template=ref_template, schema_generator=schema_generator, mode=mode ) @classmethod def model_parametrized_name(cls, params: tuple[type[Any], ...]) -> str: """Compute the class name for parametrizations of generic classes. This method can be overridden to achieve a custom naming scheme for generic BaseModels. Args: params: Tuple of types of the class. Given a generic class `Model` with 2 type variables and a concrete model `Model[str, int]`, the value `(str, int)` would be passed to `params`. Returns: String representing the new class where `params` are passed to `cls` as type variables. Raises: TypeError: Raised when trying to generate concrete names for non-generic models. """ if not issubclass(cls, typing.Generic): # type: ignore[arg-type] raise TypeError('Concrete names should only be generated for generic models.') # Any strings received should represent forward references, so we handle them specially below. # If we eventually move toward wrapping them in a ForwardRef in __class_getitem__ in the future, # we may be able to remove this special case. 
param_names = [param if isinstance(param, str) else _repr.display_as_type(param) for param in params] params_component = ', '.join(param_names) return f'{cls.__name__}[{params_component}]' def model_post_init(self, __context: Any) -> None: """Override this method to perform additional initialization after `__init__` and `model_construct`. This is useful if you want to do some validation that requires the entire model to be initialized. """ pass @classmethod def model_rebuild( cls, *, force: bool = False, raise_errors: bool = True, _parent_namespace_depth: int = 2, _types_namespace: dict[str, Any] | None = None, ) -> bool | None: """Try to rebuild the pydantic-core schema for the model. This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails. Args: force: Whether to force the rebuilding of the model schema, defaults to `False`. raise_errors: Whether to raise errors, defaults to `True`. _parent_namespace_depth: The depth level of the parent namespace, defaults to 2. _types_namespace: The types namespace, defaults to `None`. Returns: Returns `None` if the schema is already "complete" and rebuilding was not required. If rebuilding _was_ required, returns `True` if rebuilding was successful, otherwise `False`. """ if not force and cls.__pydantic_complete__: return None else: if '__pydantic_core_schema__' in cls.__dict__: delattr(cls, '__pydantic_core_schema__') # delete cached value to ensure full rebuild happens if _types_namespace is not None: types_namespace: dict[str, Any] | None = _types_namespace.copy() else: if _parent_namespace_depth > 0: frame_parent_ns = _typing_extra.parent_frame_namespace(parent_depth=_parent_namespace_depth) or {} cls_parent_ns = ( _model_construction.unpack_lenient_weakvaluedict(cls.__pydantic_parent_namespace__) or {} ) types_namespace = {**cls_parent_ns, **frame_parent_ns} cls.__pydantic_parent_namespace__ = _model_construction.build_lenient_weakvaluedict(types_namespace) else: types_namespace = _model_construction.unpack_lenient_weakvaluedict( cls.__pydantic_parent_namespace__ ) types_namespace = _typing_extra.get_cls_types_namespace(cls, types_namespace) # manually override defer_build so complete_model_class doesn't skip building the model again config = {**cls.model_config, 'defer_build': False} return _model_construction.complete_model_class( cls, cls.__name__, _config.ConfigWrapper(config, check=False), raise_errors=raise_errors, types_namespace=types_namespace, ) @classmethod def model_validate( cls: type[Model], obj: Any, *, strict: bool | None = None, from_attributes: bool | None = None, context: dict[str, Any] | None = None, ) -> Model: """Validate a pydantic model instance. Args: obj: The object to validate. strict: Whether to raise an exception on invalid fields. from_attributes: Whether to extract data from object attributes. context: Additional context to pass to the validator. Raises: ValidationError: If the object could not be validated. Returns: The validated model instance. 
""" # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks __tracebackhide__ = True return cls.__pydantic_validator__.validate_python( obj, strict=strict, from_attributes=from_attributes, context=context ) @classmethod def model_validate_json( cls: type[Model], json_data: str | bytes | bytearray, *, strict: bool | None = None, context: dict[str, Any] | None = None, ) -> Model: """Validate the given JSON data against the Pydantic model. Args: json_data: The JSON data to validate. strict: Whether to enforce types strictly. context: Extra variables to pass to the validator. Returns: The validated Pydantic model. Raises: ValueError: If `json_data` is not a JSON string. """ # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks __tracebackhide__ = True return cls.__pydantic_validator__.validate_json(json_data, strict=strict, context=context) @classmethod def __get_pydantic_core_schema__( cls, __source: type[BaseModel], __handler: _annotated_handlers.GetCoreSchemaHandler ) -> CoreSchema: """Hook into generating the model's CoreSchema. Args: __source: The class we are generating a schema for. This will generally be the same as the `cls` argument if this is a classmethod. __handler: Call into Pydantic's internal JSON schema generation. A callable that calls into Pydantic's internal CoreSchema generation logic. Returns: A `pydantic-core` `CoreSchema`. """ # Only use the cached value from this _exact_ class; we don't want one from a parent class # This is why we check `cls.__dict__` and don't use `cls.__pydantic_core_schema__` or similar. if '__pydantic_core_schema__' in cls.__dict__: # Due to the way generic classes are built, it's possible that an invalid schema may be temporarily # set on generic classes. I think we could resolve this to ensure that we get proper schema caching # for generics, but for simplicity for now, we just always rebuild if the class has a generic origin. if not cls.__pydantic_generic_metadata__['origin']: return cls.__pydantic_core_schema__ return __handler(__source) @classmethod def __get_pydantic_json_schema__( cls, __core_schema: CoreSchema, __handler: _annotated_handlers.GetJsonSchemaHandler, ) -> JsonSchemaValue: """Hook into generating the model's JSON schema. Args: __core_schema: A `pydantic-core` CoreSchema. You can ignore this argument and call the handler with a new CoreSchema, wrap this CoreSchema (`{'type': 'nullable', 'schema': current_schema}`), or just call the handler with the original schema. __handler: Call into Pydantic's internal JSON schema generation. This will raise a `pydantic.errors.PydanticInvalidForJsonSchema` if JSON schema generation fails. Since this gets called by `BaseModel.model_json_schema` you can override the `schema_generator` argument to that function to change JSON schema generation globally for a type. Returns: A JSON schema, as a Python object. """ return __handler(__core_schema) @classmethod def __pydantic_init_subclass__(cls, **kwargs: Any) -> None: """This is intended to behave just like `__init_subclass__`, but is called by `ModelMetaclass` only after the class is actually fully initialized. In particular, attributes like `model_fields` will be present when this is called. This is necessary because `__init_subclass__` will always be called by `type.__new__`, and it would require a prohibitively large refactor to the `ModelMetaclass` to ensure that `type.__new__` was called in such a manner that the class would already be sufficiently initialized. 
This will receive the same `kwargs` that would be passed to the standard `__init_subclass__`, namely, any kwargs passed to the class definition that aren't used internally by pydantic. Args: **kwargs: Any keyword arguments passed to the class definition that aren't used internally by pydantic. """ pass def __class_getitem__( cls, typevar_values: type[Any] | tuple[type[Any], ...] ) -> type[BaseModel] | _forward_ref.PydanticRecursiveRef: cached = _generics.get_cached_generic_type_early(cls, typevar_values) if cached is not None: return cached if cls is BaseModel: raise TypeError('Type parameters should be placed on typing.Generic, not BaseModel') if not hasattr(cls, '__parameters__'): raise TypeError(f'{cls} cannot be parametrized because it does not inherit from typing.Generic') if not cls.__pydantic_generic_metadata__['parameters'] and typing.Generic not in cls.__bases__: raise TypeError(f'{cls} is not a generic class') if not isinstance(typevar_values, tuple): typevar_values = (typevar_values,) _generics.check_parameters_count(cls, typevar_values) # Build map from generic typevars to passed params typevars_map: dict[_typing_extra.TypeVarType, type[Any]] = dict( zip(cls.__pydantic_generic_metadata__['parameters'], typevar_values) ) if _utils.all_identical(typevars_map.keys(), typevars_map.values()) and typevars_map: submodel = cls # if arguments are equal to parameters it's the same object _generics.set_cached_generic_type(cls, typevar_values, submodel) else: parent_args = cls.__pydantic_generic_metadata__['args'] if not parent_args: args = typevar_values else: args = tuple(_generics.replace_types(arg, typevars_map) for arg in parent_args) origin = cls.__pydantic_generic_metadata__['origin'] or cls model_name = origin.model_parametrized_name(args) params = tuple( {param: None for param in _generics.iter_contained_typevars(typevars_map.values())} ) # use dict as ordered set with _generics.generic_recursion_self_type(origin, args) as maybe_self_type: if maybe_self_type is not None: return maybe_self_type cached = _generics.get_cached_generic_type_late(cls, typevar_values, origin, args) if cached is not None: return cached # Attempt to rebuild the origin in case new types have been defined try: # depth 3 gets you above this __class_getitem__ call origin.model_rebuild(_parent_namespace_depth=3) except PydanticUndefinedAnnotation: # It's okay if it fails, it just means there are still undefined types # that could be evaluated later. 
# TODO: Make sure validation fails if there are still undefined types, perhaps using MockValidator pass submodel = _generics.create_generic_submodel(model_name, origin, args, params) # Update cache _generics.set_cached_generic_type(cls, typevar_values, submodel, origin, args) return submodel def __copy__(self: Model) -> Model: """Returns a shallow copy of the model.""" cls = type(self) m = cls.__new__(cls) _object_setattr(m, '__dict__', copy(self.__dict__)) _object_setattr(m, '__pydantic_extra__', copy(self.__pydantic_extra__)) _object_setattr(m, '__pydantic_fields_set__', copy(self.__pydantic_fields_set__)) if self.__pydantic_private__ is None: _object_setattr(m, '__pydantic_private__', None) else: _object_setattr( m, '__pydantic_private__', {k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined}, ) return m def __deepcopy__(self: Model, memo: dict[int, Any] | None = None) -> Model: """Returns a deep copy of the model.""" cls = type(self) m = cls.__new__(cls) _object_setattr(m, '__dict__', deepcopy(self.__dict__, memo=memo)) _object_setattr(m, '__pydantic_extra__', deepcopy(self.__pydantic_extra__, memo=memo)) # This next line doesn't need a deepcopy because __pydantic_fields_set__ is a set[str], # and attempting a deepcopy would be marginally slower. _object_setattr(m, '__pydantic_fields_set__', copy(self.__pydantic_fields_set__)) if self.__pydantic_private__ is None: _object_setattr(m, '__pydantic_private__', None) else: _object_setattr( m, '__pydantic_private__', deepcopy({k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined}, memo=memo), ) return m if not typing.TYPE_CHECKING: # We put `__getattr__` in a non-TYPE_CHECKING block because otherwise, mypy allows arbitrary attribute access def __getattr__(self, item: str) -> Any: private_attributes = object.__getattribute__(self, '__private_attributes__') if item in private_attributes: attribute = private_attributes[item] if hasattr(attribute, '__get__'): return attribute.__get__(self, type(self)) # type: ignore try: # Note: self.__pydantic_private__ cannot be None if self.__private_attributes__ has items return self.__pydantic_private__[item] # type: ignore except KeyError as exc: raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') from exc else: pydantic_extra = object.__getattribute__(self, '__pydantic_extra__') if pydantic_extra is not None: try: return pydantic_extra[item] except KeyError as exc: raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') from exc else: if hasattr(self.__class__, item): return super().__getattribute__(item) # Raises AttributeError if appropriate else: # this is the current error raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') def __setattr__(self, name: str, value: Any) -> None: if name in self.__class_vars__: raise AttributeError( f'{name!r} is a ClassVar of `{self.__class__.__name__}` and cannot be set on an instance. ' f'If you want to set a value on the class, use `{self.__class__.__name__}.{name} = value`.' 
) elif not _fields.is_valid_field_name(name): if self.__pydantic_private__ is None or name not in self.__private_attributes__: _object_setattr(self, name, value) else: attribute = self.__private_attributes__[name] if hasattr(attribute, '__set__'): attribute.__set__(self, value) # type: ignore else: self.__pydantic_private__[name] = value return elif self.model_config.get('frozen', None): error: pydantic_core.InitErrorDetails = { 'type': 'frozen_instance', 'loc': (name,), 'input': value, } raise pydantic_core.ValidationError.from_exception_data(self.__class__.__name__, [error]) attr = getattr(self.__class__, name, None) if isinstance(attr, property): attr.__set__(self, value) elif self.model_config.get('validate_assignment', None): self.__pydantic_validator__.validate_assignment(self, name, value) elif self.model_config.get('extra') != 'allow' and name not in self.model_fields: # TODO - matching error raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"') elif self.model_config.get('extra') == 'allow' and name not in self.model_fields: # SAFETY: __pydantic_extra__ is not None when extra = 'allow' self.__pydantic_extra__[name] = value # type: ignore else: self.__dict__[name] = value self.__pydantic_fields_set__.add(name) def __delattr__(self, item: str) -> Any: if item in self.__private_attributes__: attribute = self.__private_attributes__[item] if hasattr(attribute, '__delete__'): attribute.__delete__(self) # type: ignore return try: # Note: self.__pydantic_private__ cannot be None if self.__private_attributes__ has items del self.__pydantic_private__[item] # type: ignore except KeyError as exc: raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') from exc elif item in self.model_fields: object.__delattr__(self, item) elif self.__pydantic_extra__ is not None and item in self.__pydantic_extra__: del self.__pydantic_extra__[item] else: try: object.__delattr__(self, item) except AttributeError: raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}') def __getstate__(self) -> dict[Any, Any]: private = self.__pydantic_private__ if private: private = {k: v for k, v in private.items() if v is not PydanticUndefined} return { '__dict__': self.__dict__, '__pydantic_extra__': self.__pydantic_extra__, '__pydantic_fields_set__': self.__pydantic_fields_set__, '__pydantic_private__': private, } def __setstate__(self, state: dict[Any, Any]) -> None: _object_setattr(self, '__pydantic_fields_set__', state['__pydantic_fields_set__']) _object_setattr(self, '__pydantic_extra__', state['__pydantic_extra__']) _object_setattr(self, '__pydantic_private__', state['__pydantic_private__']) _object_setattr(self, '__dict__', state['__dict__']) def __eq__(self, other: Any) -> bool: if isinstance(other, BaseModel): # When comparing instances of generic types for equality, as long as all field values are equal, # only require their generic origin types to be equal, rather than exact type equality. # This prevents headaches like MyGeneric(x=1) != MyGeneric[Any](x=1). 
self_type = self.__pydantic_generic_metadata__['origin'] or self.__class__ other_type = other.__pydantic_generic_metadata__['origin'] or other.__class__ return ( self_type == other_type and self.__dict__ == other.__dict__ and self.__pydantic_private__ == other.__pydantic_private__ and self.__pydantic_extra__ == other.__pydantic_extra__ ) else: return NotImplemented # delegate to the other item in the comparison if typing.TYPE_CHECKING: # We put `__init_subclass__` in a TYPE_CHECKING block because, even though we want the type-checking benefits # described in the signature of `__init_subclass__` below, we don't want to modify the default behavior of # subclass initialization. def __init_subclass__(cls, **kwargs: Unpack[ConfigDict]): """This signature is included purely to help type-checkers check arguments to class declaration, which provides a way to conveniently set model_config key/value pairs. ```py from pydantic import BaseModel class MyModel(BaseModel, extra='allow'): ... ``` However, this may be deceiving, since the _actual_ calls to `__init_subclass__` will not receive any of the config arguments, and will only receive any keyword arguments passed during class initialization that are _not_ expected keys in ConfigDict. (This is due to the way `ModelMetaclass.__new__` works.) Args: **kwargs: Keyword arguments passed to the class definition, which set model_config Note: You may want to override `__pydantic_init_subclass__` instead, which behaves similarly but is called *after* the class is fully initialized. """ def __iter__(self) -> TupleGenerator: """So `dict(model)` works.""" yield from self.__dict__.items() extra = self.__pydantic_extra__ if extra: yield from extra.items() def __repr__(self) -> str: return f'{self.__repr_name__()}({self.__repr_str__(", ")})' def __repr_args__(self) -> _repr.ReprArgs: for k, v in self.__dict__.items(): field = self.model_fields.get(k) if field and field.repr: yield k, v pydantic_extra = self.__pydantic_extra__ if pydantic_extra is not None: yield from ((k, v) for k, v in pydantic_extra.items()) yield from ((k, getattr(self, k)) for k, v in self.model_computed_fields.items() if v.repr) # take logic from `_repr.Representation` without the side effects of inheritance, see #5740 __repr_name__ = _repr.Representation.__repr_name__ __repr_str__ = _repr.Representation.__repr_str__ __pretty__ = _repr.Representation.__pretty__ __rich_repr__ = _repr.Representation.__rich_repr__ def __str__(self) -> str: return self.__repr_str__(' ') # ##### Deprecated methods from v1 ##### @property @typing_extensions.deprecated( 'The `__fields__` attribute is deprecated, use `model_fields` instead.', category=PydanticDeprecatedSince20 ) def __fields__(self) -> dict[str, FieldInfo]: warnings.warn('The `__fields__` attribute is deprecated, use `model_fields` instead.', DeprecationWarning) return self.model_fields @property @typing_extensions.deprecated( 'The `__fields_set__` attribute is deprecated, use `model_fields_set` instead.', category=PydanticDeprecatedSince20, ) def __fields_set__(self) -> set[str]: warnings.warn( 'The `__fields_set__` attribute is deprecated, use `model_fields_set` instead.', DeprecationWarning ) return self.__pydantic_fields_set__ @typing_extensions.deprecated( 'The `dict` method is deprecated; use `model_dump` instead.', category=PydanticDeprecatedSince20 ) def dict( # noqa: D102 self, *, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, ) -> 
typing.Dict[str, Any]: # noqa UP006 warnings.warn('The `dict` method is deprecated; use `model_dump` instead.', DeprecationWarning) return self.model_dump( include=include, exclude=exclude, by_alias=by_alias, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults, exclude_none=exclude_none, ) @typing_extensions.deprecated( 'The `json` method is deprecated; use `model_dump_json` instead.', category=PydanticDeprecatedSince20 ) def json( # noqa: D102 self, *, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: typing.Callable[[Any], Any] | None = PydanticUndefined, # type: ignore[assignment] models_as_dict: bool = PydanticUndefined, # type: ignore[assignment] **dumps_kwargs: Any, ) -> str: warnings.warn('The `json` method is deprecated; use `model_dump_json` instead.', DeprecationWarning) if encoder is not PydanticUndefined: raise TypeError('The `encoder` argument is no longer supported; use field serializers instead.') if models_as_dict is not PydanticUndefined: raise TypeError('The `models_as_dict` argument is no longer supported; use a model serializer instead.') if dumps_kwargs: raise TypeError('`dumps_kwargs` keyword arguments are no longer supported.') return self.model_dump_json( include=include, exclude=exclude, by_alias=by_alias, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults, exclude_none=exclude_none, ) @classmethod @typing_extensions.deprecated( 'The `parse_obj` method is deprecated; use `model_validate` instead.', category=PydanticDeprecatedSince20 ) def parse_obj(cls: type[Model], obj: Any) -> Model: # noqa: D102 warnings.warn('The `parse_obj` method is deprecated; use `model_validate` instead.', DeprecationWarning) return cls.model_validate(obj) @classmethod @typing_extensions.deprecated( 'The `parse_raw` method is deprecated; if your data is JSON use `model_validate_json`, ' 'otherwise load the data then use `model_validate` instead.', category=PydanticDeprecatedSince20, ) def parse_raw( # noqa: D102 cls: type[Model], b: str | bytes, *, content_type: str | None = None, encoding: str = 'utf8', proto: _deprecated_parse.Protocol | None = None, allow_pickle: bool = False, ) -> Model: # pragma: no cover warnings.warn( 'The `parse_raw` method is deprecated; if your data is JSON use `model_validate_json`, ' 'otherwise load the data then use `model_validate` instead.', DeprecationWarning, ) try: obj = _deprecated_parse.load_str_bytes( b, proto=proto, content_type=content_type, encoding=encoding, allow_pickle=allow_pickle, ) except (ValueError, TypeError) as exc: import json # try to match V1 if isinstance(exc, UnicodeDecodeError): type_str = 'value_error.unicodedecode' elif isinstance(exc, json.JSONDecodeError): type_str = 'value_error.jsondecode' elif isinstance(exc, ValueError): type_str = 'value_error' else: type_str = 'type_error' # ctx is missing here, but since we've added `input` to the error, we're not pretending it's the same error: pydantic_core.InitErrorDetails = { # The type: ignore on the next line is to ignore the requirement of LiteralString 'type': pydantic_core.PydanticCustomError(type_str, str(exc)), # type: ignore 'loc': ('__root__',), 'input': b, } raise pydantic_core.ValidationError.from_exception_data(cls.__name__, [error]) return cls.model_validate(obj) @classmethod @typing_extensions.deprecated( 'The `parse_file` method is deprecated; load the data from file, then if your data is JSON ' 'use `model_validate_json`, otherwise 
`model_validate` instead.', category=PydanticDeprecatedSince20, ) def parse_file( # noqa: D102 cls: type[Model], path: str | Path, *, content_type: str | None = None, encoding: str = 'utf8', proto: _deprecated_parse.Protocol | None = None, allow_pickle: bool = False, ) -> Model: warnings.warn( 'The `parse_file` method is deprecated; load the data from file, then if your data is JSON ' 'use `model_validate_json` otherwise `model_validate` instead.', DeprecationWarning, ) obj = _deprecated_parse.load_file( path, proto=proto, content_type=content_type, encoding=encoding, allow_pickle=allow_pickle, ) return cls.parse_obj(obj) @classmethod @typing_extensions.deprecated( "The `from_orm` method is deprecated; set " "`model_config['from_attributes']=True` and use `model_validate` instead.", category=PydanticDeprecatedSince20, ) def from_orm(cls: type[Model], obj: Any) -> Model: # noqa: D102 warnings.warn( 'The `from_orm` method is deprecated; set `model_config["from_attributes"]=True` ' 'and use `model_validate` instead.', DeprecationWarning, ) if not cls.model_config.get('from_attributes', None): raise PydanticUserError( 'You must set the config attribute `from_attributes=True` to use from_orm', code=None ) return cls.model_validate(obj) @classmethod @typing_extensions.deprecated( 'The `construct` method is deprecated; use `model_construct` instead.', category=PydanticDeprecatedSince20 ) def construct(cls: type[Model], _fields_set: set[str] | None = None, **values: Any) -> Model: # noqa: D102 warnings.warn('The `construct` method is deprecated; use `model_construct` instead.', DeprecationWarning) return cls.model_construct(_fields_set=_fields_set, **values) @typing_extensions.deprecated( 'The copy method is deprecated; use `model_copy` instead.', category=PydanticDeprecatedSince20 ) def copy( self: Model, *, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, update: typing.Dict[str, Any] | None = None, # noqa UP006 deep: bool = False, ) -> Model: # pragma: no cover """Returns a copy of the model. !!! warning "Deprecated" This method is now deprecated; use `model_copy` instead. If you need `include` or `exclude`, use: ```py data = self.model_dump(include=include, exclude=exclude, round_trip=True) data = {**data, **(update or {})} copied = self.model_validate(data) ``` Args: include: Optional set or mapping specifying which fields to include in the copied model. exclude: Optional set or mapping specifying which fields to exclude in the copied model. update: Optional dictionary of field-value pairs to override field values in the copied model. deep: If True, the values of fields that are Pydantic models will be deep copied. Returns: A copy of the model with included, excluded and updated fields as specified. """ warnings.warn( 'The `copy` method is deprecated; use `model_copy` instead. 
' 'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.', DeprecationWarning, ) values = dict( _deprecated_copy_internals._iter( # type: ignore self, to_dict=False, by_alias=False, include=include, exclude=exclude, exclude_unset=False ), **(update or {}), ) if self.__pydantic_private__ is None: private = None else: private = {k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined} if self.__pydantic_extra__ is None: extra: dict[str, Any] | None = None else: extra = self.__pydantic_extra__.copy() for k in list(self.__pydantic_extra__): if k not in values: # k was in the exclude extra.pop(k) for k in list(values): if k in self.__pydantic_extra__: # k must have come from extra extra[k] = values.pop(k) # new `__pydantic_fields_set__` can have unset optional fields with a set value in `update` kwarg if update: fields_set = self.__pydantic_fields_set__ | update.keys() else: fields_set = set(self.__pydantic_fields_set__) # removing excluded fields from `__pydantic_fields_set__` if exclude: fields_set -= set(exclude) return _deprecated_copy_internals._copy_and_set_values(self, values, fields_set, extra, private, deep=deep) @classmethod @typing_extensions.deprecated( 'The `schema` method is deprecated; use `model_json_schema` instead.', category=PydanticDeprecatedSince20 ) def schema( # noqa: D102 cls, by_alias: bool = True, ref_template: str = DEFAULT_REF_TEMPLATE ) -> typing.Dict[str, Any]: # noqa UP006 warnings.warn('The `schema` method is deprecated; use `model_json_schema` instead.', DeprecationWarning) return cls.model_json_schema(by_alias=by_alias, ref_template=ref_template) @classmethod @typing_extensions.deprecated( 'The `schema_json` method is deprecated; use `model_json_schema` and json.dumps instead.', category=PydanticDeprecatedSince20, ) def schema_json( # noqa: D102 cls, *, by_alias: bool = True, ref_template: str = DEFAULT_REF_TEMPLATE, **dumps_kwargs: Any ) -> str: # pragma: no cover import json warnings.warn( 'The `schema_json` method is deprecated; use `model_json_schema` and json.dumps instead.', DeprecationWarning, ) from .deprecated.json import pydantic_encoder return json.dumps( cls.model_json_schema(by_alias=by_alias, ref_template=ref_template), default=pydantic_encoder, **dumps_kwargs, ) @classmethod @typing_extensions.deprecated( 'The `validate` method is deprecated; use `model_validate` instead.', category=PydanticDeprecatedSince20 ) def validate(cls: type[Model], value: Any) -> Model: # noqa: D102 warnings.warn('The `validate` method is deprecated; use `model_validate` instead.', DeprecationWarning) return cls.model_validate(value) @classmethod @typing_extensions.deprecated( 'The `update_forward_refs` method is deprecated; use `model_rebuild` instead.', category=PydanticDeprecatedSince20, ) def update_forward_refs(cls, **localns: Any) -> None: # noqa: D102 warnings.warn( 'The `update_forward_refs` method is deprecated; use `model_rebuild` instead.', DeprecationWarning ) if localns: # pragma: no cover raise TypeError('`localns` arguments are no longer accepted.') cls.model_rebuild(force=True) @typing_extensions.deprecated( 'The private method `_iter` will be removed and should no longer be used.', category=PydanticDeprecatedSince20 ) def _iter(self, *args: Any, **kwargs: Any) -> Any: warnings.warn('The private method `_iter` will be removed and should no longer be used.', DeprecationWarning) return _deprecated_copy_internals._iter(self, *args, **kwargs) # type: ignore @typing_extensions.deprecated( 'The
private method `_copy_and_set_values` will be removed and should no longer be used.', category=PydanticDeprecatedSince20, ) def _copy_and_set_values(self, *args: Any, **kwargs: Any) -> Any: warnings.warn( 'The private method `_copy_and_set_values` will be removed and should no longer be used.', DeprecationWarning, ) return _deprecated_copy_internals._copy_and_set_values(self, *args, **kwargs) # type: ignore @classmethod @typing_extensions.deprecated( 'The private method `_get_value` will be removed and should no longer be used.', category=PydanticDeprecatedSince20, ) def _get_value(cls, *args: Any, **kwargs: Any) -> Any: warnings.warn( 'The private method `_get_value` will be removed and should no longer be used.', DeprecationWarning ) return _deprecated_copy_internals._get_value(cls, *args, **kwargs) # type: ignore @typing_extensions.deprecated( 'The private method `_calculate_keys` will be removed and should no longer be used.', category=PydanticDeprecatedSince20, ) def _calculate_keys(self, *args: Any, **kwargs: Any) -> Any: warnings.warn( 'The private method `_calculate_keys` will be removed and should no longer be used.', DeprecationWarning ) return _deprecated_copy_internals._calculate_keys(self, *args, **kwargs) # type: ignore
(**data: 'Any') -> 'None'
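A minimal sketch of the validation entry points documented above (`model_validate`, `model_validate_json`); it assumes pydantic v2 is installed, and the `User` model is purely illustrative:

from pydantic import BaseModel, ValidationError


class User(BaseModel):
    id: int
    name: str = "anonymous"


# model_validate takes Python objects; model_validate_json takes raw JSON
user = User.model_validate({"id": 1, "name": "Ada"})
same_user = User.model_validate_json('{"id": 1, "name": "Ada"}')
assert user == same_user  # __eq__ compares type, __dict__, extras, and privates

try:
    User.model_validate({"id": "not-an-int"})
except ValidationError as exc:
    print(exc)  # reports the failing field and the offending input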
726,041
penne.delegates
BoundingBox
Axis-aligned bounding box Attributes: min (Vec3): Minimum point of bounding box max (Vec3): Maximum point of bounding box
class BoundingBox(NoodleObject): """Axis-aligned bounding box Attributes: min (Vec3): Minimum point of bounding box max (Vec3): Maximum point of bounding box """ min: Vec3 max: Vec3
(*, min: List[float], max: List[float], **extra_data: Any) -> None
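A minimal usage sketch, assuming penne is importable; per the signature above, the Vec3 fields validate from lists of floats:

from penne.delegates import BoundingBox

# min and max are each validated as a Vec3 (three floats)
box = BoundingBox(min=[0.0, 0.0, 0.0], max=[1.0, 2.0, 3.0])
print(box.min, box.max)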
726,069
penne.delegates
Buffer
A buffer of bytes containing data for an image or a mesh. Bytes can be stored directly in the buffer with inline_bytes, or they can be referenced by a URI with uri_bytes. The server should host the referenced bytes separately, and the ByteServer class provides support for this. To obtain these bytes, clients have to make an HTTP request to the URI. A buffer can store a single attribute, or it can store multiple attributes interleaved together; buffer views specify how to interpret the buffer. Attributes: id: ID for the buffer name: Name of the buffer size: Size of the buffer in bytes inline_bytes: Bytes of the buffer uri_bytes: URI for the bytes
class Buffer(Delegate): """A buffer of bytes containing data for an image or a mesh. Bytes can be stored directly in the buffer with inline_bytes, or they can be referenced by a URI with uri_bytes. The server should host the referenced bytes separately, and the ByteServer class provides support for this. To obtain these bytes, clients have to make an HTTP request to the URI. A buffer can store a single attribute, or it can store multiple attributes interleaved together; buffer views specify how to interpret the buffer. Attributes: id: ID for the buffer name: Name of the buffer size: Size of the buffer in bytes inline_bytes: Bytes of the buffer uri_bytes: URI for the bytes """ id: BufferID name: Optional[str] = "Unnamed Buffer Delegate" size: int inline_bytes: Optional[bytes] = None uri_bytes: Optional[str] = None @model_validator(mode="after") def one_of(cls, model): # Exactly one byte source may be set: inline_bytes XOR uri_bytes if bool(model.inline_bytes) != bool(model.uri_bytes): return model else: raise ValueError("Exactly one of inline_bytes or uri_bytes must be specified")
(*, client: object = None, id: penne.delegates.BufferID, name: Optional[str] = 'Unnamed Buffer Delegate', signals: Optional[dict] = {}, size: int, inline_bytes: Optional[bytes] = None, uri_bytes: Optional[str] = None, **extra_data: Any) -> None
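A sketch of the one_of constraint above: exactly one byte source (inline_bytes or uri_bytes) may be set. The ID and size values are placeholders:

from penne.delegates import Buffer, BufferID

# Valid: bytes supplied inline
buffer = Buffer(id=BufferID(slot=0, gen=0), size=4, inline_bytes=b"\x00\x01\x02\x03")

# Invalid: neither inline_bytes nor uri_bytes is set, so validation raises
try:
    Buffer(id=BufferID(slot=1, gen=0), size=4)
except ValueError as exc:  # pydantic's ValidationError subclasses ValueError
    print(exc)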
726,086
penne.delegates
__str__
null
def __str__(self): return f"{self.name} - {type(self).__name__} - {self.id.compact_str()}"
(self)
726,097
penne.delegates
on_new
null
def on_new(self, message: dict): pass
(self, message: dict)
726,098
penne.delegates
on_remove
null
def on_remove(self, message: dict): pass
(self, message: dict)
726,099
penne.delegates
on_update
null
def on_update(self, message: dict): pass
(self, message: dict)
726,100
penne.delegates
BufferID
ID specific to buffers
class BufferID(ID): """ID specific to buffers""" pass
(slot: int, gen: int)
726,101
penne.delegates
__key
null
def __key(self): return type(self), self.slot, self.gen
(self)
726,102
penne.delegates
__eq__
null
def __eq__(self, other: object) -> bool: if type(other) is type(self): return self.__key() == other.__key() else: return False
(self, other: object) -> bool
726,104
penne.delegates
__hash__
null
def __hash__(self): return hash(self.__key())
(self)
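Together, __key, __eq__, and __hash__ give IDs value semantics keyed on the concrete type plus slot and gen; a quick sketch (BufferViewID is defined later in this section, and the slot/gen values are arbitrary):

from penne.delegates import BufferID, BufferViewID

a = BufferID(slot=0, gen=0)
b = BufferID(slot=0, gen=0)
c = BufferViewID(slot=0, gen=0)

assert a == b and hash(a) == hash(b)  # same type, slot, and gen
assert not (a == c)                   # same slot/gen but a different ID type
print(a)                              # e.g. BufferID|0/0| via __str__ and compact_str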
726,106
namedtuple_ID
__new__
Create new instance of ID(slot, gen)
from builtins import function
(_cls, slot: ForwardRef('int'), gen: ForwardRef('int'))
726,108
penne.delegates
__str__
null
def __str__(self): return f"{type(self).__name__}{self.compact_str()}"
(self)
726,110
collections
_replace
Return a new ID object replacing specified fields with new values
def namedtuple(typename, field_names, *, rename=False, defaults=None, module=None): """Returns a new subclass of tuple with named fields. >>> Point = namedtuple('Point', ['x', 'y']) >>> Point.__doc__ # docstring for the new class 'Point(x, y)' >>> p = Point(11, y=22) # instantiate with positional args or keywords >>> p[0] + p[1] # indexable like a plain tuple 33 >>> x, y = p # unpack like a regular tuple >>> x, y (11, 22) >>> p.x + p.y # fields also accessible by name 33 >>> d = p._asdict() # convert to a dictionary >>> d['x'] 11 >>> Point(**d) # convert from a dictionary Point(x=11, y=22) >>> p._replace(x=100) # _replace() is like str.replace() but targets named fields Point(x=100, y=22) """ # Validate the field names. At the user's option, either generate an error # message or automatically replace the field name with a valid name. if isinstance(field_names, str): field_names = field_names.replace(',', ' ').split() field_names = list(map(str, field_names)) typename = _sys.intern(str(typename)) if rename: seen = set() for index, name in enumerate(field_names): if (not name.isidentifier() or _iskeyword(name) or name.startswith('_') or name in seen): field_names[index] = f'_{index}' seen.add(name) for name in [typename] + field_names: if type(name) is not str: raise TypeError('Type names and field names must be strings') if not name.isidentifier(): raise ValueError('Type names and field names must be valid ' f'identifiers: {name!r}') if _iskeyword(name): raise ValueError('Type names and field names cannot be a ' f'keyword: {name!r}') seen = set() for name in field_names: if name.startswith('_') and not rename: raise ValueError('Field names cannot start with an underscore: ' f'{name!r}') if name in seen: raise ValueError(f'Encountered duplicate field name: {name!r}') seen.add(name) field_defaults = {} if defaults is not None: defaults = tuple(defaults) if len(defaults) > len(field_names): raise TypeError('Got more default values than field names') field_defaults = dict(reversed(list(zip(reversed(field_names), reversed(defaults))))) # Variables used in the methods and docstrings field_names = tuple(map(_sys.intern, field_names)) num_fields = len(field_names) arg_list = ', '.join(field_names) if num_fields == 1: arg_list += ',' repr_fmt = '(' + ', '.join(f'{name}=%r' for name in field_names) + ')' tuple_new = tuple.__new__ _dict, _tuple, _len, _map, _zip = dict, tuple, len, map, zip # Create all the named tuple methods to be added to the class namespace namespace = { '_tuple_new': tuple_new, '__builtins__': {}, '__name__': f'namedtuple_{typename}', } code = f'lambda _cls, {arg_list}: _tuple_new(_cls, ({arg_list}))' __new__ = eval(code, namespace) __new__.__name__ = '__new__' __new__.__doc__ = f'Create new instance of {typename}({arg_list})' if defaults is not None: __new__.__defaults__ = defaults @classmethod def _make(cls, iterable): result = tuple_new(cls, iterable) if _len(result) != num_fields: raise TypeError(f'Expected {num_fields} arguments, got {len(result)}') return result _make.__func__.__doc__ = (f'Make a new {typename} object from a sequence ' 'or iterable') def _replace(self, /, **kwds): result = self._make(_map(kwds.pop, field_names, self)) if kwds: raise ValueError(f'Got unexpected field names: {list(kwds)!r}') return result _replace.__doc__ = (f'Return a new {typename} object replacing specified ' 'fields with new values') def __repr__(self): 'Return a nicely formatted representation string' return self.__class__.__name__ + repr_fmt % self def _asdict(self): 'Return a new dict 
which maps field names to their values.' return _dict(_zip(self._fields, self)) def __getnewargs__(self): 'Return self as a plain tuple. Used by copy and pickle.' return _tuple(self) # Modify function metadata to help with introspection and debugging for method in ( __new__, _make.__func__, _replace, __repr__, _asdict, __getnewargs__, ): method.__qualname__ = f'{typename}.{method.__name__}' # Build-up the class namespace dictionary # and use type() to build the result class class_namespace = { '__doc__': f'{typename}({arg_list})', '__slots__': (), '_fields': field_names, '_field_defaults': field_defaults, '__new__': __new__, '_make': _make, '_replace': _replace, '__repr__': __repr__, '_asdict': _asdict, '__getnewargs__': __getnewargs__, '__match_args__': field_names, } for index, name in enumerate(field_names): doc = _sys.intern(f'Alias for field number {index}') class_namespace[name] = _tuplegetter(index, doc) result = type(typename, (tuple,), class_namespace) # For pickling to work, the __module__ variable needs to be set to the frame # where the named tuple is created. Bypass this step in environments where # sys._getframe is not defined (Jython for example) or sys._getframe is not # defined for arguments greater than 0 (IronPython), or where the user has # specified a particular module. if module is None: try: module = _sys._getframe(1).f_globals.get('__name__', '__main__') except (AttributeError, ValueError): pass if module is not None: result.__module__ = module return result
(self, /, **kwds)
726,111
penne.delegates
compact_str
null
def compact_str(self): return f"|{self.slot}/{self.gen}|"
(self)
726,112
penne.delegates
BufferType
String indicating type of data stored in a buffer Used in BufferView. Takes value of either UNK, GEOMETRY, or IMAGE
class BufferType(str, Enum): """String indicating type of data stored in a buffer Used in BufferView. Takes value of either UNK, GEOMETRY, or IMAGE """ unknown = "UNK" geometry = "GEOMETRY" image = "IMAGE"
(value, names=None, *, module=None, qualname=None, type=None, start=1)
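Because BufferType mixes in str, its members compare equal to their raw string values, which is what lets the coerced strings from the validator below round-trip cleanly; a quick illustration:

from penne.delegates import BufferType

assert BufferType.geometry == "GEOMETRY"  # str mixin: member equals its value
assert BufferType("IMAGE") is BufferType.image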
726,113
penne.delegates
BufferView
A view into a buffer, specifying a subset of the buffer and how to interpret it. Attributes: id: ID for the buffer view name: Name of the buffer view source_buffer: Buffer that the view is referring to type: Type of the buffer view offset: Offset into the buffer in bytes length: Length of the buffer view in bytes
class BufferView(Delegate): """A view into a buffer, specifying a subset of the buffer and how to interpret it. Attributes: id: ID for the buffer view name: Name of the buffer view source_buffer: Buffer that the view is referring to type: Type of the buffer view offset: Offset into the buffer in bytes length: Length of the buffer view in bytes """ id: BufferViewID name: Optional[str] = "Unnamed Buffer-View Delegate" source_buffer: BufferID type: BufferType = BufferType.unknown offset: int length: int @field_validator("type", mode='before') def coerce_type(cls, value): if value in ["UNK", "GEOMETRY", "IMAGE"]: return value if "GEOMETRY" in value.upper(): logging.warning(f"Buffer View Type does not meet the specification: {value} coerced to 'GEOMETRY'") return "GEOMETRY" elif "IMAGE" in value.upper(): logging.warning(f"Buffer View Type does not meet the specification: {value} coerced to 'IMAGE'") return "IMAGE" else: logging.warning(f"Buffer View Type does not meet the specification: {value} coerced to 'UNK'") return "UNK"
(*, client: object = None, id: penne.delegates.BufferViewID, name: Optional[str] = 'Unnamed Buffer-View Delegate', signals: Optional[dict] = {}, source_buffer: penne.delegates.BufferID, type: penne.delegates.BufferType = <BufferType.unknown: 'UNK'>, offset: int, length: int, **extra_data: Any) -> None
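A sketch of the coerce_type validator above: a non-spec type string containing "IMAGE" is coerced to the IMAGE variant (with a logged warning). The IDs, offset, and length are placeholders:

from penne.delegates import BufferID, BufferType, BufferView, BufferViewID

view = BufferView(
    id=BufferViewID(slot=0, gen=0),
    source_buffer=BufferID(slot=0, gen=0),
    type="SOME_IMAGE_FORMAT",  # off-spec; coerced to "IMAGE" before enum validation
    offset=0,
    length=16,
)
assert view.type == BufferType.image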
726,144
penne.delegates
BufferViewID
ID specific to buffer views
class BufferViewID(ID): """ID specific to buffer views""" pass
(slot: int, gen: int)
726,156
penne.core
Client
Client for communicating with server Attributes: _url (string): address used to connect to server _loop (event loop): event loop used for network thread delegates (dict): map for delegate functions thread (thread object): network thread used by client _socket (WebSocketClientProtocol): socket to connect to server name (str): name of the client state (dict): dict keeping track of created objects client_message_map (dict): mapping message type to corresponding id server_messages (dict): mapping message id's to handle info _current_invoke (str): id for next method invoke callback_map (dict): mapping invoke_id to callback function callback_queue (queue): queue for storing callback functions, useful for polling and running in the main thread is_active (bool): flag for whether client is active
class Client(object): """Client for communicating with server Attributes: _url (string): address used to connect to server _loop (event loop): event loop used for network thread delegates (dict): map for delegate functions thread (thread object): network thread used by client _socket (WebSocketClientProtocol): socket to connect to server name (str): name of the client state (dict): dict keeping track of created objects client_message_map (dict): mapping message type to corresponding id server_messages (dict): mapping message id's to handle info _current_invoke (str): id for next method invoke callback_map (dict): mapping invoke_id to callback function callback_queue (queue): queue for storing callback functions, useful for polling and running in the main thread is_active (bool): flag for whether client is active """ def __init__(self, url: str, custom_delegate_hash: dict[Type[delegates.Delegate], Type[delegates.Delegate]] = None, on_connected=None, strict=False, json=None): """Constructor for the Client Class Args: url (string): address used to connect to server custom_delegate_hash (dict): map for new delegates to be used on client on_connected (Callable): callback function to run once client is set up strict (bool): flag for strict data validation and throwing hard exceptions json (str): path for outputting json log of messages """ if not custom_delegate_hash: custom_delegate_hash = {} self._url = url self._loop = asyncio.new_event_loop() self.on_connected = on_connected self.delegates = delegates.default_delegates.copy() self.strict = strict self.thread = threading.Thread(target=self._start_communication_thread) self.connection_established = threading.Event() self._socket = None self.name = "Python Client" self.state = {} self.client_message_map = { "intro": 0, "invoke": 1 } self.server_messages = [ HandleInfo(delegates.Method, "create"), HandleInfo(delegates.Method, "delete"), HandleInfo(delegates.Signal, "create"), HandleInfo(delegates.Signal, "delete"), HandleInfo(delegates.Entity, "create"), HandleInfo(delegates.Entity, "update"), HandleInfo(delegates.Entity, "delete"), HandleInfo(delegates.Plot, "create"), HandleInfo(delegates.Plot, "update"), HandleInfo(delegates.Plot, "delete"), HandleInfo(delegates.Buffer, "create"), HandleInfo(delegates.Buffer, "delete"), HandleInfo(delegates.BufferView, "create"), HandleInfo(delegates.BufferView, "delete"), HandleInfo(delegates.Material, "create"), HandleInfo(delegates.Material, "update"), HandleInfo(delegates.Material, "delete"), HandleInfo(delegates.Image, "create"), HandleInfo(delegates.Image, "delete"), HandleInfo(delegates.Texture, "create"), HandleInfo(delegates.Texture, "delete"), HandleInfo(delegates.Sampler, "create"), HandleInfo(delegates.Sampler, "delete"), HandleInfo(delegates.Light, "create"), HandleInfo(delegates.Light, "update"), HandleInfo(delegates.Light, "delete"), HandleInfo(delegates.Geometry, "create"), HandleInfo(delegates.Geometry, "delete"), HandleInfo(delegates.Table, "create"), HandleInfo(delegates.Table, "update"), HandleInfo(delegates.Table, "delete"), HandleInfo(delegates.Document, "update"), HandleInfo(delegates.Document, "reset"), HandleInfo(delegates.Signal, "invoke"), HandleInfo(delegates.Method, "reply"), HandleInfo(delegates.Document, "initialized") ] self._current_invoke = 0 self.callback_map = {} self.callback_queue = queue.Queue() self.is_active = False self.json = json if json: with open(json, "w") as outfile: # Clear out old contents outfile.write("JSON Log\n") # Hook up delegate map to customs 
self.delegates.update(custom_delegate_hash) # Add document delegate as starting element in state self.state["document"] = self.delegates[delegates.Document](client=self) def __enter__(self): """Enter method for context manager Waits for 1 second for the connection to be established, otherwise raises an exception """ self.thread.start() flag = self.connection_established.wait(timeout=1) if not flag: raise ConnectionError("Couldn't connect to server") self.is_active = True return self def __exit__(self, exc_type, exc_value, traceback): """Exit method for context manager""" self.shutdown() self.is_active = False def _log_json(self, message: list): json_message = json.dumps(message, default=default_json_encoder) formatted_message = f"{json_message}\n" with open(self.json, "a") as outfile: outfile.write(formatted_message) def _start_communication_thread(self): """Starts the communication thread for the client All communication is done with the server in this separate thread. That way, the main thread can be used for other code. For example, Orzo's window / gui has to run in the main thread, so this allows the client to be used in conjunction with the gui. """ try: asyncio.set_event_loop(self._loop) self._loop.run_until_complete(self._run()) except Exception as e: self.is_active = False logging.warning(f"Connection terminated in communication thread: {e}") def get_delegate_id(self, name: str) -> Type[delegates.ID]: """Get a delegate's id from its name. Assumes names are unique, or returns the first match Args: name (str): name of the delegate Returns: ID (ID): ID for the delegate Raises: KeyError: if no match is found """ if name == "document": return name state_delegates = self.state.values() for delegate in state_delegates: if delegate.name == name: return delegate.id raise KeyError(f"Couldn't find object '{name}' in state") def get_delegate(self, identifier: Union[delegates.ID, str, Dict[str, delegates.ID]]) -> Type[delegates.Delegate]: """Getter to easily retrieve components from state Accepts multiple types of identifiers for flexibility Args: identifier (ID | str | dict): id, name, or context for the component Returns: Delegate (Delegate): delegate object from state Raises: TypeError: if identifier is not a valid type KeyError: if id or name is not found in state ValueError: if context is not found in state """ if isinstance(identifier, delegates.ID): return self.state[identifier] elif isinstance(identifier, str): return self.state[self.get_delegate_id(identifier)] elif isinstance(identifier, dict): return self.get_delegate_by_context(identifier) else: raise TypeError(f"Invalid type for identifier: {type(identifier)}") def get_delegate_by_context(self, context: dict = None) -> delegates.Delegate: """Get delegate object from a context object Contexts are of the form {"table": TableID}, {"entity": EntityID}, or {"plot": PlotID}. 
They are only applicable for tables, entities, and plots Args: context (dict): dict containing context Returns: Delegate (Delegate): delegate object from state Raises: ValueError: Couldn't get delegate from context """ if not context: target_delegate = self.state["document"] return target_delegate table = context.get("table") entity = context.get("entity") plot = context.get("plot") if table: target_delegate = self.state[delegates.TableID(*table)] elif entity: target_delegate = self.state[delegates.EntityID(*entity)] elif plot: target_delegate = self.state[delegates.PlotID(*plot)] else: raise ValueError("Couldn't get delegate from context") return target_delegate def invoke_method(self, method: Union[delegates.MethodID, str], args: list = None, context: dict[str, tuple] = None, callback=None): """Invoke method on server Constructs a dictionary of arguments to use in send_message. The dictionary follows the structure of an InvokeMethodMessage, but using a dictionary prevents converting back and forth just before sending. Also implements callback functions attached to each invocation. By default, each invocation will store a None object in the callback map, and the handler responsible for reply messages will pop it from the map and call the callback if there is one Args: method (ID | str): id or name for method args (list): arguments for method context (dict): optional, target context for method call callback (Callable): function to be called upon response Returns: message (list): message to be sent to server in the form of [tag, {content}] """ # Handle default args if not args: args = [] # Get proper ID if isinstance(method, str): method_id = self.get_delegate_id(method) else: method_id = method # Get invoke ID invoke_id = str(self._current_invoke) self._current_invoke += 1 # Keep track of callback self.callback_map[invoke_id] = callback # Construct message dict arg_dict = { "method": method_id, "args": args, "invoke_id": invoke_id } if context: arg_dict["context"] = context return self.send_message(arg_dict, "invoke") def send_message(self, message_dict: dict[str, Any], kind: str): """Send message to server Args: message_dict (dict): dict mapping message attribute to value kind (str): either 'invoke' or 'intro' to indicate type of client message """ # Construct message with ID from map and converted message object message = [self.client_message_map[kind], message_dict] if self.json: self._log_json(message) logging.debug(f"Sending Message: {message}") asyncio.run_coroutine_threadsafe(self._socket.send(dumps(message)), self._loop) return message def _process_message(self, message): """Prep message for handling Messages here are of the form: [tag, {content}, tag, {content}, ...] 
""" content = iter(message) for tag in content: try: handlers.handle(self, tag, next(content)) except Exception as e: if self.strict: raise e else: logging.error(f"Exception: {e} for message {message}") async def _run(self): """Network thread for managing websocket connection""" async with websockets.connect(self._url) as websocket: # update class self._socket = websocket self.name = f"Python Client @ {self._url}" self.is_active = True # send intro message intro = {"client_name": self.name} self.send_message(intro, "intro") # decode and handle all incoming messages async for message in self._socket: message = loads(message) self._process_message(message) def show_methods(self): """Displays Available Methods to the User on the document Uses the document delegate's show_methods function to display """ self.state["document"].show_methods() def shutdown(self): """Method for shutting down the client Closes websocket connection then blocks to finish all callbacks, joins thread as well """ asyncio.run_coroutine_threadsafe(self._socket.close(), self._loop) self.is_active = False self.thread.join()
(url: 'str', custom_delegate_hash: 'dict[Type[delegates.Delegate], Type[delegates.Delegate]]' = None, on_connected=None, strict=False, json=None)
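An illustrative session with the context manager documented above; the URL is a placeholder for a reachable server:

from penne.core import Client

# __enter__ starts the network thread and waits up to one second for a connection
with Client("ws://localhost:50000") as client:
    client.show_methods()                  # print methods advertised on the document
    doc = client.get_delegate("document")  # the document delegate seeds the state
# __exit__ calls shutdown(), closing the socket and joining the thread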
726,157
penne.core
__enter__
Enter method for context manager Waits for 1 second for the connection to be established, otherwise raises an exception
def __enter__(self): """Enter method for context manager Waits for 1 second for the connection to be established, otherwise raises an exception """ self.thread.start() flag = self.connection_established.wait(timeout=1) if not flag: raise ConnectionError("Couldn't connect to server") self.is_active = True return self
(self)
726,158
penne.core
__exit__
Exit method for context manager
def __exit__(self, exc_type, exc_value, traceback): """Exit method for context manager""" self.shutdown() self.is_active = False
(self, exc_type, exc_value, traceback)
726,159
penne.core
__init__
Constructor for the Client Class Args: url (string): address used to connect to server custom_delegate_hash (dict): map for new delegates to be used on client on_connected (Callable): callback function to run once client is set up strict (bool): flag for strict data validation and throwing hard exceptions json (str): path for outputting json log of messages
def __init__(self, url: str, custom_delegate_hash: dict[Type[delegates.Delegate], Type[delegates.Delegate]] = None, on_connected=None, strict=False, json=None): """Constructor for the Client Class Args: url (string): address used to connect to server custom_delegate_hash (dict): map for new delegates to be used on client on_connected (Callable): callback function to run once client is set up strict (bool): flag for strict data validation and throwing hard exceptions json (str): path for outputting json log of messages """ if not custom_delegate_hash: custom_delegate_hash = {} self._url = url self._loop = asyncio.new_event_loop() self.on_connected = on_connected self.delegates = delegates.default_delegates.copy() self.strict = strict self.thread = threading.Thread(target=self._start_communication_thread) self.connection_established = threading.Event() self._socket = None self.name = "Python Client" self.state = {} self.client_message_map = { "intro": 0, "invoke": 1 } self.server_messages = [ HandleInfo(delegates.Method, "create"), HandleInfo(delegates.Method, "delete"), HandleInfo(delegates.Signal, "create"), HandleInfo(delegates.Signal, "delete"), HandleInfo(delegates.Entity, "create"), HandleInfo(delegates.Entity, "update"), HandleInfo(delegates.Entity, "delete"), HandleInfo(delegates.Plot, "create"), HandleInfo(delegates.Plot, "update"), HandleInfo(delegates.Plot, "delete"), HandleInfo(delegates.Buffer, "create"), HandleInfo(delegates.Buffer, "delete"), HandleInfo(delegates.BufferView, "create"), HandleInfo(delegates.BufferView, "delete"), HandleInfo(delegates.Material, "create"), HandleInfo(delegates.Material, "update"), HandleInfo(delegates.Material, "delete"), HandleInfo(delegates.Image, "create"), HandleInfo(delegates.Image, "delete"), HandleInfo(delegates.Texture, "create"), HandleInfo(delegates.Texture, "delete"), HandleInfo(delegates.Sampler, "create"), HandleInfo(delegates.Sampler, "delete"), HandleInfo(delegates.Light, "create"), HandleInfo(delegates.Light, "update"), HandleInfo(delegates.Light, "delete"), HandleInfo(delegates.Geometry, "create"), HandleInfo(delegates.Geometry, "delete"), HandleInfo(delegates.Table, "create"), HandleInfo(delegates.Table, "update"), HandleInfo(delegates.Table, "delete"), HandleInfo(delegates.Document, "update"), HandleInfo(delegates.Document, "reset"), HandleInfo(delegates.Signal, "invoke"), HandleInfo(delegates.Method, "reply"), HandleInfo(delegates.Document, "initialized") ] self._current_invoke = 0 self.callback_map = {} self.callback_queue = queue.Queue() self.is_active = False self.json = json if json: with open(json, "w") as outfile: # Clear out old contents outfile.write("JSON Log\n") # Hook up delegate map to customs self.delegates.update(custom_delegate_hash) # Add document delegate as starting element in state self.state["document"] = self.delegates[delegates.Document](client=self)
(self, url: str, custom_delegate_hash: Optional[dict[Type[penne.delegates.Delegate], Type[penne.delegates.Delegate]]] = None, on_connected=None, strict=False, json=None)
726,160
penne.core
_log_json
null
def _log_json(self, message: list): json_message = json.dumps(message, default=default_json_encoder) formatted_message = f"{json_message}\n" with open(self.json, "a") as outfile: outfile.write(formatted_message)
(self, message: list)
726,161
penne.core
_process_message
Prep message for handling Messages here are of the form: [tag, {content}, tag, {content}, ...]
def _process_message(self, message): """Prep message for handling Messages here are of the form: [tag, {content}, tag, {content}, ...] """ content = iter(message) for tag in content: try: handlers.handle(self, tag, next(content)) except Exception as e: if self.strict: raise e else: logging.error(f"Exception: {e} for message {message}")
(self, message)
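The tag/content pairing in _process_message works by advancing one shared iterator from inside its own loop; the pattern in isolation, with placeholder data:

# Messages arrive flattened as [tag, {content}, tag, {content}, ...]
message = [0, {"id": [0, 0]}, 1, {"id": [1, 0]}]

content = iter(message)
for tag in content:
    body = next(content)  # consumes the element paired with this tag
    print(tag, body)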
726,162
penne.core
_run
Network thread for managing websocket connection
async def _run(self): """Network thread for managing websocket connection""" async with websockets.connect(self._url) as websocket: # update class self._socket = websocket self.name = f"Python Client @ {self._url}" self.is_active = True # send intro message intro = {"client_name": self.name} self.send_message(intro, "intro") # decode and handle all incoming messages async for message in self._socket: message = loads(message) self._process_message(message)
(self)
726,163
penne.core
_start_communication_thread
Starts the communication thread for the client All communication is done with the server in this separate thread. That way, the main thread can be used for other code. For example, Orzo's window / gui has to run in the main thread, so this allows the client to be used in conjunction with the gui.
def _start_communication_thread(self): """Starts the communication thread for the client All communication is done with the server in this separate thread. That way, the main thread can be used for other code. For example, Orzo's window / gui has to run in the main thread, so this allows the client to be used in conjunction with the gui. """ try: asyncio.set_event_loop(self._loop) self._loop.run_until_complete(self._run()) except Exception as e: self.is_active = False logging.warning(f"Connection terminated in communication thread: {e}")
(self)
726,164
penne.core
get_delegate
Getter to easily retrieve components from state Accepts multiple types of identifiers for flexibility Args: identifier (ID | str | dict): id, name, or context for the component Returns: Delegate (Delegate): delegate object from state Raises: TypeError: if identifier is not a valid type KeyError: if id or name is not found in state ValueError: if context is not found in state
def get_delegate(self, identifier: Union[delegates.ID, str, Dict[str, delegates.ID]]) -> Type[delegates.Delegate]: """Getter to easily retrieve components from state Accepts multiple types of identifiers for flexibility Args: identifier (ID | str | dict): id, name, or context for the component Returns: Delegate (Delegate): delegate object from state Raises: TypeError: if identifier is not a valid type KeyError: if id or name is not found in state ValueError: if context is not found in state """ if isinstance(identifier, delegates.ID): return self.state[identifier] elif isinstance(identifier, str): return self.state[self.get_delegate_id(identifier)] elif isinstance(identifier, dict): return self.get_delegate_by_context(identifier) else: raise TypeError(f"Invalid type for identifier: {type(identifier)}")
(self, identifier: Union[penne.delegates.ID, str, Dict[str, penne.delegates.ID]]) -> Type[penne.delegates.Delegate]
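Illustrative lookups covering the three identifier branches above; client is assumed to be a connected Client whose state contains the named objects, and all names and IDs are placeholders:

import penne.delegates as delegates
from penne.core import Client

def lookup_examples(client: Client) -> None:
    by_id = client.get_delegate(delegates.TableID(slot=0, gen=0))  # direct ID
    by_name = client.get_delegate("my table")                      # unique name
    by_context = client.get_delegate({"table": (0, 0)})            # context dict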
726,165
penne.core
get_delegate_by_context
Get delegate object from a context object Contexts are of the form {"table": TableID}, {"entity": EntityID}, or {"plot": PlotID}. They are only applicable for tables, entities, and plots Args: context (dict): dict containing context Returns: Delegate (Delegate): delegate object from state Raises: ValueError: Couldn't get delegate from context
def get_delegate_by_context(self, context: dict = None) -> delegates.Delegate: """Get delegate object from a context object Contexts are of the form {"table": TableID}, {"entity": EntityID}, or {"plot": PlotID}. They are only applicable for tables, entities, and plots Args: context (dict): dict containing context Returns: Delegate (Delegate): delegate object from state Raises: ValueError: Couldn't get delegate from context """ if not context: target_delegate = self.state["document"] return target_delegate table = context.get("table") entity = context.get("entity") plot = context.get("plot") if table: target_delegate = self.state[delegates.TableID(*table)] elif entity: target_delegate = self.state[delegates.EntityID(*entity)] elif plot: target_delegate = self.state[delegates.PlotID(*plot)] else: raise ValueError("Couldn't get delegate from context") return target_delegate
(self, context: Optional[dict] = None) -> penne.delegates.Delegate
726,166
penne.core
get_delegate_id
Get a delegate's id from its name. Assumes names are unique, or returns the first match Args: name (str): name of the delegate Returns: ID (ID): ID for the delegate Raises: KeyError: if no match is found
def get_delegate_id(self, name: str) -> Type[delegates.ID]: """Get a delegate's id from its name. Assumes names are unique, or returns the first match Args: name (str): name of the delegate Returns: ID (ID): ID for the delegate Raises: KeyError: if no match is found """ if name == "document": return name state_delegates = self.state.values() for delegate in state_delegates: if delegate.name == name: return delegate.id raise KeyError(f"Couldn't find object '{name}' in state")
(self, name: str) -> Type[penne.delegates.ID]
726,167
penne.core
invoke_method
Invoke method on server Constructs a dictionary of arguments to use in send_message. The dictionary follows the structure of an InvokeMethodMessage, but using a dictionary prevents converting back and forth just before sending. Also implements callback functions attached to each invocation. By default, each invocation will store a None object in the callback map, and the handler responsible for reply messages will pop it from the map and call the callback if there is one Args: method (ID | str): id or name for method args (list): arguments for method context (dict): optional, target context for method call callback (Callable): function to be called upon response Returns: message (list): message to be sent to server in the form of [tag, {content}]
def invoke_method(self, method: Union[delegates.MethodID, str], args: list = None, context: dict[str, tuple] = None, callback=None): """Invoke method on server Constructs a dictionary of arguments to use in send_message. The dictionary follows the structure of an InvokeMethodMessage, but using a dictionary prevents converting back and forth just before sending. Also implements callback functions attached to each invocation. By default, each invocation will store a None object in the callback map, and the handler responsible for reply messages will pop it from the map and call the callback if there is one Args: method (ID | str): id or name for method args (list): arguments for method context (dict): optional, target context for method call callback (Callable): function to be called upon response Returns: message (list): message to be sent to server in the form of [tag, {content}] """ # Handle default args if not args: args = [] # Get proper ID if isinstance(method, str): method_id = self.get_delegate_id(method) else: method_id = method # Get invoke ID invoke_id = str(self._current_invoke) self._current_invoke += 1 # Keep track of callback self.callback_map[invoke_id] = callback # Construct message dict arg_dict = { "method": method_id, "args": args, "invoke_id": invoke_id } if context: arg_dict["context"] = context return self.send_message(arg_dict, "invoke")
(self, method: Union[penne.delegates.MethodID, str], args: Optional[list] = None, context: Optional[dict[str, tuple]] = None, callback=None)
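A sketch of both invocation styles described above, assuming a connected `client`; the method name, ID, and table context are hypothetical.

def on_reply(result):
    # Stored in callback_map under this invocation's invoke_id and called
    # by the reply handler when the server responds.
    print("server returned:", result)

# By name: the client resolves the string via get_delegate_id().
client.invoke_method("insert_row", args=[[1, 2, 3]], callback=on_reply)

# By explicit ID, targeting a specific table through a context dict.
client.invoke_method(delegates.MethodID(4, 0), args=[], context={"table": (0, 0)})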
726,168
penne.core
send_message
Send message to server Args: message_dict (dict): dict mapping message attribute to value kind (str): either 'invoke' or 'intro' to indicate type of client message Returns: message (list): the message that was sent, in the form [tag, {content}]
def send_message(self, message_dict: dict[str, Any], kind: str):
    """Send message to server

    Args:
        message_dict (dict): dict mapping message attribute to value
        kind (str): either 'invoke' or 'intro' to indicate type of client message

    Returns:
        message (list): the message that was sent, in the form [tag, {content}]
    """

    # Construct message with ID from map and converted message object
    message = [self.client_message_map[kind], message_dict]
    if self.json:
        self._log_json(message)

    logging.debug(f"Sending Message: {message}")

    asyncio.run_coroutine_threadsafe(self._socket.send(dumps(message)), self._loop)
    return message
(self, message_dict: dict[str, typing.Any], kind: str)
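For reference, a sketch of what actually crosses the socket, assuming `client_message_map` maps message kinds to integer tags as the code implies; the tag value and payload are hypothetical.

# send_message wraps the dict in a two-element list before serializing it
# with dumps() and scheduling the send coroutine on the client's event loop:
#
#     [tag, {content}]   e.g.   [1, {"method": [4, 0], "args": [], "invoke_id": "0"}]
#
message = client.send_message({"client_name": "demo client"}, "intro")
print(message)  # [<intro tag>, {'client_name': 'demo client'}]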
726,169
penne.core
show_methods
Displays available methods to the user on the document Uses the document delegate's show_methods function to display them
def show_methods(self):
    """Displays available methods to the user on the document

    Uses the document delegate's show_methods function to display them
    """
    self.state["document"].show_methods()
(self)
726,170
penne.core
shutdown
Method for shutting down the client Closes the websocket connection, then joins the communication thread so outstanding callbacks can finish
def shutdown(self):
    """Method for shutting down the client

    Closes the websocket connection, then joins the communication thread so outstanding
    callbacks can finish
    """
    asyncio.run_coroutine_threadsafe(self._socket.close(), self._loop)
    self.is_active = False
    self.thread.join()
(self)
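A lifecycle sketch: because shutdown() joins the communication thread, it is worth putting in a finally block so the client always closes cleanly; the subscribe call is a hypothetical placeholder for real work.

try:
    client.invoke_method("subscribe", args=[], context={"table": (0, 0)})  # hypothetical
finally:
    client.shutdown()  # close the websocket, mark inactive, join the thread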
726,171
pydantic_extra_types.color
Color
Represents a color.
class Color(_repr.Representation):
    """
    Represents a color.
    """

    __slots__ = '_original', '_rgba'

    def __init__(self, value: ColorType) -> None:
        self._rgba: RGBA
        self._original: ColorType
        if isinstance(value, (tuple, list)):
            self._rgba = parse_tuple(value)
        elif isinstance(value, str):
            self._rgba = parse_str(value)
        elif isinstance(value, Color):
            self._rgba = value._rgba
            value = value._original
        else:
            raise PydanticCustomError(
                'color_error',
                'value is not a valid color: value must be a tuple, list or string',
            )

        # if we've got here value must be a valid color
        self._original = value

    @classmethod
    def __get_pydantic_json_schema__(
        cls, core_schema: core_schema.CoreSchema, handler: GetJsonSchemaHandler
    ) -> JsonSchemaValue:
        field_schema: dict[str, Any] = {}
        field_schema.update(type='string', format='color')
        return field_schema

    def original(self) -> ColorType:
        """
        Original value passed to `Color`.
        """
        return self._original

    def as_named(self, *, fallback: bool = False) -> str:
        """
        Returns the name of the color if it can be found in `COLORS_BY_VALUE` dictionary,
        otherwise returns the hexadecimal representation of the color or raises `ValueError`.

        Args:
            fallback: If True, falls back to returning the hexadecimal representation of the color
                instead of raising a ValueError when no named color is found.

        Returns:
            The name of the color, or the hexadecimal representation of the color.

        Raises:
            ValueError: When no named color is found and fallback is `False`.
        """
        if self._rgba.alpha is None:
            rgb = cast(Tuple[int, int, int], self.as_rgb_tuple())
            try:
                return COLORS_BY_VALUE[rgb]
            except KeyError as e:
                if fallback:
                    return self.as_hex()
                else:
                    raise ValueError('no named color found, use fallback=True, as_hex() or as_rgb()') from e
        else:
            return self.as_hex()

    def as_hex(self) -> str:
        """Returns the hexadecimal representation of the color.

        Hex string representing the color can be 3, 4, 6, or 8 characters depending on whether
        a "short" representation of the color is possible and whether there's an alpha channel.

        Returns:
            The hexadecimal representation of the color.
        """
        values = [float_to_255(c) for c in self._rgba[:3]]
        if self._rgba.alpha is not None:
            values.append(float_to_255(self._rgba.alpha))

        as_hex = ''.join(f'{v:02x}' for v in values)
        if all(c in repeat_colors for c in values):
            as_hex = ''.join(as_hex[c] for c in range(0, len(as_hex), 2))
        return '#' + as_hex

    def as_rgb(self) -> str:
        """
        Color as an `rgb(<r>, <g>, <b>)` or `rgba(<r>, <g>, <b>, <a>)` string.
        """
        if self._rgba.alpha is None:
            return f'rgb({float_to_255(self._rgba.r)}, {float_to_255(self._rgba.g)}, {float_to_255(self._rgba.b)})'
        else:
            return (
                f'rgba({float_to_255(self._rgba.r)}, {float_to_255(self._rgba.g)}, {float_to_255(self._rgba.b)}, '
                f'{round(self._alpha_float(), 2)})'
            )

    def as_rgb_tuple(self, *, alpha: bool | None = None) -> ColorTuple:
        """
        Returns the color as an RGB or RGBA tuple.

        Args:
            alpha: Whether to include the alpha channel. There are three options for this input:

                - `None` (default): Include alpha only if it's set. (e.g. not `None`)
                - `True`: Always include alpha.
                - `False`: Always omit alpha.

        Returns:
            A tuple that contains the values of the red, green, and blue channels in the range 0 to 255.
            If alpha is included, it is in the range 0 to 1.
""" r, g, b = (float_to_255(c) for c in self._rgba[:3]) if alpha is None: if self._rgba.alpha is None: return r, g, b else: return r, g, b, self._alpha_float() elif alpha: return r, g, b, self._alpha_float() else: # alpha is False return r, g, b def as_hsl(self) -> str: """ Color as an `hsl(<h>, <s>, <l>)` or `hsl(<h>, <s>, <l>, <a>)` string. """ if self._rgba.alpha is None: h, s, li = self.as_hsl_tuple(alpha=False) # type: ignore return f'hsl({h * 360:0.0f}, {s:0.0%}, {li:0.0%})' else: h, s, li, a = self.as_hsl_tuple(alpha=True) # type: ignore return f'hsl({h * 360:0.0f}, {s:0.0%}, {li:0.0%}, {round(a, 2)})' def as_hsl_tuple(self, *, alpha: bool | None = None) -> HslColorTuple: """ Returns the color as an HSL or HSLA tuple. Args: alpha: Whether to include the alpha channel. - `None` (default): Include the alpha channel only if it's set (e.g. not `None`). - `True`: Always include alpha. - `False`: Always omit alpha. Returns: The color as a tuple of hue, saturation, lightness, and alpha (if included). All elements are in the range 0 to 1. Note: This is HSL as used in HTML and most other places, not HLS as used in Python's `colorsys`. """ h, l, s = rgb_to_hls(self._rgba.r, self._rgba.g, self._rgba.b) # noqa: E741 if alpha is None: if self._rgba.alpha is None: return h, s, l else: return h, s, l, self._alpha_float() if alpha: return h, s, l, self._alpha_float() else: # alpha is False return h, s, l def _alpha_float(self) -> float: return 1 if self._rgba.alpha is None else self._rgba.alpha @classmethod def __get_pydantic_core_schema__( cls, source: type[Any], handler: Callable[[Any], CoreSchema] ) -> core_schema.CoreSchema: return core_schema.general_plain_validator_function( cls._validate, serialization=core_schema.to_string_ser_schema() ) @classmethod def _validate(cls, __input_value: Any, _: Any) -> Color: return cls(__input_value) def __str__(self) -> str: return self.as_named(fallback=True) def __repr_args__(self) -> _repr.ReprArgs: return [(None, self.as_named(fallback=True))] + [('rgb', self.as_rgb_tuple())] def __eq__(self, other: Any) -> bool: return isinstance(other, Color) and self.as_rgb_tuple() == other.as_rgb_tuple() def __hash__(self) -> int: return hash(self.as_rgb_tuple())
(value: 'ColorType') -> 'None'
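A short sketch of the conversion surface defined above; the outputs follow from the parsing and formatting code shown.

from pydantic_extra_types.color import Color

c = Color("#ff8000")
print(c.as_rgb_tuple())   # (255, 128, 0)
print(c.as_rgb())         # rgb(255, 128, 0)
print(c.as_hsl())         # hsl(30, 100%, 50%)
print(Color((0, 0, 255)).as_named())  # blue

# __eq__ and __hash__ are based on the RGB(A) tuple, so different spellings
# of the same color compare equal:
assert Color("blue") == Color("#00f")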
726,172
pydantic_extra_types.color
__eq__
null
def __eq__(self, other: Any) -> bool: return isinstance(other, Color) and self.as_rgb_tuple() == other.as_rgb_tuple()
(self, other: Any) -> bool
726,173
pydantic_extra_types.color
__hash__
null
def __hash__(self) -> int: return hash(self.as_rgb_tuple())
(self) -> int
726,174
pydantic_extra_types.color
__init__
null
def __init__(self, value: ColorType) -> None: self._rgba: RGBA self._original: ColorType if isinstance(value, (tuple, list)): self._rgba = parse_tuple(value) elif isinstance(value, str): self._rgba = parse_str(value) elif isinstance(value, Color): self._rgba = value._rgba value = value._original else: raise PydanticCustomError( 'color_error', 'value is not a valid color: value must be a tuple, list or string', ) # if we've got here value must be a valid color self._original = value
(self, value: Union[Tuple[int, int, int], Tuple[int, int, int, float], str, pydantic_extra_types.color.Color]) -> NoneType
726,177
pydantic_extra_types.color
__repr_args__
null
def __repr_args__(self) -> _repr.ReprArgs: return [(None, self.as_named(fallback=True))] + [('rgb', self.as_rgb_tuple())]
(self) -> '_repr.ReprArgs'
726,181
pydantic_extra_types.color
__str__
null
def __str__(self) -> str: return self.as_named(fallback=True)
(self) -> str
726,182
pydantic_extra_types.color
_alpha_float
null
def _alpha_float(self) -> float: return 1 if self._rgba.alpha is None else self._rgba.alpha
(self) -> float
726,183
pydantic_extra_types.color
as_hex
Returns the hexadecimal representation of the color. Hex string representing the color can be 3, 4, 6, or 8 characters depending on whether a "short" representation of the color is possible and whether there's an alpha channel. Returns: The hexadecimal representation of the color.
def as_hex(self) -> str:
    """Returns the hexadecimal representation of the color.

    Hex string representing the color can be 3, 4, 6, or 8 characters depending on whether
    a "short" representation of the color is possible and whether there's an alpha channel.

    Returns:
        The hexadecimal representation of the color.
    """
    values = [float_to_255(c) for c in self._rgba[:3]]
    if self._rgba.alpha is not None:
        values.append(float_to_255(self._rgba.alpha))

    as_hex = ''.join(f'{v:02x}' for v in values)
    if all(c in repeat_colors for c in values):
        as_hex = ''.join(as_hex[c] for c in range(0, len(as_hex), 2))
    return '#' + as_hex
(self) -> str
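A sketch of the short-form collapsing branch: when every channel byte is a repeated nibble (0x00, 0x11, ..., 0xff, i.e. a member of repeat_colors), the 6- or 8-character form collapses to 3 or 4 characters.

from pydantic_extra_types.color import Color

print(Color("#ffcc00").as_hex())         # #fc0     every byte is a repeated nibble
print(Color("#ffcc01").as_hex())         # #ffcc01  0x01 cannot be shortened
print(Color((255, 0, 0, 0.2)).as_hex())  # #f003    alpha 0.2 rounds to 0x33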
726,184
pydantic_extra_types.color
as_hsl
Color as an `hsl(<h>, <s>, <l>)` or `hsl(<h>, <s>, <l>, <a>)` string.
def as_hsl(self) -> str: """ Color as an `hsl(<h>, <s>, <l>)` or `hsl(<h>, <s>, <l>, <a>)` string. """ if self._rgba.alpha is None: h, s, li = self.as_hsl_tuple(alpha=False) # type: ignore return f'hsl({h * 360:0.0f}, {s:0.0%}, {li:0.0%})' else: h, s, li, a = self.as_hsl_tuple(alpha=True) # type: ignore return f'hsl({h * 360:0.0f}, {s:0.0%}, {li:0.0%}, {round(a, 2)})'
(self) -> str
726,185
pydantic_extra_types.color
as_hsl_tuple
Returns the color as an HSL or HSLA tuple. Args: alpha: Whether to include the alpha channel. - `None` (default): Include the alpha channel only if it's set (e.g. not `None`). - `True`: Always include alpha. - `False`: Always omit alpha. Returns: The color as a tuple of hue, saturation, lightness, and alpha (if included). All elements are in the range 0 to 1. Note: This is HSL as used in HTML and most other places, not HLS as used in Python's `colorsys`.
def as_hsl_tuple(self, *, alpha: bool | None = None) -> HslColorTuple: """ Returns the color as an HSL or HSLA tuple. Args: alpha: Whether to include the alpha channel. - `None` (default): Include the alpha channel only if it's set (e.g. not `None`). - `True`: Always include alpha. - `False`: Always omit alpha. Returns: The color as a tuple of hue, saturation, lightness, and alpha (if included). All elements are in the range 0 to 1. Note: This is HSL as used in HTML and most other places, not HLS as used in Python's `colorsys`. """ h, l, s = rgb_to_hls(self._rgba.r, self._rgba.g, self._rgba.b) # noqa: E741 if alpha is None: if self._rgba.alpha is None: return h, s, l else: return h, s, l, self._alpha_float() if alpha: return h, s, l, self._alpha_float() else: # alpha is False return h, s, l
(self, *, alpha: Optional[bool] = None) -> Union[Tuple[float, float, float], Tuple[float, float, float, float]]
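A sketch showing the tuple form and the (h, s, l) ordering noted above.

from pydantic_extra_types.color import Color

h, s, l = Color("#00ff00").as_hsl_tuple(alpha=False)
print(round(h * 360), f"{s:.0%}", f"{l:.0%}")  # 120 100% 50%

# colorsys.rgb_to_hls returns (h, l, s); this method reorders to web-style (h, s, l).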
726,186
pydantic_extra_types.color
as_named
Returns the name of the color if it can be found in `COLORS_BY_VALUE` dictionary, otherwise returns the hexadecimal representation of the color or raises `ValueError`. Args: fallback: If True, falls back to returning the hexadecimal representation of the color instead of raising a ValueError when no named color is found. Returns: The name of the color, or the hexadecimal representation of the color. Raises: ValueError: When no named color is found and fallback is `False`.
def as_named(self, *, fallback: bool = False) -> str: """ Returns the name of the color if it can be found in `COLORS_BY_VALUE` dictionary, otherwise returns the hexadecimal representation of the color or raises `ValueError`. Args: fallback: If True, falls back to returning the hexadecimal representation of the color instead of raising a ValueError when no named color is found. Returns: The name of the color, or the hexadecimal representation of the color. Raises: ValueError: When no named color is found and fallback is `False`. """ if self._rgba.alpha is None: rgb = cast(Tuple[int, int, int], self.as_rgb_tuple()) try: return COLORS_BY_VALUE[rgb] except KeyError as e: if fallback: return self.as_hex() else: raise ValueError('no named color found, use fallback=True, as_hex() or as_rgb()') from e else: return self.as_hex()
(self, *, fallback: bool = False) -> str
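A sketch of the lookup-or-fallback behaviour; "#123456" is an arbitrary color with no entry in COLORS_BY_VALUE.

from pydantic_extra_types.color import Color

print(Color((0, 0, 255)).as_named())             # blue
print(Color("#123456").as_named(fallback=True))  # #123456

try:
    Color("#123456").as_named()
except ValueError as err:
    print(err)  # no named color found, use fallback=True, as_hex() or as_rgb()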
726,187
pydantic_extra_types.color
as_rgb
Color as an `rgb(<r>, <g>, <b>)` or `rgba(<r>, <g>, <b>, <a>)` string.
def as_rgb(self) -> str: """ Color as an `rgb(<r>, <g>, <b>)` or `rgba(<r>, <g>, <b>, <a>)` string. """ if self._rgba.alpha is None: return f'rgb({float_to_255(self._rgba.r)}, {float_to_255(self._rgba.g)}, {float_to_255(self._rgba.b)})' else: return ( f'rgba({float_to_255(self._rgba.r)}, {float_to_255(self._rgba.g)}, {float_to_255(self._rgba.b)}, ' f'{round(self._alpha_float(), 2)})' )
(self) -> str
726,188
pydantic_extra_types.color
as_rgb_tuple
Returns the color as an RGB or RGBA tuple. Args: alpha: Whether to include the alpha channel. There are three options for this input: - `None` (default): Include alpha only if it's set. (e.g. not `None`) - `True`: Always include alpha. - `False`: Always omit alpha. Returns: A tuple that contains the values of the red, green, and blue channels in the range 0 to 255. If alpha is included, it is in the range 0 to 1.
def as_rgb_tuple(self, *, alpha: bool | None = None) -> ColorTuple: """ Returns the color as an RGB or RGBA tuple. Args: alpha: Whether to include the alpha channel. There are three options for this input: - `None` (default): Include alpha only if it's set. (e.g. not `None`) - `True`: Always include alpha. - `False`: Always omit alpha. Returns: A tuple that contains the values of the red, green, and blue channels in the range 0 to 255. If alpha is included, it is in the range 0 to 1. """ r, g, b = (float_to_255(c) for c in self._rgba[:3]) if alpha is None: if self._rgba.alpha is None: return r, g, b else: return r, g, b, self._alpha_float() elif alpha: return r, g, b, self._alpha_float() else: # alpha is False return r, g, b
(self, *, alpha: Optional[bool] = None) -> Union[Tuple[int, int, int], Tuple[int, int, int, float]]
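A sketch of the three `alpha` modes; the values follow directly from the branches above.

from pydantic_extra_types.color import Color

opaque = Color("#ff0000")
print(opaque.as_rgb_tuple())            # (255, 0, 0)     alpha unset, so omitted
print(opaque.as_rgb_tuple(alpha=True))  # (255, 0, 0, 1)  forced on; unset alpha reads as 1

translucent = Color((255, 0, 0, 0.5))
print(translucent.as_rgb_tuple())             # (255, 0, 0, 0.5)  alpha set, so included
print(translucent.as_rgb_tuple(alpha=False))  # (255, 0, 0)       forced off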
726,189
pydantic_extra_types.color
original
Original value passed to `Color`.
def original(self) -> ColorType: """ Original value passed to `Color`. """ return self._original
(self) -> Union[Tuple[int, int, int], Tuple[int, int, int, float], str, pydantic_extra_types.color.Color]
726,190
penne.delegates
ColumnType
String indicating type of data stored in a column in a table Used in TableColumnInfo inside TableInitData. Takes value of either TEXT, REAL, or INTEGER
class ColumnType(str, Enum): """String indicating type of data stored in a column in a table Used in TableColumnInfo inside TableInitData. Takes value of either TEXT, REAL, or INTEGER """ text = "TEXT" real = "REAL" integer = "INTEGER"
(value, names=None, *, module=None, qualname=None, type=None, start=1)
726,191
pydantic.config
ConfigDict
Usage docs: https://docs.pydantic.dev/2.0/usage/model_config/ A TypedDict for configuring Pydantic behaviour.
class ConfigDict(TypedDict, total=False):
    """Usage docs: https://docs.pydantic.dev/2.0/usage/model_config/

    A TypedDict for configuring Pydantic behaviour.
    """

    title: str | None
    """The title for the generated JSON schema, defaults to the model's name"""

    str_to_lower: bool
    """Whether to convert all characters to lowercase for str types. Defaults to `False`."""

    str_to_upper: bool
    """Whether to convert all characters to uppercase for str types. Defaults to `False`."""

    str_strip_whitespace: bool
    """Whether to strip leading and trailing whitespace for str types."""

    str_min_length: int
    """The minimum length for str types. Defaults to `None`."""

    str_max_length: int | None
    """The maximum length for str types. Defaults to `None`."""

    extra: ExtraValues | None
    """
    Whether to ignore, allow, or forbid extra attributes during model initialization.

    The value must be a [`ExtraValues`][pydantic.config.ExtraValues] string. Defaults to `'ignore'`.

    See [Extra Attributes](../usage/model_config.md#extra-attributes) for details.
    """

    frozen: bool
    """
    Whether models are faux-immutable, i.e. whether `__setattr__` is disallowed, and also whether a
    `__hash__()` method is generated for the model. This makes instances of the model potentially
    hashable if all the attributes are hashable. Defaults to `False`.

    !!! note
        On V1, this setting was called `allow_mutation`, and was `True` by default.
    """

    populate_by_name: bool
    """
    Whether an aliased field may be populated by its name as given by the model attribute,
    as well as the alias. Defaults to `False`.

    !!! note
        The name of this configuration setting was changed in **v2.0** from `allow_population_by_alias`
        to `populate_by_name`.
    """

    use_enum_values: bool
    """
    Whether to populate models with the `value` property of enums, rather than the raw enum.
    This may be useful if you want to serialize `model.model_dump()` later. Defaults to `False`.
    """

    validate_assignment: bool
    arbitrary_types_allowed: bool

    from_attributes: bool
    """
    Whether to build models and look up discriminators of tagged unions using python object attributes.
    """

    loc_by_alias: bool
    """Whether to use the alias for error `loc`s rather than the field's name. Defaults to `True`."""

    alias_generator: Callable[[str], str] | None
    """
    A callable that takes a field name and returns an alias for it.

    See [Alias Generator](../usage/model_config.md#alias-generator) for details.
    """

    ignored_types: tuple[type, ...]
    """A tuple of types that may occur as values of class attributes without annotations. This is
    typically used for custom descriptors (classes that behave like `property`). If an attribute is set on a
    class without an annotation and has a type that is not in this tuple (or otherwise recognized by
    _pydantic_), an error will be raised. Defaults to `()`.
    """

    allow_inf_nan: bool
    """Whether to allow infinity (`+inf` and `-inf`) and NaN values in float fields. Defaults to `True`."""

    json_schema_extra: dict[str, object] | JsonSchemaExtraCallable | None
    """A dict or callable to provide extra JSON schema properties. Defaults to `None`."""

    json_encoders: dict[type[object], JsonEncoder] | None
    """
    A `dict` of custom JSON encoders for specific types. Defaults to `None`.

    !!! warning "Deprecated"
        This config option is a carryover from v1.
        We originally planned to remove it in v2 but didn't have a 1:1 replacement so we are keeping it for now.
        It is still deprecated and will likely be removed in the future.
    """

    # new in V2
    strict: bool
    """
    _(new in V2)_ If `True`, strict validation is applied to all fields on the model.
    See [Strict Mode](../usage/strict_mode.md) for details.
    """

    # whether instances of models and dataclasses (including subclass instances) should re-validate, default 'never'
    revalidate_instances: Literal['always', 'never', 'subclass-instances']
    """
    When and how to revalidate models and dataclasses during validation. Accepts the string
    values of `'never'`, `'always'` and `'subclass-instances'`. Defaults to `'never'`.

    - `'never'` will not revalidate models and dataclasses during validation
    - `'always'` will revalidate models and dataclasses during validation
    - `'subclass-instances'` will revalidate models and dataclasses during validation if the instance is a
        subclass of the model or dataclass

    See [Revalidate Instances](../usage/model_config.md#revalidate-instances) for details.
    """

    ser_json_timedelta: Literal['iso8601', 'float']
    """
    The format of JSON serialized timedeltas. Accepts the string values of `'iso8601'` and
    `'float'`. Defaults to `'iso8601'`.

    - `'iso8601'` will serialize timedeltas to ISO 8601 durations.
    - `'float'` will serialize timedeltas to the total number of seconds.
    """

    ser_json_bytes: Literal['utf8', 'base64']
    """
    The encoding of JSON serialized bytes. Accepts the string values of `'utf8'` and `'base64'`.
    Defaults to `'utf8'`.

    - `'utf8'` will serialize bytes to UTF-8 strings.
    - `'base64'` will serialize bytes to base64 strings.
    """

    # whether to validate default values during validation, default False
    validate_default: bool
    """Whether to validate default values during validation. Defaults to `False`."""

    validate_return: bool
    """Whether to validate the return value from call validators."""

    protected_namespaces: tuple[str, ...]
    """
    A `tuple` of strings that prevents the model from having fields whose names conflict with them.
    Defaults to `('model_',)`.

    See [Protected Namespaces](../usage/model_config.md#protected-namespaces) for details.
    """

    hide_input_in_errors: bool
    """
    Whether to hide inputs when printing errors. Defaults to `False`.

    See [Hide Input in Errors](../usage/model_config.md#hide-input-in-errors).
    """

    defer_build: bool
    """
    Whether to defer model validator and serializer construction until the first model validation.

    This can be useful to avoid the overhead of building models which are only used nested within
    other models, or when you want to manually define type namespace via
    [`Model.model_rebuild(_types_namespace=...)`][pydantic.BaseModel.model_rebuild]. Defaults to `False`.
    """

    schema_generator: type[_GenerateSchema] | None
    """
    A custom core schema generator class to use when generating JSON schemas. Useful if you want to
    change the way types are validated across an entire model/schema.

    The `GenerateSchema` interface is subject to change, currently only the `string_schema` method is public.

    See [#6737](https://github.com/pydantic/pydantic/pull/6737) for details.

    Defaults to `None`.
    """
null
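A minimal sketch of how this TypedDict is consumed in practice, via the `model_config` attribute on a model; `User` and its fields are illustrative.

from pydantic import BaseModel, ConfigDict

class User(BaseModel):
    # model_config is the V2 replacement for the V1 inner Config class.
    model_config = ConfigDict(str_strip_whitespace=True, frozen=True, extra='forbid')

    name: str

u = User(name='  alice  ')
print(u.name)  # alice  (whitespace stripped during validation)
# Because frozen=True, `u.name = 'bob'` would raise a ValidationError,
# and instances are hashable since all fields are hashable.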
726,192
penne.delegates
Delegate
Parent class for all delegates Defines general methods that should be available for all delegates. In this context, a delegate refers to an object in a NOODLES scene that can be subclassed and extended by the user. For example, a user can create an implementation for a table that specifically suits their needs. The client's job is essentially to manage the state of all delegates and to call the appropriate methods on them when necessary. Most user-defined methods will likewise manipulate the state of the delegates. Attributes: client (Client): Client delegate is attached to id (ID): Unique identifier for delegate name (str): Name of delegate signals (dict): Signals that can be called on delegate, method name to callable
class Delegate(NoodleObject):
    """Parent class for all delegates

    Defines general methods that should be available for all delegates. In this context,
    a delegate refers to an object in a NOODLES scene that can be subclassed and extended
    by the user. For example, a user can create an implementation for a table that
    specifically suits their needs. The client's job is essentially to manage the state of
    all delegates and to call the appropriate methods on them when necessary. Most
    user-defined methods will likewise manipulate the state of the delegates.

    Attributes:
        client (Client): Client delegate is attached to
        id (ID): Unique identifier for delegate
        name (str): Name of delegate
        signals (dict): Signals that can be called on delegate, method name to callable
    """

    client: object = None
    id: ID = None
    name: Optional[str] = "No-Name"
    signals: Optional[dict] = {}

    def __str__(self):
        return f"{self.name} - {type(self).__name__} - {self.id.compact_str()}"

    # For all except Document Delegate
    def on_new(self, message: dict):
        pass

    # For Document, Table, Entity, Plot, Material, Light Delegates
    def on_update(self, message: dict):
        pass

    # For all except Document Delegate
    def on_remove(self, message: dict):
        pass
(*, client: object = None, id: penne.delegates.ID = None, name: Optional[str] = 'No-Name', signals: Optional[dict] = {}, **extra_data: Any) -> None
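A sketch of the extension pattern this docstring describes: subclass a concrete delegate and override the lifecycle hooks. `delegates.Table` is assumed here to be one of the library's concrete delegate classes; check the client's constructor for how custom subclasses are registered.

from penne import delegates

class LoggingTable(delegates.Table):
    """Table delegate that logs its lifecycle events."""

    def on_new(self, message: dict):
        print(f"table created: {self}")

    def on_update(self, message: dict):
        print(f"table updated: {self.id.compact_str()}")

    def on_remove(self, message: dict):
        print(f"table removed: {self.name}")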
726,223
penne.delegates
DirectionalLight
Directional light information for a light delegate Attributes: range (float): range of the light; -1 indicates an infinite range
class DirectionalLight(NoodleObject):
    """Directional light information for a light delegate

    Attributes:
        range (float): range of the light; -1 indicates an infinite range
    """
    range: float = -1.0
(*, range: float = -1.0, **extra_data: Any) -> None
726,251
penne.delegates
Document
Delegate for document Attributes: name (str): name will be "Document" methods_list (list[MethodID]): list of methods available on the document signals_list (list[SignalID]): list of signals available on the document
class Document(Delegate):
    """Delegate for document

    Attributes:
        name (str): name will be "Document"
        methods_list (list[MethodID]): list of methods available on the document
        signals_list (list[SignalID]): list of signals available on the document
    """

    name: str = "Document"
    methods_list: List[MethodID] = []  # Server usually sends as an update
    signals_list: List[SignalID] = []

    client_view: Optional[InjectedMethod] = None

    def on_update(self, message: dict):
        """Handler when update message is received

        Should update methods_list and signals_list

        Args:
            message (Message): update message with the new document's info
        """
        if "methods_list" in message:
            self.methods_list = [MethodID(*element) for element in message["methods_list"]]
            inject_methods(self, self.methods_list)
        if "signals_list" in message:
            self.signals_list = [SignalID(*element) for element in message["signals_list"]]

    def reset(self):
        """Reset the document

        Called when document reset message is received. Will reset state, and clear methods
        and signals on document
        """
        self.client.state = {"document": self}
        self.methods_list = []
        self.signals_list = []

    def update_client_view(self, direction: Vec3, angle: float):
        """Notify the server of an area of interest for the client"""
        self.client_view(direction, angle)

    def show_methods(self):
        """Show methods available on the document"""
        if not self.methods_list:
            message = "No methods available"
        else:
            message = "-- Methods on Document --\n--------------------------------------\n"
            for method_id in self.methods_list:
                method = self.client.get_delegate(method_id)
                message += f">> {method}\n"

        print(message)
        return message
(*, client: object = None, id: penne.delegates.ID = None, name: str = 'Document', signals: Optional[dict] = {}, methods_list: List[penne.delegates.MethodID] = [], signals_list: List[penne.delegates.SignalID] = [], client_view: Optional[penne.delegates.InjectedMethod] = None, **extra_data: Any) -> None
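A short sketch of the document delegate's role as the root of client state, assuming a connected `client`.

# The document is always present under the "document" key in client state.
document = client.state["document"]
document.show_methods()  # print the methods the server injected

# A document reset clears state back to just the document delegate:
document.reset()
assert client.state == {"document": document}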
726,281
penne.delegates
on_update
Handler when update message is received Should update methods_list and signals_list Args: message (Message): update message with the new document's info
def on_update(self, message: dict): """Handler when update message is received Should update methods_list and signals_list Args: message (Message): update message with the new document's info """ if "methods_list" in message: self.methods_list = [MethodID(*element) for element in message["methods_list"]] inject_methods(self, self.methods_list) if "signals_list" in message: self.signals_list = [SignalID(*element) for element in message["signals_list"]]
(self, message: dict)
726,282
penne.delegates
reset
Reset the document Called when document reset message is received. Will reset state, and clear methods and signals on document
def reset(self): """Reset the document Called when document reset message is received. Will reset state, and clear methods and signals on document """ self.client.state = {"document": self} self.methods_list = [] self.signals_list = []
(self)
726,283
penne.delegates
show_methods
Show methods available on the document
def show_methods(self):
    """Show methods available on the document"""
    if not self.methods_list:
        message = "No methods available"
    else:
        message = "-- Methods on Document --\n--------------------------------------\n"
        for method_id in self.methods_list:
            method = self.client.get_delegate(method_id)
            message += f">> {method}\n"

    print(message)
    return message
(self)