index | package | name | docstring | code | signature |
---|---|---|---|---|---|
39,246 | segno | __init__ | Initializes the QR Code object.
:param code: An object with a ``matrix``, ``version``, ``error``,
``mask`` and ``segments`` attributes.
| def __init__(self, code):
"""\
Initializes the QR Code object.
:param code: An object with a ``matrix``, ``version``, ``error``,
``mask`` and ``segments`` attributes.
"""
matrix = code.matrix
self.matrix = matrix
"""Returns the matrix.
:rtype: tuple of :py:class:`bytearray` instances.
"""
self.mask = code.mask
"""Returns the data mask pattern reference
:rtype: int
"""
self._matrix_size = len(matrix[0]), len(matrix)
self._version = code.version
self._error = code.error
self._mode = code.segments[0].mode if len(code.segments) == 1 else None
| (self, code) |
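The constructor above only wires up the encoder result; the public ``matrix`` and ``mask`` attributes are then readable on any ``QRCode``. A minimal sketch, assuming ``segno`` is installed and ``segno.make`` (documented further below) returns such an object:

import segno

qrcode = segno.make('Hello')
print(qrcode.mask)                                           # data mask pattern reference (int)
dark_modules = sum(row.count(0x1) for row in qrcode.matrix)  # matrix rows are bytearrays
print(dark_modules)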
39,247 | segno | matrix_iter | Returns an iterator over the matrix which includes the border.
The border is returned as a sequence of light modules.
Dark modules are reported as ``0x1``, light modules have the value
``0x0``.
The following example converts the QR code matrix into a list of
lists which use boolean values for the modules (True = dark module,
False = light module)::
>>> import segno
>>> qrcode = segno.make('The Beatles')
>>> width, height = qrcode.symbol_size(scale=2)
>>> res = []
>>> # Scaling factor 2, default border
>>> for row in qrcode.matrix_iter(scale=2):
>>> res.append([col == 0x1 for col in row])
>>> width == len(res[0])
True
>>> height == len(res)
True
If `verbose` is ``True``, the iterator returns integer constants which
indicate the type of the module, i.e. ``segno.consts.TYPE_FINDER_PATTERN_DARK``,
``segno.consts.TYPE_FINDER_PATTERN_LIGHT``, ``segno.consts.TYPE_QUIET_ZONE`` etc.
To check if the returned module type is dark or light, use::
if mt >> 8:
print('dark module')
if not mt >> 8:
print('light module')
:param int scale: The scaling factor (default: ``1``).
:param int border: The size of border / quiet zone or ``None`` to
indicate the default border.
:param bool verbose: Indicates if the type of the module should be returned
instead of ``0x1`` and ``0x0`` values.
See :py:mod:`segno.consts` for the return values.
This feature is currently in EXPERIMENTAL state.
:raises: :py:exc:`ValueError` if the scaling factor or the border is
invalid (i.e. negative).
| def matrix_iter(self, scale=1, border=None, verbose=False):
"""\
Returns an iterator over the matrix which includes the border.
The border is returned as a sequence of light modules.
Dark modules are reported as ``0x1``, light modules have the value
``0x0``.
The following example converts the QR code matrix into a list of
lists which use boolean values for the modules (True = dark module,
False = light module)::
>>> import segno
>>> qrcode = segno.make('The Beatles')
>>> width, height = qrcode.symbol_size(scale=2)
>>> res = []
>>> # Scaling factor 2, default border
>>> for row in qrcode.matrix_iter(scale=2):
>>> res.append([col == 0x1 for col in row])
>>> width == len(res[0])
True
>>> height == len(res)
True
If `verbose` is ``True``, the iterator returns integer constants which
indicate the type of the module, i.e. ``segno.consts.TYPE_FINDER_PATTERN_DARK``,
``segno.consts.TYPE_FINDER_PATTERN_LIGHT``, ``segno.consts.TYPE_QUIET_ZONE`` etc.
To check if the returned module type is dark or light, use::
if mt >> 8:
print('dark module')
if not mt >> 8:
print('light module')
:param int scale: The scaling factor (default: ``1``).
:param int border: The size of border / quiet zone or ``None`` to
indicate the default border.
:param bool verbose: Indicates if the type of the module should be returned
instead of ``0x1`` and ``0x0`` values.
See :py:mod:`segno.consts` for the return values.
This feature is currently in EXPERIMENTAL state.
:raises: :py:exc:`ValueError` if the scaling factor or the border is
invalid (i.e. negative).
"""
iterfn = utils.matrix_iter_verbose if verbose else utils.matrix_iter
return iterfn(self.matrix, self._matrix_size, scale, border)
| (self, scale=1, border=None, verbose=False) |
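A short sketch of the (experimental) verbose mode described above, using the ``mt >> 8`` check from the docstring to classify modules; only API shown in this excerpt is used:

import segno

qrcode = segno.make('The Beatles')
dark = light = 0
for row in qrcode.matrix_iter(scale=1, verbose=True):
    for mt in row:
        if mt >> 8:        # dark module (any type)
            dark += 1
        else:              # light module, including the quiet zone
            light += 1
print(dark, light)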
39,248 | segno | png_data_uri | Converts the QR code into a PNG data URI.
Uses the same keyword parameters as the usual PNG serializer,
see :py:func:`save` and the available `PNG parameters <#png>`_
:rtype: str
| def png_data_uri(self, **kw):
"""\
Converts the QR code into a PNG data URI.
Uses the same keyword parameters as the usual PNG serializer,
see :py:func:`save` and the available `PNG parameters <#png>`_
:rtype: str
"""
return writers.as_png_data_uri(self.matrix, self._matrix_size, **kw)
| (self, **kw) |
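A usage sketch: the returned data URI can be embedded directly into an HTML ``<img>`` element; the keyword arguments are the PNG options documented under ``save`` below.

import segno

qrcode = segno.make('Hello')
uri = qrcode.png_data_uri(scale=4, dark='#00457c')
html = '<img src="{0}" alt="QR code">'.format(uri)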
39,249 | segno | save | Serializes the QR code in one of the supported formats.
The serialization format depends on the filename extension.
.. _common_keywords:
**Common keywords**
========== ==============================================================
Name Description
========== ==============================================================
scale Integer or float indicating the size of a single module.
Default: 1. The interpretation of the scaling factor depends
on the serializer. For pixel-based output (like :ref:`PNG <png>`)
the scaling factor is interpreted as pixel-size (1 = 1 pixel).
:ref:`EPS <eps>` interprets ``1`` as 1 point (1/72 inch) per
module.
Some serializers (like :ref:`SVG <svg>`) accept float values.
If the serializer does not accept float values, the value will be
converted to an integer value (note: int(1.6) == 1).
border Integer indicating the size of the quiet zone.
If set to ``None`` (default), the recommended border size
will be used (``4`` for QR codes, ``2`` for Micro QR codes).
A value of ``0`` indicates that border should be omitted.
dark A string or tuple representing a color value for the dark
modules. The default value is "black". The color can be
provided as ``(R, G, B)`` tuple, as web color name
(like "red") or in hexadecimal format (``#RGB`` or
``#RRGGBB``). Some serializers (i.e. :ref:`SVG <svg>` and
:ref:`PNG <png>`) accept an alpha transparency value like
``#RRGGBBAA``.
light A string or tuple representing a color for the light modules.
See `dark` for valid values.
The default value depends on the serializer. :ref:`SVG <svg>`
uses no color (``None``) for light modules by default, other
serializers, like :ref:`PNG <png>`, use "white" as default
light color.
========== ==============================================================
.. _module_colors:
**Module Colors**
=============== =======================================================
Name Description
=============== =======================================================
finder_dark Color of the dark modules of the finder patterns
Default: undefined, use value of "dark"
finder_light Color of the light modules of the finder patterns
Default: undefined, use value of "light"
data_dark Color of the dark data modules
Default: undefined, use value of "dark"
data_light Color of the light data modules.
Default: undefined, use value of "light".
version_dark Color of the dark modules of the version information.
Default: undefined, use value of "dark".
version_light Color of the light modules of the version information,
Default: undefined, use value of "light".
format_dark Color of the dark modules of the format information.
Default: undefined, use value of "dark".
format_light Color of the light modules of the format information.
Default: undefined, use value of "light".
alignment_dark Color of the dark modules of the alignment patterns.
Default: undefined, use value of "dark".
alignment_light Color of the light modules of the alignment patterns.
Default: undefined, use value of "light".
timing_dark Color of the dark modules of the timing patterns.
Default: undefined, use value of "dark".
timing_light Color of the light modules of the timing patterns.
Default: undefined, use value of "light".
separator Color of the separator.
Default: undefined, use value of "light".
dark_module Color of the dark module (a single dark module which
occurs in all QR Codes but not in Micro QR Codes).
Default: undefined, use value of "dark".
quiet_zone Color of the quiet zone / border.
Default: undefined, use value of "light".
=============== =======================================================
.. _svg:
**Scalable Vector Graphics (SVG)**
All :ref:`common keywords <common_keywords>` and :ref:`module colors <module_colors>`
are supported.
================ ==============================================================
Name Description
================ ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "svg" or "svgz" (to create a gzip compressed SVG)
scale integer or float
dark Default: "#000" (black)
``None`` is a valid value. If set to ``None``, the resulting
path won't have a "stroke" attribute. The "stroke" attribute
may be defined via CSS (external).
If an alpha channel is defined, the output depends on the
SVG version used. For SVG versions >= 2.0, the "stroke"
attribute will have a value like "rgba(R, G, B, A)", otherwise
the path gets another attribute "stroke-opacity" to emulate
the alpha channel.
To minimize the document size, the SVG serializer
automatically uses the shortest color representation: If
a value like "#000000" is provided, the resulting
document will have a color value of "#000". If the color
is "#FF0000", the resulting color is not "#F00", but
the web color name "red".
light Default value ``None``. If this parameter is set to another
value, the resulting image will have another path which
is used to define the color of the light modules.
If an alpha channel is used, the resulting path may
have a "fill-opacity" attribute (for SVG version < 2.0)
or the "fill" attribute has a "rgba(R, G, B, A)" value.
xmldecl Boolean value (default: ``True``) indicating whether the
document should have an XML declaration header.
Set to ``False`` to omit the header.
svgns Boolean value (default: ``True``) indicating whether the
document should have an explicit SVG namespace declaration.
Set to ``False`` to omit the namespace declaration.
The latter might be useful if the document should be
embedded into an HTML5 document where the SVG namespace
is implicitly defined.
title String (default: ``None``) Optional title of the generated
SVG document.
desc String (default: ``None``) Optional description of the
generated SVG document.
svgid A string indicating the ID of the SVG document
(if set to ``None`` (default), the SVG element won't have
an ID).
svgclass Default: "segno". The CSS class of the SVG document
(if set to ``None``, the SVG element won't have a class).
lineclass Default: "qrline". The CSS class of the path element
(which draws the dark modules). If set to ``None``, the path
won't have a class.
omitsize Indicates if width and height attributes should be
omitted (default: ``False``). If these attributes are
omitted, a ``viewBox`` attribute will be added to the
document.
unit Default: ``None``
Indicates the unit for width / height and other coordinates.
By default, the unit is unspecified and all values are
in the user space.
Valid values: em, ex, px, pt, pc, cm, mm, in, and percentages
(any string is accepted, this parameter is not validated
by the serializer)
encoding Encoding of the XML document. "utf-8" by default.
svgversion SVG version (default: ``None``). If specified (a float),
the resulting document has an explicit "version" attribute.
If set to ``None``, the document won't have a "version"
attribute. This parameter is not validated.
compresslevel Default: 9. This parameter is only valid if a compressed
SVG document should be created (file extension "svgz").
1 is fastest and produces the least compression, 9 is slowest
and produces the most. 0 is no compression.
draw_transparent Indicates if transparent SVG paths should be
added to the graphic (default: ``False``)
nl Indicates if the document should have a trailing newline
(default: ``True``)
================ ==============================================================
.. _png:
**Portable Network Graphics (PNG)**
This writes either a grayscale (maybe with transparency) PNG (color type 0)
or a palette-based (maybe with transparency) image (color type 3).
If the dark / light values are ``None``, white or black, the serializer
chooses the more compact grayscale mode, in all other cases a palette-based
image is written.
All :ref:`common keywords <common_keywords>` and :ref:`module colors <module_colors>`
are supported.
=============== ==============================================================
Name Description
=============== ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "png"
scale integer
dark Default: "#000" (black)
``None`` is a valid value iff light is not ``None``.
If set to ``None``, the dark modules become transparent.
light Default value "#fff" (white)
See keyword "dark" for further details.
compresslevel Default: 9. Integer indicating the compression level
for the ``IDAT`` (data) chunk.
1 is fastest and produces the least compression, 9 is slowest
and produces the most. 0 is no compression.
dpi Default: ``None``. Specifies the DPI value for the image.
By default, the DPI value is unspecified. Please note
that the DPI value is converted into meters (maybe with
rounding errors) since PNG does not support the unit
"dots per inch".
=============== ==============================================================
.. _eps:
**Encapsulated PostScript (EPS)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "eps"
scale integer or float
dark Default: "#000" (black)
light Default value: ``None`` (transparent light modules)
============= ==============================================================
.. _pdf:
**Portable Document Format (PDF)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "pdf"
scale integer or float
dark Default: "#000" (black)
light Default value: ``None`` (transparent light modules)
compresslevel Default: 9. Integer indicating the compression level.
1 is fastest and produces the least compression, 9 is slowest
and produces the most. 0 is no compression.
============= ==============================================================
.. _txt:
**Text (TXT)**
Aside of "scale", all :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "txt"
dark Default: "1"
light Default: "0"
============= ==============================================================
.. _ansi:
**ANSI escape code**
Supports the "border" keyword, only!
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "ans"
============= ==============================================================
.. _pbm:
**Portable Bitmap (PBM)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "pbm"
scale integer
plain Default: False. Boolean to switch between the P4 and P1 format.
If set to ``True``, the (outdated) P1 serialization format is
used.
============= ==============================================================
.. _pam:
**Portable Arbitrary Map (PAM)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "pam"
scale integer
dark Default: "#000" (black).
light Default value "#fff" (white). Use ``None`` for transparent
light modules.
============= ==============================================================
.. _ppm:
**Portable Pixmap (PPM)**
All :ref:`common keywords <common_keywords>` and :ref:`module colors <module_colors>`
are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "ppm"
scale integer
dark Default: "#000" (black).
light Default value "#fff" (white).
============= ==============================================================
.. _latex:
**LaTeX / PGF/TikZ**
To use the output of this serializer, the ``PGF/TikZ`` (and optionally
``hyperref``) package is required in the LaTeX environment. The
serializer itself does not depend on any external packages.
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "tex"
scale integer or float
dark LaTeX color name (default: "black"). The color is written
"at it is", please ensure that the color is a standard color
or it has been defined in the enclosing LaTeX document.
url Default: ``None``. Optional URL where the QR code should
point to. Requires the ``hyperref`` package in the LaTeX
environment.
============= ==============================================================
.. _xbm:
**X BitMap (XBM)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "xbm"
scale integer
name Name of the variable (default: "img")
============= ==============================================================
.. _xpm:
**X PixMap (XPM)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "xpm"
scale integer
dark Default: "#000" (black).
``None`` indicates transparent dark modules.
light Default value "#fff" (white)
``None`` indicates transparent light modules.
name Name of the variable (default: "img")
============= ==============================================================
:param out: A filename or a writable file-like object with a
``name`` attribute. Use the :paramref:`kind <segno.QRCode.save.kind>`
parameter if `out` is a :py:class:`io.BytesIO` or
:py:class:`io.StringIO` stream, which doesn't have a ``name``
attribute.
:param str kind: Default ``None``.
If the desired output format cannot be determined from
the :paramref:`out <segno.QRCode.save.out>` parameter, this
parameter can be used to indicate the serialization format
(i.e. "svg" to enforce SVG output). The value is case
insensitive.
:param kw: Any of the supported keywords by the specific serializer.
| def save(self, out, kind=None, **kw):
"""\
Serializes the QR code in one of the supported formats.
The serialization format depends on the filename extension.
.. _common_keywords:
**Common keywords**
========== ==============================================================
Name Description
========== ==============================================================
scale Integer or float indicating the size of a single module.
Default: 1. The interpretation of the scaling factor depends
on the serializer. For pixel-based output (like :ref:`PNG <png>`)
the scaling factor is interpreted as pixel-size (1 = 1 pixel).
:ref:`EPS <eps>` interprets ``1`` as 1 point (1/72 inch) per
module.
Some serializers (like :ref:`SVG <svg>`) accept float values.
If the serializer does not accept float values, the value will be
converted to an integer value (note: int(1.6) == 1).
border Integer indicating the size of the quiet zone.
If set to ``None`` (default), the recommended border size
will be used (``4`` for QR codes, ``2`` for Micro QR codes).
A value of ``0`` indicates that border should be omitted.
dark A string or tuple representing a color value for the dark
modules. The default value is "black". The color can be
provided as ``(R, G, B)`` tuple, as web color name
(like "red") or in hexadecimal format (``#RGB`` or
``#RRGGBB``). Some serializers (i.e. :ref:`SVG <svg>` and
:ref:`PNG <png>`) accept an alpha transparency value like
``#RRGGBBAA``.
light A string or tuple representing a color for the light modules.
See `dark` for valid values.
The default value depends on the serializer. :ref:`SVG <svg>`
uses no color (``None``) for light modules by default, other
serializers, like :ref:`PNG <png>`, use "white" as default
light color.
========== ==============================================================
.. _module_colors:
**Module Colors**
=============== =======================================================
Name Description
=============== =======================================================
finder_dark Color of the dark modules of the finder patterns
Default: undefined, use value of "dark"
finder_light Color of the light modules of the finder patterns
Default: undefined, use value of "light"
data_dark Color of the dark data modules
Default: undefined, use value of "dark"
data_light Color of the light data modules.
Default: undefined, use value of "light".
version_dark Color of the dark modules of the version information.
Default: undefined, use value of "dark".
version_light Color of the light modules of the version information,
Default: undefined, use value of "light".
format_dark Color of the dark modules of the format information.
Default: undefined, use value of "dark".
format_light Color of the light modules of the format information.
Default: undefined, use value of "light".
alignment_dark Color of the dark modules of the alignment patterns.
Default: undefined, use value of "dark".
alignment_light Color of the light modules of the alignment patterns.
Default: undefined, use value of "light".
timing_dark Color of the dark modules of the timing patterns.
Default: undefined, use value of "dark".
timing_light Color of the light modules of the timing patterns.
Default: undefined, use value of "light".
separator Color of the separator.
Default: undefined, use value of "light".
dark_module Color of the dark module (a single dark module which
occurs in all QR Codes but not in Micro QR Codes).
Default: undefined, use value of "dark".
quiet_zone Color of the quiet zone / border.
Default: undefined, use value of "light".
=============== =======================================================
.. _svg:
**Scalable Vector Graphics (SVG)**
All :ref:`common keywords <common_keywords>` and :ref:`module colors <module_colors>`
are supported.
================ ==============================================================
Name Description
================ ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "svg" or "svgz" (to create a gzip compressed SVG)
scale integer or float
dark Default: "#000" (black)
``None`` is a valid value. If set to ``None``, the resulting
path won't have a "stroke" attribute. The "stroke" attribute
may be defined via CSS (external).
If an alpha channel is defined, the output depends on the
SVG version used. For SVG versions >= 2.0, the "stroke"
attribute will have a value like "rgba(R, G, B, A)", otherwise
the path gets another attribute "stroke-opacity" to emulate
the alpha channel.
To minimize the document size, the SVG serializer
automatically uses the shortest color representation: If
a value like "#000000" is provided, the resulting
document will have a color value of "#000". If the color
is "#FF0000", the resulting color is not "#F00", but
the web color name "red".
light Default value ``None``. If this parameter is set to another
value, the resulting image will have another path which
is used to define the color of the light modules.
If an alpha channel is used, the resulting path may
have a "fill-opacity" attribute (for SVG version < 2.0)
or the "fill" attribute has a "rgba(R, G, B, A)" value.
xmldecl Boolean value (default: ``True``) indicating whether the
document should have an XML declaration header.
Set to ``False`` to omit the header.
svgns Boolean value (default: ``True``) indicating whether the
document should have an explicit SVG namespace declaration.
Set to ``False`` to omit the namespace declaration.
The latter might be useful if the document should be
embedded into an HTML5 document where the SVG namespace
is implicitly defined.
title String (default: ``None``) Optional title of the generated
SVG document.
desc String (default: ``None``) Optional description of the
generated SVG document.
svgid A string indicating the ID of the SVG document
(if set to ``None`` (default), the SVG element won't have
an ID).
svgclass Default: "segno". The CSS class of the SVG document
(if set to ``None``, the SVG element won't have a class).
lineclass Default: "qrline". The CSS class of the path element
(which draws the dark modules). If set to ``None``, the path
won't have a class.
omitsize Indicates if width and height attributes should be
omitted (default: ``False``). If these attributes are
omitted, a ``viewBox`` attribute will be added to the
document.
unit Default: ``None``
Indicates the unit for width / height and other coordinates.
By default, the unit is unspecified and all values are
in the user space.
Valid values: em, ex, px, pt, pc, cm, mm, in, and percentages
(any string is accepted, this parameter is not validated
by the serializer)
encoding Encoding of the XML document. "utf-8" by default.
svgversion SVG version (default: ``None``). If specified (a float),
the resulting document has an explicit "version" attribute.
If set to ``None``, the document won't have a "version"
attribute. This parameter is not validated.
compresslevel Default: 9. This parameter is only valid if a compressed
SVG document should be created (file extension "svgz").
1 is fastest and produces the least compression, 9 is slowest
and produces the most. 0 is no compression.
draw_transparent Indicates if transparent SVG paths should be
added to the graphic (default: ``False``)
nl Indicates if the document should have a trailing newline
(default: ``True``)
================ ==============================================================
.. _png:
**Portable Network Graphics (PNG)**
This writes either a grayscale (maybe with transparency) PNG (color type 0)
or a palette-based (maybe with transparency) image (color type 3).
If the dark / light values are ``None``, white or black, the serializer
chooses the more compact grayscale mode, in all other cases a palette-based
image is written.
All :ref:`common keywords <common_keywords>` and :ref:`module colors <module_colors>`
are supported.
=============== ==============================================================
Name Description
=============== ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "png"
scale integer
dark Default: "#000" (black)
``None`` is a valid value iff light is not ``None``.
If set to ``None``, the dark modules become transparent.
light Default value "#fff" (white)
See keyword "dark" for further details.
compresslevel Default: 9. Integer indicating the compression level
for the ``IDAT`` (data) chunk.
1 is fastest and produces the least compression, 9 is slowest
and produces the most. 0 is no compression.
dpi Default: ``None``. Specifies the DPI value for the image.
By default, the DPI value is unspecified. Please note
that the DPI value is converted into meters (maybe with
rounding errors) since PNG does not support the unit
"dots per inch".
=============== ==============================================================
.. _eps:
**Encapsulated PostScript (EPS)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "eps"
scale integer or float
dark Default: "#000" (black)
light Default value: ``None`` (transparent light modules)
============= ==============================================================
.. _pdf:
**Portable Document Format (PDF)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "pdf"
scale integer or float
dark Default: "#000" (black)
light Default value: ``None`` (transparent light modules)
compresslevel Default: 9. Integer indicating the compression level.
1 is fastest and produces the least compression, 9 is slowest
and produces the most. 0 is no compression.
============= ==============================================================
.. _txt:
**Text (TXT)**
Aside of "scale", all :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "txt"
dark Default: "1"
light Default: "0"
============= ==============================================================
.. _ansi:
**ANSI escape code**
Supports the "border" keyword, only!
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "ans"
============= ==============================================================
.. _pbm:
**Portable Bitmap (PBM)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "pbm"
scale integer
plain Default: False. Boolean to switch between the P4 and P1 format.
If set to ``True``, the (outdated) P1 serialization format is
used.
============= ==============================================================
.. _pam:
**Portable Arbitrary Map (PAM)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "pam"
scale integer
dark Default: "#000" (black).
light Default value "#fff" (white). Use ``None`` for transparent
light modules.
============= ==============================================================
.. _ppm:
**Portable Pixmap (PPM)**
All :ref:`common keywords <common_keywords>` and :ref:`module colors <module_colors>`
are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.BytesIO`
kind "ppm"
scale integer
dark Default: "#000" (black).
light Default value "#fff" (white).
============= ==============================================================
.. _latex:
**LaTeX / PGF/TikZ**
To use the output of this serializer, the ``PGF/TikZ`` (and optionally
``hyperref``) package is required in the LaTeX environment. The
serializer itself does not depend on any external packages.
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "tex"
scale integer or float
dark LaTeX color name (default: "black"). The color is written
"at it is", please ensure that the color is a standard color
or it has been defined in the enclosing LaTeX document.
url Default: ``None``. Optional URL where the QR code should
point to. Requires the ``hyperref`` package in the LaTeX
environment.
============= ==============================================================
.. _xbm:
**X BitMap (XBM)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "xbm"
scale integer
name Name of the variable (default: "img")
============= ==============================================================
.. _xpm:
**X PixMap (XPM)**
All :ref:`common keywords <common_keywords>` are supported.
============= ==============================================================
Name Description
============= ==============================================================
out Filename or :py:class:`io.StringIO`
kind "xpm"
scale integer
dark Default: "#000" (black).
``None`` indicates transparent dark modules.
light Default value "#fff" (white)
``None`` indicates transparent light modules.
name Name of the variable (default: "img")
============= ==============================================================
:param out: A filename or a writable file-like object with a
``name`` attribute. Use the :paramref:`kind <segno.QRCode.save.kind>`
parameter if `out` is a :py:class:`io.BytesIO` or
:py:class:`io.StringIO` stream, which doesn't have a ``name``
attribute.
:param str kind: Default ``None``.
If the desired output format cannot be determined from
the :paramref:`out <segno.QRCode.save.out>` parameter, this
parameter can be used to indicate the serialization format
(i.e. "svg" to enforce SVG output). The value is case
insensitive.
:param kw: Any of the supported keywords by the specific serializer.
"""
writers.save(self.matrix, self._matrix_size, out, kind, **kw)
| (self, out, kind=None, **kw) |
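A few hedged examples of the ``save`` keywords documented above; the file names are placeholders.

import io
import segno

qrcode = segno.make('Hello world')
qrcode.save('hello.png', scale=5, border=2, dark='#00457c')   # format derived from the extension
qrcode.save('hello.svg', scale=2.5, light=None)               # SVG with transparent light modules
buff = io.BytesIO()
qrcode.save(buff, kind='png', scale=5)                        # streams need an explicit kind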
39,250 | segno | show | Displays this QR code.
This method is mainly intended for debugging purposes.
This method saves the QR code as an image (by default with a scaling
factor of 10) to a temporary file and opens it with the standard PNG
viewer application or within the standard web browser.
The temporary file is deleted afterwards (unless
:paramref:`delete_after <segno.QRCode.show.delete_after>` is set to ``None``).
If this method does not show any result, try to increase the
:paramref:`delete_after <segno.QRCode.show.delete_after>` value or set
it to ``None``
:param delete_after: Time in seconds to wait till the temporary file is
deleted.
:type delete_after: int or None
:param int scale: Integer indicating the size of a single module.
:param border: Integer indicating the size of the quiet zone.
If set to ``None`` (default), the recommended border size
will be used.
:type border: int or None
:param dark: The color of the dark modules (default: black).
:param light: The color of the light modules (default: white).
| def show(self, delete_after=20, scale=10, border=None, dark='#000',
light='#fff'): # pragma: no cover
"""\
Displays this QR code.
This method is mainly intended for debugging purposes.
This method saves the QR code as an image (by default with a scaling
factor of 10) to a temporary file and opens it with the standard PNG
viewer application or within the standard web browser.
The temporary file is deleted afterwards (unless
:paramref:`delete_after <segno.QRCode.show.delete_after>` is set to ``None``).
If this method does not show any result, try to increase the
:paramref:`delete_after <segno.QRCode.show.delete_after>` value or set
it to ``None``
:param delete_after: Time in seconds to wait till the temporary file is
deleted.
:type delete_after: int or None
:param int scale: Integer indicating the size of a single module.
:param border: Integer indicating the size of the quiet zone.
If set to ``None`` (default), the recommended border size
will be used.
:type border: int or None
:param dark: The color of the dark modules (default: black).
:param light: The color of the light modules (default: white).
"""
import os
import time
import tempfile
import webbrowser
import threading
from urllib.parse import urljoin
from urllib.request import pathname2url
def delete_file(name):
time.sleep(delete_after)
try:
os.unlink(name)
except OSError:
pass
f = tempfile.NamedTemporaryFile('wb', suffix='.png', delete=False)
try:
self.save(f, scale=scale, dark=dark, light=light, border=border)
except: # noqa: E722
f.close()
os.unlink(f.name)
raise
f.close()
webbrowser.open_new_tab(urljoin('file:', pathname2url(f.name)))
if delete_after is not None:
t = threading.Thread(target=delete_file, args=(f.name,))
t.start()
| (self, delete_after=20, scale=10, border=None, dark='#000', light='#fff') |
39,251 | segno | svg_data_uri | Converts the QR code into an SVG data URI.
The XML declaration is omitted by default (set
:paramref:`xmldecl <segno.QRCode.svg_data_uri.xmldecl>` to ``True``
to enable it), further the newline is omitted by default (set ``nl`` to
``True`` to enable it).
Aside from the missing `out` parameter, the different `xmldecl` and
`nl` default values, and the additional parameters
:paramref:`encode_minimal <segno.QRCode.svg_data_uri.encode_minimal>`
and :paramref:`omit_charset <segno.QRCode.svg_data_uri.omit_charset>`,
this method uses the same parameters as the usual SVG serializer, see
:py:func:`save` and the available `SVG parameters <#svg>`_
.. note::
In order to embed an SVG image in HTML without generating a file, the
:py:func:`svg_inline` method may yield better results, as it
usually produces a smaller output.
:param bool xmldecl: Indicates if the XML declaration should be
serialized (default: ``False``)
:param bool encode_minimal: Indicates if the resulting data URI should
use minimal percent encoding (disabled by default).
:param bool omit_charset: Indicates if the ``;charset=...`` should be omitted
(disabled by default)
:param bool nl: Indicates if the document should have a trailing newline
(default: ``False``)
:rtype: str
| def svg_data_uri(self, xmldecl=False, encode_minimal=False,
omit_charset=False, nl=False, **kw):
"""\
Converts the QR code into an SVG data URI.
The XML declaration is omitted by default (set
:paramref:`xmldecl <segno.QRCode.svg_data_uri.xmldecl>` to ``True``
to enable it), further the newline is omitted by default (set ``nl`` to
``True`` to enable it).
Aside from the missing `out` parameter, the different `xmldecl` and
`nl` default values, and the additional parameters
:paramref:`encode_minimal <segno.QRCode.svg_data_uri.encode_minimal>`
and :paramref:`omit_charset <segno.QRCode.svg_data_uri.omit_charset>`,
this method uses the same parameters as the usual SVG serializer, see
:py:func:`save` and the available `SVG parameters <#svg>`_
.. note::
In order to embed an SVG image in HTML without generating a file, the
:py:func:`svg_inline` method may yield better results, as it
usually produces a smaller output.
:param bool xmldecl: Indicates if the XML declaration should be
serialized (default: ``False``)
:param bool encode_minimal: Indicates if the resulting data URI should
use minimal percent encoding (disabled by default).
:param bool omit_charset: Indicates if the ``;charset=...`` should be omitted
(disabled by default)
:param bool nl: Indicates if the document should have a trailing newline
(default: ``False``)
:rtype: str
"""
return writers.as_svg_data_uri(self.matrix, self._matrix_size,
xmldecl=xmldecl, nl=nl,
encode_minimal=encode_minimal,
omit_charset=omit_charset, **kw)
| (self, xmldecl=False, encode_minimal=False, omit_charset=False, nl=False, **kw) |
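Usage sketch: the returned string works as the ``src`` of an ``<img>`` element or as a CSS ``background-image``; the SVG keywords from ``save`` (e.g. ``omitsize``) apply.

import segno

qrcode = segno.make('Hello')
uri = qrcode.svg_data_uri(dark='#228b22', scale=3, omitsize=True)
html = '<img src="{0}" alt="QR code">'.format(uri)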
39,252 | segno | svg_inline | Returns an SVG representation which is embeddable into HTML5 contexts.
Because HTML5 directly supports SVG, various elements of
an SVG document can or should be suppressed (i.e. the XML declaration and
the SVG namespace).
This method returns a string that can be used in an HTML context.
This method uses the same parameters as the usual SVG serializer, see
:py:func:`save` and the available `SVG parameters <#svg>`_ (the ``out``
and ``kind`` parameters are not supported).
The returned string can be used directly in
`Jinja <https://jinja.palletsprojects.com/>`_ and
`Django <https://www.djangoproject.com/>`_ templates, provided the
``safe`` filter is used which marks a string as not requiring further
HTML escaping prior to output.
::
<div>{{ qr.svg_inline(dark='#228b22', scale=3) | safe }}</div>
:rtype: str
| def svg_inline(self, **kw):
"""\
Returns an SVG representation which is embeddable into HTML5 contexts.
Because HTML5 directly supports SVG, various elements of
an SVG document can or should be suppressed (i.e. the XML declaration and
the SVG namespace).
This method returns a string that can be used in an HTML context.
This method uses the same parameters as the usual SVG serializer, see
:py:func:`save` and the available `SVG parameters <#svg>`_ (the ``out``
and ``kind`` parameters are not supported).
The returned string can be used directly in
`Jinja <https://jinja.palletsprojects.com/>`_ and
`Django <https://www.djangoproject.com/>`_ templates, provided the
``safe`` filter is used which marks a string as not requiring further
HTML escaping prior to output.
::
<div>{{ qr.svg_inline(dark='#228b22', scale=3) | safe }}</div>
:rtype: str
"""
buff = io.BytesIO()
self.save(buff, kind='svg', xmldecl=False, svgns=False, nl=False, **kw)
return buff.getvalue().decode(kw.get('encoding', 'utf-8'))
| (self, **kw) |
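Outside a template engine, the inline SVG can simply be concatenated into an HTML page, since the markup is meant to be emitted verbatim; a minimal sketch:

import segno

qrcode = segno.make('Hello')
page = '<!DOCTYPE html><html><body><div>{0}</div></body></html>'.format(
    qrcode.svg_inline(dark='#228b22', scale=3))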
39,253 | segno | symbol_size | Returns the symbol size (width x height) with the provided border and
scaling factor.
:param scale: Indicates the size of a single module (default: 1).
The size of a module depends on the output format used; e.g.
in a PNG context, a scaling factor of 2 indicates that a module
has a size of 2 x 2 pixels. Some outputs (e.g. SVG) accept
floating point values.
:type scale: int or float
:param int border: The border size or ``None`` to specify the
default quiet zone (4 for QR Codes, 2 for Micro QR Codes).
:rtype: tuple (width, height)
| def symbol_size(self, scale=1, border=None):
"""\
Returns the symbol size (width x height) with the provided border and
scaling factor.
:param scale: Indicates the size of a single module (default: 1).
The size of a module depends on the output format used; e.g.
in a PNG context, a scaling factor of 2 indicates that a module
has a size of 2 x 2 pixels. Some outputs (e.g. SVG) accept
floating point values.
:type scale: int or float
:param int border: The border size or ``None`` to specify the
default quiet zone (4 for QR Codes, 2 for Micro QR Codes).
:rtype: tuple (width, height)
"""
return utils.get_symbol_size(self._matrix_size, scale=scale, border=border)
| (self, scale=1, border=None) |
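Sketch: ``symbol_size`` mirrors the dimensions the serializers will produce, which is handy for pre-allocating an image canvas; border ``4`` matches the QR code default.

import segno

qrcode = segno.make('Hello')
width, height = qrcode.symbol_size(scale=10, border=4)
print(width, height)   # (module count + 2 * border) * scale per side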
39,254 | segno | terminal | Serializes the matrix as ANSI escape code or Unicode Block Elements
(if ``compact`` is ``True``).
Under Windows, no ANSI escape sequence is generated; the Windows
API is used instead, *unless* :paramref:`out <segno.QRCode.terminal.out>`
is a writable object, the WinAPI call fails, or ``compact`` is ``True``.
:param out: Filename or a file-like object that supports writing text.
If ``None`` (default), the matrix is written to :py:class:`sys.stdout`.
:param int border: Integer indicating the size of the quiet zone.
If set to ``None`` (default), the recommended border size
will be used (``4`` for QR Codes, ``2`` for Micro QR Codes).
:param bool compact: Indicates if a more compact QR code should be shown
(default: ``False``).
| def terminal(self, out=None, border=None, compact=False):
"""\
Serializes the matrix as ANSI escape code or Unicode Block Elements
(if ``compact`` is ``True``).
Under Windows, no ANSI escape sequence is generated; the Windows
API is used instead, *unless* :paramref:`out <segno.QRCode.terminal.out>`
is a writable object, the WinAPI call fails, or ``compact`` is ``True``.
:param out: Filename or a file-like object that supports writing text.
If ``None`` (default), the matrix is written to :py:class:`sys.stdout`.
:param int border: Integer indicating the size of the quiet zone.
If set to ``None`` (default), the recommended border size
will be used (``4`` for QR Codes, ``2`` for Micro QR Codes).
:param bool compact: Indicates if a more compact QR code should be shown
(default: ``False``).
"""
if compact:
writers.write_terminal_compact(self.matrix, self._matrix_size, out or sys.stdout, border)
elif out is None and sys.platform == 'win32': # pragma: no cover
# Windows < 10 does not support ANSI escape sequences, try to
# call a Windows-specific terminal output which uses the
# Windows API.
try:
writers.write_terminal_win(self.matrix, self._matrix_size, border)
except OSError:
# Use the standard output even if it may print garbage
writers.write_terminal(self.matrix, self._matrix_size, sys.stdout, border)
else:
writers.write_terminal(self.matrix, self._matrix_size, out or sys.stdout, border)
| (self, out=None, border=None, compact=False) |
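Sketch: write the compact Unicode representation to stdout and the ANSI variant to a file-like object, as allowed by the ``out`` parameter above; the file name is a placeholder.

import segno

qrcode = segno.make('Hello')
qrcode.terminal(compact=True)          # Unicode Block Elements to sys.stdout
with open('qr.ans', 'wt') as f:
    qrcode.terminal(out=f, border=2)   # ANSI escape codes to the open file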
39,255 | segno | QRCodeSequence | Represents a sequence of 1 .. n (max. n = 16) :py:class:`QRCode` instances.
Iff this sequence contains only one item, it behaves like :py:class:`QRCode`.
| class QRCodeSequence(tuple):
"""\
Represents a sequence of 1 .. n (max. n = 16) :py:class:`QRCode` instances.
Iff this sequence contains only one item, it behaves like :py:class:`QRCode`.
"""
__slots__ = ()
def __new__(cls, qrcodes):
return super(QRCodeSequence, cls).__new__(cls, qrcodes)
def terminal(self, out=None, border=None, compact=False):
"""\
Serializes the sequence of QR codes as ANSI escape code.
See :py:meth:`QRCode.terminal()` for details.
"""
for qrcode in self:
qrcode.terminal(out=out, border=border, compact=compact)
def save(self, out, kind=None, **kw):
"""\
Saves the sequence of QR codes to `out`.
If `out` is a filename, this method modifies the filename and adds
``<Number of QR codes>-<Current QR code>`` to it.
``structured-append.svg`` becomes (if the sequence contains two QR codes):
``structured-append-02-01.svg`` and ``structured-append-02-02.svg``
Please note that using a file or file-like object may result in an
invalid serialization format since all QR codes are written to the same
output.
See :py:meth:`QRCode.save()` for a detailed enumeration of options.
"""
filename = lambda o, n: o # noqa: E731
m = len(self)
if m > 1 and isinstance(out, str):
dot_idx = out.rfind('.')
if dot_idx > -1:
out = out[:dot_idx] + '-{0:02d}-{1:02d}' + out[dot_idx:]
filename = lambda o, n: o.format(m, n) # noqa: E731
for n, qrcode in enumerate(self, start=1):
qrcode.save(filename(out, n), kind=kind, **kw)
def __getattr__(self, item):
"""\
Behaves like :py:class:`QRCode` iff this sequence contains a single item.
"""
if len(self) == 1:
return getattr(self[0], item)
raise AttributeError("{0} object has no attribute '{1}'"
.format(self.__class__, item))
| (qrcodes) |
39,256 | segno | __getattr__ | Behaves like :py:class:`QRCode` iff this sequence contains a single item.
| def __getattr__(self, item):
"""\
Behaves like :py:class:`QRCode` iff this sequence contains a single item.
"""
if len(self) == 1:
return getattr(self[0], item)
raise AttributeError("{0} object has no attribute '{1}'"
.format(self.__class__, item))
| (self, item) |
39,257 | segno | __new__ | null | def __new__(cls, qrcodes):
return super(QRCodeSequence, cls).__new__(cls, qrcodes)
| (cls, qrcodes) |
39,258 | segno | save | Saves the sequence of QR codes to `out`.
If `out` is a filename, this method modifies the filename and adds
``<Number of QR codes>-<Current QR code>`` to it.
``structured-append.svg`` becomes (if the sequence contains two QR codes):
``structured-append-02-01.svg`` and ``structured-append-02-02.svg``
Please note that using a file or file-like object may result in an
invalid serialization format since all QR codes are written to the same
output.
See :py:meth:`QRCode.save()` for a detailed enumeration of options.
| def save(self, out, kind=None, **kw):
"""\
Saves the sequence of QR codes to `out`.
If `out` is a filename, this method modifies the filename and adds
``<Number of QR codes>-<Current QR code>`` to it.
``structured-append.svg`` becomes (if the sequence contains two QR codes):
``structured-append-02-01.svg`` and ``structured-append-02-02.svg``
Please note that using a file or file-like object may result in an
invalid serialization format since all QR codes are written to the same
output.
See :py:meth:`QRCode.save()` for a detailed enumeration of options.
"""
filename = lambda o, n: o # noqa: E731
m = len(self)
if m > 1 and isinstance(out, str):
dot_idx = out.rfind('.')
if dot_idx > -1:
out = out[:dot_idx] + '-{0:02d}-{1:02d}' + out[dot_idx:]
filename = lambda o, n: o.format(m, n) # noqa: E731
for n, qrcode in enumerate(self, start=1):
qrcode.save(filename(out, n), kind=kind, **kw)
| (self, out, kind=None, **kw) |
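Sketch of the filename expansion described above. ``segno.make_sequence`` is assumed here (it is not part of this excerpt) to return a ``QRCodeSequence`` built via Structured Append.

import segno

seq = segno.make_sequence('A rather long text ' * 50, symbol_count=2)
seq.save('structured-append.svg', scale=4)
# With two QR codes in the sequence this writes
# structured-append-02-01.svg and structured-append-02-02.svg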
39,259 | segno | terminal | Serializes the sequence of QR codes as ANSI escape code.
See :py:meth:`QRCode.terminal()` for details.
| def terminal(self, out=None, border=None, compact=False):
"""\
Serializes the sequence of QR codes as ANSI escape code.
See :py:meth:`QRCode.terminal()` for details.
"""
for qrcode in self:
qrcode.terminal(out=out, border=border, compact=compact)
| (self, out=None, border=None, compact=False) |
39,263 | segno | make | Creates a (Micro) QR Code.
This is main entry point to create QR Codes and Micro QR Codes.
Aside from `content`, all parameters are optional and an optimal (minimal)
(Micro) QR code with a maximal error correction level is generated.
:param content: The data to encode. Either a Unicode string, an integer or
bytes. If bytes are provided, the `encoding` parameter should be
used to specify the used encoding.
:type content: str, int, bytes
:param error: Error correction level. If ``None`` (default), error
correction level ``L`` is used (note: Micro QR Code version M1 does
not support any error correction. If an explicit error correction
level is used, an M1 QR code won't be generated).
Valid values: ``None`` (allowing generation of M1 codes or using error
correction level "L" or better, see :paramref:`boost_error <segno.make.boost_error>`),
"L", "M", "Q", "H" (error correction level "H" isn't available for
Micro QR Codes).
===================================== ===========================
Error correction level Error correction capability
===================================== ===========================
L (Segno's default unless version M1) recovers 7% of data
M recovers 15% of data
Q recovers 25% of data
H (not available for Micro QR Codes) recovers 30% of data
===================================== ===========================
Higher error levels may require larger QR codes (see also
:paramref:`version <segno.make.version>` parameter).
The `error` parameter is case insensitive.
See also the :paramref:`boost_error <segno.make.boost_error>` parameter.
:type error: str or None
:param version: QR Code version. If the value is ``None`` (default), the
minimal version which fits the input data will be used.
Valid values: "M1", "M2", "M3", "M4" (for Micro QR codes) or an
integer between 1 and 40 (for QR codes).
The `version` parameter is case insensitive.
:type version: int, str or None
:param mode: "numeric", "alphanumeric", "byte", "kanji" or "hanzi".
If the value is ``None`` (default) the appropriate mode will
automatically be determined.
If `version` refers to a Micro QR code, this function may raise a
:py:exc:`ValueError` if the provided `mode` is not supported.
The `mode` parameter is case insensitive.
============ =======================
Mode (Micro) QR Code Version
============ =======================
numeric 1 - 40, M1, M2, M3, M4
alphanumeric 1 - 40, M2, M3, M4
byte 1 - 40, M3, M4
kanji 1 - 40, M3, M4
hanzi 1 - 40
============ =======================
.. note::
The Hanzi mode may not be supported by all QR code readers since
it is not part of ISO/IEC 18004:2015(E).
For this reason, this mode must be specified explicitly by the
user::
import segno
qrcode = segno.make('书读百遍其义自现', mode='hanzi')
:type mode: str or None
:param mask: Data mask. If the value is ``None`` (default), the
appropriate data mask is chosen automatically. If the `mask`
parameter is provided, this function may raise a :py:exc:`ValueError`
if the mask is invalid.
:type mask: int or None
:param encoding: Indicates the encoding in mode "byte". By default
(`encoding` is ``None``) the implementation tries to use the
standard conform ISO/IEC 8859-1 encoding and if it does not fit, it
will use UTF-8. Note that no ECI mode indicator is inserted by
default (see :paramref:`eci <segno.make.eci>`).
The `encoding` parameter is case insensitive.
:type encoding: str or None
:param bool eci: Indicates if binary data which does not use the default
encoding (ISO/IEC 8859-1) should enforce the ECI mode. Since a lot
of QR code readers do not support the ECI mode, this feature is
disabled by default and the data is encoded in the provided
`encoding` using the usual "byte" mode. Set `eci` to ``True`` if
an ECI header should be inserted into the QR Code. Note that
the implementation may not know the ECI designator for the provided
`encoding` and may raise an exception if the ECI designator cannot
be found.
The ECI mode is not supported by Micro QR Codes.
:param micro: If :paramref:`version <segno.make.version>` is ``None`` (default)
this parameter can be used to allow the creation of a Micro QR code.
If set to ``False``, a QR code is generated. If set to
``None`` (default) a Micro QR code may be generated if applicable.
If set to ``True`` the algorithm generates a Micro QR Code or
raises an exception if the `mode` is not compatible or the `content`
is too large for Micro QR codes.
:type micro: bool or None
:param bool boost_error: Indicates if the error correction level may be
increased if it does not affect the version (default: ``True``).
If set to ``True``, the :paramref:`error <segno.make.error>`
parameter is interpreted as minimum error level. If set to ``False``,
the resulting (Micro) QR code uses the provided `error` level
(or the default error correction level, if error is ``None``)
:raises: :py:exc:`ValueError` or :py:exc:`DataOverflowError`: In case the
data does not fit into a (Micro) QR Code or it does not fit into
the provided :paramref:`version`.
:rtype: QRCode
| def make(content, error=None, version=None, mode=None, mask=None, encoding=None,
eci=False, micro=None, boost_error=True):
"""\
Creates a (Micro) QR Code.
This is main entry point to create QR Codes and Micro QR Codes.
Aside from `content`, all parameters are optional and an optimal (minimal)
(Micro) QR code with a maximal error correction level is generated.
:param content: The data to encode. Either a Unicode string, an integer or
bytes. If bytes are provided, the `encoding` parameter should be
used to specify the used encoding.
:type content: str, int, bytes
:param error: Error correction level. If ``None`` (default), error
correction level ``L`` is used (note: Micro QR Code version M1 does
not support any error correction. If an explicit error correction
level is used, an M1 QR code won't be generated).
Valid values: ``None`` (allowing generation of M1 codes or using error
correction level "L" or better, see :paramref:`boost_error <segno.make.boost_error>`),
"L", "M", "Q", "H" (error correction level "H" isn't available for
Micro QR Codes).
===================================== ===========================
Error correction level Error correction capability
===================================== ===========================
L (Segno's default unless version M1) recovers 7% of data
M recovers 15% of data
Q recovers 25% of data
H (not available for Micro QR Codes) recovers 30% of data
===================================== ===========================
Higher error levels may require larger QR codes (see also
:paramref:`version <segno.make.version>` parameter).
The `error` parameter is case insensitive.
See also the :paramref:`boost_error <segno.make.boost_error>` parameter.
:type error: str or None
:param version: QR Code version. If the value is ``None`` (default), the
minimal version which fits for the input data will be used.
Valid values: "M1", "M2", "M3", "M4" (for Micro QR codes) or an
integer between 1 and 40 (for QR codes).
The `version` parameter is case insensitive.
:type version: int, str or None
:param mode: "numeric", "alphanumeric", "byte", "kanji" or "hanzi".
If the value is ``None`` (default) the appropriate mode will
automatically be determined.
If `version` refers to a Micro QR code, this function may raise a
:py:exc:`ValueError` if the provided `mode` is not supported.
The `mode` parameter is case insensitive.
============ =======================
Mode (Micro) QR Code Version
============ =======================
numeric 1 - 40, M1, M2, M3, M4
alphanumeric 1 - 40, M2, M3, M4
byte 1 - 40, M3, M4
kanji 1 - 40, M3, M4
hanzi 1 - 40
============ =======================
.. note::
The Hanzi mode may not be supported by all QR code readers since
it is not part of ISO/IEC 18004:2015(E).
For this reason, this mode must be specified explicitly by the
user::
import segno
qrcode = segno.make('书读百遍其义自现', mode='hanzi')
:type mode: str or None
:param mask: Data mask. If the value is ``None`` (default), the
appropriate data mask is chosen automatically. If the `mask`
parameter is provided, this function may raise a :py:exc:`ValueError`
if the mask is invalid.
:type mask: int or None
:param encoding: Indicates the encoding in mode "byte". By default
(`encoding` is ``None``) the implementation tries to use the
standard conform ISO/IEC 8859-1 encoding and if it does not fit, it
will use UTF-8. Note that no ECI mode indicator is inserted by
default (see :paramref:`eci <segno.make.eci>`).
The `encoding` parameter is case insensitive.
:type encoding: str or None
:param bool eci: Indicates if binary data which does not use the default
encoding (ISO/IEC 8859-1) should enforce the ECI mode. Since a lot
of QR code readers do not support the ECI mode, this feature is
disabled by default and the data is encoded in the provided
`encoding` using the usual "byte" mode. Set `eci` to ``True`` if
an ECI header should be inserted into the QR Code. Note that
the implementation may not know the ECI designator for the provided
`encoding` and may raise an exception if the ECI designator cannot
be found.
The ECI mode is not supported by Micro QR Codes.
:param micro: If :paramref:`version <segno.make.version>` is ``None`` (default)
this parameter can be used to allow the creation of a Micro QR code.
If set to ``False``, a QR code is generated. If set to
``None`` (default) a Micro QR code may be generated if applicable.
If set to ``True`` the algorithm generates a Micro QR Code or
raises an exception if the `mode` is not compatible or the `content`
is too large for Micro QR codes.
:type micro: bool or None
:param bool boost_error: Indicates if the error correction level may be
increased if it does not affect the version (default: ``True``).
If set to ``True``, the :paramref:`error <segno.make.error>`
parameter is interpreted as minimum error level. If set to ``False``,
the resulting (Micro) QR code uses the provided `error` level
(or the default error correction level, if error is ``None``)
:raises: :py:exc:`ValueError` or :py:exc:`DataOverflowError`: In case the
data does not fit into a (Micro) QR Code or it does not fit into
the provided :paramref:`version`.
:rtype: QRCode
"""
return QRCode(encoder.encode(content, error, version, mode, mask, encoding,
eci, micro, boost_error=boost_error))
| (content, error=None, version=None, mode=None, mask=None, encoding=None, eci=False, micro=None, boost_error=True) |
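A short usage sketch for the entry point above; the content and filename are illustrative, and the ``designator`` property is assumed from the segno QRCode API (``save`` appears in the docstrings in this document):

import segno

# Let segno choose the smallest symbol; with boost_error=True the requested
# level "l" may be raised if that does not increase the version.
qrcode = segno.make('The Beatles', error='l')
print(qrcode.designator)             # version-error designator, e.g. '2-Q' (illustrative)
qrcode.save('the-beatles.svg', scale=10)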
39,264 | segno | make_micro | Creates a Micro QR code.
See :py:func:`make` for a description of the parameters.
Note: Error correction level "H" isn't available for Micro QR codes. If
used, this function raises a :py:class:`segno.ErrorLevelError`.
:rtype: QRCode
| def make_micro(content, error=None, version=None, mode=None, mask=None,
encoding=None, boost_error=True):
"""\
Creates a Micro QR code.
See :py:func:`make` for a description of the parameters.
Note: Error correction level "H" isn't available for Micro QR codes. If
used, this function raises a :py:class:`segno.ErrorLevelError`.
:rtype: QRCode
"""
return make(content, error=error, version=version, mode=mode, mask=mask,
encoding=encoding, micro=True, boost_error=boost_error)
| (content, error=None, version=None, mode=None, mask=None, encoding=None, boost_error=True) |
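A minimal sketch of the Micro QR variant; the resulting version depends on the content, so the printed designator is illustrative only:

import segno

micro = segno.make_micro('RAIN', error='m')
print(micro.designator)    # a Micro QR designator such as 'M2-M' (illustrative)
micro.save('rain.svg', scale=4)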
39,265 | segno | make_qr | Creates a QR code (never a Micro QR code).
See :py:func:`make` for a description of the parameters.
:rtype: QRCode
| def make_qr(content, error=None, version=None, mode=None, mask=None,
encoding=None, eci=False, boost_error=True):
"""\
Creates a QR code (never a Micro QR code).
See :py:func:`make` for a description of the parameters.
:rtype: QRCode
"""
return make(content, error=error, version=version, mode=mode, mask=mask,
encoding=encoding, eci=eci, micro=False, boost_error=boost_error)
| (content, error=None, version=None, mode=None, mask=None, encoding=None, eci=False, boost_error=True) |
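``make_qr`` behaves like ``make`` but never falls back to a Micro QR symbol, which matters for very short content; a sketch:

import segno

# Content short enough for a Micro QR code still yields a regular QR code here.
qrcode = segno.make_qr('01', error='h')
qrcode.save('zero-one.svg', scale=4)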
39,266 | segno | make_sequence | Creates a sequence of QR codes using the Structured Append mode.
If the content fits into one QR code and neither ``version`` nor
``symbol_count`` is provided, this function may return a sequence with
one QR Code which does not use the Structured Append mode. Otherwise a
sequence of 2 .. n (max. n = 16) QR codes is returned which use the
Structured Append mode.
The Structured Append mode allows splitting the content over a number
of QR Codes (max. 16).
The Structured Append mode isn't available for Micro QR Codes, therefore
the returned sequence contains QR codes only.
Since this function returns an iterable object, it may be used as follows:
.. code-block:: python
for i, qrcode in enumerate(segno.make_sequence(data, symbol_count=2)):
qrcode.save('seq-%d.svg' % i, scale=10, color='darkblue')
The number of QR codes is determined by the `version` or `symbol_count`
parameter.
See :py:func:`make` for a description of the other parameters.
:param int symbol_count: Number of symbols.
:rtype: QRCodeSequence
| def make_sequence(content, error=None, version=None, mode=None, mask=None,
encoding=None, boost_error=True, symbol_count=None):
"""\
Creates a sequence of QR codes using the Structured Append mode.
If the content fits into one QR code and neither ``version`` nor
``symbol_count`` is provided, this function may return a sequence with
one QR Code which does not use the Structured Append mode. Otherwise a
sequence of 2 .. n (max. n = 16) QR codes is returned which use the
Structured Append mode.
The Structured Append mode allows splitting the content over a number
of QR Codes (max. 16).
The Structured Append mode isn't available for Micro QR Codes, therefore
the returned sequence contains QR codes only.
Since this function returns an iterable object, it may be used as follows:
.. code-block:: python
for i, qrcode in enumerate(segno.make_sequence(data, symbol_count=2)):
qrcode.save('seq-%d.svg' % i, scale=10, color='darkblue')
The number of QR codes is determined by the `version` or `symbol_count`
parameter.
See :py:func:`make` for a description of the other parameters.
:param int symbol_count: Number of symbols.
:rtype: QRCodeSequence
"""
return QRCodeSequence(map(QRCode,
encoder.encode_sequence(content, error=error,
version=version,
mode=mode, mask=mask,
encoding=encoding,
boost_error=boost_error,
symbol_count=symbol_count)))
| (content, error=None, version=None, mode=None, mask=None, encoding=None, boost_error=True, symbol_count=None) |
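Besides ``symbol_count``, the split can also be driven by ``version``: if the chosen version cannot hold the content, several Structured Append symbols are produced. A sketch (content length is illustrative):

import segno

data = 'A' * 100
# Version 1 cannot hold 100 alphanumeric characters, so the content is spread
# over several QR codes joined via Structured Append.
for i, qrcode in enumerate(segno.make_sequence(data, version=1)):
    qrcode.save('part-%d.svg' % i, scale=4)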
39,270 | sparkypandy._column | Columny | null | class Columny(Column): # type: ignore
def __init__(self, jc: JavaObject, df_sparky: DataFrame):
super().__init__(jc=jc)
self.df_sparky = df_sparky
@property
def _name(self) -> str:
return cast(str, self._jc.toString())
@classmethod
def from_spark(cls, col: Column, df_sparky: DataFramy) -> Columny:
# noinspection PyProtectedMember
return cls(jc=col._jc, df_sparky=df_sparky)
def to_pandas(self) -> pd.Series:
# noinspection PyTypeChecker
df: pd.DataFrame = self.df_sparky.select(self._name).toPandas()
return df[self._name]
# def mean(self) -> float:
# r = df_spark.select(F.mean("a").alias("result")).collect()[0].result
| (jc: 'JavaObject', df_sparky: 'DataFrame') |
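A rough usage sketch for the wrapper above. It assumes ``df_sparky`` is a sparkypandy DataFrame wrapper around a Spark DataFrame with a column ``a`` and that indexing it yields a plain PySpark ``Column``; both assumptions are illustrative and not confirmed by the code shown here:

# Hypothetical setup: df_sparky wraps a Spark DataFrame with a column "a".
col = Columny.from_spark(df_sparky["a"], df_sparky)  # wrap the pyspark Column
series = col.to_pandas()                             # collect that one column as a pandas Series
print(series.head())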
39,271 | pyspark.sql.column | _ | binary operator | def _bin_op(
name: str,
doc: str = "binary operator",
) -> Callable[
["Column", Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"]], "Column"
]:
"""Create a method for given binary operator"""
def _(
self: "Column",
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
jc = other._jc if isinstance(other, Column) else other
njc = getattr(self._jc, name)(jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
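The factory above is how most ``Column`` operators are generated; for example, ``__add__`` is bound to the JVM method ``plus`` in the PySpark source (the method name is taken from there and is illustrative here). A sketch, assuming an active ``spark`` session as in the doctests above:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
# df.age + 1 goes through an operator created by _bin_op("plus") on the JVM column.
df.select((df.age + 1).alias("age_plus_one")).show()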
39,273 | pyspark.sql.column | __nonzero__ | null | def __nonzero__(self) -> None:
raise PySparkValueError(
error_class="CANNOT_CONVERT_COLUMN_INTO_BOOL",
message_parameters={},
)
| (self) -> NoneType |
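This guard fires whenever a ``Column`` is used in a boolean context, most commonly when ``and``/``or`` are used instead of ``&``/``|``. A sketch, assuming an active ``spark`` session:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
try:
    bool(df.age > 3)                     # boolean context -> __nonzero__ / __bool__
except Exception as exc:                 # PySparkValueError in recent versions
    print(type(exc).__name__)
df.filter((df.age > 3) & (df.age < 10)).show()   # use & / | instead of and / or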
39,274 | pyspark.sql.column | __contains__ | null | def __contains__(self, item: Any) -> None:
raise PySparkValueError(
error_class="CANNOT_APPLY_IN_FOR_COLUMN",
message_parameters={},
)
| (self, item: Any) -> NoneType |
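Likewise, Python's ``in`` operator is rejected for columns; ``isin`` or ``contains`` express the intended test. A sketch, assuming an active ``spark`` session:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
# "Ali" in df.name would raise CANNOT_APPLY_IN_FOR_COLUMN; instead:
df.filter(df.name.contains("Ali")).show()   # per-row substring match
df.filter(df.age.isin(2, 3)).show()         # per-row membership test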
39,276 | pyspark.sql.column | __eq__ | binary function | def __eq__( # type: ignore[override]
self,
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
"""binary function"""
return _bin_op("equalTo")(self, other)
| (self, other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
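The equality operator returns a boolean ``Column`` (it never collapses to a Python bool), so it is normally used inside ``filter`` or ``select``. A sketch, assuming an active ``spark`` session:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.filter(df.name == "Alice").show()              # equalTo on the JVM column
df.select((df.age == 2).alias("is_two")).show()   # boolean column in a projection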
39,278 | pyspark.sql.column | __getattr__ |
An expression that gets an item at position ``ordinal`` out of a list,
or gets an item by key out of a dict.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
item
a literal value.
Returns
-------
:class:`Column`
Column representing the item got by key out of a dict.
Examples
--------
>>> df = spark.createDataFrame([('abcedfg', {"key": "value"})], ["l", "d"])
>>> df.select(df.d.key).show()
+------+
|d[key]|
+------+
| value|
+------+
| def __getattr__(self, item: Any) -> "Column":
"""
An expression that gets an item at position ``ordinal`` out of a list,
or gets an item by key out of a dict.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
item
a literal value.
Returns
-------
:class:`Column`
Column representing the item got by key out of a dict.
Examples
--------
>>> df = spark.createDataFrame([('abcedfg', {"key": "value"})], ["l", "d"])
>>> df.select(df.d.key).show()
+------+
|d[key]|
+------+
| value|
+------+
"""
if item.startswith("__"):
raise PySparkAttributeError(
error_class="CANNOT_ACCESS_TO_DUNDER",
message_parameters={},
)
return self[item]
| (self, item: Any) -> pyspark.sql.column.Column |
39,279 | pyspark.sql.column | __getitem__ |
An expression that gets an item at position ``ordinal`` out of a list,
or gets an item by key out of a dict.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
k
a literal value, or a slice object without step.
Returns
-------
:class:`Column`
Column representing the item got by key out of a dict, or substrings sliced by
the given slice object.
Examples
--------
>>> df = spark.createDataFrame([('abcedfg', {"key": "value"})], ["l", "d"])
>>> df.select(df.l[slice(1, 3)], df.d['key']).show()
+------------------+------+
|substring(l, 1, 3)|d[key]|
+------------------+------+
| abc| value|
+------------------+------+
| def __getitem__(self, k: Any) -> "Column":
"""
An expression that gets an item at position ``ordinal`` out of a list,
or gets an item by key out of a dict.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
k
a literal value, or a slice object without step.
Returns
-------
:class:`Column`
Column representing the item got by key out of a dict, or substrings sliced by
the given slice object.
Examples
--------
>>> df = spark.createDataFrame([('abcedfg', {"key": "value"})], ["l", "d"])
>>> df.select(df.l[slice(1, 3)], df.d['key']).show()
+------------------+------+
|substring(l, 1, 3)|d[key]|
+------------------+------+
| abc| value|
+------------------+------+
"""
if isinstance(k, slice):
if k.step is not None:
raise PySparkValueError(
error_class="SLICE_WITH_STEP",
message_parameters={},
)
return self.substr(k.start, k.stop)
else:
return _bin_op("apply")(self, k)
| (self, k: Any) -> pyspark.sql.column.Column |
39,281 | sparkypandy._column | __init__ | null | def __init__(self, jc: JavaObject, df_sparky: DataFrame):
super().__init__(jc=jc)
self.df_sparky = df_sparky
| (self, jc: py4j.java_gateway.JavaObject, df_sparky: pyspark.sql.dataframe.DataFrame) |
39,282 | pyspark.sql.column | _ | def _func_op(name: str, doc: str = "") -> Callable[["Column"], "Column"]:
def _(self: "Column") -> "Column":
sc = get_active_spark_context()
jc = getattr(cast(JVMView, sc._jvm).functions, name)(self._jc)
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
|
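``_func_op`` wraps a one-argument function from the JVM ``functions`` object; in the PySpark source it backs, for example, unary negation (``__neg__ = _func_op("negate")``). A sketch, assuming an active ``spark`` session:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.select((-df.age).alias("neg_age")).show()   # routed through functions.negate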
39,283 | pyspark.sql.column | __iter__ | null | def __iter__(self) -> None:
raise PySparkTypeError(
error_class="NOT_ITERABLE", message_parameters={"objectName": "Column"}
)
| (self) -> NoneType |
39,288 | pyspark.sql.column | __ne__ | binary function | def __ne__( # type: ignore[override]
self,
other: Any,
) -> "Column":
"""binary function"""
return _bin_op("notEqual")(self, other)
| (self, other: Any) -> pyspark.sql.column.Column |
39,292 | pyspark.sql.column | _ | binary function | def _bin_func_op(
name: str,
reverse: bool = False,
doc: str = "binary function",
) -> Callable[["Column", Union["Column", "LiteralType", "DecimalLiteral"]], "Column"]:
def _(self: "Column", other: Union["Column", "LiteralType", "DecimalLiteral"]) -> "Column":
sc = get_active_spark_context()
fn = getattr(cast(JVMView, sc._jvm).functions, name)
jc = other._jc if isinstance(other, Column) else _create_column_from_literal(other)
njc = fn(self._jc, jc) if not reverse else fn(jc, self._jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral')]) -> 'Column' |
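``_bin_func_op`` is the two-argument counterpart; in the PySpark source it backs the power operators (``__pow__`` and, with ``reverse=True``, ``__rpow__``, both using ``pow``). A sketch, assuming an active ``spark`` session:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.select((df.age ** 2).alias("age_squared")).show()    # pow(age, 2)
df.select((2 ** df.age).alias("two_pow_age")).show()    # pow(2, age) via reverse=True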
39,295 | pyspark.sql.column | _ | binary operator | def _reverse_op(
name: str,
doc: str = "binary operator",
) -> Callable[["Column", Union["LiteralType", "DecimalLiteral"]], "Column"]:
"""Create a method for binary operator (this object is on right side)"""
def _(self: "Column", other: Union["LiteralType", "DecimalLiteral"]) -> "Column":
jother = _create_column_from_literal(other)
jc = getattr(jother, name)(self._jc)
return Column(jc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('LiteralType'), ForwardRef('DecimalLiteral')]) -> 'Column' |
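``_reverse_op`` covers the case where the column sits on the right-hand side of the operator, e.g. ``__rsub__`` bound to ``minus`` in the PySpark source. A sketch, assuming an active ``spark`` session:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.select((10 - df.age).alias("ten_minus_age")).show()   # literal on the left, column on the right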
39,296 | pyspark.sql.column | __repr__ | null | def __repr__(self) -> str:
return "Column<'%s'>" % self._jc.toString()
| (self) -> str |
39,305 | pyspark.sql.column | alias |
Returns this column aliased with a new name or names (in the case of expressions that
return more than one column, such as explode).
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
alias : str
desired column names (collects all positional arguments passed)
Other Parameters
----------------
metadata: dict
a dict of information to be stored in ``metadata`` attribute of the
corresponding :class:`StructField <pyspark.sql.types.StructField>` (optional, keyword
only argument)
.. versionchanged:: 2.2.0
Added optional ``metadata`` argument.
Returns
-------
:class:`Column`
Column representing whether each element of Column is aliased with new name or names.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.age.alias("age2")).collect()
[Row(age2=2), Row(age2=5)]
>>> df.select(df.age.alias("age3", metadata={'max': 99})).schema['age3'].metadata['max']
99
| def alias(self, *alias: str, **kwargs: Any) -> "Column":
"""
Returns this column aliased with a new name or names (in the case of expressions that
return more than one column, such as explode).
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
alias : str
desired column names (collects all positional arguments passed)
Other Parameters
----------------
metadata: dict
a dict of information to be stored in ``metadata`` attribute of the
corresponding :class:`StructField <pyspark.sql.types.StructField>` (optional, keyword
only argument)
.. versionchanged:: 2.2.0
Added optional ``metadata`` argument.
Returns
-------
:class:`Column`
Column representing whether each element of Column is aliased with new name or names.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.age.alias("age2")).collect()
[Row(age2=2), Row(age2=5)]
>>> df.select(df.age.alias("age3", metadata={'max': 99})).schema['age3'].metadata['max']
99
"""
metadata = kwargs.pop("metadata", None)
assert not kwargs, "Unexpected kwargs were passed: %s" % kwargs
sc = get_active_spark_context()
if len(alias) == 1:
if metadata:
assert sc._jvm is not None
jmeta = sc._jvm.org.apache.spark.sql.types.Metadata.fromJson(json.dumps(metadata))
return Column(getattr(self._jc, "as")(alias[0], jmeta))
else:
return Column(getattr(self._jc, "as")(alias[0]))
else:
if metadata:
raise PySparkValueError(
error_class="ONLY_ALLOWED_FOR_SINGLE_COLUMN",
message_parameters={"arg_name": "metadata"},
)
return Column(getattr(self._jc, "as")(_to_seq(sc, list(alias))))
| (self, *alias: str, **kwargs: Any) -> pyspark.sql.column.Column |
39,306 | pyspark.sql.column | _ |
Returns a sort expression based on the ascending order of the column.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([('Tom', 80), ('Alice', None)], ["name", "height"])
>>> df.select(df.name).orderBy(df.name.asc()).collect()
[Row(name='Alice'), Row(name='Tom')]
| def _unary_op(
name: str,
doc: str = "unary operator",
) -> Callable[["Column"], "Column"]:
"""Create a method for given unary operator"""
def _(self: "Column") -> "Column":
jc = getattr(self._jc, name)()
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
39,307 | pyspark.sql.column | _ |
Returns a sort expression based on ascending order of the column, and null values
return before non-null values.
.. versionadded:: 2.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([('Tom', 80), (None, 60), ('Alice', None)], ["name", "height"])
>>> df.select(df.name).orderBy(df.name.asc_nulls_first()).collect()
[Row(name=None), Row(name='Alice'), Row(name='Tom')]
| def _unary_op(
name: str,
doc: str = "unary operator",
) -> Callable[["Column"], "Column"]:
"""Create a method for given unary operator"""
def _(self: "Column") -> "Column":
jc = getattr(self._jc, name)()
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
39,308 | pyspark.sql.column | _ |
Returns a sort expression based on ascending order of the column, and null values
appear after non-null values.
.. versionadded:: 2.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([('Tom', 80), (None, 60), ('Alice', None)], ["name", "height"])
>>> df.select(df.name).orderBy(df.name.asc_nulls_last()).collect()
[Row(name='Alice'), Row(name='Tom'), Row(name=None)]
| def _unary_op(
name: str,
doc: str = "unary operator",
) -> Callable[["Column"], "Column"]:
"""Create a method for given unary operator"""
def _(self: "Column") -> "Column":
jc = getattr(self._jc, name)()
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
39,309 | pyspark.sql.column | cast | :func:`astype` is an alias for :func:`cast`.
.. versionadded:: 1.4 | def cast(self, dataType: Union[DataType, str]) -> "Column":
"""
Casts the column into type ``dataType``.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
dataType : :class:`DataType` or str
a DataType or Python string literal with a DDL-formatted string
to use when parsing the column to the same type.
Returns
-------
:class:`Column`
Column representing whether each element of Column is cast into new type.
Examples
--------
>>> from pyspark.sql.types import StringType
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.age.cast("string").alias('ages')).collect()
[Row(ages='2'), Row(ages='5')]
>>> df.select(df.age.cast(StringType()).alias('ages')).collect()
[Row(ages='2'), Row(ages='5')]
"""
if isinstance(dataType, str):
jc = self._jc.cast(dataType)
elif isinstance(dataType, DataType):
from pyspark.sql import SparkSession
spark = SparkSession._getActiveSessionOrCreate()
jdt = spark._jsparkSession.parseDataType(dataType.json())
jc = self._jc.cast(jdt)
else:
raise PySparkTypeError(
error_class="NOT_DATATYPE_OR_STR",
message_parameters={"arg_name": "dataType", "arg_type": type(dataType).__name__},
)
return Column(jc)
| (self, dataType) |
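Since ``astype`` merely forwards to ``cast``, both spellings produce the same plan. A sketch, assuming an active ``spark`` session:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.select(df.age.astype("string").alias("ages")).collect()   # same as df.age.cast("string")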
39,310 | pyspark.sql.column | between |
True if the current column is between the lower bound and upper bound, inclusive.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
lowerBound : :class:`Column`, int, float, string, bool, datetime, date or Decimal
a boolean expression that boundary start, inclusive.
upperBound : :class:`Column`, int, float, string, bool, datetime, date or Decimal
a boolean expression that boundary end, inclusive.
Returns
-------
:class:`Column`
Column of booleans showing whether each element of Column
is between left and right (inclusive).
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name, df.age.between(2, 4)).show()
+-----+---------------------------+
| name|((age >= 2) AND (age <= 4))|
+-----+---------------------------+
|Alice| true|
| Bob| false|
+-----+---------------------------+
| def between(
self,
lowerBound: Union["Column", "LiteralType", "DateTimeLiteral", "DecimalLiteral"],
upperBound: Union["Column", "LiteralType", "DateTimeLiteral", "DecimalLiteral"],
) -> "Column":
"""
True if the current column is between the lower bound and upper bound, inclusive.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
lowerBound : :class:`Column`, int, float, string, bool, datetime, date or Decimal
a boolean expression that boundary start, inclusive.
upperBound : :class:`Column`, int, float, string, bool, datetime, date or Decimal
a boolean expression that boundary end, inclusive.
Returns
-------
:class:`Column`
Column of booleans showing whether each element of Column
is between left and right (inclusive).
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name, df.age.between(2, 4)).show()
+-----+---------------------------+
| name|((age >= 2) AND (age <= 4))|
+-----+---------------------------+
|Alice| true|
| Bob| false|
+-----+---------------------------+
"""
return (self >= lowerBound) & (self <= upperBound)
| (self, lowerBound: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DateTimeLiteral'), ForwardRef('DecimalLiteral')], upperBound: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DateTimeLiteral'), ForwardRef('DecimalLiteral')]) -> 'Column' |
39,311 | pyspark.sql.column | _ |
Compute bitwise AND of this expression with another expression.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other
a value or :class:`Column` to calculate bitwise and(&) with
this :class:`Column`.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([Row(a=170, b=75)])
>>> df.select(df.a.bitwiseAND(df.b)).collect()
[Row((a & b)=10)]
| def _bin_op(
name: str,
doc: str = "binary operator",
) -> Callable[
["Column", Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"]], "Column"
]:
"""Create a method for given binary operator"""
def _(
self: "Column",
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
jc = other._jc if isinstance(other, Column) else other
njc = getattr(self._jc, name)(jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
39,312 | pyspark.sql.column | _ |
Compute bitwise OR of this expression with another expression.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other
a value or :class:`Column` to calculate bitwise or(|) with
this :class:`Column`.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([Row(a=170, b=75)])
>>> df.select(df.a.bitwiseOR(df.b)).collect()
[Row((a | b)=235)]
| def _bin_op(
name: str,
doc: str = "binary operator",
) -> Callable[
["Column", Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"]], "Column"
]:
"""Create a method for given binary operator"""
def _(
self: "Column",
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
jc = other._jc if isinstance(other, Column) else other
njc = getattr(self._jc, name)(jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
39,313 | pyspark.sql.column | _ |
Compute bitwise XOR of this expression with another expression.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other
a value or :class:`Column` to calculate bitwise xor(^) with
this :class:`Column`.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([Row(a=170, b=75)])
>>> df.select(df.a.bitwiseXOR(df.b)).collect()
[Row((a ^ b)=225)]
| def _bin_op(
name: str,
doc: str = "binary operator",
) -> Callable[
["Column", Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"]], "Column"
]:
"""Create a method for given binary operator"""
def _(
self: "Column",
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
jc = other._jc if isinstance(other, Column) else other
njc = getattr(self._jc, name)(jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
39,314 | pyspark.sql.column | cast |
Casts the column into type ``dataType``.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
dataType : :class:`DataType` or str
a DataType or Python string literal with a DDL-formatted string
to use when parsing the column to the same type.
Returns
-------
:class:`Column`
Column representing whether each element of Column is cast into new type.
Examples
--------
>>> from pyspark.sql.types import StringType
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.age.cast("string").alias('ages')).collect()
[Row(ages='2'), Row(ages='5')]
>>> df.select(df.age.cast(StringType()).alias('ages')).collect()
[Row(ages='2'), Row(ages='5')]
| def cast(self, dataType: Union[DataType, str]) -> "Column":
"""
Casts the column into type ``dataType``.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
dataType : :class:`DataType` or str
a DataType or Python string literal with a DDL-formatted string
to use when parsing the column to the same type.
Returns
-------
:class:`Column`
Column representing whether each element of Column is cast into new type.
Examples
--------
>>> from pyspark.sql.types import StringType
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.age.cast("string").alias('ages')).collect()
[Row(ages='2'), Row(ages='5')]
>>> df.select(df.age.cast(StringType()).alias('ages')).collect()
[Row(ages='2'), Row(ages='5')]
"""
if isinstance(dataType, str):
jc = self._jc.cast(dataType)
elif isinstance(dataType, DataType):
from pyspark.sql import SparkSession
spark = SparkSession._getActiveSessionOrCreate()
jdt = spark._jsparkSession.parseDataType(dataType.json())
jc = self._jc.cast(jdt)
else:
raise PySparkTypeError(
error_class="NOT_DATATYPE_OR_STR",
message_parameters={"arg_name": "dataType", "arg_type": type(dataType).__name__},
)
return Column(jc)
| (self, dataType: Union[pyspark.sql.types.DataType, str]) -> pyspark.sql.column.Column |
39,315 | pyspark.sql.column | _ |
Contains the other element. Returns a boolean :class:`Column` based on a string match.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other
string in line. A value as a literal or a :class:`Column`.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.contains('o')).collect()
[Row(age=5, name='Bob')]
| def _bin_op(
name: str,
doc: str = "binary operator",
) -> Callable[
["Column", Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"]], "Column"
]:
"""Create a method for given binary operator"""
def _(
self: "Column",
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
jc = other._jc if isinstance(other, Column) else other
njc = getattr(self._jc, name)(jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
39,316 | pyspark.sql.column | _ |
Returns a sort expression based on the descending order of the column.
.. versionadded:: 2.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([('Tom', 80), ('Alice', None)], ["name", "height"])
>>> df.select(df.name).orderBy(df.name.desc()).collect()
[Row(name='Tom'), Row(name='Alice')]
| def _unary_op(
name: str,
doc: str = "unary operator",
) -> Callable[["Column"], "Column"]:
"""Create a method for given unary operator"""
def _(self: "Column") -> "Column":
jc = getattr(self._jc, name)()
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
39,317 | pyspark.sql.column | _ |
Returns a sort expression based on the descending order of the column, and null values
appear before non-null values.
.. versionadded:: 2.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([('Tom', 80), (None, 60), ('Alice', None)], ["name", "height"])
>>> df.select(df.name).orderBy(df.name.desc_nulls_first()).collect()
[Row(name=None), Row(name='Tom'), Row(name='Alice')]
| def _unary_op(
name: str,
doc: str = "unary operator",
) -> Callable[["Column"], "Column"]:
"""Create a method for given unary operator"""
def _(self: "Column") -> "Column":
jc = getattr(self._jc, name)()
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
39,318 | pyspark.sql.column | _ |
Returns a sort expression based on the descending order of the column, and null values
appear after non-null values.
.. versionadded:: 2.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([('Tom', 80), (None, 60), ('Alice', None)], ["name", "height"])
>>> df.select(df.name).orderBy(df.name.desc_nulls_last()).collect()
[Row(name='Tom'), Row(name='Alice'), Row(name=None)]
| def _unary_op(
name: str,
doc: str = "unary operator",
) -> Callable[["Column"], "Column"]:
"""Create a method for given unary operator"""
def _(self: "Column") -> "Column":
jc = getattr(self._jc, name)()
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
39,319 | pyspark.sql.column | dropFields |
An expression that drops fields in :class:`StructType` by name.
This is a no-op if the schema doesn't contain field name(s).
.. versionadded:: 3.1.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
fieldNames : str
Desired field names (collects all positional arguments passed)
The result will drop at a location if any field matches in the Column.
Returns
-------
:class:`Column`
Column representing whether each element of Column with field dropped by fieldName.
Examples
--------
>>> from pyspark.sql import Row
>>> from pyspark.sql.functions import col, lit
>>> df = spark.createDataFrame([
... Row(a=Row(b=1, c=2, d=3, e=Row(f=4, g=5, h=6)))])
>>> df.withColumn('a', df['a'].dropFields('b')).show()
+-----------------+
| a|
+-----------------+
|{2, 3, {4, 5, 6}}|
+-----------------+
>>> df.withColumn('a', df['a'].dropFields('b', 'c')).show()
+--------------+
| a|
+--------------+
|{3, {4, 5, 6}}|
+--------------+
This method supports dropping multiple nested fields directly e.g.
>>> df.withColumn("a", col("a").dropFields("e.g", "e.h")).show()
+--------------+
| a|
+--------------+
|{1, 2, 3, {4}}|
+--------------+
However, if you are going to add/replace multiple nested fields,
it is preferred to extract out the nested struct before
adding/replacing multiple fields e.g.
>>> df.select(col("a").withField(
... "e", col("a.e").dropFields("g", "h")).alias("a")
... ).show()
+--------------+
| a|
+--------------+
|{1, 2, 3, {4}}|
+--------------+
| def dropFields(self, *fieldNames: str) -> "Column":
"""
An expression that drops fields in :class:`StructType` by name.
This is a no-op if the schema doesn't contain field name(s).
.. versionadded:: 3.1.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
fieldNames : str
Desired field names (collects all positional arguments passed)
The result will drop at a location if any field matches in the Column.
Returns
-------
:class:`Column`
Column representing whether each element of Column with field dropped by fieldName.
Examples
--------
>>> from pyspark.sql import Row
>>> from pyspark.sql.functions import col, lit
>>> df = spark.createDataFrame([
... Row(a=Row(b=1, c=2, d=3, e=Row(f=4, g=5, h=6)))])
>>> df.withColumn('a', df['a'].dropFields('b')).show()
+-----------------+
| a|
+-----------------+
|{2, 3, {4, 5, 6}}|
+-----------------+
>>> df.withColumn('a', df['a'].dropFields('b', 'c')).show()
+--------------+
| a|
+--------------+
|{3, {4, 5, 6}}|
+--------------+
This method supports dropping multiple nested fields directly e.g.
>>> df.withColumn("a", col("a").dropFields("e.g", "e.h")).show()
+--------------+
| a|
+--------------+
|{1, 2, 3, {4}}|
+--------------+
However, if you are going to add/replace multiple nested fields,
it is preferred to extract out the nested struct before
adding/replacing multiple fields e.g.
>>> df.select(col("a").withField(
... "e", col("a.e").dropFields("g", "h")).alias("a")
... ).show()
+--------------+
| a|
+--------------+
|{1, 2, 3, {4}}|
+--------------+
"""
sc = get_active_spark_context()
jc = self._jc.dropFields(_to_seq(sc, fieldNames))
return Column(jc)
| (self, *fieldNames: str) -> pyspark.sql.column.Column |
39,320 | pyspark.sql.column | _ |
String ends with. Returns a boolean :class:`Column` based on a string match.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : :class:`Column` or str
string at end of line (do not use a regex `$`)
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.endswith('ice')).collect()
[Row(age=2, name='Alice')]
>>> df.filter(df.name.endswith('ice$')).collect()
[]
| def _bin_op(
name: str,
doc: str = "binary operator",
) -> Callable[
["Column", Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"]], "Column"
]:
"""Create a method for given binary operator"""
def _(
self: "Column",
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
jc = other._jc if isinstance(other, Column) else other
njc = getattr(self._jc, name)(jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
39,321 | pyspark.sql.column | _ |
Equality test that is safe for null values.
.. versionadded:: 2.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other
a value or :class:`Column`
Examples
--------
>>> from pyspark.sql import Row
>>> df1 = spark.createDataFrame([
... Row(id=1, value='foo'),
... Row(id=2, value=None)
... ])
>>> df1.select(
... df1['value'] == 'foo',
... df1['value'].eqNullSafe('foo'),
... df1['value'].eqNullSafe(None)
... ).show()
+-------------+---------------+----------------+
|(value = foo)|(value <=> foo)|(value <=> NULL)|
+-------------+---------------+----------------+
| true| true| false|
| NULL| false| true|
+-------------+---------------+----------------+
>>> df2 = spark.createDataFrame([
... Row(value = 'bar'),
... Row(value = None)
... ])
>>> df1.join(df2, df1["value"] == df2["value"]).count()
0
>>> df1.join(df2, df1["value"].eqNullSafe(df2["value"])).count()
1
>>> df2 = spark.createDataFrame([
... Row(id=1, value=float('NaN')),
... Row(id=2, value=42.0),
... Row(id=3, value=None)
... ])
>>> df2.select(
... df2['value'].eqNullSafe(None),
... df2['value'].eqNullSafe(float('NaN')),
... df2['value'].eqNullSafe(42.0)
... ).show()
+----------------+---------------+----------------+
|(value <=> NULL)|(value <=> NaN)|(value <=> 42.0)|
+----------------+---------------+----------------+
| false| true| false|
| false| false| true|
| true| false| false|
+----------------+---------------+----------------+
Notes
-----
Unlike Pandas, PySpark doesn't consider NaN values to be NULL. See the
`NaN Semantics <https://spark.apache.org/docs/latest/sql-ref-datatypes.html#nan-semantics>`_
for details.
| def _bin_op(
name: str,
doc: str = "binary operator",
) -> Callable[
["Column", Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"]], "Column"
]:
"""Create a method for given binary operator"""
def _(
self: "Column",
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
jc = other._jc if isinstance(other, Column) else other
njc = getattr(self._jc, name)(jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
39,322 | pyspark.sql.column | getField |
An expression that gets a field by name in a :class:`StructType`.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name
a literal value, or a :class:`Column` expression.
The result will only be true at a location if the field matches in the Column.
.. deprecated:: 3.0.0
:class:`Column` as a parameter is deprecated.
Returns
-------
:class:`Column`
Column representing whether each element of Column got by name.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([Row(r=Row(a=1, b="b"))])
>>> df.select(df.r.getField("b")).show()
+---+
|r.b|
+---+
| b|
+---+
>>> df.select(df.r.a).show()
+---+
|r.a|
+---+
| 1|
+---+
| def getField(self, name: Any) -> "Column":
"""
An expression that gets a field by name in a :class:`StructType`.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name
a literal value, or a :class:`Column` expression.
The result will only be true at a location if the field matches in the Column.
.. deprecated:: 3.0.0
:class:`Column` as a parameter is deprecated.
Returns
-------
:class:`Column`
Column representing whether each element of Column got by name.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([Row(r=Row(a=1, b="b"))])
>>> df.select(df.r.getField("b")).show()
+---+
|r.b|
+---+
| b|
+---+
>>> df.select(df.r.a).show()
+---+
|r.a|
+---+
| 1|
+---+
"""
if isinstance(name, Column):
warnings.warn(
"A column as 'name' in getField is deprecated as of Spark 3.0, and will not "
"be supported in the future release. Use `column[name]` or `column.name` syntax "
"instead.",
FutureWarning,
)
return self[name]
| (self, name: Any) -> pyspark.sql.column.Column |
39,323 | pyspark.sql.column | getItem |
An expression that gets an item at position ``ordinal`` out of a list,
or gets an item by key out of a dict.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
key
a literal value, or a :class:`Column` expression.
The result will only be true at a location if the item matches in the column.
.. deprecated:: 3.0.0
:class:`Column` as a parameter is deprecated.
Returns
-------
:class:`Column`
Column representing the item(s) got at position out of a list or by key out of a dict.
Examples
--------
>>> df = spark.createDataFrame([([1, 2], {"key": "value"})], ["l", "d"])
>>> df.select(df.l.getItem(0), df.d.getItem("key")).show()
+----+------+
|l[0]|d[key]|
+----+------+
| 1| value|
+----+------+
| def getItem(self, key: Any) -> "Column":
"""
An expression that gets an item at position ``ordinal`` out of a list,
or gets an item by key out of a dict.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
key
a literal value, or a :class:`Column` expression.
The result will only be true at a location if the item matches in the column.
.. deprecated:: 3.0.0
:class:`Column` as a parameter is deprecated.
Returns
-------
:class:`Column`
Column representing the item(s) got at position out of a list or by key out of a dict.
Examples
--------
>>> df = spark.createDataFrame([([1, 2], {"key": "value"})], ["l", "d"])
>>> df.select(df.l.getItem(0), df.d.getItem("key")).show()
+----+------+
|l[0]|d[key]|
+----+------+
| 1| value|
+----+------+
"""
if isinstance(key, Column):
warnings.warn(
"A column as 'key' in getItem is deprecated as of Spark 3.0, and will not "
"be supported in the future release. Use `column[key]` or `column.key` syntax "
"instead.",
FutureWarning,
)
return self[key]
| (self, key: Any) -> pyspark.sql.column.Column |
39,324 | pyspark.sql.column | ilike |
SQL ILIKE expression (case insensitive LIKE). Returns a boolean :class:`Column`
based on a case insensitive match.
.. versionadded:: 3.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : str
a SQL LIKE pattern
See Also
--------
pyspark.sql.Column.rlike
Returns
-------
:class:`Column`
Column of booleans showing whether each element
in the Column is matched by SQL LIKE pattern.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.ilike('%Ice')).collect()
[Row(age=2, name='Alice')]
| def ilike(self: "Column", other: str) -> "Column":
"""
SQL ILIKE expression (case insensitive LIKE). Returns a boolean :class:`Column`
based on a case insensitive match.
.. versionadded:: 3.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : str
a SQL LIKE pattern
See Also
--------
pyspark.sql.Column.rlike
Returns
-------
:class:`Column`
Column of booleans showing whether each element
in the Column is matched by SQL LIKE pattern.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.ilike('%Ice')).collect()
[Row(age=2, name='Alice')]
"""
njc = getattr(self._jc, "ilike")(other)
return Column(njc)
| (self: pyspark.sql.column.Column, other: str) -> pyspark.sql.column.Column |
39,325 | pyspark.sql.column | _ |
True if the current expression is NOT null.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([Row(name='Tom', height=80), Row(name='Alice', height=None)])
>>> df.filter(df.height.isNotNull()).collect()
[Row(name='Tom', height=80)]
| def _unary_op(
name: str,
doc: str = "unary operator",
) -> Callable[["Column"], "Column"]:
"""Create a method for given unary operator"""
def _(self: "Column") -> "Column":
jc = getattr(self._jc, name)()
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
39,326 | pyspark.sql.column | _ |
True if the current expression is null.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame([Row(name='Tom', height=80), Row(name='Alice', height=None)])
>>> df.filter(df.height.isNull()).collect()
[Row(name='Alice', height=None)]
| def _unary_op(
name: str,
doc: str = "unary operator",
) -> Callable[["Column"], "Column"]:
"""Create a method for given unary operator"""
def _(self: "Column") -> "Column":
jc = getattr(self._jc, name)()
return Column(jc)
_.__doc__ = doc
return _
| (self: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
39,327 | pyspark.sql.column | isin |
A boolean expression that is evaluated to true if the value of this
expression is contained by the evaluated values of the arguments.
.. versionadded:: 1.5.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
cols
The result will only be true at a location if any value matches in the Column.
Returns
-------
:class:`Column`
Column of booleans showing whether each element in the Column is contained in cols.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df[df.name.isin("Bob", "Mike")].collect()
[Row(age=5, name='Bob')]
>>> df[df.age.isin([1, 2, 3])].collect()
[Row(age=2, name='Alice')]
| def isin(self, *cols: Any) -> "Column":
"""
A boolean expression that is evaluated to true if the value of this
expression is contained by the evaluated values of the arguments.
.. versionadded:: 1.5.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
cols
The result will only be true at a location if any value matches in the Column.
Returns
-------
:class:`Column`
Column of booleans showing whether each element in the Column is contained in cols.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df[df.name.isin("Bob", "Mike")].collect()
[Row(age=5, name='Bob')]
>>> df[df.age.isin([1, 2, 3])].collect()
[Row(age=2, name='Alice')]
"""
if len(cols) == 1 and isinstance(cols[0], (list, set)):
cols = cast(Tuple, cols[0])
cols = cast(
Tuple,
[c._jc if isinstance(c, Column) else _create_column_from_literal(c) for c in cols],
)
sc = get_active_spark_context()
jc = getattr(self._jc, "isin")(_to_seq(sc, cols))
return Column(jc)
| (self, *cols: Any) -> pyspark.sql.column.Column |
39,328 | pyspark.sql.column | like |
SQL like expression. Returns a boolean :class:`Column` based on a SQL LIKE match.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : str
a SQL LIKE pattern
See Also
--------
pyspark.sql.Column.rlike
Returns
-------
:class:`Column`
Column of booleans showing whether each element
in the Column is matched by SQL LIKE pattern.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.like('Al%')).collect()
[Row(age=2, name='Alice')]
| def like(self: "Column", other: str) -> "Column":
"""
SQL like expression. Returns a boolean :class:`Column` based on a SQL LIKE match.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : str
a SQL LIKE pattern
See Also
--------
pyspark.sql.Column.rlike
Returns
-------
:class:`Column`
Column of booleans showing whether each element
in the Column is matched by SQL LIKE pattern.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.like('Al%')).collect()
[Row(age=2, name='Alice')]
"""
njc = getattr(self._jc, "like")(other)
return Column(njc)
| (self: pyspark.sql.column.Column, other: str) -> pyspark.sql.column.Column |
39,329 | pyspark.sql.column | alias | :func:`name` is an alias for :func:`alias`.
.. versionadded:: 2.0 | def alias(self, *alias: str, **kwargs: Any) -> "Column":
"""
Returns this column aliased with a new name or names (in the case of expressions that
return more than one column, such as explode).
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
alias : str
desired column names (collects all positional arguments passed)
Other Parameters
----------------
metadata: dict
a dict of information to be stored in ``metadata`` attribute of the
corresponding :class:`StructField <pyspark.sql.types.StructField>` (optional, keyword
only argument)
.. versionchanged:: 2.2.0
Added optional ``metadata`` argument.
Returns
-------
:class:`Column`
Column representing whether each element of Column is aliased with new name or names.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.age.alias("age2")).collect()
[Row(age2=2), Row(age2=5)]
>>> df.select(df.age.alias("age3", metadata={'max': 99})).schema['age3'].metadata['max']
99
"""
metadata = kwargs.pop("metadata", None)
assert not kwargs, "Unexpected kwargs were passed: %s" % kwargs
sc = get_active_spark_context()
if len(alias) == 1:
if metadata:
assert sc._jvm is not None
jmeta = sc._jvm.org.apache.spark.sql.types.Metadata.fromJson(json.dumps(metadata))
return Column(getattr(self._jc, "as")(alias[0], jmeta))
else:
return Column(getattr(self._jc, "as")(alias[0]))
else:
if metadata:
raise PySparkValueError(
error_class="ONLY_ALLOWED_FOR_SINGLE_COLUMN",
message_parameters={"arg_name": "metadata"},
)
return Column(getattr(self._jc, "as")(_to_seq(sc, list(alias))))
| (self, *alias, **kwargs) |
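Because ``name`` forwards to ``alias``, a column can be renamed with either spelling. A sketch, assuming an active ``spark`` session:

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.select(df.age.name("age2")).collect()   # identical to df.age.alias("age2")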
39,330 | pyspark.sql.column | otherwise |
Evaluates a list of conditions and returns one of multiple possible result expressions.
If :func:`Column.otherwise` is not invoked, None is returned for unmatched conditions.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
value
a literal value, or a :class:`Column` expression.
Returns
-------
:class:`Column`
Column representing whether each element of Column is unmatched conditions.
Examples
--------
>>> from pyspark.sql import functions as sf
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name, sf.when(df.age > 3, 1).otherwise(0)).show()
+-----+-------------------------------------+
| name|CASE WHEN (age > 3) THEN 1 ELSE 0 END|
+-----+-------------------------------------+
|Alice| 0|
| Bob| 1|
+-----+-------------------------------------+
See Also
--------
pyspark.sql.functions.when
| def otherwise(self, value: Any) -> "Column":
"""
Evaluates a list of conditions and returns one of multiple possible result expressions.
If :func:`Column.otherwise` is not invoked, None is returned for unmatched conditions.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
value
a literal value, or a :class:`Column` expression.
Returns
-------
:class:`Column`
Column representing whether each element of Column is unmatched conditions.
Examples
--------
>>> from pyspark.sql import functions as sf
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name, sf.when(df.age > 3, 1).otherwise(0)).show()
+-----+-------------------------------------+
| name|CASE WHEN (age > 3) THEN 1 ELSE 0 END|
+-----+-------------------------------------+
|Alice| 0|
| Bob| 1|
+-----+-------------------------------------+
See Also
--------
pyspark.sql.functions.when
"""
v = value._jc if isinstance(value, Column) else value
jc = self._jc.otherwise(v)
return Column(jc)
| (self, value: Any) -> pyspark.sql.column.Column |
39,331 | pyspark.sql.column | over |
Define a windowing column.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
window : :class:`WindowSpec`
Returns
-------
:class:`Column`
Examples
--------
>>> from pyspark.sql import Window
>>> window = (
... Window.partitionBy("name")
... .orderBy("age")
... .rowsBetween(Window.unboundedPreceding, Window.currentRow)
... )
>>> from pyspark.sql.functions import rank, min, desc
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.withColumn(
... "rank", rank().over(window)
... ).withColumn(
... "min", min('age').over(window)
... ).sort(desc("age")).show()
+---+-----+----+---+
|age| name|rank|min|
+---+-----+----+---+
| 5| Bob| 1| 5|
| 2|Alice| 1| 2|
+---+-----+----+---+
| def over(self, window: "WindowSpec") -> "Column":
"""
Define a windowing column.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
window : :class:`WindowSpec`
Returns
-------
:class:`Column`
Examples
--------
>>> from pyspark.sql import Window
>>> window = (
... Window.partitionBy("name")
... .orderBy("age")
... .rowsBetween(Window.unboundedPreceding, Window.currentRow)
... )
>>> from pyspark.sql.functions import rank, min, desc
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.withColumn(
... "rank", rank().over(window)
... ).withColumn(
... "min", min('age').over(window)
... ).sort(desc("age")).show()
+---+-----+----+---+
|age| name|rank|min|
+---+-----+----+---+
| 5| Bob| 1| 5|
| 2|Alice| 1| 2|
+---+-----+----+---+
"""
from pyspark.sql.window import WindowSpec
if not isinstance(window, WindowSpec):
raise PySparkTypeError(
error_class="NOT_WINDOWSPEC",
message_parameters={"arg_name": "window", "arg_type": type(window).__name__},
)
jc = self._jc.over(window._jspec)
return Column(jc)
| (self, window: 'WindowSpec') -> 'Column' |
39,332 | pyspark.sql.column | rlike |
SQL RLIKE expression (LIKE with Regex). Returns a boolean :class:`Column` based on a regex
match.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : str
an extended regex expression
Returns
-------
:class:`Column`
Column of booleans showing whether each element
in the Column is matched by extended regex expression.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.rlike('ice$')).collect()
[Row(age=2, name='Alice')]
| def rlike(self: "Column", other: str) -> "Column":
"""
SQL RLIKE expression (LIKE with Regex). Returns a boolean :class:`Column` based on a regex
match.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : str
an extended regex expression
Returns
-------
:class:`Column`
Column of booleans showing whether each element
in the Column is matched by extended regex expression.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.rlike('ice$')).collect()
[Row(age=2, name='Alice')]
"""
njc = getattr(self._jc, "rlike")(other)
return Column(njc)
| (self: pyspark.sql.column.Column, other: str) -> pyspark.sql.column.Column |
39,333 | pyspark.sql.column | _ |
String starts with. Returns a boolean :class:`Column` based on a string match.
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : :class:`Column` or str
string at start of line (do not use a regex `^`)
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.filter(df.name.startswith('Al')).collect()
[Row(age=2, name='Alice')]
>>> df.filter(df.name.startswith('^Al')).collect()
[]
| def _bin_op(
name: str,
doc: str = "binary operator",
) -> Callable[
["Column", Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"]], "Column"
]:
"""Create a method for given binary operator"""
def _(
self: "Column",
other: Union["Column", "LiteralType", "DecimalLiteral", "DateTimeLiteral"],
) -> "Column":
jc = other._jc if isinstance(other, Column) else other
njc = getattr(self._jc, name)(jc)
return Column(njc)
_.__doc__ = doc
return _
| (self: 'Column', other: Union[ForwardRef('Column'), ForwardRef('LiteralType'), ForwardRef('DecimalLiteral'), ForwardRef('DateTimeLiteral')]) -> 'Column' |
39,334 | pyspark.sql.column | substr |
Return a :class:`Column` which is a substring of the column.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
startPos : :class:`Column` or int
start position
length : :class:`Column` or int
length of the substring
Returns
-------
:class:`Column`
 Column containing the requested substring of each element of the original Column.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name.substr(1, 3).alias("col")).collect()
[Row(col='Ali'), Row(col='Bob')]
| def substr(self, startPos: Union[int, "Column"], length: Union[int, "Column"]) -> "Column":
"""
Return a :class:`Column` which is a substring of the column.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
startPos : :class:`Column` or int
start position
length : :class:`Column` or int
length of the substring
Returns
-------
:class:`Column`
            Column containing the requested substring of each element of the original Column.
Examples
--------
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name.substr(1, 3).alias("col")).collect()
[Row(col='Ali'), Row(col='Bob')]
"""
if type(startPos) != type(length):
raise PySparkTypeError(
error_class="NOT_SAME_TYPE",
message_parameters={
"arg_name1": "startPos",
"arg_name2": "length",
"arg_type1": type(startPos).__name__,
"arg_type2": type(length).__name__,
},
)
if isinstance(startPos, int):
jc = self._jc.substr(startPos, length)
elif isinstance(startPos, Column):
jc = self._jc.substr(startPos._jc, cast("Column", length)._jc)
else:
raise PySparkTypeError(
error_class="NOT_COLUMN_OR_INT",
message_parameters={"arg_name": "startPos", "arg_type": type(startPos).__name__},
)
return Column(jc)
| (self, startPos: Union[int, pyspark.sql.column.Column], length: Union[int, pyspark.sql.column.Column]) -> pyspark.sql.column.Column |
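The doctest above only shows integer arguments; per the `isinstance(startPos, Column)` branch in the code, both arguments may also be Columns of the same type. A brief editorial sketch (assumes `spark`):
from pyspark.sql import functions as sf

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
# Both arguments must share a type; here both are Column literals.
df.select(df.name.substr(sf.lit(1), sf.lit(3)).alias("col")).collect()
# [Row(col='Ali'), Row(col='Bob')]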
39,335 | sparkypandy._column | to_pandas | null | def to_pandas(self) -> pd.Series:
# noinspection PyTypeChecker
df: pd.DataFrame = self.df_sparky.select(self._name).toPandas()
return df[self._name]
| (self) -> pandas.core.series.Series |
39,336 | pyspark.sql.column | when |
Evaluates a list of conditions and returns one of multiple possible result expressions.
If :func:`Column.otherwise` is not invoked, None is returned for unmatched conditions.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
condition : :class:`Column`
a boolean :class:`Column` expression.
value
a literal value, or a :class:`Column` expression.
Returns
-------
:class:`Column`
 Column of values produced by the first matching condition; unmatched rows get the :func:`Column.otherwise` value, or null if it is not set.
Examples
--------
>>> from pyspark.sql import functions as sf
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name, sf.when(df.age > 4, 1).when(df.age < 3, -1).otherwise(0)).show()
+-----+------------------------------------------------------------+
| name|CASE WHEN (age > 4) THEN 1 WHEN (age < 3) THEN -1 ELSE 0 END|
+-----+------------------------------------------------------------+
|Alice| -1|
| Bob| 1|
+-----+------------------------------------------------------------+
See Also
--------
pyspark.sql.functions.when
| def when(self, condition: "Column", value: Any) -> "Column":
"""
Evaluates a list of conditions and returns one of multiple possible result expressions.
If :func:`Column.otherwise` is not invoked, None is returned for unmatched conditions.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
condition : :class:`Column`
a boolean :class:`Column` expression.
value
a literal value, or a :class:`Column` expression.
Returns
-------
:class:`Column`
            Column of values produced by the first matching condition; unmatched rows get the :func:`Column.otherwise` value, or null if it is not set.
Examples
--------
>>> from pyspark.sql import functions as sf
>>> df = spark.createDataFrame(
... [(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df.select(df.name, sf.when(df.age > 4, 1).when(df.age < 3, -1).otherwise(0)).show()
+-----+------------------------------------------------------------+
| name|CASE WHEN (age > 4) THEN 1 WHEN (age < 3) THEN -1 ELSE 0 END|
+-----+------------------------------------------------------------+
|Alice| -1|
| Bob| 1|
+-----+------------------------------------------------------------+
See Also
--------
pyspark.sql.functions.when
"""
if not isinstance(condition, Column):
raise PySparkTypeError(
error_class="NOT_COLUMN",
message_parameters={"arg_name": "condition", "arg_type": type(condition).__name__},
)
v = value._jc if isinstance(value, Column) else value
jc = self._jc.when(condition._jc, v)
return Column(jc)
| (self, condition: pyspark.sql.column.Column, value: Any) -> pyspark.sql.column.Column |
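Editorial sketch pairing chained `when` calls with `otherwise` and an alias (assumes `spark`):
from pyspark.sql import functions as sf

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
# Column.when chains onto a Column started by functions.when;
# otherwise() supplies the value for rows no condition matched.
df.select(
    df.name,
    sf.when(df.age > 4, "senior").when(df.age > 1, "junior").otherwise("unknown").alias("bucket"),
).show()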
39,337 | pyspark.sql.column | withField |
An expression that adds/replaces a field in :class:`StructType` by name.
.. versionadded:: 3.1.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
fieldName : str
 the name of the field to add or replace.
col : :class:`Column`
A :class:`Column` expression for the column with `fieldName`.
Returns
-------
:class:`Column`
 Column with the field added or replaced by `fieldName`.
Examples
--------
>>> from pyspark.sql import Row
>>> from pyspark.sql.functions import lit
>>> df = spark.createDataFrame([Row(a=Row(b=1, c=2))])
>>> df.withColumn('a', df['a'].withField('b', lit(3))).select('a.b').show()
+---+
| b|
+---+
| 3|
+---+
>>> df.withColumn('a', df['a'].withField('d', lit(4))).select('a.d').show()
+---+
| d|
+---+
| 4|
+---+
| def withField(self, fieldName: str, col: "Column") -> "Column":
"""
An expression that adds/replaces a field in :class:`StructType` by name.
.. versionadded:: 3.1.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
fieldName : str
            the name of the field to add or replace.
col : :class:`Column`
A :class:`Column` expression for the column with `fieldName`.
Returns
-------
:class:`Column`
            Column with the field added or replaced by `fieldName`.
Examples
--------
>>> from pyspark.sql import Row
>>> from pyspark.sql.functions import lit
>>> df = spark.createDataFrame([Row(a=Row(b=1, c=2))])
>>> df.withColumn('a', df['a'].withField('b', lit(3))).select('a.b').show()
+---+
| b|
+---+
| 3|
+---+
>>> df.withColumn('a', df['a'].withField('d', lit(4))).select('a.d').show()
+---+
| d|
+---+
| 4|
+---+
"""
if not isinstance(fieldName, str):
raise PySparkTypeError(
error_class="NOT_STR",
message_parameters={"arg_name": "fieldName", "arg_type": type(fieldName).__name__},
)
if not isinstance(col, Column):
raise PySparkTypeError(
error_class="NOT_COLUMN",
message_parameters={"arg_name": "col", "arg_type": type(col).__name__},
)
return Column(self._jc.withField(fieldName, col._jc))
| (self, fieldName: str, col: pyspark.sql.column.Column) -> pyspark.sql.column.Column |
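`withField` calls can be chained to replace one field and add another in a single expression; a short editorial sketch (assumes `spark`):
from pyspark.sql import Row
from pyspark.sql.functions import lit

df = spark.createDataFrame([Row(a=Row(b=1, c=2))])
# Replace field b and add field d in one chained expression.
df.withColumn("a", df["a"].withField("b", lit(10)).withField("d", lit(4))).select("a.*").show()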
39,338 | sparkypandy._dataframe | DataFramy | null | class DataFramy(DataFrame): # type: ignore
@classmethod
def from_spark(cls, df_spark: DataFrame) -> DataFramy:
# noinspection PyProtectedMember
return cls(jdf=df_spark._jdf, sql_ctx=df_spark.sql_ctx)
@classmethod
def from_pandas(cls, spark_session: SparkSession, df_pandas: pd.DataFrame) -> DataFramy:
df_spark = spark_session.createDataFrame(df_pandas)
return cls.from_spark(df_spark)
def to_pandas(self) -> pd.DataFrame:
"""PEP8-compliant alias to toPandas()"""
# noinspection PyTypeChecker
return super().toPandas()
def __getitem__(self, item: str) -> Columny:
if not isinstance(item, str):
raise TypeError(f"Expected a string key, not {item}")
col = super().__getitem__(item=item)
return Columny.from_spark(col=col, df_sparky=self)
| (jdf: py4j.java_gateway.JavaObject, sql_ctx: Union[ForwardRef('SQLContext'), ForwardRef('SparkSession')]) |
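A hedged end-to-end sketch of the sparkypandy wrapper assembled from the rows in this listing (`from_pandas`, `__getitem__`, `to_pandas`); the import path is an assumption, since only the private modules `sparkypandy._dataframe` and `sparkypandy._column` appear here.
import pandas as pd
from pyspark.sql import SparkSession
from sparkypandy._dataframe import DataFramy   # module path per the row above; public re-exports unknown

spark = SparkSession.builder.getOrCreate()
pdf = pd.DataFrame({"age": [2, 5], "name": ["Alice", "Bob"]})

df = DataFramy.from_pandas(spark, pdf)   # pandas -> Spark-backed DataFramy
ages = df["age"].to_pandas()             # Columny -> pandas Series
round_tripped = df.to_pandas()           # whole frame back to pandas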
39,339 | pyspark.sql.dataframe | __dir__ |
Examples
--------
>>> from pyspark.sql.functions import lit
Create a dataframe with a column named 'id'.
>>> df = spark.range(3)
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # Includes column id
['id', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty', 'isLocal', 'isStreaming']
Add a column named 'i_like_pancakes'.
>>> df = df.withColumn('i_like_pancakes', lit(1))
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # Includes columns i_like_pancakes, id
['i_like_pancakes', 'id', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty', 'isLocal']
Try to add an existed column 'inputFiles'.
>>> df = df.withColumn('inputFiles', lit(2))
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # Doesn't duplicate inputFiles
['i_like_pancakes', 'id', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty', 'isLocal']
Try to add a column named 'id2'.
>>> df = df.withColumn('id2', lit(3))
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # result includes id2 and sorted
['i_like_pancakes', 'id', 'id2', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty']
Don't include columns that are not valid python identifiers.
>>> df = df.withColumn('1', lit(4))
>>> df = df.withColumn('name 1', lit(5))
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # Doesn't include 1 or name 1
['i_like_pancakes', 'id', 'id2', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty']
| def __dir__(self) -> List[str]:
"""
Examples
--------
>>> from pyspark.sql.functions import lit
Create a dataframe with a column named 'id'.
>>> df = spark.range(3)
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # Includes column id
['id', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty', 'isLocal', 'isStreaming']
Add a column named 'i_like_pancakes'.
>>> df = df.withColumn('i_like_pancakes', lit(1))
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # Includes columns i_like_pancakes, id
['i_like_pancakes', 'id', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty', 'isLocal']
Try to add an existed column 'inputFiles'.
>>> df = df.withColumn('inputFiles', lit(2))
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # Doesn't duplicate inputFiles
['i_like_pancakes', 'id', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty', 'isLocal']
Try to add a column named 'id2'.
>>> df = df.withColumn('id2', lit(3))
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # result includes id2 and sorted
['i_like_pancakes', 'id', 'id2', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty']
Don't include columns that are not valid python identifiers.
>>> df = df.withColumn('1', lit(4))
>>> df = df.withColumn('name 1', lit(5))
>>> [attr for attr in dir(df) if attr[0] == 'i'][:7] # Doesn't include 1 or name 1
['i_like_pancakes', 'id', 'id2', 'inputFiles', 'intersect', 'intersectAll', 'isEmpty']
"""
attrs = set(super().__dir__())
attrs.update(filter(lambda s: s.isidentifier(), self.columns))
return sorted(attrs)
| (self) -> List[str] |
39,340 | pyspark.sql.dataframe | __getattr__ | Returns the :class:`Column` denoted by ``name``.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Column name to return as :class:`Column`.
Returns
-------
:class:`Column`
Requested column.
Examples
--------
>>> df = spark.createDataFrame([
... (2, "Alice"), (5, "Bob")], schema=["age", "name"])
Retrieve a column instance.
>>> df.select(df.age).show()
+---+
|age|
+---+
| 2|
| 5|
+---+
| def __getattr__(self, name: str) -> Column:
"""Returns the :class:`Column` denoted by ``name``.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Column name to return as :class:`Column`.
Returns
-------
:class:`Column`
Requested column.
Examples
--------
>>> df = spark.createDataFrame([
... (2, "Alice"), (5, "Bob")], schema=["age", "name"])
Retrieve a column instance.
>>> df.select(df.age).show()
+---+
|age|
+---+
| 2|
| 5|
+---+
"""
if name not in self.columns:
raise AttributeError(
"'%s' object has no attribute '%s'" % (self.__class__.__name__, name)
)
jc = self._jdf.apply(name)
return Column(jc)
| (self, name: str) -> pyspark.sql.column.Column |
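A short editorial sketch of the attribute lookup above, including the failure path (assumes `spark`):
df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.age                 # resolves to a Column because "age" is in df.columns
try:
    df.salary          # not a column, so __getattr__ raises AttributeError
except AttributeError as e:
    print(e)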
39,341 | sparkypandy._dataframe | __getitem__ | null | def __getitem__(self, item: str) -> Columny:
if not isinstance(item, str):
raise TypeError(f"Expected a string key, not {item}")
col = super().__getitem__(item=item)
return Columny.from_spark(col=col, df_sparky=self)
| (self, item: str) -> sparkypandy._column.Columny |
39,342 | pyspark.sql.dataframe | __init__ | null | def __init__(
self,
jdf: JavaObject,
sql_ctx: Union["SQLContext", "SparkSession"],
):
from pyspark.sql.context import SQLContext
self._sql_ctx: Optional["SQLContext"] = None
if isinstance(sql_ctx, SQLContext):
assert not os.environ.get("SPARK_TESTING") # Sanity check for our internal usage.
assert isinstance(sql_ctx, SQLContext)
# We should remove this if-else branch in the future release, and rename
# sql_ctx to session in the constructor. This is an internal code path but
# was kept with a warning because it's used intensively by third-party libraries.
warnings.warn("DataFrame constructor is internal. Do not directly use it.")
self._sql_ctx = sql_ctx
session = sql_ctx.sparkSession
else:
session = sql_ctx
self._session: "SparkSession" = session
self._sc: SparkContext = sql_ctx._sc
self._jdf: JavaObject = jdf
self.is_cached = False
# initialized lazily
self._schema: Optional[StructType] = None
self._lazy_rdd: Optional[RDD[Row]] = None
# Check whether _repr_html is supported or not, we use it to avoid calling _jdf twice
# by __repr__ and _repr_html_ while eager evaluation opens.
self._support_repr_html = False
| (self, jdf: py4j.java_gateway.JavaObject, sql_ctx: Union[ForwardRef('SQLContext'), ForwardRef('SparkSession')]) |
39,343 | pyspark.sql.dataframe | __repr__ | null | def __repr__(self) -> str:
if not self._support_repr_html and self.sparkSession._jconf.isReplEagerEvalEnabled():
vertical = False
return self._jdf.showString(
self.sparkSession._jconf.replEagerEvalMaxNumRows(),
self.sparkSession._jconf.replEagerEvalTruncate(),
vertical,
)
else:
return "DataFrame[%s]" % (", ".join("%s: %s" % c for c in self.dtypes))
| (self) -> str |
39,344 | pyspark.sql.pandas.conversion | _collect_as_arrow |
 Returns all records as a list of ArrowRecordBatches; pyarrow must be installed
and available on driver and worker Python environments.
This is an experimental feature.
:param split_batches: split batches such that each column is in its own allocation, so
that the selfDestruct optimization is effective; default False.
.. note:: Experimental.
| def _collect_as_arrow(self, split_batches: bool = False) -> List["pa.RecordBatch"]:
"""
        Returns all records as a list of ArrowRecordBatches; pyarrow must be installed
and available on driver and worker Python environments.
This is an experimental feature.
:param split_batches: split batches such that each column is in its own allocation, so
that the selfDestruct optimization is effective; default False.
.. note:: Experimental.
"""
from pyspark.sql.dataframe import DataFrame
assert isinstance(self, DataFrame)
with SCCallSiteSync(self._sc):
(
port,
auth_secret,
jsocket_auth_server,
) = self._jdf.collectAsArrowToPython()
# Collect list of un-ordered batches where last element is a list of correct order indices
try:
batch_stream = _load_from_socket((port, auth_secret), ArrowCollectSerializer())
if split_batches:
# When spark.sql.execution.arrow.pyspark.selfDestruct.enabled, ensure
# each column in each record batch is contained in its own allocation.
# Otherwise, selfDestruct does nothing; it frees each column as its
# converted, but each column will actually be a list of slices of record
# batches, and so no memory is actually freed until all columns are
# converted.
import pyarrow as pa
results = []
for batch_or_indices in batch_stream:
if isinstance(batch_or_indices, pa.RecordBatch):
batch_or_indices = pa.RecordBatch.from_arrays(
[
# This call actually reallocates the array
pa.concat_arrays([array])
for array in batch_or_indices
],
schema=batch_or_indices.schema,
)
results.append(batch_or_indices)
else:
results = list(batch_stream)
finally:
with unwrap_spark_exception():
# Join serving thread and raise any exceptions from collectAsArrowToPython
jsocket_auth_server.getResult()
# Separate RecordBatches from batch order indices in results
batches = results[:-1]
batch_order = results[-1]
# Re-order the batch list using the correct order
return [batches[i] for i in batch_order]
| (self, split_batches: bool = False) -> List[ForwardRef('pa.RecordBatch')] |
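A hedged sketch of consuming the result of the private `_collect_as_arrow` helper with pyarrow; the method is experimental per its own docstring, and the reassembly via `pa.Table.from_batches` is my illustration (assumes `spark` and pyarrow installed).
import pyarrow as pa

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
batches = df._collect_as_arrow()          # experimental, private API per the docstring above
table = pa.Table.from_batches(batches)    # reassemble the record batches
pdf = table.to_pandas()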
39,345 | pyspark.sql.dataframe | _ipython_key_completions_ | Returns the names of columns in this :class:`DataFrame`.
Examples
--------
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df._ipython_key_completions_()
['age', 'name']
Would return illegal identifiers.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age 1", "name?1"])
>>> df._ipython_key_completions_()
['age 1', 'name?1']
| def _ipython_key_completions_(self) -> List[str]:
"""Returns the names of columns in this :class:`DataFrame`.
Examples
--------
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
>>> df._ipython_key_completions_()
['age', 'name']
Would return illegal identifiers.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age 1", "name?1"])
>>> df._ipython_key_completions_()
['age 1', 'name?1']
"""
return self.columns
| (self) -> List[str] |
39,346 | pyspark.sql.dataframe | _jcols | Return a JVM Seq of Columns from a list of Column or column names
If `cols` has only one list in it, cols[0] will be used as the list.
| def _jcols(self, *cols: "ColumnOrName") -> JavaObject:
"""Return a JVM Seq of Columns from a list of Column or column names
If `cols` has only one list in it, cols[0] will be used as the list.
"""
if len(cols) == 1 and isinstance(cols[0], list):
cols = cols[0]
return self._jseq(cols, _to_java_column)
| (self, *cols: 'ColumnOrName') -> py4j.java_gateway.JavaObject |
39,347 | pyspark.sql.dataframe | _jmap | Return a JVM Scala Map from a dict | def _jmap(self, jm: Dict) -> JavaObject:
"""Return a JVM Scala Map from a dict"""
return _to_scala_map(self.sparkSession._sc, jm)
| (self, jm: Dict) -> py4j.java_gateway.JavaObject |
39,348 | pyspark.sql.dataframe | _joinAsOf |
Perform an as-of join.
This is similar to a left-join except that we match on the nearest
key rather than equal keys.
Parameters
----------
other : :class:`DataFrame`
Right side of the join
leftAsOfColumn : str or :class:`Column`
a string for the as-of join column name, or a Column
rightAsOfColumn : str or :class:`Column`
a string for the as-of join column name, or a Column
on : str, list or :class:`Column`, optional
a string for the join column name, a list of column names,
a join expression (Column), or a list of Columns.
If `on` is a string or a list of strings indicating the name of the join column(s),
the column(s) must exist on both sides, and this performs an equi-join.
how : str, optional
default ``inner``. Must be one of: ``inner`` and ``left``.
tolerance : :class:`Column`, optional
an asof tolerance within this range; must be compatible
with the merge index.
allowExactMatches : bool, optional
default ``True``.
direction : str, optional
default ``backward``. Must be one of: ``backward``, ``forward``, and ``nearest``.
Examples
--------
The following performs an as-of join between ``left`` and ``right``.
>>> left = spark.createDataFrame([(1, "a"), (5, "b"), (10, "c")], ["a", "left_val"])
>>> right = spark.createDataFrame([(1, 1), (2, 2), (3, 3), (6, 6), (7, 7)],
... ["a", "right_val"])
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a"
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=1, left_val='a', right_val=1),
Row(a=5, left_val='b', right_val=3),
Row(a=10, left_val='c', right_val=7)]
>>> from pyspark.sql import functions as sf
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a", tolerance=sf.lit(1)
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=1, left_val='a', right_val=1)]
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a", how="left", tolerance=sf.lit(1)
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=1, left_val='a', right_val=1),
Row(a=5, left_val='b', right_val=None),
Row(a=10, left_val='c', right_val=None)]
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a", allowExactMatches=False
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=5, left_val='b', right_val=3),
Row(a=10, left_val='c', right_val=7)]
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a", direction="forward"
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=1, left_val='a', right_val=1),
Row(a=5, left_val='b', right_val=6)]
| def _joinAsOf(
self,
other: "DataFrame",
leftAsOfColumn: Union[str, Column],
rightAsOfColumn: Union[str, Column],
on: Optional[Union[str, List[str], Column, List[Column]]] = None,
how: Optional[str] = None,
*,
tolerance: Optional[Column] = None,
allowExactMatches: bool = True,
direction: str = "backward",
) -> "DataFrame":
"""
Perform an as-of join.
This is similar to a left-join except that we match on the nearest
key rather than equal keys.
Parameters
----------
other : :class:`DataFrame`
Right side of the join
leftAsOfColumn : str or :class:`Column`
a string for the as-of join column name, or a Column
rightAsOfColumn : str or :class:`Column`
a string for the as-of join column name, or a Column
on : str, list or :class:`Column`, optional
a string for the join column name, a list of column names,
a join expression (Column), or a list of Columns.
If `on` is a string or a list of strings indicating the name of the join column(s),
the column(s) must exist on both sides, and this performs an equi-join.
how : str, optional
default ``inner``. Must be one of: ``inner`` and ``left``.
tolerance : :class:`Column`, optional
an asof tolerance within this range; must be compatible
with the merge index.
allowExactMatches : bool, optional
default ``True``.
direction : str, optional
default ``backward``. Must be one of: ``backward``, ``forward``, and ``nearest``.
Examples
--------
The following performs an as-of join between ``left`` and ``right``.
>>> left = spark.createDataFrame([(1, "a"), (5, "b"), (10, "c")], ["a", "left_val"])
>>> right = spark.createDataFrame([(1, 1), (2, 2), (3, 3), (6, 6), (7, 7)],
... ["a", "right_val"])
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a"
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=1, left_val='a', right_val=1),
Row(a=5, left_val='b', right_val=3),
Row(a=10, left_val='c', right_val=7)]
>>> from pyspark.sql import functions as sf
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a", tolerance=sf.lit(1)
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=1, left_val='a', right_val=1)]
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a", how="left", tolerance=sf.lit(1)
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=1, left_val='a', right_val=1),
Row(a=5, left_val='b', right_val=None),
Row(a=10, left_val='c', right_val=None)]
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a", allowExactMatches=False
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=5, left_val='b', right_val=3),
Row(a=10, left_val='c', right_val=7)]
>>> left._joinAsOf(
... right, leftAsOfColumn="a", rightAsOfColumn="a", direction="forward"
... ).select(left.a, 'left_val', 'right_val').sort("a").collect()
[Row(a=1, left_val='a', right_val=1),
Row(a=5, left_val='b', right_val=6)]
"""
if isinstance(leftAsOfColumn, str):
leftAsOfColumn = self[leftAsOfColumn]
left_as_of_jcol = leftAsOfColumn._jc
if isinstance(rightAsOfColumn, str):
rightAsOfColumn = other[rightAsOfColumn]
right_as_of_jcol = rightAsOfColumn._jc
if on is not None and not isinstance(on, list):
on = [on] # type: ignore[assignment]
if on is not None:
if isinstance(on[0], str):
on = self._jseq(cast(List[str], on))
else:
assert isinstance(on[0], Column), "on should be Column or list of Column"
on = reduce(lambda x, y: x.__and__(y), cast(List[Column], on))
on = on._jc
if how is None:
how = "inner"
assert isinstance(how, str), "how should be a string"
if tolerance is not None:
assert isinstance(tolerance, Column), "tolerance should be Column"
tolerance = tolerance._jc
jdf = self._jdf.joinAsOf(
other._jdf,
left_as_of_jcol,
right_as_of_jcol,
on,
how,
tolerance,
allowExactMatches,
direction,
)
return DataFrame(jdf, self.sparkSession)
| (self, other: pyspark.sql.dataframe.DataFrame, leftAsOfColumn: Union[str, pyspark.sql.column.Column], rightAsOfColumn: Union[str, pyspark.sql.column.Column], on: Union[str, List[str], pyspark.sql.column.Column, List[pyspark.sql.column.Column], NoneType] = None, how: Optional[str] = None, *, tolerance: Optional[pyspark.sql.column.Column] = None, allowExactMatches: bool = True, direction: str = 'backward') -> pyspark.sql.dataframe.DataFrame |
39,349 | pyspark.sql.dataframe | _jseq | Return a JVM Seq of Columns from a list of Column or names | def _jseq(
self,
cols: Sequence,
converter: Optional[Callable[..., Union["PrimitiveType", JavaObject]]] = None,
) -> JavaObject:
"""Return a JVM Seq of Columns from a list of Column or names"""
return _to_seq(self.sparkSession._sc, cols, converter)
| (self, cols: Sequence, converter: Optional[Callable[..., Union[ForwardRef('PrimitiveType'), py4j.java_gateway.JavaObject]]] = None) -> py4j.java_gateway.JavaObject |
39,350 | pyspark.sql.dataframe | _repr_html_ | Returns this :class:`DataFrame` rendered as HTML when eager evaluation is
 enabled via 'spark.sql.repl.eagerEval.enabled'; this is only called by REPLs
 that support eager evaluation with HTML.
| def _repr_html_(self) -> Optional[str]:
"""Returns a :class:`DataFrame` with html code when you enabled eager evaluation
by 'spark.sql.repl.eagerEval.enabled', this only called by REPL you are
using support eager evaluation with HTML.
"""
if not self._support_repr_html:
self._support_repr_html = True
if self.sparkSession._jconf.isReplEagerEvalEnabled():
return self._jdf.htmlString(
self.sparkSession._jconf.replEagerEvalMaxNumRows(),
self.sparkSession._jconf.replEagerEvalTruncate(),
)
else:
return None
| (self) -> Optional[str] |
39,351 | pyspark.sql.dataframe | _show_string | null | def _show_string(
self, n: int = 20, truncate: Union[bool, int] = True, vertical: bool = False
) -> str:
if not isinstance(n, int) or isinstance(n, bool):
raise PySparkTypeError(
error_class="NOT_INT",
message_parameters={"arg_name": "n", "arg_type": type(n).__name__},
)
if not isinstance(vertical, bool):
raise PySparkTypeError(
error_class="NOT_BOOL",
message_parameters={"arg_name": "vertical", "arg_type": type(vertical).__name__},
)
if isinstance(truncate, bool) and truncate:
return self._jdf.showString(n, 20, vertical)
else:
try:
int_truncate = int(truncate)
except ValueError:
raise PySparkTypeError(
error_class="NOT_BOOL",
message_parameters={
"arg_name": "truncate",
"arg_type": type(truncate).__name__,
},
)
return self._jdf.showString(n, int_truncate, vertical)
| (self, n: int = 20, truncate: Union[bool, int] = True, vertical: bool = False) -> str |
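`_show_string` is the private helper backing `DataFrame.show`; a minimal editorial sketch of calling it directly (assumes `spark`):
df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
print(df._show_string(n=1, truncate=False, vertical=True))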
39,352 | pyspark.sql.dataframe | _sort_cols | Return a JVM Seq of Columns that describes the sort order | def _sort_cols(
self, cols: Sequence[Union[str, Column, List[Union[str, Column]]]], kwargs: Dict[str, Any]
) -> JavaObject:
"""Return a JVM Seq of Columns that describes the sort order"""
if not cols:
raise PySparkValueError(
error_class="CANNOT_BE_EMPTY",
message_parameters={"item": "column"},
)
if len(cols) == 1 and isinstance(cols[0], list):
cols = cols[0]
jcols = [_to_java_column(cast("ColumnOrName", c)) for c in cols]
ascending = kwargs.get("ascending", True)
if isinstance(ascending, (bool, int)):
if not ascending:
jcols = [jc.desc() for jc in jcols]
elif isinstance(ascending, list):
jcols = [jc if asc else jc.desc() for asc, jc in zip(ascending, jcols)]
else:
raise PySparkTypeError(
error_class="NOT_BOOL_OR_LIST",
message_parameters={"arg_name": "ascending", "arg_type": type(ascending).__name__},
)
return self._jseq(jcols)
| (self, cols: Sequence[Union[str, pyspark.sql.column.Column, List[Union[str, pyspark.sql.column.Column]]]], kwargs: Dict[str, Any]) -> py4j.java_gateway.JavaObject |
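A brief editorial sketch exercising the per-column `ascending` list handled above, via the public `sort` (assumes `spark`):
df = spark.createDataFrame([(2, "Alice"), (5, "Bob"), (7, "Bob")], ["age", "name"])
# ascending may be a single bool or one flag per sort column.
df.sort("name", "age", ascending=[True, False]).show()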
39,353 | pyspark.sql.dataframe | agg | Aggregate on the entire :class:`DataFrame` without groups
(shorthand for ``df.groupBy().agg()``).
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
exprs : :class:`Column` or dict of key and value strings
Columns or expressions to aggregate DataFrame by.
Returns
-------
:class:`DataFrame`
Aggregated DataFrame.
Examples
--------
>>> from pyspark.sql import functions as sf
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.agg({"age": "max"}).show()
+--------+
|max(age)|
+--------+
| 5|
+--------+
>>> df.agg(sf.min(df.age)).show()
+--------+
|min(age)|
+--------+
| 2|
+--------+
| def agg(self, *exprs: Union[Column, Dict[str, str]]) -> "DataFrame":
"""Aggregate on the entire :class:`DataFrame` without groups
(shorthand for ``df.groupBy().agg()``).
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
exprs : :class:`Column` or dict of key and value strings
Columns or expressions to aggregate DataFrame by.
Returns
-------
:class:`DataFrame`
Aggregated DataFrame.
Examples
--------
>>> from pyspark.sql import functions as sf
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.agg({"age": "max"}).show()
+--------+
|max(age)|
+--------+
| 5|
+--------+
>>> df.agg(sf.min(df.age)).show()
+--------+
|min(age)|
+--------+
| 2|
+--------+
"""
return self.groupBy().agg(*exprs) # type: ignore[arg-type]
| (self, *exprs: Union[pyspark.sql.column.Column, Dict[str, str]]) -> pyspark.sql.dataframe.DataFrame |
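A small extra editorial sketch combining several aggregates in one `agg` call (assumes `spark`):
from pyspark.sql import functions as sf

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])
df.agg(sf.min("age").alias("min_age"), sf.max("age").alias("max_age")).show()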
39,354 | pyspark.sql.dataframe | alias | Returns a new :class:`DataFrame` with an alias set.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
alias : str
an alias name to be set for the :class:`DataFrame`.
Returns
-------
:class:`DataFrame`
Aliased DataFrame.
Examples
--------
>>> from pyspark.sql.functions import col, desc
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
>>> df_as1 = df.alias("df_as1")
>>> df_as2 = df.alias("df_as2")
>>> joined_df = df_as1.join(df_as2, col("df_as1.name") == col("df_as2.name"), 'inner')
>>> joined_df.select(
... "df_as1.name", "df_as2.name", "df_as2.age").sort(desc("df_as1.name")).show()
+-----+-----+---+
| name| name|age|
+-----+-----+---+
| Tom| Tom| 14|
| Bob| Bob| 16|
|Alice|Alice| 23|
+-----+-----+---+
| def alias(self, alias: str) -> "DataFrame":
"""Returns a new :class:`DataFrame` with an alias set.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
alias : str
an alias name to be set for the :class:`DataFrame`.
Returns
-------
:class:`DataFrame`
Aliased DataFrame.
Examples
--------
>>> from pyspark.sql.functions import col, desc
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
>>> df_as1 = df.alias("df_as1")
>>> df_as2 = df.alias("df_as2")
>>> joined_df = df_as1.join(df_as2, col("df_as1.name") == col("df_as2.name"), 'inner')
>>> joined_df.select(
... "df_as1.name", "df_as2.name", "df_as2.age").sort(desc("df_as1.name")).show()
+-----+-----+---+
| name| name|age|
+-----+-----+---+
| Tom| Tom| 14|
| Bob| Bob| 16|
|Alice|Alice| 23|
+-----+-----+---+
"""
assert isinstance(alias, str), "alias should be a string"
return DataFrame(getattr(self._jdf, "as")(alias), self.sparkSession)
| (self, alias: str) -> pyspark.sql.dataframe.DataFrame |
39,355 | pyspark.sql.dataframe | approxQuantile |
Calculates the approximate quantiles of numerical columns of a
:class:`DataFrame`.
The result of this algorithm has the following deterministic bound:
If the :class:`DataFrame` has N elements and if we request the quantile at
probability `p` up to error `err`, then the algorithm will return
a sample `x` from the :class:`DataFrame` so that the *exact* rank of `x` is
close to (p * N). More precisely,
floor((p - err) * N) <= rank(x) <= ceil((p + err) * N).
This method implements a variation of the Greenwald-Khanna
algorithm (with some speed optimizations). The algorithm was first
 presented in [[https://doi.org/10.1145/375663.375670
Space-efficient Online Computation of Quantile Summaries]]
by Greenwald and Khanna.
.. versionadded:: 2.0.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
col: str, tuple or list
Can be a single column name, or a list of names for multiple columns.
.. versionchanged:: 2.2.0
Added support for multiple columns.
probabilities : list or tuple
a list of quantile probabilities
Each number must belong to [0, 1].
For example 0 is the minimum, 0.5 is the median, 1 is the maximum.
relativeError : float
The relative target precision to achieve
(>= 0). If set to zero, the exact quantiles are computed, which
could be very expensive. Note that values greater than 1 are
accepted but gives the same result as 1.
Returns
-------
list
the approximate quantiles at the given probabilities.
* If the input `col` is a string, the output is a list of floats.
* If the input `col` is a list or tuple of strings, the output is also a
list, but each element in it is a list of floats, i.e., the output
is a list of list of floats.
Notes
-----
Null values will be ignored in numerical columns before calculation.
For columns only containing null values, an empty list is returned.
| def approxQuantile(
self,
col: Union[str, List[str], Tuple[str]],
probabilities: Union[List[float], Tuple[float]],
relativeError: float,
) -> Union[List[float], List[List[float]]]:
"""
Calculates the approximate quantiles of numerical columns of a
:class:`DataFrame`.
The result of this algorithm has the following deterministic bound:
If the :class:`DataFrame` has N elements and if we request the quantile at
probability `p` up to error `err`, then the algorithm will return
a sample `x` from the :class:`DataFrame` so that the *exact* rank of `x` is
close to (p * N). More precisely,
floor((p - err) * N) <= rank(x) <= ceil((p + err) * N).
This method implements a variation of the Greenwald-Khanna
algorithm (with some speed optimizations). The algorithm was first
        presented in [[https://doi.org/10.1145/375663.375670
Space-efficient Online Computation of Quantile Summaries]]
by Greenwald and Khanna.
.. versionadded:: 2.0.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
col: str, tuple or list
Can be a single column name, or a list of names for multiple columns.
.. versionchanged:: 2.2.0
Added support for multiple columns.
probabilities : list or tuple
a list of quantile probabilities
Each number must belong to [0, 1].
For example 0 is the minimum, 0.5 is the median, 1 is the maximum.
relativeError : float
The relative target precision to achieve
(>= 0). If set to zero, the exact quantiles are computed, which
could be very expensive. Note that values greater than 1 are
accepted but gives the same result as 1.
Returns
-------
list
the approximate quantiles at the given probabilities.
* If the input `col` is a string, the output is a list of floats.
* If the input `col` is a list or tuple of strings, the output is also a
list, but each element in it is a list of floats, i.e., the output
is a list of list of floats.
Notes
-----
Null values will be ignored in numerical columns before calculation.
For columns only containing null values, an empty list is returned.
"""
if not isinstance(col, (str, list, tuple)):
raise PySparkTypeError(
error_class="NOT_LIST_OR_STR_OR_TUPLE",
message_parameters={"arg_name": "col", "arg_type": type(col).__name__},
)
isStr = isinstance(col, str)
if isinstance(col, tuple):
col = list(col)
elif isStr:
col = [cast(str, col)]
for c in col:
if not isinstance(c, str):
raise PySparkTypeError(
error_class="DISALLOWED_TYPE_FOR_CONTAINER",
message_parameters={
"arg_name": "col",
"arg_type": type(col).__name__,
"allowed_types": "str",
"return_type": type(c).__name__,
},
)
col = _to_list(self._sc, cast(List["ColumnOrName"], col))
if not isinstance(probabilities, (list, tuple)):
raise PySparkTypeError(
error_class="NOT_LIST_OR_TUPLE",
message_parameters={
"arg_name": "probabilities",
"arg_type": type(probabilities).__name__,
},
)
if isinstance(probabilities, tuple):
probabilities = list(probabilities)
for p in probabilities:
if not isinstance(p, (float, int)) or p < 0 or p > 1:
raise PySparkTypeError(
error_class="NOT_LIST_OF_FLOAT_OR_INT",
message_parameters={
"arg_name": "probabilities",
"arg_type": type(p).__name__,
},
)
probabilities = _to_list(self._sc, cast(List["ColumnOrName"], probabilities))
if not isinstance(relativeError, (float, int)):
raise PySparkTypeError(
error_class="NOT_FLOAT_OR_INT",
message_parameters={
"arg_name": "relativeError",
"arg_type": type(relativeError).__name__,
},
)
if relativeError < 0:
raise PySparkValueError(
error_class="NEGATIVE_VALUE",
message_parameters={
"arg_name": "relativeError",
"arg_value": str(relativeError),
},
)
relativeError = float(relativeError)
jaq = self._jdf.stat().approxQuantile(col, probabilities, relativeError)
jaq_list = [list(j) for j in jaq]
return jaq_list[0] if isStr else jaq_list
| (self, col: Union[str, List[str], Tuple[str]], probabilities: Union[List[float], Tuple[float]], relativeError: float) -> Union[List[float], List[List[float]]] |
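The `approxQuantile` docstring above carries no example, so here is a minimal editorial sketch (assumes `spark`; the results are indicative, not asserted outputs):
df = spark.createDataFrame([(float(i),) for i in range(100)], ["x"])
# Quartiles of x with a 5% relative error bound.
df.approxQuantile("x", [0.25, 0.5, 0.75], 0.05)
# A list of columns yields one list of quantiles per column, e.g. [[...], [...]].
df2 = spark.createDataFrame([(1, 10), (2, 20), (3, 30)], ["a", "b"])
df2.approxQuantile(["a", "b"], [0.5], 0.0)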
39,356 | pyspark.sql.dataframe | cache | Persists the :class:`DataFrame` with the default storage level (`MEMORY_AND_DISK_DESER`).
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Notes
-----
The default storage level has changed to `MEMORY_AND_DISK_DESER` to match Scala in 3.0.
Returns
-------
:class:`DataFrame`
Cached DataFrame.
Examples
--------
>>> df = spark.range(1)
>>> df.cache()
DataFrame[id: bigint]
>>> df.explain()
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- InMemoryTableScan ...
| def cache(self) -> "DataFrame":
"""Persists the :class:`DataFrame` with the default storage level (`MEMORY_AND_DISK_DESER`).
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Notes
-----
The default storage level has changed to `MEMORY_AND_DISK_DESER` to match Scala in 3.0.
Returns
-------
:class:`DataFrame`
Cached DataFrame.
Examples
--------
>>> df = spark.range(1)
>>> df.cache()
DataFrame[id: bigint]
>>> df.explain()
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- InMemoryTableScan ...
"""
self.is_cached = True
self._jdf.cache()
return self
| (self) -> pyspark.sql.dataframe.DataFrame |
39,357 | pyspark.sql.dataframe | checkpoint | Returns a checkpointed version of this :class:`DataFrame`. Checkpointing can be used to
truncate the logical plan of this :class:`DataFrame`, which is especially useful in
iterative algorithms where the plan may grow exponentially. It will be saved to files
inside the checkpoint directory set with :meth:`SparkContext.setCheckpointDir`.
.. versionadded:: 2.1.0
Parameters
----------
eager : bool, optional, default True
Whether to checkpoint this :class:`DataFrame` immediately.
Returns
-------
:class:`DataFrame`
Checkpointed DataFrame.
Notes
-----
This API is experimental.
Examples
--------
>>> import tempfile
>>> df = spark.createDataFrame([
... (14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
 >>> with tempfile.TemporaryDirectory() as d:
 ...     spark.sparkContext.setCheckpointDir(d)
... df.checkpoint(False)
DataFrame[age: bigint, name: string]
| def checkpoint(self, eager: bool = True) -> "DataFrame":
"""Returns a checkpointed version of this :class:`DataFrame`. Checkpointing can be used to
truncate the logical plan of this :class:`DataFrame`, which is especially useful in
iterative algorithms where the plan may grow exponentially. It will be saved to files
inside the checkpoint directory set with :meth:`SparkContext.setCheckpointDir`.
.. versionadded:: 2.1.0
Parameters
----------
eager : bool, optional, default True
Whether to checkpoint this :class:`DataFrame` immediately.
Returns
-------
:class:`DataFrame`
Checkpointed DataFrame.
Notes
-----
This API is experimental.
Examples
--------
>>> import tempfile
>>> df = spark.createDataFrame([
... (14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
        >>> with tempfile.TemporaryDirectory() as d:
        ...     spark.sparkContext.setCheckpointDir(d)
... df.checkpoint(False)
DataFrame[age: bigint, name: string]
"""
jdf = self._jdf.checkpoint(eager)
return DataFrame(jdf, self.sparkSession)
| (self, eager: bool = True) -> pyspark.sql.dataframe.DataFrame |
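A brief editorial sketch of checkpointing against a temporary directory, plus the directory-free `localCheckpoint` variant (assumes `spark`):
import tempfile

df = spark.range(10)
with tempfile.TemporaryDirectory() as d:
    spark.sparkContext.setCheckpointDir(d)
    truncated = df.checkpoint()     # eager by default: materializes and truncates the lineage
# localCheckpoint() is the executor-local variant and needs no checkpoint directory.
local = df.localCheckpoint()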
39,358 | pyspark.sql.dataframe | coalesce |
Returns a new :class:`DataFrame` that has exactly `numPartitions` partitions.
Similar to coalesce defined on an :class:`RDD`, this operation results in a
narrow dependency, e.g. if you go from 1000 partitions to 100 partitions,
there will not be a shuffle, instead each of the 100 new partitions will
claim 10 of the current partitions. If a larger number of partitions is requested,
it will stay at the current number of partitions.
However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
this may result in your computation taking place on fewer nodes than
you like (e.g. one node in the case of numPartitions = 1). To avoid this,
you can call repartition(). This will add a shuffle step, but means the
current upstream partitions will be executed in parallel (per whatever
the current partitioning is).
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
numPartitions : int
specify the target number of partitions
Returns
-------
:class:`DataFrame`
Examples
--------
>>> df = spark.range(10)
>>> df.coalesce(1).rdd.getNumPartitions()
1
| def coalesce(self, numPartitions: int) -> "DataFrame":
"""
Returns a new :class:`DataFrame` that has exactly `numPartitions` partitions.
Similar to coalesce defined on an :class:`RDD`, this operation results in a
narrow dependency, e.g. if you go from 1000 partitions to 100 partitions,
there will not be a shuffle, instead each of the 100 new partitions will
claim 10 of the current partitions. If a larger number of partitions is requested,
it will stay at the current number of partitions.
However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
this may result in your computation taking place on fewer nodes than
you like (e.g. one node in the case of numPartitions = 1). To avoid this,
you can call repartition(). This will add a shuffle step, but means the
current upstream partitions will be executed in parallel (per whatever
the current partitioning is).
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
numPartitions : int
specify the target number of partitions
Returns
-------
:class:`DataFrame`
Examples
--------
>>> df = spark.range(10)
>>> df.coalesce(1).rdd.getNumPartitions()
1
"""
return DataFrame(self._jdf.coalesce(numPartitions), self.sparkSession)
| (self, numPartitions: int) -> pyspark.sql.dataframe.DataFrame |
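A short editorial contrast of `coalesce` with `repartition` (assumes `spark`):
df = spark.range(10).repartition(4)
df.coalesce(2).rdd.getNumPartitions()     # 2, narrow dependency, no shuffle
df.coalesce(8).rdd.getNumPartitions()     # stays 4: coalesce never increases partitions
df.repartition(8).rdd.getNumPartitions()  # 8: repartition shuffles and can increase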
39,359 | pyspark.sql.dataframe | colRegex |
Selects column based on the column name specified as a regex and returns it
as :class:`Column`.
.. versionadded:: 2.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
colName : str
string, column name specified as a regex.
Returns
-------
:class:`Column`
Examples
--------
>>> df = spark.createDataFrame([("a", 1), ("b", 2), ("c", 3)], ["Col1", "Col2"])
>>> df.select(df.colRegex("`(Col1)?+.+`")).show()
+----+
|Col2|
+----+
| 1|
| 2|
| 3|
+----+
| def colRegex(self, colName: str) -> Column:
"""
Selects column based on the column name specified as a regex and returns it
as :class:`Column`.
.. versionadded:: 2.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
colName : str
string, column name specified as a regex.
Returns
-------
:class:`Column`
Examples
--------
>>> df = spark.createDataFrame([("a", 1), ("b", 2), ("c", 3)], ["Col1", "Col2"])
>>> df.select(df.colRegex("`(Col1)?+.+`")).show()
+----+
|Col2|
+----+
| 1|
| 2|
| 3|
+----+
"""
if not isinstance(colName, str):
raise PySparkTypeError(
error_class="NOT_STR",
message_parameters={"arg_name": "colName", "arg_type": type(colName).__name__},
)
jc = self._jdf.colRegex(colName)
return Column(jc)
| (self, colName: str) -> pyspark.sql.column.Column |
39,360 | pyspark.sql.dataframe | collect | Returns all the records as a list of :class:`Row`.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Returns
-------
list
List of rows.
Examples
--------
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
>>> df.collect()
[Row(age=14, name='Tom'), Row(age=23, name='Alice'), Row(age=16, name='Bob')]
| def collect(self) -> List[Row]:
"""Returns all the records as a list of :class:`Row`.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Returns
-------
list
List of rows.
Examples
--------
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
>>> df.collect()
[Row(age=14, name='Tom'), Row(age=23, name='Alice'), Row(age=16, name='Bob')]
"""
with SCCallSiteSync(self._sc):
sock_info = self._jdf.collectToPython()
return list(_load_from_socket(sock_info, BatchedSerializer(CPickleSerializer())))
| (self) -> List[pyspark.sql.types.Row] |
39,361 | pyspark.sql.dataframe | corr |
Calculates the correlation of two columns of a :class:`DataFrame` as a double value.
Currently only supports the Pearson Correlation Coefficient.
:func:`DataFrame.corr` and :func:`DataFrameStatFunctions.corr` are aliases of each other.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
col1 : str
The name of the first column
col2 : str
The name of the second column
method : str, optional
The correlation method. Currently only supports "pearson"
Returns
-------
float
Pearson Correlation Coefficient of two columns.
Examples
--------
>>> df = spark.createDataFrame([(1, 12), (10, 1), (19, 8)], ["c1", "c2"])
>>> df.corr("c1", "c2")
-0.3592106040535498
>>> df = spark.createDataFrame([(11, 12), (10, 11), (9, 10)], ["small", "bigger"])
>>> df.corr("small", "bigger")
1.0
| def corr(self, col1: str, col2: str, method: Optional[str] = None) -> float:
"""
Calculates the correlation of two columns of a :class:`DataFrame` as a double value.
Currently only supports the Pearson Correlation Coefficient.
:func:`DataFrame.corr` and :func:`DataFrameStatFunctions.corr` are aliases of each other.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
col1 : str
The name of the first column
col2 : str
The name of the second column
method : str, optional
The correlation method. Currently only supports "pearson"
Returns
-------
float
Pearson Correlation Coefficient of two columns.
Examples
--------
>>> df = spark.createDataFrame([(1, 12), (10, 1), (19, 8)], ["c1", "c2"])
>>> df.corr("c1", "c2")
-0.3592106040535498
>>> df = spark.createDataFrame([(11, 12), (10, 11), (9, 10)], ["small", "bigger"])
>>> df.corr("small", "bigger")
1.0
"""
if not isinstance(col1, str):
raise PySparkTypeError(
error_class="NOT_STR",
message_parameters={"arg_name": "col1", "arg_type": type(col1).__name__},
)
if not isinstance(col2, str):
raise PySparkTypeError(
error_class="NOT_STR",
message_parameters={"arg_name": "col2", "arg_type": type(col2).__name__},
)
if not method:
method = "pearson"
if not method == "pearson":
raise PySparkValueError(
error_class="VALUE_NOT_PEARSON",
message_parameters={"arg_name": "method", "arg_value": method},
)
return self._jdf.stat().corr(col1, col2, method)
| (self, col1: str, col2: str, method: Optional[str] = None) -> float |
39,362 | pyspark.sql.dataframe | count | Returns the number of rows in this :class:`DataFrame`.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Returns
-------
int
Number of rows.
Examples
--------
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
Return the number of rows in the :class:`DataFrame`.
>>> df.count()
3
| def count(self) -> int:
"""Returns the number of rows in this :class:`DataFrame`.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Returns
-------
int
Number of rows.
Examples
--------
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
Return the number of rows in the :class:`DataFrame`.
>>> df.count()
3
"""
return int(self._jdf.count())
| (self) -> int |
39,363 | pyspark.sql.dataframe | cov |
Calculate the sample covariance for the given columns, specified by their names, as a
double value. :func:`DataFrame.cov` and :func:`DataFrameStatFunctions.cov` are aliases.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
col1 : str
The name of the first column
col2 : str
The name of the second column
Returns
-------
float
Covariance of two columns.
Examples
--------
>>> df = spark.createDataFrame([(1, 12), (10, 1), (19, 8)], ["c1", "c2"])
>>> df.cov("c1", "c2")
-18.0
>>> df = spark.createDataFrame([(11, 12), (10, 11), (9, 10)], ["small", "bigger"])
>>> df.cov("small", "bigger")
1.0
| def cov(self, col1: str, col2: str) -> float:
"""
Calculate the sample covariance for the given columns, specified by their names, as a
double value. :func:`DataFrame.cov` and :func:`DataFrameStatFunctions.cov` are aliases.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
col1 : str
The name of the first column
col2 : str
The name of the second column
Returns
-------
float
Covariance of two columns.
Examples
--------
>>> df = spark.createDataFrame([(1, 12), (10, 1), (19, 8)], ["c1", "c2"])
>>> df.cov("c1", "c2")
-18.0
>>> df = spark.createDataFrame([(11, 12), (10, 11), (9, 10)], ["small", "bigger"])
>>> df.cov("small", "bigger")
1.0
"""
if not isinstance(col1, str):
raise PySparkTypeError(
error_class="NOT_STR",
message_parameters={"arg_name": "col1", "arg_type": type(col1).__name__},
)
if not isinstance(col2, str):
raise PySparkTypeError(
error_class="NOT_STR",
message_parameters={"arg_name": "col2", "arg_type": type(col2).__name__},
)
return self._jdf.stat().cov(col1, col2)
| (self, col1: str, col2: str) -> float |
39,364 | pyspark.sql.dataframe | createGlobalTempView | Creates a global temporary view with this :class:`DataFrame`.
The lifetime of this temporary view is tied to this Spark application.
throws :class:`TempTableAlreadyExistsException`, if the view name already exists in the
catalog.
.. versionadded:: 2.1.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Name of the view.
Examples
--------
Create a global temporary view.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.createGlobalTempView("people")
>>> df2 = spark.sql("SELECT * FROM global_temp.people")
>>> sorted(df.collect()) == sorted(df2.collect())
True
Throws an exception if the global temporary view already exists.
>>> df.createGlobalTempView("people") # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
AnalysisException: "Temporary table 'people' already exists;"
>>> spark.catalog.dropGlobalTempView("people")
True
| def createGlobalTempView(self, name: str) -> None:
"""Creates a global temporary view with this :class:`DataFrame`.
The lifetime of this temporary view is tied to this Spark application.
throws :class:`TempTableAlreadyExistsException`, if the view name already exists in the
catalog.
.. versionadded:: 2.1.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Name of the view.
Examples
--------
Create a global temporary view.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.createGlobalTempView("people")
>>> df2 = spark.sql("SELECT * FROM global_temp.people")
>>> sorted(df.collect()) == sorted(df2.collect())
True
Throws an exception if the global temporary view already exists.
>>> df.createGlobalTempView("people") # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
AnalysisException: "Temporary table 'people' already exists;"
>>> spark.catalog.dropGlobalTempView("people")
True
"""
self._jdf.createGlobalTempView(name)
| (self, name: str) -> NoneType |
39,365 | pyspark.sql.dataframe | createOrReplaceGlobalTempView | Creates or replaces a global temporary view using the given name.
The lifetime of this temporary view is tied to this Spark application.
.. versionadded:: 2.2.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Name of the view.
Examples
--------
Create a global temporary view.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.createOrReplaceGlobalTempView("people")
Replace the global temporary view.
>>> df2 = df.filter(df.age > 3)
>>> df2.createOrReplaceGlobalTempView("people")
>>> df3 = spark.sql("SELECT * FROM global_temp.people")
>>> sorted(df3.collect()) == sorted(df2.collect())
True
>>> spark.catalog.dropGlobalTempView("people")
True
| def createOrReplaceGlobalTempView(self, name: str) -> None:
"""Creates or replaces a global temporary view using the given name.
The lifetime of this temporary view is tied to this Spark application.
.. versionadded:: 2.2.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Name of the view.
Examples
--------
Create a global temporary view.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.createOrReplaceGlobalTempView("people")
Replace the global temporary view.
>>> df2 = df.filter(df.age > 3)
>>> df2.createOrReplaceGlobalTempView("people")
>>> df3 = spark.sql("SELECT * FROM global_temp.people")
>>> sorted(df3.collect()) == sorted(df2.collect())
True
>>> spark.catalog.dropGlobalTempView("people")
True
"""
self._jdf.createOrReplaceGlobalTempView(name)
| (self, name: str) -> NoneType |
39,366 | pyspark.sql.dataframe | createOrReplaceTempView | Creates or replaces a local temporary view with this :class:`DataFrame`.
The lifetime of this temporary table is tied to the :class:`SparkSession`
that was used to create this :class:`DataFrame`.
.. versionadded:: 2.0.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Name of the view.
Examples
--------
Create a local temporary view named 'people'.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.createOrReplaceTempView("people")
Replace the local temporary view.
>>> df2 = df.filter(df.age > 3)
>>> df2.createOrReplaceTempView("people")
>>> df3 = spark.sql("SELECT * FROM people")
>>> sorted(df3.collect()) == sorted(df2.collect())
True
>>> spark.catalog.dropTempView("people")
True
| def createOrReplaceTempView(self, name: str) -> None:
"""Creates or replaces a local temporary view with this :class:`DataFrame`.
The lifetime of this temporary table is tied to the :class:`SparkSession`
that was used to create this :class:`DataFrame`.
.. versionadded:: 2.0.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Name of the view.
Examples
--------
Create a local temporary view named 'people'.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.createOrReplaceTempView("people")
Replace the local temporary view.
>>> df2 = df.filter(df.age > 3)
>>> df2.createOrReplaceTempView("people")
>>> df3 = spark.sql("SELECT * FROM people")
>>> sorted(df3.collect()) == sorted(df2.collect())
True
>>> spark.catalog.dropTempView("people")
True
"""
self._jdf.createOrReplaceTempView(name)
| (self, name: str) -> NoneType |
39,367 | pyspark.sql.dataframe | createTempView | Creates a local temporary view with this :class:`DataFrame`.
The lifetime of this temporary table is tied to the :class:`SparkSession`
that was used to create this :class:`DataFrame`.
throws :class:`TempTableAlreadyExistsException`, if the view name already exists in the
catalog.
.. versionadded:: 2.0.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Name of the view.
Examples
--------
Create a local temporary view.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.createTempView("people")
>>> df2 = spark.sql("SELECT * FROM people")
>>> sorted(df.collect()) == sorted(df2.collect())
True
Throw an exception if the table already exists.
>>> df.createTempView("people") # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
AnalysisException: "Temporary table 'people' already exists;"
>>> spark.catalog.dropTempView("people")
True
| def createTempView(self, name: str) -> None:
"""Creates a local temporary view with this :class:`DataFrame`.
The lifetime of this temporary table is tied to the :class:`SparkSession`
that was used to create this :class:`DataFrame`.
Throws :class:`TempTableAlreadyExistsException` if the view name already exists in the
catalog.
.. versionadded:: 2.0.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
name : str
Name of the view.
Examples
--------
Create a local temporary view.
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.createTempView("people")
>>> df2 = spark.sql("SELECT * FROM people")
>>> sorted(df.collect()) == sorted(df2.collect())
True
Throw an exception if the table already exists.
>>> df.createTempView("people") # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
AnalysisException: "Temporary table 'people' already exists;"
>>> spark.catalog.dropTempView("people")
True
"""
self._jdf.createTempView(name)
| (self, name: str) -> NoneType |
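Minimal sketch (assumes an active `spark` session): `createTempView` is the strict variant and raises an `AnalysisException` when the name is already taken, so callers often guard it or fall back to the replace form. The import path below is the long-standing `pyspark.sql.utils` location and is assumed to still be re-exported in this version.

from pyspark.sql.utils import AnalysisException

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
try:
    df.createTempView("labels")
except AnalysisException:
    # The view already exists in this session; replace it instead.
    df.createOrReplaceTempView("labels")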
39,368 | pyspark.sql.dataframe | crossJoin | Returns the cartesian product with another :class:`DataFrame`.
.. versionadded:: 2.1.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : :class:`DataFrame`
Right side of the cartesian product.
Returns
-------
:class:`DataFrame`
Joined DataFrame.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
>>> df2 = spark.createDataFrame(
... [Row(height=80, name="Tom"), Row(height=85, name="Bob")])
>>> df.crossJoin(df2.select("height")).select("age", "name", "height").show()
+---+-----+------+
|age| name|height|
+---+-----+------+
| 14| Tom| 80|
| 14| Tom| 85|
| 23|Alice| 80|
| 23|Alice| 85|
| 16| Bob| 80|
| 16| Bob| 85|
+---+-----+------+
| def crossJoin(self, other: "DataFrame") -> "DataFrame":
"""Returns the cartesian product with another :class:`DataFrame`.
.. versionadded:: 2.1.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
other : :class:`DataFrame`
Right side of the cartesian product.
Returns
-------
:class:`DataFrame`
Joined DataFrame.
Examples
--------
>>> from pyspark.sql import Row
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
>>> df2 = spark.createDataFrame(
... [Row(height=80, name="Tom"), Row(height=85, name="Bob")])
>>> df.crossJoin(df2.select("height")).select("age", "name", "height").show()
+---+-----+------+
|age| name|height|
+---+-----+------+
| 14| Tom| 80|
| 14| Tom| 85|
| 23|Alice| 80|
| 23|Alice| 85|
| 16| Bob| 80|
| 16| Bob| 85|
+---+-----+------+
"""
jdf = self._jdf.crossJoin(other._jdf)
return DataFrame(jdf, self.sparkSession)
| (self, other: pyspark.sql.dataframe.DataFrame) -> pyspark.sql.dataframe.DataFrame |
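Illustrative sketch under the assumption of an active `spark` session: the cartesian product has len(left) * len(right) rows, so `crossJoin` is typically reserved for small dimension tables, e.g. generating every combination of two short lists.

sizes = spark.createDataFrame([("S",), ("M",), ("L",)], ["size"])
colors = spark.createDataFrame([("red",), ("blue",)], ["color"])

combos = sizes.crossJoin(colors)  # 3 * 2 = 6 rows
combos.show()
print(combos.count())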
39,369 | pyspark.sql.dataframe | crosstab |
Computes a pair-wise frequency table of the given columns. Also known as a contingency
table.
The first column of each row will be the distinct values of `col1` and the column names
will be the distinct values of `col2`. The name of the first column will be `$col1_$col2`.
Pairs that have no occurrences will have zero as their counts.
:func:`DataFrame.crosstab` and :func:`DataFrameStatFunctions.crosstab` are aliases.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
col1 : str
The name of the first column. Distinct items will make the first item of
each row.
col2 : str
The name of the second column. Distinct items will make the column names
of the :class:`DataFrame`.
Returns
-------
:class:`DataFrame`
Frequency matrix of two columns.
Examples
--------
>>> df = spark.createDataFrame([(1, 11), (1, 11), (3, 10), (4, 8), (4, 8)], ["c1", "c2"])
>>> df.crosstab("c1", "c2").sort("c1_c2").show()
+-----+---+---+---+
|c1_c2| 10| 11| 8|
+-----+---+---+---+
| 1| 0| 2| 0|
| 3| 1| 0| 0|
| 4| 0| 0| 2|
+-----+---+---+---+
| def crosstab(self, col1: str, col2: str) -> "DataFrame":
"""
Computes a pair-wise frequency table of the given columns. Also known as a contingency
table.
The first column of each row will be the distinct values of `col1` and the column names
will be the distinct values of `col2`. The name of the first column will be `$col1_$col2`.
Pairs that have no occurrences will have zero as their counts.
:func:`DataFrame.crosstab` and :func:`DataFrameStatFunctions.crosstab` are aliases.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
col1 : str
The name of the first column. Distinct items will make the first item of
each row.
col2 : str
The name of the second column. Distinct items will make the column names
of the :class:`DataFrame`.
Returns
-------
:class:`DataFrame`
Frequency matrix of two columns.
Examples
--------
>>> df = spark.createDataFrame([(1, 11), (1, 11), (3, 10), (4, 8), (4, 8)], ["c1", "c2"])
>>> df.crosstab("c1", "c2").sort("c1_c2").show()
+-----+---+---+---+
|c1_c2| 10| 11| 8|
+-----+---+---+---+
| 1| 0| 2| 0|
| 3| 1| 0| 0|
| 4| 0| 0| 2|
+-----+---+---+---+
"""
if not isinstance(col1, str):
raise PySparkTypeError(
error_class="NOT_STR",
message_parameters={"arg_name": "col1", "arg_type": type(col1).__name__},
)
if not isinstance(col2, str):
raise PySparkTypeError(
error_class="NOT_STR",
message_parameters={"arg_name": "col2", "arg_type": type(col2).__name__},
)
return DataFrame(self._jdf.stat().crosstab(col1, col2), self.sparkSession)
| (self, col1: str, col2: str) -> pyspark.sql.dataframe.DataFrame |
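Hedged sketch (assumes `spark`): the first output column is named `<col1>_<col2>` and the remaining column names are the string form of the distinct `col2` values, so `col2` should have a modest number of distinct values.

visits = spark.createDataFrame(
    [("mon", "web"), ("mon", "app"), ("tue", "web"), ("tue", "web")],
    ["day", "channel"],
)
# Columns of the result: day_channel, app, web
visits.crosstab("day", "channel").orderBy("day_channel").show()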
39,370 | pyspark.sql.dataframe | cube |
Create a multi-dimensional cube for the current :class:`DataFrame` using
the specified columns, so we can run aggregations on them.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
cols : list, str or :class:`Column`
columns to create cube by.
Each element should be a column name (string) or an expression (:class:`Column`)
or list of them.
Returns
-------
:class:`GroupedData`
Cube of the data by given columns.
Examples
--------
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.cube("name", df.age).count().orderBy("name", "age").show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| NULL|NULL| 2|
| NULL| 2| 1|
| NULL| 5| 1|
|Alice|NULL| 1|
|Alice| 2| 1|
| Bob|NULL| 1|
| Bob| 5| 1|
+-----+----+-----+
| def cube(self, *cols: "ColumnOrName") -> "GroupedData": # type: ignore[misc]
"""
Create a multi-dimensional cube for the current :class:`DataFrame` using
the specified columns, so we can run aggregations on them.
.. versionadded:: 1.4.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Parameters
----------
cols : list, str or :class:`Column`
columns to create cube by.
Each element should be a column name (string) or an expression (:class:`Column`)
or list of them.
Returns
-------
:class:`GroupedData`
Cube of the data by given columns.
Examples
--------
>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
>>> df.cube("name", df.age).count().orderBy("name", "age").show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| NULL|NULL| 2|
| NULL| 2| 1|
| NULL| 5| 1|
|Alice|NULL| 1|
|Alice| 2| 1|
| Bob|NULL| 1|
| Bob| 5| 1|
+-----+----+-----+
"""
jgd = self._jdf.cube(self._jcols(*cols))
from pyspark.sql.group import GroupedData
return GroupedData(jgd, self)
| (self, *cols: 'ColumnOrName') -> 'GroupedData' |
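Sketch assuming an active `spark` session: `cube` aggregates over every subset of the grouping columns (including the grand total), whereas `rollup` only covers the hierarchical prefixes; `grouping_id()` helps distinguish subtotal rows from ordinary groups.

from pyspark.sql import functions as F

sales = spark.createDataFrame(
    [("EU", "A", 10), ("EU", "B", 20), ("US", "A", 30)],
    ["region", "product", "amount"],
)
(sales.cube("region", "product")
      .agg(F.sum("amount").alias("total"), F.grouping_id().alias("gid"))
      .orderBy("gid", "region", "product")
      .show())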
39,371 | pyspark.sql.dataframe | describe | Computes basic statistics for numeric and string columns.
.. versionadded:: 1.3.1
.. versionchanged:: 3.4.0
Supports Spark Connect.
This includes count, mean, stddev, min, and max. If no columns are
given, this function computes statistics for all numerical or string columns.
Notes
-----
This function is meant for exploratory data analysis, as we make no
guarantee about the backward compatibility of the schema of the resulting
:class:`DataFrame`.
Use summary for expanded statistics and control over which statistics to compute.
Parameters
----------
cols : str, list, optional
Column name or list of column names to describe by (default All columns).
Returns
-------
:class:`DataFrame`
A new DataFrame that describes (provides statistics) given DataFrame.
Examples
--------
>>> df = spark.createDataFrame(
... [("Bob", 13, 40.3, 150.5), ("Alice", 12, 37.8, 142.3), ("Tom", 11, 44.1, 142.2)],
... ["name", "age", "weight", "height"],
... )
>>> df.describe(['age']).show()
+-------+----+
|summary| age|
+-------+----+
| count| 3|
| mean|12.0|
| stddev| 1.0|
| min| 11|
| max| 13|
+-------+----+
>>> df.describe(['age', 'weight', 'height']).show()
+-------+----+------------------+-----------------+
|summary| age| weight| height|
+-------+----+------------------+-----------------+
| count| 3| 3| 3|
| mean|12.0| 40.73333333333333| 145.0|
| stddev| 1.0|3.1722757341273704|4.763402145525822|
| min| 11| 37.8| 142.2|
| max| 13| 44.1| 150.5|
+-------+----+------------------+-----------------+
See Also
--------
DataFrame.summary
| def describe(self, *cols: Union[str, List[str]]) -> "DataFrame":
"""Computes basic statistics for numeric and string columns.
.. versionadded:: 1.3.1
.. versionchanged:: 3.4.0
Supports Spark Connect.
This includes count, mean, stddev, min, and max. If no columns are
given, this function computes statistics for all numerical or string columns.
Notes
-----
This function is meant for exploratory data analysis, as we make no
guarantee about the backward compatibility of the schema of the resulting
:class:`DataFrame`.
Use summary for expanded statistics and control over which statistics to compute.
Parameters
----------
cols : str, list, optional
Column name or list of column names to describe by (default All columns).
Returns
-------
:class:`DataFrame`
A new DataFrame that describes (provides statistics) given DataFrame.
Examples
--------
>>> df = spark.createDataFrame(
... [("Bob", 13, 40.3, 150.5), ("Alice", 12, 37.8, 142.3), ("Tom", 11, 44.1, 142.2)],
... ["name", "age", "weight", "height"],
... )
>>> df.describe(['age']).show()
+-------+----+
|summary| age|
+-------+----+
| count| 3|
| mean|12.0|
| stddev| 1.0|
| min| 11|
| max| 13|
+-------+----+
>>> df.describe(['age', 'weight', 'height']).show()
+-------+----+------------------+-----------------+
|summary| age| weight| height|
+-------+----+------------------+-----------------+
| count| 3| 3| 3|
| mean|12.0| 40.73333333333333| 145.0|
| stddev| 1.0|3.1722757341273704|4.763402145525822|
| min| 11| 37.8| 142.2|
| max| 13| 44.1| 150.5|
+-------+----+------------------+-----------------+
See Also
--------
DataFrame.summary
"""
if len(cols) == 1 and isinstance(cols[0], list):
cols = cols[0] # type: ignore[assignment]
jdf = self._jdf.describe(self._jseq(cols))
return DataFrame(jdf, self.sparkSession)
| (self, *cols: Union[str, List[str]]) -> pyspark.sql.dataframe.DataFrame |
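Brief sketch (assumes `spark`): `describe` always returns the fixed count/mean/stddev/min/max set, while `summary` lets the caller pick the statistics, including approximate percentiles.

df = spark.createDataFrame(
    [("Bob", 13, 40.3), ("Alice", 12, 37.8), ("Tom", 11, 44.1)],
    ["name", "age", "weight"],
)
df.describe("age", "weight").show()              # fixed statistics
df.summary("count", "50%", "75%", "max").show()  # caller-chosen statistics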
39,372 | pyspark.sql.dataframe | distinct | Returns a new :class:`DataFrame` containing the distinct rows in this :class:`DataFrame`.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Returns
-------
:class:`DataFrame`
DataFrame with distinct records.
Examples
--------
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (23, "Alice")], ["age", "name"])
Return the number of distinct rows in the :class:`DataFrame`
>>> df.distinct().count()
2
| def distinct(self) -> "DataFrame":
"""Returns a new :class:`DataFrame` containing the distinct rows in this :class:`DataFrame`.
.. versionadded:: 1.3.0
.. versionchanged:: 3.4.0
Supports Spark Connect.
Returns
-------
:class:`DataFrame`
DataFrame with distinct records.
Examples
--------
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (23, "Alice")], ["age", "name"])
Return the number of distinct rows in the :class:`DataFrame`
>>> df.distinct().count()
2
"""
return DataFrame(self._jdf.distinct(), self.sparkSession)
| (self) -> pyspark.sql.dataframe.DataFrame |
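Closing sketch (assumes `spark`): `distinct()` deduplicates on every column, while `dropDuplicates()` optionally takes a subset of columns that defines row equality.

df = spark.createDataFrame(
    [(14, "Tom"), (23, "Alice"), (23, "Alice"), (23, "Alicia")],
    ["age", "name"],
)
print(df.distinct().count())               # 3: exact duplicate rows removed
print(df.dropDuplicates(["age"]).count())  # 2: one row kept per distinct age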