Dataset Viewer (auto-converted to Parquet)

Column        Type              Observed range
library       string (class)    18 distinct values
name          string            1 to 66 characters
source_code   string            20 to 1.84k characters
docstring     string            3 to 1.35k characters
type          string (class)    3 distinct values
file_path     string            12 to 109 characters
ast_data      string            17 to 872 characters
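The columns above describe a Parquet-backed corpus of (source_code, docstring) records drawn from 18 open-source libraries. As a minimal sketch of how such a dataset might be loaded and inspected, assuming it is hosted on the Hugging Face Hub (the repository id below is a placeholder, not the actual dataset name):

import datasets  # pip install datasets

# "example-org/code-docstrings" is a hypothetical id; substitute the real repository.
ds = datasets.load_dataset("example-org/code-docstrings", split="train")

print(ds.column_names)
# expected: ['library', 'name', 'source_code', 'docstring', 'type', 'file_path', 'ast_data']

row = ds[0]
print(row["library"], row["type"], row["name"])
print(row["docstring"][:80])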
django
is_counterclockwise
@property def is_counterclockwise(self): ret = c_byte() if not capi.cs_is_ccw(self.ptr, byref(ret)): raise GEOSException('Error encountered in GEOS C function "%s".' % capi.cs_is_ccw.func_name) return ret.value == 1
Return whether this coordinate sequence is counterclockwise.
method
django\django\contrib\gis\geos\coordseq.py
FunctionDef name:is_counterclockwise arg:self arguments arg Assign Call If Call Call Raise Call Return return:yes Compare
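The ast_data field in each record (such as the one above) is a flattened sequence of AST node kinds, with names attached to function/class definitions and arguments. The exact traversal and attribute set used to build it are not documented here; the sketch below shows one plausible way to derive a similar string from source_code with Python's ast module, and should be read as an approximation rather than the dataset's actual generator.

import ast

SKIP = (ast.Module, ast.Load, ast.Store, ast.Del)  # housekeeping nodes, omitted here

def flatten_ast(source: str) -> str:
    # Walk the parsed tree and emit node kinds in the order ast.walk visits them,
    # annotating definitions, arguments, and returns.
    parts = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, SKIP):
            continue
        kind = type(node).__name__
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            parts.append(f"{kind} name:{node.name}")
        elif isinstance(node, ast.arg):
            parts.append(f"arg:{node.arg}")
        elif isinstance(node, ast.Return):
            parts.append(f"Return return:{'yes' if node.value is not None else 'no'}")
        else:
            parts.append(kind)
    return " ".join(parts)

print(flatten_ast("def f(x):\n    return x + 1"))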
scrapy
from_settings
def from_settings(settings): pass
Return an instance of the class for the given settings
method
scrapy\scrapy\interfaces.py
FunctionDef name:from_settings arg:settings arguments arg
scipy
_matvec
def _matvec(self, x): x = x.reshape(self.shape[0], -1) result_dtype = np.promote_types(x.dtype, self.dtype) kx = np.zeros_like(x, dtype=result_dtype) d1 = self._diag1 d0 = self._diag0 kx[0, :] = d0[0] * x[0, :] + d1[0] * x[1, :] kx[-1, :] = d1[-1] * x[-2, :] + d0[-1] * x[-1, :] kx[1:-1, :] = d1[:-1, None] * x[:-2, :] + d0[1:-1, None] * x[1:-1, :] + d1[1:, None] * x[2:, :] return kx
Construct matrix-free callable banded-matrix-vector multiplication by the Mikota stiffness matrix without constructing or storing the matrix itself using the knowledge of its entries and the 3-diagonal format.
method
scipy\scipy\sparse\linalg\_special_sparse_arrays.py
FunctionDef name:_matvec arg:self arg:x arguments arg arg Assign Call Assign Call Assign Call Assign Assign Assign Assign Assign Return return:yes
scipy
hilbert
def hilbert(n): values = 1.0 / (1.0 + np.arange(2 * n - 1)) h = hankel(values[:n], r=values[n - 1:]) return h
Create a Hilbert matrix of order n. Returns the n by n array with entries h[i,j] = 1 / (i + j + 1). Parameters ---------- n : int The size of the array to create. Returns ------- h : (n, n) ndarray The Hilbert matrix. See Also -------- invhilbert : Compute the inverse of a Hilbert matrix. Notes ----- .. versionadded:: 0.10.0 Examples -------- >>> from scipy.linalg import hilbert >>> hilbert(3) array([[ 1. , 0.5 , 0.33333333], [ 0.5 , 0.33333333, 0.25 ], [ 0.33333333, 0.25 , 0.2 ]])
function
scipy\scipy\linalg\_special_matrices.py
FunctionDef name:hilbert arg:n arguments arg Assign Call Assign Call Return return:yes
tensorflow
RegressionOutput
class RegressionOutput(ExportOutput): def __init__(self, value): if not (isinstance(value, tensor.Tensor) and value.dtype.is_floating): raise ValueError('Regression output value must be a float32 Tensor; got {}'.format(value)) self._value = value @property def value(self): return self._value def as_signature_def(self, receiver_tensors): if len(receiver_tensors) != 1: raise ValueError(f'Regression signatures can only accept a single tensor input of type tf.string. Please check to make sure that you have structured the serving_input_receiver_fn so that it creates a single string placeholder. If your model function expects multiple inputs, then use `tf.io.parse_example()` to parse the string into multiple tensors.\n Received: {receiver_tensors}') (_, examples), = receiver_tensors.items() if dtypes.as_dtype(examples.dtype) != dtypes.string: raise ValueError(f'Regression signatures can only accept a single tensor input of type tf.string. Please check to make sure that you have structured the serving_input_receiver_fn so that it creates a single string placeholder. If your model function expects multiple inputs, then use `tf.io.parse_example()` to parse the string into multiple tensors.\n Received: {receiver_tensors}') return signature_def_utils.regression_signature_def(examples, self.value)
Represents the output of a regression head.
class
tensorflow\tensorflow\python\saved_model\model_utils\export_output.py
ClassDef name:RegressionOutput FunctionDef name:__init__ arg:self arg:value arguments arg arg If BoolOp Call Raise Call Call Assign FunctionDef name:value arg:self arguments arg Return return:yes FunctionDef name:as_signature_def arg:self arg:receiver_tensors arguments arg arg If Compare Call Raise Call Assign Call If Compare Call Raise Call Return return:yes Call
pytorch
_unregister_deepcopy_hook
def _unregister_deepcopy_hook(self, f): assert callable(f), 'deepcopy hook must be a callable.' self._deepcopy_hooks.remove(f)
Takes a callable which was previously registered to be called after deepcopy. This function will unregister that callable so it is no longer invoked on deepcopy.
method
pytorch\torch\fx\graph_module.py
FunctionDef name:_unregister_deepcopy_hook arg:self arg:f arguments arg arg Call Call
pytorch
is_embedding_node
def is_embedding_node(node: Node) -> bool: if node.op == 'call_module': submodule = self.graph_module for atom in str(node.target).split('.'): if not hasattr(submodule, atom): raise RuntimeError(f'Module {submodule} has no attribute {atom}') submodule = getattr(submodule, atom) if 'Embedding' in str(submodule): return True return False
Check if a node is an embedding node
method
pytorch\torch\fx\experimental\accelerator_partitioner.py
FunctionDef name:is_embedding_node arg:node arguments arg If Compare Assign For Call Call If Call Raise Call Assign Call If Compare Call Return return:yes Return return:yes
pytorch
_wait_for_computation_stream
def _wait_for_computation_stream(computation_stream: torch.Stream, unshard_stream: torch.Stream, pre_unshard_stream: torch.Stream): if torch.distributed._functional_collectives.is_torchdynamo_compiling(): return unshard_stream.wait_stream(computation_stream) pre_unshard_stream.wait_stream(computation_stream)
Has the unshard and pre-unshard streams wait for the computation stream. For example, this should be called in the FSDP root's pre-forward to respect optimizer step computation.
function
pytorch\torch\distributed\fsdp\_runtime_utils.py
FunctionDef name:_wait_for_computation_stream arg:computation_stream arg:unshard_stream arg:pre_unshard_stream arguments arg arg arg If Call Return return:no Call Call
pytorch
_is_compiled
def _is_compiled() -> bool: return hasattr(torch._C, '_cuda_getDeviceCount')
Return true if compiled with CUDA support.
function
pytorch\torch\cuda\__init__.py
FunctionDef name:_is_compiled arguments Return return:yes Call
matplotlib
get_bbox_to_anchor
def get_bbox_to_anchor(self): if self._bbox_to_anchor is None: return self.axes.bbox else: transform = self._bbox_to_anchor_transform if transform is None: return self._bbox_to_anchor else: return TransformedBbox(self._bbox_to_anchor, transform)
Return the bbox that the box is anchored to.
method
matplotlib\lib\matplotlib\offsetbox.py
FunctionDef name:get_bbox_to_anchor arg:self arguments arg If Compare Return return:yes Assign If Compare Return return:yes Return return:yes Call
pytorch
_record_memory_stats
@no_type_check def _record_memory_stats(self, fn_name: str) -> None: memory_allocated: float = torch.cuda.memory_allocated() / BYTES_PER_MB memory_reserved: float = torch.cuda.memory_reserved() / BYTES_PER_MB memory_active: float = torch.cuda.memory_stats().get('active_bytes.all.current', 0) / BYTES_PER_MB self.memories_allocated[self._op_index] = (fn_name, memory_allocated) self.memories_reserved[self._op_index] = (fn_name, memory_reserved) self.memories_active[self._op_index] = (fn_name, memory_active) self._op_index += 1
Record current memory allocated, current memory active and current memory reserved. The memory stats dict is indexed with ``self._op_index``.
method
pytorch\torch\distributed\_tools\memory_tracker.py
FunctionDef name:_record_memory_stats arg:self arg:fn_name arguments arg arg Call Call Call Call Assign Assign Assign
django
__eq__
def __eq__(self, other): return isinstance(other, OGRGeometry) and self.equals(other)
Is this Geometry equal to the other?
method
django\django\contrib\gis\gdal\geometries.py
FunctionDef name:__eq__ arg:self arg:other arguments arg arg Return return:yes BoolOp Call Call
pytorch
named_parameters
def named_parameters(self, prefix: str='', recurse: bool=True, remove_duplicate: bool=True) -> Iterator[tuple[str, Parameter]]: gen = self._named_members(lambda module: module._parameters.items(), prefix=prefix, recurse=recurse, remove_duplicate=remove_duplicate) yield from gen
Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Args: prefix (str): prefix to prepend to all parameter names. recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. remove_duplicate (bool, optional): whether to remove the duplicated parameters in the result. Defaults to True. Yields: (str, Parameter): Tuple containing the name and parameter Example:: >>> # xdoctest: +SKIP("undefined vars") >>> for name, param in self.named_parameters(): >>> if name in ['bias']: >>> print(param.size())
method
pytorch\torch\nn\modules\module.py
FunctionDef name:named_parameters arg:self arg:prefix arg:recurse arg:remove_duplicate arguments arg arg arg arg Assign Call arguments arg Call
scipy
solve_bdf_system
def solve_bdf_system(fun, t_new, y_predict, c, psi, LU, solve_lu, scale, tol): d = 0 y = y_predict.copy() dy_norm_old = None converged = False for k in range(NEWTON_MAXITER): f = fun(t_new, y) if not np.all(np.isfinite(f)): break dy = solve_lu(LU, c * f - psi - d) dy_norm = norm(dy / scale) if dy_norm_old is None: rate = None else: rate = dy_norm / dy_norm_old if rate is not None and (rate >= 1 or rate ** (NEWTON_MAXITER - k) / (1 - rate) * dy_norm > tol): break y += dy d += dy if dy_norm == 0 or (rate is not None and rate / (1 - rate) * dy_norm < tol): converged = True break dy_norm_old = dy_norm return (converged, k + 1, y, d)
Solve the algebraic system resulting from BDF method.
function
scipy\scipy\integrate\_ivp\bdf.py
FunctionDef name:solve_bdf_system arg:fun arg:t_new arg:y_predict arg:c arg:psi arg:LU arg:solve_lu arg:scale arg:tol arguments arg arg arg arg arg arg arg arg arg Assign Assign Call Assign Assign For Call Assign Call If Call Call Assign Call Assign Call If Compare Assign Assign If BoolOp Compare BoolOp Compare Compare If BoolOp Compare BoolOp Compare Compare Assign Assign Return return:yes
cherrypy
expired
def expired(self): if self.timer.expired(): raise LockTimeout('Timeout acquiring lock for %(session_id)s' % vars(self)) return False
Check whether the lock checker has expired.
method
cherrypy\cherrypy\lib\locking.py
FunctionDef name:expired arg:self arguments arg If Call Raise Call Call Return return:yes
pytorch
SimplifyIndexing
class SimplifyIndexing(V.WrapperHandler): def __init__(self, inner, var_ranges: VarRanges) -> None: super().__init__(inner) self.name = 'SimplifyIndexing' self._simplify: Callable[[Expr], Expr] = lambda index: V.graph.sizevars.simplify_with_ranges(index, var_ranges) def load(self, name: str, index: sympy.Expr): return self._inner.load(name, self._simplify(index)) def store(self, name, index, value, mode=None): return self._inner.store(name, self._simplify(index), value, mode=mode) def store_reduction(self, name, index, value): return self._inner.store_reduction(name, self._simplify(index), value) def index_expr(self, index, dtype): return self._inner.index_expr(self._simplify(index), dtype) def check_bounds(self, index, size, lower, upper): return self._inner.check_bounds(self._simplify(index), size, lower, upper)
A wrapper around .virtualize.ops that uses var range information to simplify ModularIndexing/FloorDiv.
class
pytorch\torch\_inductor\sizevars.py
ClassDef name:SimplifyIndexing FunctionDef name:__init__ arg:self arg:inner arg:var_ranges arguments arg arg arg Call Call Assign arguments arg Call FunctionDef name:load arg:self arg:name arg:index arguments arg arg arg Return return:yes Call Call FunctionDef name:store arg:self arg:name arg:index arg:value arg:mode arguments arg arg arg arg arg Return return:yes Call Call FunctionDef name:store_reduction arg:self arg:name arg:index arg:value arguments arg arg arg arg Return return:yes Call Call FunctionDef name:index_expr arg:self arg:index arg:dtype arguments arg arg arg Return return:yes Call Call FunctionDef name:check_bounds arg:self arg:index arg:size arg:lower arg:upper arguments arg arg arg arg arg Return return:yes Call Call
matplotlib
set_boxstyle
@_docstring.interpd def set_boxstyle(self, boxstyle=None, **kwargs): if boxstyle is None: return BoxStyle.pprint_styles() self._bbox_transmuter = BoxStyle(boxstyle, **kwargs) if isinstance(boxstyle, str) else boxstyle self.stale = True
Set the box style, possibly with further attributes. Attributes from the previous box style are not reused. Without argument (or with boxstyle=None), the available box styles are returned as a human-readable string. Parameters ---------- boxstyle : str or `~matplotlib.patches.BoxStyle` The style of the box: either a `.BoxStyle` instance, or a string giving the style name and optionally comma-separated attributes, used to construct a `.BoxStyle` object, as documented in that class. The following box styles are available: %(BoxStyle:table_and_accepts)s **kwargs Additional attributes for the box style. See the table above for supported parameters. Examples -------- :: set_boxstyle("Round,pad=0.2") set_boxstyle("round", pad=0.2)
method
matplotlib\lib\matplotlib\patches.py
FunctionDef name:set_boxstyle arg:self arg:boxstyle arguments arg arg arg If Compare Return return:yes Call Assign Call Call Assign
authlib
validate_ui_locales_supported
def validate_ui_locales_supported(self): validate_array_value(self, 'ui_locales_supported')
OPTIONAL. Languages and scripts supported for the user interface, represented as a JSON array of language tag values from BCP 47 [RFC5646]. If omitted, the set of supported languages and scripts is unspecified.
method
authlib\authlib\oauth2\rfc8414\models.py
FunctionDef name:validate_ui_locales_supported arg:self arguments arg Call
tensorflow
_BatchGatherGrad
def _BatchGatherGrad(params_shape, values, indices, batch_dims, gather_dim_size): indices_size = array_ops.expand_dims(array_ops.size(indices), 0) if batch_dims: values_shape = array_ops.shape(values) outer_shape = values_shape[:batch_dims] inner_shape = values_shape[batch_dims:][1:] batch_size = gen_math_ops.prod(outer_shape, [0], False) flat_values_shape = array_ops.concat([[-1], inner_shape], 0) gather_dim_size *= batch_size indices = _GetBatchIndices(params_shape, indices, batch_dims) values = array_ops.reshape(_IndexedSlicesToTensorNoWarning(values), flat_values_shape) indices = array_ops.reshape(indices, indices_size) params_grad = math_ops.unsorted_segment_sum(values, indices, gather_dim_size) if batch_dims: params_grad = array_ops.reshape(params_grad, array_ops.concat([outer_shape, flat_values_shape], 0)) return params_grad
Returns the gradient of GatherV2 with batch dimensions.
function
tensorflow\tensorflow\python\ops\array_grad.py
FunctionDef name:_BatchGatherGrad arg:params_shape arg:values arg:indices arg:batch_dims arg:gather_dim_size arguments arg arg arg arg arg Assign Call Call If Assign Call Assign Assign Assign Call Assign Call Assign Call Assign Call Call Assign Call Assign Call If Assign Call Call Return return:yes
tensorflow
MutationAwareDict
class MutationAwareDict(py_collections.OrderedDict): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._mutated = True def pop(self, key, default=None): self._mutated = True return super().pop(key, default) def __setitem__(self, key, value): self._mutated = True return super().__setitem__(key, value) def __delitem__(self, key): self._mutated = True return super().__delitem__(key) def clear(self): self._mutated = True return super().clear() @property def mutated(self): return self._mutated @mutated.setter def mutated(self, value): self._mutated = value
A dict with a mutation flag.
class
tensorflow\tensorflow\core\function\capture\capture_container.py
ClassDef name:MutationAwareDict FunctionDef name:__init__ arg:self arguments arg arg arg Call Call Assign FunctionDef name:pop arg:self arg:key arg:default arguments arg arg arg Assign Return return:yes Call Call FunctionDef name:__setitem__ arg:self arg:key arg:value arguments arg arg arg Assign Return return:yes Call Call FunctionDef name:__delitem__ arg:self arg:key arguments arg arg Assign Return return:yes Call Call FunctionDef name:clear arg:self arguments arg Assign Return return:yes Call Call FunctionDef name:mutated arg:self arguments arg Return return:yes FunctionDef name:mutated arg:self arg:value arguments arg arg Assign
kornia
rotation
@property def rotation(self) -> So3 | So2: return self._dst_from_src.rotation
Rotation part of the pose.
method
kornia\kornia\geometry\pose.py
FunctionDef name:rotation arg:self arguments arg Return return:yes
tensorflow
HessiansV2
@tf_export('hessians', v1=[]) def HessiansV2(ys, xs, gate_gradients=False, aggregation_method=None, name='hessians'): return hessians(ys, xs, name=name, colocate_gradients_with_ops=True, gate_gradients=gate_gradients, aggregation_method=aggregation_method)
Constructs the Hessian of sum of ys with respect to x in xs. hessians() adds ops to the graph to output the Hessian matrix of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the Hessian of sum(ys). The Hessian is a matrix of second-order partial derivatives of a scalar tensor (see the definition of the Hessian matrix for more details). Args: ys: A Tensor or list of tensors to be differentiated. xs: A Tensor or list of tensors to be used for differentiation. gate_gradients: See gradients() documentation for details. aggregation_method: See gradients() documentation for details. name: Optional name to use for grouping all the gradient ops together. Defaults to 'hessians'. Returns: A list of Hessian matrices of sum(ys) for each x in xs. Raises: LookupError: if one of the operations between xs and ys does not have a registered gradient function.
function
tensorflow\tensorflow\python\ops\gradients_impl.py
FunctionDef name:HessiansV2 arg:ys arg:xs arg:gate_gradients arg:aggregation_method arg:name arguments arg arg arg arg arg Return return:yes Call Call
django
PostgresOperatorLookup
class PostgresOperatorLookup(Lookup): postgres_operator = None def as_postgresql(self, compiler, connection): lhs, lhs_params = self.process_lhs(compiler, connection) rhs, rhs_params = self.process_rhs(compiler, connection) params = tuple(lhs_params) + tuple(rhs_params) return ('%s %s %s' % (lhs, self.postgres_operator, rhs), params)
Lookup defined by operators on PostgreSQL.
class
django\django\db\models\lookups.py
ClassDef name:PostgresOperatorLookup Assign FunctionDef name:as_postgresql arg:self arg:compiler arg:connection arguments arg arg arg Assign Call Assign Call Assign Call Call Return return:yes
scikit-learn
get_metadata_routing
def get_metadata_routing(self): router = MetadataRouter(owner=self.__class__.__name__).add_self_request(self).add(scorer=self.scoring, method_mapping=MethodMapping().add(caller='fit', callee='score')).add(splitter=self.cv, method_mapping=MethodMapping().add(caller='fit', callee='split')) return router
Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.5 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information.
method
scikit-learn\sklearn\linear_model\_ridge.py
FunctionDef name:get_metadata_routing arg:self arguments arg Assign Call Call Call Call Call Call Call Call Return return:yes
tensorflow
is_registered
def is_registered(self, prefix): return self._resolve_prefix(prefix) is not None
Test if a command prefix or its alias is has a registered handler. Args: prefix: A prefix or its alias, as a str. Returns: True iff a handler is registered for prefix.
method
tensorflow\tensorflow\python\debug\cli\debugger_cli_common.py
FunctionDef name:is_registered arg:self arg:prefix arguments arg arg Return return:yes Compare Call
kornia
__dir__
def __dir__(self) -> List[str]: self._load() return dir(self.module)
Load the module (if not already loaded) and returns the list of attributes of the module. This method is called when the built-in dir() function is used on the LazyLoader instance. It ensures that the module is loaded and then returns the list of attributes of the module. Returns: list: The list of attributes of the loaded module.
method
kornia\kornia\core\external.py
FunctionDef name:__dir__ arg:self arguments arg Call Return return:yes Call
pandas
require_length_match
def require_length_match(data, index: Index) -> None: if len(data) != len(index): raise ValueError(f'Length of values ({len(data)}) does not match length of index ({len(index)})')
Check the length of data matches the length of the index.
function
pandas\pandas\core\common.py
FunctionDef name:require_length_match arg:data arg:index arguments arg arg If Compare Call Call Raise Call Call Call
pandas
render_pep440_post_branch
def render_pep440_post_branch(pieces): if pieces['closest-tag']: rendered = pieces['closest-tag'] if pieces['distance'] or pieces['dirty']: rendered += f'.post{pieces['distance']}' if pieces['branch'] != 'master': rendered += '.dev0' rendered += plus_or_dot(pieces) rendered += f'g{pieces['short']}' if pieces['dirty']: rendered += '.dirty' else: rendered = f'0.post{pieces['distance']}' if pieces['branch'] != 'master': rendered += '.dev0' rendered += f'+g{pieces['short']}' if pieces['dirty']: rendered += '.dirty' return rendered
TAG[.postDISTANCE[.dev0]+gHEX[.dirty]] . The ".dev0" means not master branch. Exceptions: 1: no tags. 0.postDISTANCE[.dev0]+gHEX[.dirty]
function
pandas\pandas\_version.py
FunctionDef name:render_pep440_post_branch arg:pieces arguments arg If Assign If BoolOp If Compare Call If Assign If Compare If Return return:yes
tensorflow
get_debug_quantized_model
def get_debug_quantized_model(self) -> bytes: return self._get_quantized_model(is_debug=True)
Returns an instrumented quantized model. Convert the quantized model with the initialized converter and return bytes for model. The model will be instrumented with numeric verification operations and should only be used for debugging. Returns: Model bytes corresponding to the model. Raises: ValueError: if converter is not passed to the debugger.
method
tensorflow\tensorflow\lite\tools\optimize\debugging\python\debugger.py
FunctionDef name:get_debug_quantized_model arg:self arguments arg Return return:yes Call
pandas
_format_native_types
def _format_native_types(self, *, na_rep: str | float='NaT', date_format=None, **kwargs) -> npt.NDArray[np.object_]: return libperiod.period_array_strftime(self.asi8, self.dtype._dtype_code, na_rep, date_format)
actually format my specific types
method
pandas\pandas\core\arrays\period.py
FunctionDef name:_format_native_types arg:self arguments arg arg arg arg Return return:yes Call
pandas
_validate_dialect
def _validate_dialect(dialect: csv.Dialect) -> None: for param in MANDATORY_DIALECT_ATTRS: if not hasattr(dialect, param): raise ValueError(f'Invalid dialect {dialect} provided')
Validate csv dialect instance. Raises ------ ValueError If incorrect dialect is provided.
function
pandas\pandas\io\parsers\readers.py
FunctionDef name:_validate_dialect arg:dialect arguments arg For If Call Raise Call
kornia
adjoint
def adjoint(self) -> Tensor: rt = self.matrix() rt[..., 0:2, 2] = stack((self.t.data[..., 1], -self.t.data[..., 0]), -1) return rt
Return the adjoint matrix of shape :math:. Example: >>> s = Se2.identity() >>> s.adjoint() tensor([[1., -0., 0.], [0., 1., -0.], [0., 0., 1.]], grad_fn=)
method
kornia\kornia\geometry\liegroup\se2.py
FunctionDef name:adjoint arg:self arguments arg Assign Call Assign Call Return return:yes
tensorflow
__init__
def __init__(self, compression_type=None, flush_mode=None, input_buffer_size=None, output_buffer_size=None, window_bits=None, compression_level=None, compression_method=None, mem_level=None, compression_strategy=None): self.get_compression_type_string(compression_type) self.compression_type = compression_type self.flush_mode = flush_mode self.input_buffer_size = input_buffer_size self.output_buffer_size = output_buffer_size self.window_bits = window_bits self.compression_level = compression_level self.compression_method = compression_method self.mem_level = mem_level self.compression_strategy = compression_strategy
Creates a TFRecordOptions instance. Options only affect TFRecordWriter when compression_type is not None. Documentation, details, and defaults can be found in the zlib compression options header and in the zlib manual. Leaving an option as None allows C++ to set a reasonable default. Args: compression_type: "GZIP", "ZLIB", or "" (no compression). flush_mode: flush mode or None, Default: Z_NO_FLUSH. input_buffer_size: int or None. output_buffer_size: int or None. window_bits: int or None. compression_level: 0 to 9, or None. compression_method: compression method or None. mem_level: 1 to 9, or None. compression_strategy: strategy or None. Default: Z_DEFAULT_STRATEGY. Returns: A TFRecordOptions object. Raises: ValueError: If compression_type is invalid.
method
tensorflow\tensorflow\python\lib\io\tf_record.py
FunctionDef name:__init__ arg:self arg:compression_type arg:flush_mode arg:input_buffer_size arg:output_buffer_size arg:window_bits arg:compression_level arg:compression_method arg:mem_level arg:compression_strategy arguments arg arg arg arg arg arg arg arg arg arg Call Assign Assign Assign Assign Assign Assign Assign Assign Assign
tensorflow
peek_top_obj
def peek_top_obj(self) -> T: return self._stack[-1].obj
Return the most recent stored object.
method
tensorflow\tensorflow\python\framework\traceable_stack.py
FunctionDef name:peek_top_obj arg:self arguments arg Return return:yes
tensorflow
inner_shape
@property def inner_shape(self): return self._inner_shape
The inner dimension sizes for this shape. Returns: A 1-D integer Tensor.
method
tensorflow\tensorflow\python\ops\ragged\dynamic_ragged_shape.py
FunctionDef name:inner_shape arg:self arguments arg Return return:yes
seaborn
_nested_offsets
def _nested_offsets(self, width, dodge): offsets = None if 'hue' in self.variables and self._hue_map.levels is not None: n_levels = len(self._hue_map.levels) if dodge: each_width = width / n_levels offsets = np.linspace(0, width - each_width, n_levels) offsets -= offsets.mean() else: offsets = np.zeros(n_levels) return offsets
Return offsets for each hue level for dodged plots.
method
seaborn\seaborn\categorical.py
FunctionDef name:_nested_offsets arg:self arg:width arg:dodge arguments arg arg arg Assign If BoolOp Compare Compare Assign Call If Assign Assign Call Call Assign Call Return return:yes
authlib
validate_token_endpoint_auth_methods_supported
def validate_token_endpoint_auth_methods_supported(self): validate_array_value(self, 'token_endpoint_auth_methods_supported')
OPTIONAL. JSON array containing a list of client authentication methods supported by this token endpoint. Client authentication method values are used in the "token_endpoint_auth_method" parameter defined in Section 2 of [RFC7591]. If omitted, the default is "client_secret_basic" -- the HTTP Basic Authentication Scheme specified in Section 2.3.1 of OAuth 2.0 [RFC6749].
method
authlib\authlib\oauth2\rfc8414\models.py
FunctionDef name:validate_token_endpoint_auth_methods_supported arg:self arguments arg Call
tensorflow
_ifft
def _ifft(self, x): x_complex = _to_complex(x) return _IFFT_OP[self.block_depth](x_complex)
IFFT along the last self.block_depth dimensions of x. Args: x: with floating or complex dtype. Should be in the form returned by self._vectorize_then_blockify. Returns: with .
method
tensorflow\tensorflow\python\ops\linalg\linear_operator_circulant.py
FunctionDef name:_ifft arg:self arg:x arguments arg arg Assign Call Return return:yes Call
tensorflow
abort_collective_ops
def abort_collective_ops(self, code, message): self.ensure_initialized() pywrap_tfe.TFE_AbortCollectiveOps(self._handle, code, message)
Abort the collective ops. This is intended to be used when a peer failure is detected, which allows the user to handle the case instead of hanging. This aborts all on-going collectives. After all subsequent collectives error immediately, and you need to reset_context() to use collectives again. Args: code: a error code. message: a string. The error message.
method
tensorflow\tensorflow\python\eager\context.py
FunctionDef name:abort_collective_ops arg:self arg:code arg:message arguments arg arg arg Call Call
pytorch
complex_double
def complex_double(self): _warn_typed_storage_removal() return self._to(torch.cdouble)
Casts this storage to complex double type.
method
pytorch\torch\storage.py
FunctionDef name:complex_double arg:self arguments arg Call Return return:yes Call
pytorch
memory_efficient_fusion
def memory_efficient_fusion(fn: Union[Callable, nn.Module], **kwargs): config = {'fw_compiler': ts_compile, 'bw_compiler': ts_compile, 'partition_fn': min_cut_rematerialization_partition, 'decompositions': default_decompositions} config.update(kwargs) if isinstance(fn, torch.nn.Module): return aot_module(fn, **config) else: return aot_function(fn, **config)
Wrapper function over :func:`aot_function` and :func:`aot_module` to perform memory efficient fusion. It uses the :func:`min_cut_rematerialization_partition` partitioner to perform efficient recomputation. It uses NVFuser to compile the generated forward and backward graphs. .. warning:: This API is experimental and likely to change. Args: fn (Union[Callable, nn.Module]): A Python function or an nn.Module. Returns: A callable or nn.Module that behaves like `fn`, but whose forward and backward graphs have gone through recomputation optimizations, and the graphs have been compiled with nvfuser.
function
pytorch\torch\_functorch\compilers.py
FunctionDef name:memory_efficient_fusion arg:fn arguments arg arg Assign Call If Call Return return:yes Call Return return:yes Call
django
do_if
@register.tag('if') def do_if(parser, token): bits = token.split_contents()[1:] condition = TemplateIfParser(parser, bits).parse() nodelist = parser.parse(('elif', 'else', 'endif')) conditions_nodelists = [(condition, nodelist)] token = parser.next_token() while token.contents.startswith('elif'): bits = token.split_contents()[1:] condition = TemplateIfParser(parser, bits).parse() nodelist = parser.parse(('elif', 'else', 'endif')) conditions_nodelists.append((condition, nodelist)) token = parser.next_token() if token.contents == 'else': nodelist = parser.parse(('endif',)) conditions_nodelists.append((None, nodelist)) token = parser.next_token() if token.contents != 'endif': raise TemplateSyntaxError('Malformed template tag at line {}: "{}"'.format(token.lineno, token.contents)) return IfNode(conditions_nodelists)
Evaluate a variable, and if that variable is "true" (i.e., exists, is not empty, and is not a false boolean value), output the contents of the block: :: {% if athlete_list %} Number of athletes: {{ athlete_list|count }} {% elif athlete_in_locker_room_list %} Athletes should be out of the locker room soon! {% else %} No athletes. {% endif %} In the above, if ``athlete_list`` is not empty, the number of athletes is displayed by the ``{{ athlete_list|count }}`` variable. Operator precedence follows Python.
function
django\django\template\defaulttags.py
FunctionDef name:do_if arg:parser arg:token arguments arg arg Assign Call Assign Call Call Assign Call Assign Assign Call While Call Assign Call Assign Call Call Assign Call Call Assign Call If Compare Assign Call Call Assign Call If Compare Raise Call Call Return return:yes Call Call
django
auth_name
def auth_name(self, target): return capi.get_auth_name(self.ptr, target if target is None else force_bytes(target))
Return the authority name for the given string target node.
method
django\django\contrib\gis\gdal\srs.py
FunctionDef name:auth_name arg:self arg:target arguments arg arg Return return:yes Call Compare Call
cherrypy
readline
def readline(self, size=None): chunks = [] while size is None or size > 0: chunksize = self.bufsize if size is not None and size < self.bufsize: chunksize = size data = self.read(chunksize) if not data: break pos = data.find(b'\n') + 1 if pos: chunks.append(data[:pos]) remainder = data[pos:] self.buffer += remainder self.bytes_read -= len(remainder) break else: chunks.append(data) return b''.join(chunks)
Read a line from the request body and return it.
method
cherrypy\cherrypy\_cpreqbody.py
FunctionDef name:readline arg:self arg:size arguments arg arg Assign While BoolOp Compare Compare Assign If BoolOp Compare Compare Assign Assign Call If Assign Call If Call Assign Call Call Return return:yes Call
matplotlib
_get_lowers_and_uppers
def _get_lowers_and_uppers(self): lowers = self._levels[:-1] if self.zmin == lowers[0]: lowers = lowers.copy() if self.logscale: lowers[0] = 0.99 * self.zmin else: lowers[0] -= 1 uppers = self._levels[1:] return (lowers, uppers)
Return ``(lowers, uppers)`` for filled contours.
method
matplotlib\lib\matplotlib\contour.py
FunctionDef name:_get_lowers_and_uppers arg:self arguments arg Assign If Compare Assign Call If Assign Assign Return return:yes
matplotlib
get_under
def get_under(self): if not self._isinit: self._init() return np.array(self._lut[self._i_under])
Get the color for low out-of-range values.
method
matplotlib\lib\matplotlib\colors.py
FunctionDef name:get_under arg:self arguments arg If Call Return return:yes Call
django
can_filter
def can_filter(self): return not self.is_sliced
Return True if adding filters to this instance is still possible. Typically, this means no limits or offsets have been put on the results.
method
django\django\db\models\sql\query.py
FunctionDef name:can_filter arg:self arguments arg Return return:yes
tensorflow
read
def read(self, index, name=None): del name if isinstance(index, ops.EagerTensor): index = index.numpy() if index < 0: raise errors_impl.OutOfRangeError(None, None, 'Reading from negative indices (index %d) is not allowed.' % index) if index >= len(self._tensor_array): raise errors_impl.OutOfRangeError(None, None, 'Tried to read from index %d but array size is: %d ' % (index, len(self._tensor_array))) tensor = self._tensor_array[index] if tensor is None: if index in self._previously_read_indices: raise errors_impl.InvalidArgumentError(None, None, 'Could not read index %d twice because it was cleared after a previous read (perhaps try setting clear_after_read = false?)' % index) else: tensor = self._maybe_zero(index) if self._clear_after_read: self._tensor_array[index] = None self._previously_read_indices.append(index) return tensor
See TensorArray.
method
tensorflow\tensorflow\python\ops\tensor_array_ops.py
FunctionDef name:read arg:self arg:index arg:name arguments arg arg arg If Call Assign Call If Compare Raise Call If Compare Call Raise Call Call Assign If Compare If Compare Raise Call Assign Call If Assign Call Return return:yes
pytorch
floor_to_int
def floor_to_int(self, x: T, dtype: torch.dtype) -> T: raise NotImplementedError
Convert x to dtype with floor semantics. See also trunc_to_int.
method
pytorch\torch\_inductor\ops_handler.py
FunctionDef name:floor_to_int arg:self arg:x arg:dtype arguments arg arg arg Raise
tensorflow
create_dummy_tensor
def create_dummy_tensor(spec): if hasattr(spec, '_create_empty_value'): return spec._create_empty_value() if isinstance(spec, ragged_tensor.RaggedTensorSpec): feature_shape = spec._shape[:1].concatenate(spec._shape[1 + spec._ragged_rank:]) feature_type = spec._dtype else: feature_shape = spec.shape feature_type = spec.dtype dims = [dim if dim is not None else 0 for dim in feature_shape.as_list()] if feature_shape else [] if dims and (isinstance(spec, ragged_tensor.RaggedTensorSpec) or feature_shape.is_fully_defined()): dims[0] = tensor_shape.Dimension(0) if isinstance(spec, sparse_tensor.SparseTensorSpec): return sparse_tensor.SparseTensor(values=array_ops.zeros(0, feature_type), indices=array_ops.zeros((0, len(dims)), dtypes.int64), dense_shape=dims) dummy_tensor = array_ops.zeros(tensor_shape.TensorShape(dims), feature_type) if isinstance(spec, ragged_tensor.RaggedTensorSpec): row_splits = array_ops.zeros(1, spec._row_splits_dtype) dummy_tensor = ragged_tensor.RaggedTensor.from_nested_row_splits(dummy_tensor, (row_splits,) * spec._ragged_rank, validate=False) return dummy_tensor
Create a dummy tensor with possible batch dimensions set to 0.
function
tensorflow\tensorflow\python\distribute\input_lib.py
FunctionDef name:create_dummy_tensor arg:spec arguments arg If Call Return return:yes Call If Call Assign Call Assign Assign Assign Assign Compare Call If BoolOp BoolOp Call Call Assign Call If Call Return return:yes Call Call Call Call Assign Call Call If Call Assign Call Assign Call Return return:yes
numpy
diagonal
@array_function_dispatch(_diagonal_dispatcher) def diagonal(x, /, *, offset=0): return _core_diagonal(x, offset, axis1=-2, axis2=-1)
Returns specified diagonals of a matrix (or a stack of matrices) x. This function is Array API compatible; unlike `numpy.diagonal`, the matrix is assumed to be defined by the last two dimensions, and the `offset` argument selects the diagonal. See also `numpy.flipud` and `numpy.fliplr`. >>> a = np.arange(9).reshape(3, 3) >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> np.linalg.diagonal(np.fliplr(a)) # Horizontal flip array([2, 4, 6]) >>> np.linalg.diagonal(np.flipud(a)) # Vertical flip array([6, 4, 2]) Note that the order in which the diagonal is retrieved varies depending on the flip function.
function
numpy\numpy\linalg\_linalg.py
FunctionDef name:diagonal arguments arg arg Return return:yes Call Call
authlib
validate_default_acr_values
def validate_default_acr_values(self): self._validate_claim_value('default_acr_values')
Default requested Authentication Context Class Reference values. Array of strings that specifies the default acr values that the OP is being requested to use for processing requests from this Client, with the values appearing in order of preference. The Authentication Context Class satisfied by the authentication performed is returned as the acr Claim Value in the issued ID Token. The acr Claim is requested as a Voluntary Claim by this parameter. The acr_values_supported discovery element contains a list of the supported acr values supported by the OP. Values specified in the acr_values request parameter or an individual acr Claim request override these default values.
method
authlib\authlib\oidc\registration\claims.py
FunctionDef name:validate_default_acr_values arg:self arguments arg Call
pytorch
list_options
def list_options() -> list[str]: from torch._inductor import config current_config: dict[str, Any] = config.get_config_copy() return list(current_config.keys())
Returns a dictionary describing the optimizations and debug configurations that are available to `torch.compile`. The options are documented in `torch._inductor.config`. Example:: >>> torch._inductor.list_options()
function
pytorch\torch\_inductor\__init__.py
FunctionDef name:list_options arguments Call Return return:yes Call Call
matplotlib
_get_dist_to_box
def _get_dist_to_box(self, rotation, x0, y0, figure_box): if rotation > 270: quad = rotation - 270 h1 = (y0 - figure_box.y0) / math.cos(math.radians(quad)) h2 = (figure_box.x1 - x0) / math.cos(math.radians(90 - quad)) elif rotation > 180: quad = rotation - 180 h1 = (x0 - figure_box.x0) / math.cos(math.radians(quad)) h2 = (y0 - figure_box.y0) / math.cos(math.radians(90 - quad)) elif rotation > 90: quad = rotation - 90 h1 = (figure_box.y1 - y0) / math.cos(math.radians(quad)) h2 = (x0 - figure_box.x0) / math.cos(math.radians(90 - quad)) else: h1 = (figure_box.x1 - x0) / math.cos(math.radians(rotation)) h2 = (figure_box.y1 - y0) / math.cos(math.radians(90 - rotation)) return min(h1, h2)
Return the distance from the given points to the boundaries of a rotated box, in pixels.
method
matplotlib\lib\matplotlib\text.py
FunctionDef name:_get_dist_to_box arg:self arg:rotation arg:x0 arg:y0 arg:figure_box arguments arg arg arg arg arg If Compare Assign Assign Call Call Assign Call Call If Compare Assign Assign Call Call Assign Call Call If Compare Assign Assign Call Call Assign Call Call Assign Call Call Assign Call Call Return return:yes Call
numpy
__ne__
def __ne__(self, other): return self._comparison(other, operator.ne)
Check whether other does not equal self elementwise. When either of the elements is masked, the result is masked as well, but the underlying boolean data are still set, with self and other considered equal if both are masked, and unequal otherwise. For structured arrays, all fields are combined, with masked values ignored. The result is masked if all fields were masked, with self and other considered equal only if both were fully masked.
method
numpy\numpy\ma\core.py
FunctionDef name:__ne__ arg:self arg:other arguments arg arg Return return:yes Call
tensorflow
_log_weights
def _log_weights(self, epoch): with self._train_writer.as_default(): with summary_ops_v2.record_if(True): for layer in self.model.layers: for weight in layer.weights: weight_name = weight.name.replace(':', '_') summary_ops_v2.histogram(weight_name, weight, step=epoch) if self.write_images: self._log_weight_as_image(weight, weight_name, epoch) self._train_writer.flush()
Logs the weights of the Model to TensorBoard.
method
tensorflow\tensorflow\python\keras\callbacks.py
FunctionDef name:_log_weights arg:self arg:epoch arguments arg arg With Call With Call For For Assign Call Call If Call Call
tensorflow
_get_ops_details
def _get_ops_details(self): return [self._get_op_details(idx) for idx in range(self._interpreter.NumNodes())]
Gets op details for every node. Returns: A list of dictionaries containing arrays with lists of tensor ids for tensors involved in the op.
method
tensorflow\tensorflow\lite\python\interpreter.py
FunctionDef name:_get_ops_details arg:self arguments arg Return return:yes Call Call Call
scipy
special_ortho_group_gen
class special_ortho_group_gen(multi_rv_generic): def __init__(self, seed=None): super().__init__(seed) self.__doc__ = doccer.docformat(self.__doc__) def __call__(self, dim=None, seed=None): return special_ortho_group_frozen(dim, seed=seed) def _process_parameters(self, dim): if dim is None or not np.isscalar(dim) or dim < 0 or (dim != int(dim)): raise ValueError('Dimension of rotation must be specified,\n and must be a scalar nonnegative integer.') return dim def rvs(self, dim, size=1, random_state=None): random_state = self._get_random_state(random_state) q = ortho_group.rvs(dim, size, random_state) dets = np.linalg.det(q) if dim: q[..., 0, :] /= dets[..., np.newaxis] return q
A Special Orthogonal matrix (SO(N)) random variable. Return a random rotation matrix, drawn from the Haar distribution (the only uniform distribution on SO(N)) with a determinant of +1. The `dim` keyword specifies the dimension N. Methods ------- rvs(dim=None, size=1, random_state=None) Draw random samples from SO(N). Parameters ---------- dim : scalar Dimension of matrices seed : {None, int, np.random.RandomState, np.random.Generator}, optional Used for drawing random variates. If `seed` is None, the `np.random.RandomState` singleton is used. If `seed` is an int, a new RandomState instance is used, seeded with `seed`. See also the similar `ortho_group` and, for random rotations in three dimensions, `scipy.spatial.transform.Rotation.random`. Alternatively, the object may be called (as a function) to fix the `dim` parameter, returning a "frozen" special_ortho_group random variable: >>> rv = special_ortho_group(5) >>> # Frozen object with the same methods but holding the >>> # dimension parameter fixed. See Also -------- ortho_group, scipy.spatial.transform.Rotation.random
class
scipy\scipy\stats\_multivariate.py
ClassDef name:special_ortho_group_gen FunctionDef name:__init__ arg:self arg:seed arguments arg arg Call Call Assign Call FunctionDef name:__call__ arg:self arg:dim arg:seed arguments arg arg arg Return return:yes Call FunctionDef name:_process_parameters arg:self arg:dim arguments arg arg If BoolOp Compare Call Compare Compare Call Raise Call Return return:yes FunctionDef name:rvs arg:self arg:dim arg:size arg:random_state arguments arg arg arg arg Assign Call Assign Call Assign Call If Return return:yes
pygame
pixels_blue
def pixels_blue(surface): return numpy.array(surface.get_view('B'), copy=False)
pygame.surfarray.pixels_blue(Surface): return array Reference pixel blue into a 2d array. Create a new 2D array that directly references the blue values in a Surface. Any changes to the array will affect the pixels in the Surface. This is a fast operation since no data is copied. This can only work on 24-bit or 32-bit Surfaces. The Surface this array references will remain locked for the lifetime of the array.
function
pygame\src_py\surfarray.py
FunctionDef name:pixels_blue arg:surface arguments arg Return return:yes Call Call
pytorch
codegen_cooperative_reduction_peer_combine
def codegen_cooperative_reduction_peer_combine(self, result_var, dtype, default_val): xnumel = self.numels['x'] mask = 'xindex < xnumel' if not self._has_constant_xmask() else None nbytes = xnumel * dtype.itemsize * self.max_rsplit() ws_name, ws_offset = self.cooperative_reduction_workspace_cache.allocate(nbytes) self.post_loop_combine.splice(f'\n {result_var}_ws = ({ws_name} + {self.index_to_str(ws_offset)}).to(tl.pointer_type({triton_type(dtype)}))\n tl.store({result_var}_ws + (xindex * RSPLIT + rsplit_id), {result_var}, {mask})\n ', strip=True) self.post_loop_store.writeline(f"{result_var}_peers = tl.load({result_var}_ws + (xindex * RSPLIT + rsplit_arange), rsplit_mask, eviction_policy='evict_first', other=triton_helpers.if_mask(rsplit_mask, {constant_repr(default_val)}))") return f'{result_var}_peers'
Generate code to save a [XBLOCK, RSPLIT] temporary workspace, where each thread block writes a different column. After the barrier, every thread block loads the completed value so that it can compute the final value independently.
method
pytorch\torch\_inductor\codegen\triton.py
FunctionDef name:codegen_cooperative_reduction_peer_combine arg:self arg:result_var arg:dtype arg:default_val arguments arg arg arg arg Assign Assign Call Assign Call Assign Call Call Call Call Call Call Return return:yes
numpy
gpaths
def gpaths(paths, local_path='', include_non_existing=True): if is_string(paths): paths = (paths,) return _fix_paths(paths, local_path, include_non_existing)
Apply glob to paths and prepend local_path if needed.
function
numpy\numpy\distutils\misc_util.py
FunctionDef name:gpaths arg:paths arg:local_path arg:include_non_existing arguments arg arg arg If Call Assign Return return:yes Call
kornia
extract_tensor_patches
def extract_tensor_patches(input: Tensor, window_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]]=1, padding: PadType=0, allow_auto_padding: bool=False) -> Tensor: if not torch.is_tensor(input): raise TypeError(f'Input input type is not a Tensor. Got {type(input)}') if len(input.shape) != 4: raise ValueError(f'Invalid input shape, we expect BxCxHxW. Got: {input.shape}') window_size = cast(Tuple[int, int], _pair(window_size)) stride = cast(Tuple[int, int], _pair(stride)) original_size = (input.shape[-2], input.shape[-1]) if not padding: if not _check_patch_fit(original_size, window_size, stride): if not allow_auto_padding: warn(f'The window will not fit into the image. \nWindow size: {window_size}\nStride: {stride}\nImage size: {original_size}\nThis means that the final incomplete patches will be dropped. By enabling `allow_auto_padding`, the input will be padded to fit the window and stride.', stacklevel=1) else: padding = compute_padding(original_size=original_size, window_size=window_size, stride=stride) if padding: padding = create_padding_tuple(padding) input = pad(input, padding) return _extract_tensor_patchesnd(input, window_size, stride)
Extract patches from tensors and stack them. See :class:`~kornia.contrib.ExtractTensorPatches` for details. Args: input: tensor image where to extract the patches with shape :math:`(B, C, H, W)`. window_size: the size of the sliding window and the output patch size. stride: stride of the sliding window. padding: Zero-padding added to both sides of the input. allow_auto_padding: whether to allow automatic padding if the window and stride do not fit into the image. Returns: the tensor with the extracted patches with shape :math:`(B, N, C, H_{out}, W_{out})`. Examples: >>> input = torch.arange(9.).view(1, 1, 3, 3) >>> patches = extract_tensor_patches(input, (2, 3)) >>> input tensor([[[[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]]]) >>> patches[:, -1] tensor([[[[3., 4., 5.], [6., 7., 8.]]]])
function
kornia\kornia\contrib\extract_patches.py
FunctionDef name:extract_tensor_patches arg:input arg:window_size arg:stride arg:padding arg:allow_auto_padding arguments arg arg arg arg arg If Call Raise Call Call If Compare Call Raise Call Assign Call Call Assign Call Call Assign If If Call If Call Assign Call If Assign Call Assign Call Return return:yes Call
tensorflow
__init__
def __init__(self, performed_action, run_metadata=None, client_graph_def=None, tf_error=None): _check_type(performed_action, str) self.performed_action = performed_action if run_metadata is not None: _check_type(run_metadata, config_pb2.RunMetadata) self.run_metadata = run_metadata self.client_graph_def = client_graph_def self.tf_error = tf_error
Constructor for . Args: performed_action: () Actually-performed action by the debug-wrapper session. run_metadata: run_metadata output from the run() call (if any). client_graph_def: (GraphDef) GraphDef from the client side, i.e., from the python front end of TensorFlow. Can be obtained with session.graph.as_graph_def(). tf_error: (errors.OpError subtypes) TensorFlow OpError that occurred during the run (if any).
method
tensorflow\tensorflow\python\debug\wrappers\framework.py
FunctionDef name:__init__ arg:self arg:performed_action arg:run_metadata arg:client_graph_def arg:tf_error arguments arg arg arg arg arg Call Assign If Compare Call Assign Assign Assign
matplotlib
_update_glyph_map_defs
def _update_glyph_map_defs(self, glyph_map_new): writer = self.writer if glyph_map_new: writer.start('defs') for char_id, (vertices, codes) in glyph_map_new.items(): char_id = self._adjust_char_id(char_id) path_data = self._convert_path(Path(vertices * 64, codes), simplify=False) writer.element('path', id=char_id, d=path_data, transform=_generate_transform([('scale', (1 / 64,))])) writer.end('defs') self._glyph_map.update(glyph_map_new)
Emit definitions for not-yet-defined glyphs, and record them as having been defined.
method
matplotlib\lib\matplotlib\backends\backend_svg.py
FunctionDef name:_update_glyph_map_defs arg:self arg:glyph_map_new arguments arg arg Assign If Call For Call Assign Call Assign Call Call Call Call Call Call
pytorch
untyped
def untyped(self): _warn_typed_storage_removal() return self._untyped_storage
Return the internal :class:`torch.UntypedStorage`.
method
pytorch\torch\storage.py
FunctionDef name:untyped arg:self arguments arg Call Return return:yes
tensorflow
_get_parent_graph
def _get_parent_graph(self, graph): parent_graph = graph.outer_graph if not isinstance(parent_graph, func_graph.FuncGraph) and ops.executing_eagerly_outside_functions(): return _DUMMY_EAGER_GRAPH.key return parent_graph
Returns the parent graph or dummy eager object.
method
tensorflow\tensorflow\python\keras\backend.py
FunctionDef name:_get_parent_graph arg:self arg:graph arguments arg arg Assign If BoolOp Call Call Return return:yes Return return:yes
django
get_context_object_name
def get_context_object_name(self, obj): if self.context_object_name: return self.context_object_name elif isinstance(obj, models.Model): return obj._meta.model_name else: return None
Get the name to use for the object.
method
django\django\views\generic\detail.py
FunctionDef name:get_context_object_name arg:self arg:obj arguments arg arg If Return return:yes If Call Return return:yes Return return:no
scrapy
StartSpiderMiddleware
class StartSpiderMiddleware(BaseSpiderMiddleware): def get_processed_request(self, request: Request, response: Response | None) -> Request | None: if response is None: request.meta.setdefault('is_start_request', True) return request
Set :reqmeta:`is_start_request`. .. reqmeta:: is_start_request is_start_request ---------------- :attr:`Request.meta` key that is set to ``True`` for start requests, allowing them to be told apart in downloader middlewares.
class
scrapy\scrapy\spidermiddlewares\start.py
ClassDef name:StartSpiderMiddleware FunctionDef name:get_processed_request arg:self arg:request arg:response arguments arg arg arg If Compare Call Return return:yes
pytorch
_post_order_apply
def _post_order_apply(root_module: nn.Module, fn: Callable[[nn.Module], Optional[nn.Module]]): visited_modules: set[nn.Module] = {root_module} def _post_order_apply_inner(module: nn.Module, module_name: str, parent_module: Optional[nn.Module]): for child_module_name, child_module in module.named_children(): if child_module not in visited_modules: visited_modules.add(child_module) _post_order_apply_inner(child_module, child_module_name, module) optional_module = fn(module) if optional_module is not None: assert isinstance(parent_module, nn.Module), f'Non-root modules should have their parent module set but got {parent_module} for {module}' assert module_name, f'Non-root modules should have their module name set but got an empty module name for {module}' assert isinstance(optional_module, nn.Module), f'fn should return None or an nn.Module but got {optional_module}' setattr(parent_module, module_name, optional_module) _post_order_apply_inner(root_module, '', None)
This applies ``fn`` to every module in the tree rooted at ``root_module``, following a post-order traversal. ``fn`` may return a new :class:`nn.Module`, which then replaces the original module on its parent, or ``None``, in which case the module is not changed.
function
pytorch\torch\distributed\fsdp\wrap.py
FunctionDef name:_post_order_apply arg:root_module arg:fn arguments arg arg FunctionDef name:_post_order_apply_inner arg:module arg:module_name arg:parent_module arguments arg arg arg For Call If Compare Call Call Assign Call If Compare Call Call Call Call
numpy
get_info
def get_info(self, *names): from .system_info import get_info, dict_append info_dict = {} for a in names: dict_append(info_dict, **get_info(a)) return info_dict
Get resources information. Return information (from system_info.get_info) for all of the names in the argument list in a single dictionary.
method
numpy\numpy\distutils\misc_util.py
FunctionDef name:get_info arg:self arguments arg arg Assign For Call Call Return return:yes
django
get_autocommit
def get_autocommit(using=None): return get_connection(using).get_autocommit()
Get the autocommit status of the connection.
function
django\django\db\transaction.py
FunctionDef name:get_autocommit arg:using arguments arg Return return:yes Call Call
tensorflow
softplus
@dispatch.add_dispatch_support @doc_controls.do_not_generate_docs def softplus(x): return math_ops.softplus(x)
Softplus of a tensor. Args: x: A tensor or variable. Returns: A tensor.
function
tensorflow\tensorflow\python\keras\backend.py
FunctionDef name:softplus arg:x arguments arg Return return:yes Call
pytorch
asdict
def asdict(self) -> dict[str, Any]: return {'name': self.name, 'max_abs_diff': self.max_abs_diff, 'max_rel_diff': self.max_rel_diff, 'abs_diff_hist': [self.abs_diff_hist[0].tolist(), self.abs_diff_hist[1].tolist()], 'rel_diff_hist': [self.rel_diff_hist[0].tolist(), self.rel_diff_hist[1].tolist()], 'expected_dtype': str(self.expected_dtype), 'actual_dtype': str(self.actual_dtype)}
Convert the VerificationInfo object to a dictionary. Returns: A dictionary representation of the VerificationInfo object.
method
pytorch\torch\onnx\_internal\exporter\_verification.py
FunctionDef name:asdict arg:self arguments arg Return return:yes Call Call Call Call Call Call
pytorch
register_callback
def register_callback(self, output_file_path: str) -> Self: def get_temp_uncompressed_file() -> str: fp = tempfile.NamedTemporaryFile('w+b', suffix='.json', delete=False) fp.close() return fp.name if not self._registered: self.output_file_path = output_file_path if output_file_path.endswith('.gz'): output_file_path = get_temp_uncompressed_file() self.output_file_path_observer = output_file_path self._registered = _add_execution_trace_observer(output_file_path) return self
Adds ET observer to record function callbacks. The data will be written to output_file_path.
method
pytorch\torch\profiler\profiler.py
FunctionDef name:register_callback arg:self arg:output_file_path arguments arg arg FunctionDef name:get_temp_uncompressed_file arguments Assign Call Call Return return:yes If Assign If Call Assign Call Assign Assign Call Return return:yes
kornia
compile
def compile(self, *, fullgraph: bool=False, dynamic: bool=False, backend: str='inductor', mode: Optional[str]=None, options: Optional[dict[str, str | int | bool]]=None, disable: bool=False) -> None: self.model = torch.compile(self.model, fullgraph=fullgraph, dynamic=dynamic, backend=backend, mode=mode, options=options, disable=disable)
Compile the internal object detection model with :py:func:`torch.compile`.
method
kornia\kornia\models\detection\base.py
FunctionDef name:compile arg:self arguments arg arg arg arg arg arg arg Assign Call
django
check_cs_op
def check_cs_op(result, func, cargs): if result == 0: raise GEOSException('Could not set value on coordinate sequence') else: return result
Check the status code of a coordinate sequence operation.
function
django\django\contrib\gis\geos\prototypes\coordseq.py
FunctionDef name:check_cs_op arg:result arg:func arg:cargs arguments arg arg arg If Compare Raise Call Return return:yes
tensorflow
_merge_tensors
def _merge_tensors(t1, t2, name, validate): if t1 is None: return (t2, False) elif t2 is None: return (t1, False) elif t1 is t2: return (t1, True) else: err_msg = 'RowPartition._merge_precomputed_encodings: partitions have incompatible %s' % name if not t1.shape.is_compatible_with(t2.shape): raise ValueError(err_msg) if validate: checks = [check_ops.assert_equal(t1, t2, message=err_msg)] return (control_flow_ops.with_dependencies(checks, t1), True) else: return (t1, False)
Merge two optional Tensors with equal values into a single Tensor. Args: t1: tf.Tensor or None t2: tf.Tensor or None name: A name for the tensors (for error messages) validate: If true, then check that is compatible with (if both are non-None). Returns: A pair : * is if it is not None; or otherwise. * is true if we validated that t1 and t2 are equal (either by adding a check, or because t1 is t2).
function
tensorflow\tensorflow\python\ops\ragged\row_partition.py
FunctionDef name:_merge_tensors arg:t1 arg:t2 arg:name arg:validate arguments arg arg arg arg If Compare Return return:yes If Compare Return return:yes If Compare Return return:yes Assign If Call Raise Call If Assign Call Return return:yes Call Return return:yes
tensorflow
_kl_normal_normal
@kullback_leibler.RegisterKL(Normal, Normal) def _kl_normal_normal(n_a, n_b, name=None): with ops.name_scope(name, 'kl_normal_normal', [n_a.loc, n_b.loc]): one = constant_op.constant(1, dtype=n_a.dtype) two = constant_op.constant(2, dtype=n_a.dtype) half = constant_op.constant(0.5, dtype=n_a.dtype) s_a_squared = math_ops.square(n_a.scale) s_b_squared = math_ops.square(n_b.scale) ratio = s_a_squared / s_b_squared return math_ops.squared_difference(n_a.loc, n_b.loc) / (two * s_b_squared) + half * (ratio - one - math_ops.log(ratio))
Calculate the batched KL divergence KL(n_a || n_b) with n_a and n_b Normal. Args: n_a: instance of a Normal distribution object. n_b: instance of a Normal distribution object. name: (optional) Name to use for created operations. default is "kl_normal_normal". Returns: Batchwise KL(n_a || n_b)
function
tensorflow\tensorflow\python\ops\distributions\normal.py
FunctionDef name:_kl_normal_normal arg:n_a arg:n_b arg:name arguments arg arg arg With Call Assign Call Assign Call Assign Call Assign Call Assign Call Assign Return return:yes Call Call Call
tensorflow
__init__
def __init__(self, thunk): self._thunk = thunk self._master_tensor = thunk()
Initializes a _LazyEvalTensor object. Args: thunk: A callable. A thunk which computes the value of the tensor.
method
tensorflow\tensorflow\python\ops\variable_scope.py
FunctionDef name:__init__ arg:self arg:thunk arguments arg arg Assign Assign Call
cherrypy
update
def update(self, d): if not self.loaded: self.load() self._data.update(d)
Update multiple session-stored objects in one go. D.update(E) -> None. Update D from E: for k in E: D[k] = E[k].
method
cherrypy\cherrypy\lib\sessions.py
FunctionDef name:update arg:self arg:d arguments arg arg If Call Call
tensorflow
from_frozen_graph
@classmethod @_deprecation.deprecated(None, 'Use `lite.TFLiteConverter.from_frozen_graph` instead.') def from_frozen_graph(cls, graph_def_file, input_arrays, output_arrays, input_shapes=None): return TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays, input_shapes)
Creates a TocoConverter class from a file containing a frozen graph.
method
tensorflow\tensorflow\lite\python\lite.py
FunctionDef name:from_frozen_graph arg:cls arg:graph_def_file arg:input_arrays arg:output_arrays arg:input_shapes arguments arg arg arg arg arg Return return:yes Call Call
pytorch
uniform_
def uniform_(tensor: Tensor, a: float=0.0, b: float=1.0, generator: _Optional[torch.Generator]=None) -> Tensor: if torch.overrides.has_torch_function_variadic(tensor): return torch.overrides.handle_torch_function(uniform_, (tensor,), tensor=tensor, a=a, b=b, generator=generator) return _no_grad_uniform_(tensor, a, b, generator)
Fill the input Tensor with values drawn from the uniform distribution. :math:. Args: tensor: an n-dimensional a: the lower bound of the uniform distribution b: the upper bound of the uniform distribution generator: the torch Generator to sample from (default: None) Examples: >>> w = torch.empty(3, 5) >>> nn.init.uniform_(w)
function
pytorch\torch\nn\init.py
FunctionDef name:uniform_ arg:tensor arg:a arg:b arg:generator arguments arg arg arg arg If Call Return return:yes Call Return return:yes Call
tensorflow
AppendDocstring
class AppendDocstring: def __init__(self, additional_note='', kwargs_dict=None): self._additional_note = additional_note if kwargs_dict: bullets = [] for key in sorted(kwargs_dict.keys()): value = kwargs_dict[key] if any((x.isspace() for x in key)): raise ValueError('Parameter name "%s" contains whitespace.' % key) value = value.lstrip() if '\n' in value: raise ValueError('Parameter description for "%s" contains newlines.' % key) bullets.append('* `%s`: %s' % (key, value)) self._additional_note += '\n\n##### `kwargs`:\n\n' + '\n'.join(bullets) def __call__(self, fn): @functools.wraps(fn) def _fn(*args, **kwargs): return fn(*args, **kwargs) if _fn.__doc__ is None: _fn.__doc__ = self._additional_note else: _fn.__doc__ += '\n%s' % self._additional_note return _fn
Helper class to promote private subclass docstring to public counterpart. Example: In this case, the decorator appends the to the docstring of (not ) and adds a new section with each dictionary item as a bullet-point. For a more detailed example, see .
class
tensorflow\tensorflow\python\ops\distributions\util.py
ClassDef name:AppendDocstring FunctionDef name:__init__ arg:self arg:additional_note arg:kwargs_dict arguments arg arg arg Assign If Assign For Call Call Assign If Call Call Raise Call Assign Call If Compare Raise Call Call Call FunctionDef name:__call__ arg:self arg:fn arguments arg arg FunctionDef name:_fn arguments arg arg Return return:yes Call Call If Compare Assign Return return:yes
seaborn
_determine_grid_dimensions
def _determine_grid_dimensions(self, facet_spec: FacetSpec, pair_spec: PairSpec) -> None: self.grid_dimensions: dict[str, list] = {} for dim, axis in zip(['col', 'row'], ['x', 'y']): facet_vars = facet_spec.get('variables', {}) if dim in facet_vars: self.grid_dimensions[dim] = facet_spec['structure'][dim] elif axis in pair_spec.get('structure', {}): self.grid_dimensions[dim] = [None for _ in pair_spec.get('structure', {})[axis]] else: self.grid_dimensions[dim] = [None] self.subplot_spec[f'n{dim}s'] = len(self.grid_dimensions[dim]) if not pair_spec.get('cross', True): self.subplot_spec['nrows'] = 1 self.n_subplots = self.subplot_spec['ncols'] * self.subplot_spec['nrows']
Parse faceting and pairing information to define figure structure.
method
seaborn\seaborn\_core\subplots.py
FunctionDef name:_determine_grid_dimensions arg:self arg:facet_spec arg:pair_spec arguments arg arg arg For Call Assign Call If Compare Assign If Compare Call Assign Call Assign Assign Call If Call Assign Assign
django
send_messages
def send_messages(self, email_messages): raise NotImplementedError('subclasses of BaseEmailBackend must override send_messages() method')
Send one or more EmailMessage objects and return the number of email messages sent.
method
django\django\core\mail\backends\base.py
FunctionDef name:send_messages arg:self arg:email_messages arguments arg arg Raise Call
django
c
def c(self): return self.data.isoformat()
ISO 8601 Format Example : '2008-01-02T10:30:00.000123'
method
django\django\utils\dateformat.py
FunctionDef name:c arg:self arguments arg Return return:yes Call
pytorch
statically_known_false
def statically_known_false(x: BoolLikeType) -> bool: if not isinstance(x, SymBool): assert isinstance(x, bool) return not x result = _static_eval_sym_bool(x) if result is None: return False return not result
Returns True if x can be simplified to a constant and is False. If x cannot be evaluated from static, we return False .. note:: This function doesn't introduce new guards, so the expression may end up evaluating to False at runtime even if this function returns False. Args: x (bool, SymBool): The expression to try statically evaluating
function
pytorch\torch\fx\experimental\symbolic_shapes.py
FunctionDef name:statically_known_false arg:x arguments arg If Call Call Return return:yes Assign Call If Compare Return return:yes Return return:yes
tensorflow
get_updates_for
@doc_controls.do_not_generate_docs def get_updates_for(self, inputs): warnings.warn('`layer.get_updates_for` is deprecated and will be removed in a future version. Please use `layer.updates` method instead.') return self.updates
Deprecated, do NOT use! Retrieves updates relevant to a specific set of inputs. Args: inputs: Input tensor or list/tuple of input tensors. Returns: List of update ops of the layer that depend on .
method
tensorflow\tensorflow\python\keras\engine\base_layer.py
FunctionDef name:get_updates_for arg:self arg:inputs arguments arg arg Call Return return:yes
scipy
_allowance
def _allowance(self, confidence_level: DecimalNumber=0.95, tol: DecimalNumber=0.001) -> float: alpha = 1 - confidence_level def pvalue_from_stat(statistic): statistic = np.array(statistic) sf = _pvalue_dunnett(rho=self._rho, df=self._df, statistic=statistic, alternative=self._alternative, rng=self._rng) return abs(sf - alpha) / alpha res = minimize_scalar(pvalue_from_stat, method='brent', tol=tol) critical_value = res.x if res.success is False or res.fun >= tol * 10: warnings.warn(f'Computation of the confidence interval did not converge to the desired level. The confidence level corresponding with the returned interval is approximately {alpha * (1 + res.fun)}.', stacklevel=3) allowance = critical_value * self._std * np.sqrt(1 / self._n_samples + 1 / self._n_control) return abs(allowance)
Allowance. It is the quantity to add/subtract from the observed difference between the means of observed groups and the mean of the control group. The result gives confidence limits. Parameters ---------- confidence_level : float, optional Confidence level for the computed confidence interval. Default is .95. tol : float, optional A tolerance for numerical optimization: the allowance will produce a confidence within `` of the specified level, or a warning will be emitted. Tight tolerances may be impractical due to noisy evaluation of the objective. Default is 1e-3. Returns ------- allowance : float Allowance around the mean.
method
scipy\scipy\stats\_multicomp.py
FunctionDef name:_allowance arg:self arg:confidence_level arg:tol arguments arg arg arg Assign FunctionDef name:pvalue_from_stat arg:statistic arguments arg Assign Call Assign Call Return return:yes Call Assign Call Assign If BoolOp Compare Compare Call Assign Call Return return:yes Call
kornia
rgb255_to_rgb
def rgb255_to_rgb(image: Tensor) -> Tensor: KORNIA_CHECK_IS_COLOR(image) rgb = image / 255.0 return rgb
Convert an image from RGB [0, 255] to RGB for visualization purposes. Args: image: RGB Image to be converted to RGB of shape :math:. Returns: RGB version of the image with shape of shape :math:. Example: >>> input = torch.rand(2, 3, 4, 5) >>> output = rgb255_to_rgb(input) # 2x3x4x5
function
kornia\kornia\color\rgb.py
FunctionDef name:rgb255_to_rgb arg:image arguments arg Call Assign Return return:yes
pytorch
_reduce_op
class _reduce_op: def __init__(self) -> None: for k, v in ReduceOp.RedOpType.__members__.items(): setattr(self, k, v) self.__members__ = ReduceOp.RedOpType.__members__ @deprecated('`torch.distributed.reduce_op` is deprecated, please use `torch.distributed.ReduceOp` instead', category=FutureWarning) def __getattribute__(self, key): return object.__getattribute__(self, key)
Deprecated enum-like class. For reduction operations: `~torch.distributed.ReduceOp` is recommended to use instead.
class
pytorch\torch\distributed\distributed_c10d.py
ClassDef name:_reduce_op FunctionDef name:__init__ arg:self arguments arg For Call Call Assign FunctionDef name:__getattribute__ arg:self arg:key arguments arg arg Return return:yes Call Call
pytorch
RendezvousSettings
@dataclass(repr=False, eq=False, frozen=True) class RendezvousSettings: run_id: str min_nodes: int max_nodes: int timeout: RendezvousTimeout keep_alive_interval: timedelta keep_alive_max_attempt: int
Hold the settings of the rendezvous. Attributes: run_id: The run id of the rendezvous. min_nodes: The minimum number of nodes to admit to the rendezvous. max_nodes: The maximum number of nodes to admit to the rendezvous. timeout: The timeout configuration of the rendezvous. keep_alive_interval: The amount of time a node waits before sending a heartbeat to keep it alive in the rendezvous. keep_alive_max_attempt: The maximum number of failed heartbeat attempts after which a node is considered dead.
class
pytorch\torch\distributed\elastic\rendezvous\dynamic_rendezvous.py
ClassDef name:RendezvousSettings Call
django
pagination
def pagination(cl): pagination_required = (not cl.show_all or not cl.can_show_all) and cl.multi_page page_range = cl.paginator.get_elided_page_range(cl.page_num) if pagination_required else [] need_show_all_link = cl.can_show_all and (not cl.show_all) and cl.multi_page return {'cl': cl, 'pagination_required': pagination_required, 'show_all_url': need_show_all_link and cl.get_query_string({ALL_VAR: ''}), 'page_range': page_range, 'ALL_VAR': ALL_VAR, '1': 1}
Generate the series of links to the pages in a paginated list.
function
django\django\contrib\admin\templatetags\admin_list.py
FunctionDef name:pagination arg:cl arguments arg Assign BoolOp BoolOp Assign Call Assign BoolOp Return return:yes BoolOp Call
numpy
_opt_info
def _opt_info(): from numpy._core._multiarray_umath import __cpu_baseline__, __cpu_dispatch__, __cpu_features__ if len(__cpu_baseline__) == 0 and len(__cpu_dispatch__) == 0: return '' enabled_features = ' '.join(__cpu_baseline__) for feature in __cpu_dispatch__: if __cpu_features__[feature]: enabled_features += f' {feature}*' else: enabled_features += f' {feature}?' return enabled_features
Returns a string containing the CPU features supported by the current build. The format of the string can be explained as follows: - Dispatched features supported by the running machine end with . - Dispatched features not supported by the running machine end with . - Remaining features represent the baseline. Returns: str: A formatted string indicating the supported CPU features.
function
numpy\numpy\lib\_utils_impl.py
FunctionDef name:_opt_info arguments If BoolOp Compare Call Compare Call Return return:yes Assign Call For If Return return:yes
django
__str__
def __str__(self): return str(self.tuple)
Return the string representation of the coordinate sequence.
method
django\django\contrib\gis\geos\coordseq.py
FunctionDef name:__str__ arg:self arguments arg Return return:yes Call
pytorch
_post_reduce_grad_callback
@no_type_check def _post_reduce_grad_callback(state: _FSDPState, handle: FlatParamHandle, grad_to_offload: torch.Tensor): _offload_grad(state, handle, grad_to_offload) _post_backward_use_sharded_grad_views(handle)
This callback captures any logic to run after the gradient reduction finishes. Currently, this offloads the gradient to CPU if CPU offloading is enabled and uses sharded gradient views if ``.
function
pytorch\torch\distributed\fsdp\_runtime_utils.py
FunctionDef name:_post_reduce_grad_callback arg:state arg:handle arg:grad_to_offload arguments arg arg arg Call Call
matplotlib
clf
def clf() -> None: gcf().clear()
Clear the current figure.
function
matplotlib\lib\matplotlib\pyplot.py
FunctionDef name:clf arguments Call Call
pytorch
add_summary
def add_summary(self, summary, global_step=None, walltime=None): event = event_pb2.Event(summary=summary) self.add_event(event, global_step, walltime)
Add a protocol buffer to the event file. This method wraps the provided summary in an protocol buffer and adds it to the event file. Args: summary: A protocol buffer. global_step: Number. Optional global step value for training process to record with the summary. walltime: float. Optional walltime to override the default (current) walltime (from time.time()) seconds after epoch
method
pytorch\torch\utils\tensorboard\writer.py
FunctionDef name:add_summary arg:self arg:summary arg:global_step arg:walltime arguments arg arg arg arg Assign Call Call
scipy
__new__
def __new__(cls, *system, **kwargs): if cls is LinearTimeInvariant: raise NotImplementedError('The LinearTimeInvariant class is not meant to be used directly, use `lti` or `dlti` instead.') return super().__new__(cls)
Create a new object, don't allow direct instances.
method
scipy\scipy\signal\_ltisys.py
FunctionDef name:__new__ arg:cls arguments arg arg arg If Compare Raise Call Return return:yes Call Call
scikit-learn
_bisect
def _bisect(self, X, x_squared_norms, sample_weight, cluster_to_bisect): X = X[cluster_to_bisect.indices] x_squared_norms = x_squared_norms[cluster_to_bisect.indices] sample_weight = sample_weight[cluster_to_bisect.indices] best_inertia = None for _ in range(self.n_init): centers_init = self._init_centroids(X, x_squared_norms=x_squared_norms, init=self.init, random_state=self._random_state, n_centroids=2, sample_weight=sample_weight) labels, inertia, centers, _ = self._kmeans_single(X, sample_weight, centers_init, max_iter=self.max_iter, verbose=self.verbose, tol=self.tol, n_threads=self._n_threads) if best_inertia is None or inertia < best_inertia * (1 - 1e-06): best_labels = labels best_centers = centers best_inertia = inertia if self.verbose: print(f'New centroids from bisection: {best_centers}') if self.bisecting_strategy == 'biggest_inertia': scores = self._inertia_per_cluster(X, best_centers, best_labels, sample_weight) else: scores = np.bincount(best_labels, minlength=2) cluster_to_bisect.split(best_labels, best_centers, scores)
Split a cluster into 2 subsclusters. Parameters ---------- X : {ndarray, csr_matrix} of shape (n_samples, n_features) Training instances to cluster. x_squared_norms : ndarray of shape (n_samples,) Squared euclidean norm of each data point. sample_weight : ndarray of shape (n_samples,) The weights for each observation in X. cluster_to_bisect : _BisectingTree node object The cluster node to split.
method
scikit-learn\sklearn\cluster\_bisect_k_means.py
FunctionDef name:_bisect arg:self arg:X arg:x_squared_norms arg:sample_weight arg:cluster_to_bisect arguments arg arg arg arg arg Assign Assign Assign Assign For Call Assign Call Assign Call If BoolOp Compare Compare Assign Assign Assign If Call If Compare Assign Call Assign Call Call

Overview

This dataset contains 34,000+ rows of code-docstring-AST data along with additional metadata. The data was gathered from the publicly available GitHub repositories of various Python libraries and frameworks. The dataset was created to train the CodeT5+ transformer on AST-enhanced code-to-docstring generation.

Sources

The dataset was gathered from GitHub repos sampled from Vinta's curated list of Python projects.

The 26 repos are:

  • matplotlib
  • pytorch
  • cryptography
  • django
  • prospector
  • scikit-learn
  • pandas
  • numpy
  • uvicorn
  • feincms
  • algorithms
  • scrapy
  • authlib
  • seaborn
  • coconut
  • tensorflow
  • flexx
  • salmon
  • mongo-python-driver
  • virtualenv
  • sphinx
  • schema
  • kornia
  • scipy
  • cherrypy
  • pygame

Sampling was informal: I simply browsed the categories in Vinta's list and chose a repo from whichever category looked interesting.

Dataset Instance

An instance of the dataset is as follows:

{
  <library> : <The library the source code came from>,
  <name> : <The name of the function/class/method>,
  <source_code> : <The raw source code itself stripped of its docstrings and comments>,
  <docstring> : <The corresponding docstring of the code>,
  <type> : <Whether it's a function, method, or class>,
  <file_path> : <The relative path of the file containing the function>,
  <ast_data> : <A flattened representation of the parsed AST. For more info about this, see the section below>
}
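
For reference, a row can be loaded and inspected with the Hugging Face datasets library. This is a minimal sketch: the repo id below is a placeholder (substitute the dataset's actual path on the Hub), and the split name is taken from the statistics further down.

from datasets import load_dataset

# Placeholder repo id -- replace with the dataset's actual path on the Hub.
ds = load_dataset("your-username/code-docstring-ast", split="train")

sample = ds[0]
print(sample["library"], sample["type"], sample["name"])
print(sample["source_code"])
print(sample["docstring"])
print(sample["ast_data"])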

The AST Data

The extractor focuses only on specific node types relevant to docstring generation, denoted by this set:

KEEP_NODES = {
    'FunctionDef', 'AsyncFunctionDef', 'ClassDef',
    'arguments', 'arg', 'Return',
    'If', 'For', 'While', 'Try', 'With',
    'Assign', 'Call',
    'Raise', 'ExceptHandler',
    'decorator', 'bases',
    'Compare', 'BoolOp'
}

Everything else is discarded. For example:

Source Code

def tox_append_version_info() -> str: return '[toxfile]'

Resulting AST Dictionary

"ast_data": {
      "type": "FunctionDef",
      "children": [
        {
          "type": "arguments",
          "args": []
        },
        {
          "type": "Return",
          "has_value": true
        }
      ],
      "name": "tox_append_version_info"
    }

This dictionary is then flattened via a helper function; the resulting string looks like FunctionDef name:tox_append_version_info arguments Return return:yes.
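
The helper itself is not reproduced in this card. Below is a minimal sketch of how such a flattening function could be written with Python's ast module, flattening straight from the parsed tree rather than from the intermediate dictionary; the exact implementation used to build the dataset may differ (for instance, in how it orders argument tokens).

import ast

KEEP_NODES = {  # the same set defined above
    'FunctionDef', 'AsyncFunctionDef', 'ClassDef',
    'arguments', 'arg', 'Return',
    'If', 'For', 'While', 'Try', 'With',
    'Assign', 'Call',
    'Raise', 'ExceptHandler',
    'decorator', 'bases',
    'Compare', 'BoolOp',
}

def flatten_ast(node: ast.AST) -> str:
    # Depth-first walk that keeps only KEEP_NODES and emits a flat token string.
    parts = []
    node_type = type(node).__name__
    if node_type in KEEP_NODES:
        parts.append(node_type)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            parts.append(f"name:{node.name}")
        elif isinstance(node, ast.arg):
            parts.append(f"arg:{node.arg}")
        elif isinstance(node, ast.Return):
            parts.append("return:yes" if node.value is not None else "return:no")
    for child in ast.iter_child_nodes(node):
        parts.append(flatten_ast(child))
    return " ".join(p for p in parts if p)

tree = ast.parse("def tox_append_version_info() -> str: return '[toxfile]'")
print(flatten_ast(tree.body[0]))
# FunctionDef name:tox_append_version_info arguments Return return:yes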

Preprocessing

The dataset generally follows CodeBERT's code2nl dataset cleaning standards, which are as follows:

  • Removed comments from the code
  • Removed examples where the code could not be parsed into an AST
  • Removed examples whose docstrings contain special tokens (e.g. <img ...> or https:...); a sketch of such a filter follows this list
  • Removed examples whose docstrings are not in English
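
For illustration, the special-token check could be implemented as a small regex filter over the docstring. This is only a sketch; the exact pattern used to build the dataset is not documented here.

import re

# Hypothetical pattern: HTML-like image tags and bare URLs, per the examples above.
SPECIAL_TOKEN_RE = re.compile(r"<img[^>]*>|https?://\S+")

def has_special_tokens(docstring: str) -> bool:
    return bool(SPECIAL_TOKEN_RE.search(docstring))

print(has_special_tokens("See https://example.com for details"))  # True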

Furthermore, the following cleaning steps specific to this dataset were applied:

  • Removed examples where, using CodeT5+'s tokenizer, the combined source_code + ast_data input exceeds 512 tokens
  • Removed examples where, using CodeT5+'s tokenizer, the docstring exceeds 512 tokens (a sketch of this filter follows the list)
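
A minimal sketch of the length filter is shown below. It assumes a Hugging Face transformers tokenizer for a CodeT5+ checkpoint (Salesforce/codet5p-220m is an assumption; the card does not state which checkpoint's tokenizer was used) and that the model input is the concatenation of source_code and ast_data.

from transformers import AutoTokenizer

# Assumed checkpoint; the card only says "CodeT5+'s tokenizer".
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5p-220m")

MAX_TOKENS = 512

def within_length_budget(example: dict) -> bool:
    # Keep an example only if both the model input and the target docstring fit in 512 tokens.
    input_text = example["source_code"] + " " + example["ast_data"]
    input_len = len(tokenizer(input_text).input_ids)
    target_len = len(tokenizer(example["docstring"]).input_ids)
    return input_len <= MAX_TOKENS and target_len <= MAX_TOKENS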

Final Statistics

{
  "original_samples": 128880,
  "processed_samples": 34537,
  "filter_stats": {
    "success": 34537,
    "non_english": 1202,
    "docstring_too_long": 3848,
    "input_too_long": 8366,
    "docstring_too_short": 74013,
    "error": 0,
    "error: unhashable type: 'list'": 6914
  },
  "split_sizes": {
    "train": 24175,
    "val": 5180,
    "test": 5182
  },
  "input_token_stats": {
    "min": 16,
    "max": 505,
    "avg": 164.071
  },
  "target_token_stats": {
    "min": 4,
    "max": 254,
    "avg": 52.758
  },
  "type_distribution": {
    "method": 12682,
    "function": 8733,
    "class": 2760
  }
}

NOTE

This dataset is imperfect. Use it at your own discretion.
