code : string
signature : string
docstring : string
loss_without_docstring : float64
loss_with_docstring : float64
factor : float64
r12, r22 = self.z1 * self.z1, self.z2 * self.z2
r = np.sqrt(r12 + r22)
return r
def mod_c(self)
Complex modulus
4.176311
3.548812
1.17682
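As a sanity check, the body above can be exercised with a minimal stand-alone sketch; the `Complex` class and its `z1` (real) / `z2` (imaginary) attribute names are hypothetical stand-ins for whatever class `mod_c` belongs to, and `math.sqrt` stands in for `np.sqrt`:

```python
import math

class Complex:
    """Hypothetical container with z1 (real) / z2 (imaginary) parts."""
    def __init__(self, z1, z2):
        self.z1, self.z2 = z1, z2

    def mod_c(self):
        # |z| = sqrt(re**2 + im**2), mirroring the body above.
        r12, r22 = self.z1 * self.z1, self.z2 * self.z2
        return math.sqrt(r12 + r22)

print(Complex(3.0, 4.0).mod_c())  # 5.0
```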
m = len(x)
_assert(n < m, 'len(x) must be larger than n')
weights = np.zeros((m, n + 1))
_fd_weights_all(weights, x, x0, n)
return weights.T
def fd_weights_all(x, x0=0, n=1)
Return finite difference weights for derivatives of all orders up to n.

Parameters
----------
x : vector, length m
    x-coordinates for grid points
x0 : scalar
    location where approximations are to be accurate
n : scalar integer
    highest derivative that we want to find weights for

Returns
-------
weights : array, shape n+1 x m
    contains coefficients for the j'th derivative in row j (0 <= j <= n)

Notes
-----
The x values can be arbitrarily spaced but must be distinct and
len(x) > n. The Fornberg algorithm is much more stable numerically than
regular Vandermonde systems for large values of n.

See also
--------
fd_weights

References
----------
B. Fornberg (1998) "Calculation of weights in finite difference
formulas", SIAM Review 40, pp. 685-691.
http://www.scholarpedia.org/article/Finite_difference_method
3.701678
4.638862
0.797971
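The helper `_fd_weights_all` is not shown in this record; a self-contained sketch of the underlying Fornberg recursion (following the algorithm in the reference above, not the library's exact implementation) is:

```python
def fornberg_weights(x, x0=0.0, n=1):
    """Weights for derivatives 0..n at x0 on the (distinct) grid x.

    Returns a list of n+1 rows; row j holds the weights w such that
    f^(j)(x0) ~= sum(w_i * f(x_i)).
    """
    m = len(x)
    C = [[0.0] * m for _ in range(n + 1)]
    C[0][0] = 1.0
    c1, c4 = 1.0, x[0] - x0
    for i in range(1, m):
        mn = min(i, n)
        c2, c5, c4 = 1.0, c4, x[i] - x0
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                # Weights for the newly added grid point x[i].
                for k in range(mn, 0, -1):
                    C[k][i] = c1 * (k * C[k - 1][i - 1] - c5 * C[k][i - 1]) / c2
                C[0][i] = -c1 * c5 * C[0][i - 1] / c2
            # Update weights for the existing grid points.
            for k in range(mn, 0, -1):
                C[k][j] = (c4 * C[k][j] - k * C[k - 1][j]) / c3
            C[0][j] = c4 * C[0][j] / c3
        c1 = c2
    return C

# A three-point grid around 0 recovers the classic central-difference
# weights for the first derivative.
print(fornberg_weights([-1.0, 0.0, 1.0], x0=0.0, n=1)[1])  # [-0.5, 0.0, 0.5]
```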
num_x = len(x)
_assert(n < num_x, 'len(x) must be larger than n')
_assert(num_x == len(fx), 'len(x) must be equal len(fx)')
du = np.zeros_like(fx)
mm = n // 2 + m
size = 2 * mm + 2  # stencil size at boundary
# 2 * mm boundary points
for i in range(mm):
    du[i] = np.dot(fd_weights(x[:size], x0=x[i], n=n), fx[:size])
    du[-i - 1] = np.dot(fd_weights(x[-size:], x0=x[-i - 1], n=n),
                        fx[-size:])
# interior points
for i in range(mm, num_x - mm):
    du[i] = np.dot(fd_weights(x[i - mm:i + mm + 1], x0=x[i], n=n),
                   fx[i - mm:i + mm + 1])
return du
def fd_derivative(fx, x, n=1, m=2)
Return the n'th derivative for all points using Finite Difference method.

Parameters
----------
fx : vector
    function values which are evaluated on x, i.e. fx[i] = f(x[i])
x : vector
    abscissas on which fx is evaluated. The x values can be arbitrarily
    spaced but must be distinct and len(x) > n.
n : scalar integer
    order of derivative.
m : scalar integer
    defines the stencil size. The stencil size is 2 * mm + 1 points in
    the interior, and 2 * mm + 2 points for each of the 2 * mm boundary
    points, where mm = n // 2 + m.

fd_derivative evaluates an approximation for the n'th derivative of the
vector function f(x) using the Fornberg finite difference method.
Restrictions: 0 < n < len(x) and 2 * mm + 2 <= len(x).

Examples
--------
>>> import numpy as np
>>> import numdifftools.fornberg as ndf
>>> x = np.linspace(-1, 1, 25)
>>> fx = np.exp(x)
>>> df = ndf.fd_derivative(fx, x, n=1)
>>> np.allclose(df, fx)
True

See also
--------
fd_weights
2.752019
2.750626
1.000506
check_points = (-0.4 + 0.3j, 0.7 + 0.2j, 0.02 - 0.06j)
diffs = []
ftests = []
for check_point in check_points:
    rtest = r * check_point
    ztest = z + rtest
    ftest = f(ztest)
    # Evaluate the power series:
    comp = np.sum(bn * np.power(check_point, mvec))
    ftests.append(ftest)
    diffs.append(comp - ftest)
max_abs_error = np.max(np.abs(diffs))
max_f_value = np.max(np.abs(ftests))
return max_abs_error > 1e-3 * max_f_value
def _poor_convergence(z, r, f, bn, mvec)
Test for poor convergence based on three function evaluations.

The test evaluates the power series at three check points and returns
True if the relative error exceeds 1e-3, i.e. if convergence is poor.
3.601862
3.52176
1.022745
_assert(n < 193, 'Number of derivatives too large. Must be less than 193')
correction = np.array([0, 0, 1, 3, 4, 7])[_get_logn(n)]
log2n = _get_logn(n - correction)
m = 2 ** (log2n + 3)
return m
def _num_taylor_coefficients(n)
Return number of taylor coefficients.

Parameters
----------
n : scalar integer
    Wanted number of taylor coefficients

Returns
-------
m : scalar integer
    Number of taylor coefficients calculated:
        8 if n <= 6
        16 if 6 < n <= 12
        32 if 12 < n <= 25
        64 if 25 < n <= 51
        128 if 51 < n <= 103
        256 if 103 < n <= 192
5.777588
5.500472
1.05038
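The mapping documented above can be reproduced with a small threshold table; this is a sketch of the documented behaviour, not the library's logarithm-based implementation:

```python
def num_taylor_coefficients(n):
    # Thresholds copied from the docstring table: the smallest power of
    # two (>= 8) accommodating n coefficients.
    for limit, m in [(6, 8), (12, 16), (25, 32),
                     (51, 64), (103, 128), (192, 256)]:
        if n <= limit:
            return m
    raise ValueError('Number of derivatives too large. '
                     'Must be less than 193')

print(num_taylor_coefficients(25))  # 32
```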
if c is None:
    c = richardson_parameter(vals, k)
return vals[k] - (vals[k] - vals[k - 1]) / c
def richardson(vals, k, c=None)
Richardson extrapolation with parameter estimation
3.589638
3.471691
1.033974
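For a sequence whose error is a pure geometric term, one Richardson step with the right parameter removes the error exactly. A stand-alone sketch with `c` supplied explicitly (the `richardson_parameter` estimator is not shown in this record):

```python
def richardson(vals, k, c):
    # One extrapolation step: vals[k] - (vals[k] - vals[k - 1]) / c.
    return vals[k] - (vals[k] - vals[k - 1]) / c

# vals[k] = 1 + (1/2)**k converges to 1 with geometric error ratio r = 1/2.
vals = [1 + 0.5 ** k for k in range(5)]
# For an error of the form C * r**k the exact parameter is c = 1 - 1/r;
# here r = 1/2 gives c = -1.
print(richardson(vals, 4, c=-1.0))  # 1.0
```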
return Taylor(fun, n=n, r=r, num_extrap=num_extrap,
              step_ratio=step_ratio, **kwds)(z0)
def taylor(fun, z0=0, n=1, r=0.0061, num_extrap=3, step_ratio=1.6, **kwds)
Return Taylor coefficients of complex analytic function using FFT.

Parameters
----------
fun : callable
    function to differentiate
z0 : real or complex scalar
    at which to evaluate the derivatives
n : scalar integer, default 1
    Number of taylor coefficients to compute. Maximum number is 100.
r : real scalar, default 0.0061
    Initial radius at which to evaluate. For well-behaved functions, the
    computation should be insensitive to the initial radius to within
    about four orders of magnitude.
num_extrap : scalar integer, default 3
    number of extrapolation steps used in the calculation
step_ratio : real scalar, default 1.6
    Initial grow/shrinking factor for finding the best radius.
max_iter : scalar integer, default 30
    Maximum number of iterations
min_iter : scalar integer, default max_iter // 2
    Minimum number of iterations before the solution may be deemed
    degenerate. A larger number allows the algorithm to correct a bad
    initial radius.
full_output : bool, optional
    If `full_output` is False, only the coefficients are returned
    (default). If `full_output` is True, then (coefs, status) is
    returned.

Returns
-------
coefs : ndarray
    array of taylor coefficients
status : Optional object into which output information is written:
    degenerate: True if the algorithm was unable to bound the error
    iterations: Number of iterations executed
    function_count: Number of function calls
    final_radius: Ending radius of the algorithm
    failed: True if the maximum number of iterations was reached
    error_estimate: approximate bounds of the rounding error

Notes
-----
This module uses the method of Fornberg to compute the Taylor series
coefficients of a complex analytic function along with error bounds. The
method uses a Fast Fourier Transform to invert function evaluations
around a circle into Taylor series coefficients and uses Richardson
Extrapolation to improve and bound the estimate. Unlike real-valued
finite differences, the method searches for a desirable radius and so is
reasonably insensitive to the initial radius, to within a number of
orders of magnitude at least. For most cases, the default configuration
is likely to succeed.

Restrictions
------------
The method uses the coefficients themselves to control the truncation
error, so the error will not be properly bounded for functions like
low-order polynomials whose Taylor series coefficients are nearly zero.
If the error cannot be bounded, the `degenerate` flag will be set to
True, and an answer will still be computed and returned, but should be
used with caution.

Examples
--------
Compute the first 6 taylor coefficients of 1 / (1 - z) expanded around
z0 = 0:

>>> import numdifftools.fornberg as ndf
>>> import numpy as np
>>> c, info = ndf.taylor(lambda x: 1./(1-x), z0=0, n=6, full_output=True)
>>> np.allclose(c, np.ones(8))
True
>>> np.all(info.error_estimate < 1e-9)
True
>>> (info.function_count, info.iterations, info.failed) == (144, 18, False)
True

References
----------
[1] Fornberg, B. (1981). Numerical Differentiation of Analytic Functions.
    ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526.
    http://doi.org/10.1145/355972.355979
2.167357
3.891861
0.556895
result = taylor(fun, z0, n=n, **kwds)
# Convert the taylor series into actual derivatives.
m = _num_taylor_coefficients(n)
fact = factorial(np.arange(m))
if kwds.get('full_output'):
    coefs, info_ = result
    info = _INFO(info_.error_estimate * fact, *info_[1:])
    return coefs * fact, info
return result * fact
def derivative(fun, z0, n=1, **kwds)
Calculate n-th derivative of complex analytic function using FFT.

Parameters
----------
fun : callable
    function to differentiate
z0 : real or complex scalar
    at which to evaluate the derivatives
n : scalar integer, default 1
    Number of derivatives to compute, where 0 represents the value of
    the function and n represents the nth derivative. Maximum number is
    100.
r : real scalar, default 0.0061
    Initial radius at which to evaluate. For well-behaved functions, the
    computation should be insensitive to the initial radius to within
    about four orders of magnitude.
max_iter : scalar integer, default 30
    Maximum number of iterations
min_iter : scalar integer, default max_iter // 2
    Minimum number of iterations before the solution may be deemed
    degenerate. A larger number allows the algorithm to correct a bad
    initial radius.
step_ratio : real scalar, default 1.6
    Initial grow/shrinking factor for finding the best radius.
num_extrap : scalar integer, default 3
    number of extrapolation steps used in the calculation
full_output : bool, optional
    If `full_output` is False, only the derivative is returned
    (default). If `full_output` is True, then (der, status) is returned;
    `der` is the derivative, and `status` is a Results object.

Returns
-------
der : ndarray
    array of derivatives
status : Optional object into which output information is written. Fields:
    degenerate: True if the algorithm was unable to bound the error
    iterations: Number of iterations executed
    function_count: Number of function calls
    final_radius: Ending radius of the algorithm
    failed: True if the maximum number of iterations was reached
    error_estimate: approximate bounds of the rounding error

Notes
-----
This module uses the method of Fornberg to compute the derivatives of a
complex analytic function along with error bounds. The method uses a
Fast Fourier Transform to invert function evaluations around a circle
into Taylor series coefficients, uses Richardson Extrapolation to
improve and bound the estimate, then multiplies by a factorial to
compute the derivatives. Unlike real-valued finite differences, the
method searches for a desirable radius and so is reasonably insensitive
to the initial radius, to within a number of orders of magnitude at
least. For most cases, the default configuration is likely to succeed.

Restrictions
------------
The method uses the coefficients themselves to control the truncation
error, so the error will not be properly bounded for functions like
low-order polynomials whose Taylor series coefficients are nearly zero.
If the error cannot be bounded, the `degenerate` flag will be set to
True, and an answer will still be computed and returned, but should be
used with caution.

Examples
--------
Compute the first 6 derivatives of 1 / (1 - z) at z0 = 0:

>>> import numdifftools.fornberg as ndf
>>> import numpy as np
>>> def fun(x):
...     return 1./(1-x)
>>> c, info = ndf.derivative(fun, z0=0, n=6, full_output=True)
>>> np.allclose(c, [1, 1, 2, 6, 24, 120, 720, 5040])
True
>>> np.all(info.error_estimate < 1e-9*c.real)
True
>>> (info.function_count, info.iterations, info.failed) == (144, 18, False)
True

References
----------
[1] Fornberg, B. (1981). Numerical Differentiation of Analytic Functions.
    ACM Transactions on Mathematical Software (TOMS), 7(4), 512-526.
    http://doi.org/10.1145/355972.355979
7.41344
7.74562
0.957114
reverse = mo.DECSCNM in self.mode
return Char(data=" ", fg="default", bg="default", reverse=reverse)
def default_char(self)
An empty character with default foreground and background colors.
34.538921
23.646709
1.460623
def render(line):
    is_wide_char = False
    for x in range(self.columns):
        if is_wide_char:  # Skip the stub of a wide character.
            is_wide_char = False
            continue
        char = line[x].data
        assert sum(map(wcwidth, char[1:])) == 0
        is_wide_char = wcwidth(char[0]) == 2
        yield char

return ["".join(render(self.buffer[y])) for y in range(self.lines)]
def display(self)
A :func:`list` of screen lines as unicode strings.
6.083933
5.252027
1.158397
self.dirty.update(range(self.lines))
self.buffer.clear()
self.margins = None

self.mode = set([mo.DECAWM, mo.DECTCEM])

self.title = ""
self.icon_name = ""

self.charset = 0
self.g0_charset = cs.LAT1_MAP
self.g1_charset = cs.VT100_MAP

# From ``man terminfo`` -- "... hardware tabs are initially set every
# `n` spaces when the terminal is powered up." Since we aim to support
# VT102 / VT220 and linux -- we use n = 8.
self.tabstops = set(range(8, self.columns, 8))

self.cursor = Cursor(0, 0)
self.cursor_position()

self.saved_columns = None
def reset(self)
Reset the terminal to its initial state.

* Scrolling margins are reset to screen boundaries.
* Cursor is moved to home location -- ``(0, 0)`` and its attributes are
  set to defaults (see :attr:`default_char`).
* Screen is cleared -- each character is reset to :attr:`default_char`.
* Tabstops are reset to "every eight columns".
* All lines are marked as :attr:`dirty`.

.. note::

   Neither VT220 nor VT102 manuals mention that terminal modes and
   tabstops should be reset as well, thanks to :manpage:`xterm` -- we
   now know that.
10.379145
8.436747
1.230231
lines = lines or self.lines
columns = columns or self.columns

if lines == self.lines and columns == self.columns:
    return  # No changes.

self.dirty.update(range(lines))

if lines < self.lines:
    self.save_cursor()
    self.cursor_position(0, 0)
    self.delete_lines(self.lines - lines)  # Drop from the top.
    self.restore_cursor()

if columns < self.columns:
    for line in self.buffer.values():
        for x in range(columns, self.columns):
            line.pop(x, None)

self.lines, self.columns = lines, columns
self.set_margins()
def resize(self, lines=None, columns=None)
Resize the screen to the given size.

If the requested screen size has more lines than the existing screen,
lines will be added at the bottom. If the requested size has fewer
lines than the existing screen, lines will be clipped at the top of the
screen. Similarly, if the existing screen has fewer columns than the
requested screen, columns will be added at the right, and if it has
more -- columns will be clipped at the right.

:param int lines: number of lines in the new screen.
:param int columns: number of columns in the new screen.

.. versionchanged:: 0.7.0

   If the requested screen size is identical to the current screen
   size, the method does nothing.
2.990275
3.456776
0.865047
# XXX 0 corresponds to the CSI with no parameters.
if (top is None or top == 0) and bottom is None:
    self.margins = None
    return

margins = self.margins or Margins(0, self.lines - 1)

# Arguments are 1-based, while :attr:`margins` are zero-based -- so we
# have to decrement them by one. We also make sure that both of them
# are bounded by [0, lines - 1].
if top is None:
    top = margins.top
else:
    top = max(0, min(top - 1, self.lines - 1))
if bottom is None:
    bottom = margins.bottom
else:
    bottom = max(0, min(bottom - 1, self.lines - 1))

# Even though VT102 and VT220 require DECSTBM to ignore regions of
# width less than 2, some programs (like aptitude for example) rely on
# it. Practicality beats purity.
if bottom - top >= 1:
    self.margins = Margins(top, bottom)

    # The cursor moves to the home position when the top and bottom
    # margins of the scrolling region (DECSTBM) change.
    self.cursor_position()
def set_margins(self, top=None, bottom=None)
Select top and bottom margins for the scrolling region. :param int top: the smallest line number that is scrolled. :param int bottom: the biggest line number that is scrolled.
5.377151
5.206276
1.032821
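The clamping arithmetic above can be isolated into a stand-alone helper (the function name is hypothetical, for illustration): DECSTBM arguments are 1-based, margins are 0-based, and regions narrower than two lines are rejected.

```python
def clamp_margins(top, bottom, lines):
    # 1-based DECSTBM arguments -> 0-based margins, each bounded by
    # [0, lines - 1]; a scrolling region must span at least two lines.
    top = max(0, min(top - 1, lines - 1))
    bottom = max(0, min(bottom - 1, lines - 1))
    return (top, bottom) if bottom - top >= 1 else None

print(clamp_margins(5, 500, 24))  # (4, 23): bottom clipped to the screen
```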
# Private mode codes are shifted, to be distinguished from non-private
# ones.
if kwargs.get("private"):
    modes = [mode << 5 for mode in modes]
    if mo.DECSCNM in modes:
        self.dirty.update(range(self.lines))

self.mode.update(modes)

# When DECCOLM mode is set, the screen is erased and the cursor moves
# to the home position.
if mo.DECCOLM in modes:
    self.saved_columns = self.columns
    self.resize(columns=132)
    self.erase_in_display(2)
    self.cursor_position()

# According to VT520 manual, DECOM should also home the cursor.
if mo.DECOM in modes:
    self.cursor_position()

# Mark all displayed characters as reverse.
if mo.DECSCNM in modes:
    for line in self.buffer.values():
        line.default = self.default_char
        for x in line:
            line[x] = line[x]._replace(reverse=True)
    self.select_graphic_rendition(7)  # +reverse.

# Make the cursor visible.
if mo.DECTCEM in modes:
    self.cursor.hidden = False
def set_mode(self, *modes, **kwargs)
Set (enable) a given list of modes. :param list modes: modes to set, where each mode is a constant from :mod:`pyte.modes`.
6.43054
6.472188
0.993565
if code in cs.MAPS:
    if mode == "(":
        self.g0_charset = cs.MAPS[code]
    elif mode == ")":
        self.g1_charset = cs.MAPS[code]
def define_charset(self, code, mode)
Define ``G0`` or ``G1`` charset.

:param str code: character set code, should be a character from
    ``"B0UK"``, otherwise ignored.
:param str mode: if ``"("`` -- ``G0`` charset is defined, if ``")"`` --
    we operate on ``G1``.

.. warning:: User-defined charsets are currently not supported.
5.104568
3.831189
1.332372
data = data.translate(
    self.g1_charset if self.charset else self.g0_charset)

for char in data:
    char_width = wcwidth(char)

    # If this was the last column in a line and auto wrap mode is
    # enabled, move the cursor to the beginning of the next line,
    # otherwise replace characters already displayed with newly
    # entered.
    if self.cursor.x == self.columns:
        if mo.DECAWM in self.mode:
            self.dirty.add(self.cursor.y)
            self.carriage_return()
            self.linefeed()
        elif char_width > 0:
            self.cursor.x -= char_width

    # If Insert mode is set, new characters move old characters to
    # the right, otherwise terminal is in Replace mode and new
    # characters replace old characters at cursor position.
    if mo.IRM in self.mode and char_width > 0:
        self.insert_characters(char_width)

    line = self.buffer[self.cursor.y]
    if char_width == 1:
        line[self.cursor.x] = self.cursor.attrs._replace(data=char)
    elif char_width == 2:
        # A two-cell character has a stub slot after it.
        line[self.cursor.x] = self.cursor.attrs._replace(data=char)
        if self.cursor.x + 1 < self.columns:
            line[self.cursor.x + 1] = self.cursor.attrs \
                ._replace(data="")
    elif char_width == 0 and unicodedata.combining(char):
        # A zero-cell character is combined with the previous
        # character either on this or the preceding line.
        if self.cursor.x:
            last = line[self.cursor.x - 1]
            normalized = unicodedata.normalize("NFC", last.data + char)
            line[self.cursor.x - 1] = last._replace(data=normalized)
        elif self.cursor.y:
            last = self.buffer[self.cursor.y - 1][self.columns - 1]
            normalized = unicodedata.normalize("NFC", last.data + char)
            self.buffer[self.cursor.y - 1][self.columns - 1] = \
                last._replace(data=normalized)
    else:
        break  # Unprintable character or doesn't advance the cursor.

    # .. note:: We can't use :meth:`cursor_forward()`, because that
    #           way, we'll never know when to linefeed.
    if char_width > 0:
        self.cursor.x = min(self.cursor.x + char_width, self.columns)

self.dirty.add(self.cursor.y)
def draw(self, data)
Display decoded characters at the current cursor position and advance
the cursor if :data:`~pyte.modes.DECAWM` is set.

:param str data: text to display.

.. versionchanged:: 0.5.0

   Character width is taken into account. Specifically, zero-width and
   unprintable characters do not affect screen state. Full-width
   characters are rendered into two consecutive character containers.
3.704601
3.430418
1.079927
top, bottom = self.margins or Margins(0, self.lines - 1)

if self.cursor.y == bottom:
    # TODO: mark only the lines within margins?
    self.dirty.update(range(self.lines))
    for y in range(top, bottom):
        self.buffer[y] = self.buffer[y + 1]
    self.buffer.pop(bottom, None)
else:
    self.cursor_down()
def index(self)
Move the cursor down one line in the same column. If the cursor is at the last line, create a new line at the bottom.
6.468721
5.209255
1.241775
top, bottom = self.margins or Margins(0, self.lines - 1)

if self.cursor.y == top:
    # TODO: mark only the lines within margins?
    self.dirty.update(range(self.lines))
    for y in range(bottom, top, -1):
        self.buffer[y] = self.buffer[y - 1]
    self.buffer.pop(top, None)
else:
    self.cursor_up()
def reverse_index(self)
Move the cursor up one line in the same column. If the cursor is at the first line, create a new line at the top.
5.900881
4.891219
1.206423
self.index()

if mo.LNM in self.mode:
    self.carriage_return()
def linefeed(self)
Perform an index and, if :data:`~pyte.modes.LNM` is set, a carriage return.
33.586933
13.390537
2.508259
for stop in sorted(self.tabstops):
    if self.cursor.x < stop:
        column = stop
        break
else:
    column = self.columns - 1

self.cursor.x = column
def tab(self)
Move to the next tab stop, or to the end of the screen if there are no
more tab stops left.
6.031346
4.932523
1.222771
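The stop-selection logic above is simply "first tab stop strictly to the right of the cursor, else the last column"; a stand-alone sketch (the function name is hypothetical):

```python
def next_tab_column(x, tabstops, columns):
    # First tab stop strictly to the right of x, else the last column.
    for stop in sorted(tabstops):
        if x < stop:
            return stop
    return columns - 1

tabstops = set(range(8, 80, 8))  # "every eight columns" on an 80-wide screen
print(next_tab_column(0, tabstops, 80))  # 8
```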
self.savepoints.append(Savepoint(copy.copy(self.cursor),
                                 self.g0_charset,
                                 self.g1_charset,
                                 self.charset,
                                 mo.DECOM in self.mode,
                                 mo.DECAWM in self.mode))
def save_cursor(self)
Push the current cursor position onto the stack.
11.83103
10.166073
1.163776
if self.savepoints:
    savepoint = self.savepoints.pop()

    self.g0_charset = savepoint.g0_charset
    self.g1_charset = savepoint.g1_charset
    self.charset = savepoint.charset

    if savepoint.origin:
        self.set_mode(mo.DECOM)
    if savepoint.wrap:
        self.set_mode(mo.DECAWM)

    self.cursor = savepoint.cursor
    self.ensure_hbounds()
    self.ensure_vbounds(use_margins=True)
else:
    # If nothing was saved, the cursor moves to home position;
    # origin mode is reset. :todo: DECAWM?
    self.reset_mode(mo.DECOM)
    self.cursor_position()
def restore_cursor(self)
Set the current cursor position to whatever cursor is on top of the stack.
5.894153
5.673369
1.038916
count = count or 1
top, bottom = self.margins or Margins(0, self.lines - 1)

# If the cursor is outside the scrolling margins, do nothing.
if top <= self.cursor.y <= bottom:
    self.dirty.update(range(self.cursor.y, self.lines))
    for y in range(bottom, self.cursor.y - 1, -1):
        if y + count <= bottom and y in self.buffer:
            self.buffer[y + count] = self.buffer[y]
        self.buffer.pop(y, None)

    self.carriage_return()
def insert_lines(self, count=None)
Insert the indicated # of lines at line with cursor. Lines displayed **at** and below the cursor move down. Lines moved past the bottom margin are lost. :param count: number of lines to insert.
5.177877
5.434926
0.952704
self.dirty.add(self.cursor.y)

count = count or 1
line = self.buffer[self.cursor.y]
for x in range(self.columns, self.cursor.x - 1, -1):
    if x + count <= self.columns:
        line[x + count] = line[x]
    line.pop(x, None)
def insert_characters(self, count=None)
Insert the indicated # of blank characters at the cursor position. The cursor does not move and remains at the beginning of the inserted blank characters. Data on the line is shifted forward. :param int count: number of characters to insert.
4.017681
4.351581
0.923269
self.dirty.add(self.cursor.y)

count = count or 1
line = self.buffer[self.cursor.y]
for x in range(self.cursor.x,
               min(self.cursor.x + count, self.columns)):
    line[x] = self.cursor.attrs
def erase_characters(self, count=None)
Erase the indicated # of characters, starting with the character at the
cursor position. Character attributes are set to cursor attributes. The
cursor remains in the same position.

:param int count: number of characters to erase.

.. note::

   Using cursor attributes for character attributes may seem illogical,
   but if you recall that a terminal emulator emulates a typewriter, it
   starts to make sense. The only way a typewriter could erase a
   character is by typing over it.
3.929976
4.358742
0.901631
self.dirty.add(self.cursor.y)
if how == 0:
    interval = range(self.cursor.x, self.columns)
elif how == 1:
    interval = range(self.cursor.x + 1)
elif how == 2:
    interval = range(self.columns)

line = self.buffer[self.cursor.y]
for x in interval:
    line[x] = self.cursor.attrs
def erase_in_line(self, how=0, private=False)
Erase a line in a specific way.

Character attributes are set to cursor attributes.

:param int how: defines the way the line should be erased in:

    * ``0`` -- Erases from cursor to end of line, including cursor
      position.
    * ``1`` -- Erases from beginning of line to cursor, including
      cursor position.
    * ``2`` -- Erases complete line.
:param bool private: when ``True`` only characters marked as eraseable
    are affected **not implemented**.
2.84769
2.903975
0.980618
if how == 0:
    interval = range(self.cursor.y + 1, self.lines)
elif how == 1:
    interval = range(self.cursor.y)
elif how == 2 or how == 3:
    interval = range(self.lines)

self.dirty.update(interval)
for y in interval:
    line = self.buffer[y]
    for x in line:
        line[x] = self.cursor.attrs

if how == 0 or how == 1:
    self.erase_in_line(how)
def erase_in_display(self, how=0, *args, **kwargs)
Erases display in a specific way.

Character attributes are set to cursor attributes.

:param int how: defines the way the display should be erased in:

    * ``0`` -- Erases from cursor to end of screen, including cursor
      position.
    * ``1`` -- Erases from beginning of screen to cursor, including
      cursor position.
    * ``2`` and ``3`` -- Erases complete display. All lines are erased
      and changed to single-width. Cursor does not move.
:param bool private: when ``True`` only characters marked as eraseable
    are affected **not implemented**.

.. versionchanged:: 0.8.1

   The method accepts any number of positional arguments as some
   ``clear`` implementations include a ``;`` after the first parameter
   causing the stream to assume a ``0`` second parameter.
3.182497
3.185037
0.999203
if how == 0:
    # Clears a horizontal tab stop at cursor position, if it's
    # present, or silently fails otherwise.
    self.tabstops.discard(self.cursor.x)
elif how == 3:
    self.tabstops = set()  # Clears all horizontal tab stops.
def clear_tab_stop(self, how=0)
Clear a horizontal tab stop.

:param int how: defines the way the tab stop should be cleared:

    * ``0`` or nothing -- Clears a horizontal tab stop at cursor
      position.
    * ``3`` -- Clears all horizontal tab stops.
7.55569
5.1714
1.461053
self.cursor.x = min(max(0, self.cursor.x), self.columns - 1)
def ensure_hbounds(self)
Ensure the cursor is within horizontal screen bounds.
6.260522
3.110991
2.012388
if (use_margins or mo.DECOM in self.mode) and self.margins is not None:
    top, bottom = self.margins
else:
    top, bottom = 0, self.lines - 1

self.cursor.y = min(max(top, self.cursor.y), bottom)
def ensure_vbounds(self, use_margins=None)
Ensure the cursor is within vertical screen bounds.

:param bool use_margins: when ``True`` or when
    :data:`~pyte.modes.DECOM` is set, cursor is bounded by top and
    bottom margins, instead of ``[0; lines - 1]``.
5.264294
3.111628
1.691814
top, _bottom = self.margins or Margins(0, self.lines - 1)
self.cursor.y = max(self.cursor.y - (count or 1), top)
def cursor_up(self, count=None)
Move cursor up the indicated # of lines in same column. Cursor stops at top margin. :param int count: number of lines to skip.
6.456885
6.729429
0.9595
_top, bottom = self.margins or Margins(0, self.lines - 1)
self.cursor.y = min(self.cursor.y + (count or 1), bottom)
def cursor_down(self, count=None)
Move cursor down the indicated # of lines in same column. Cursor stops at bottom margin. :param int count: number of lines to skip.
7.143523
6.602462
1.081948
# Handle the case when we've just drawn in the last column and would
# wrap the line on the next :meth:`draw()` call.
if self.cursor.x == self.columns:
    self.cursor.x -= 1

self.cursor.x -= count or 1
self.ensure_hbounds()
def cursor_back(self, count=None)
Move cursor left the indicated # of columns. Cursor stops at left margin. :param int count: number of columns to skip.
10.167242
10.765308
0.944445
column = (column or 1) - 1
line = (line or 1) - 1

# If origin mode (DECOM) is set, line numbers are relative to the top
# scrolling margin.
if self.margins is not None and mo.DECOM in self.mode:
    line += self.margins.top

    # Cursor is not allowed to move out of the scrolling region.
    if not self.margins.top <= line <= self.margins.bottom:
        return

self.cursor.x = column
self.cursor.y = line
self.ensure_hbounds()
self.ensure_vbounds()
def cursor_position(self, line=None, column=None)
Set the cursor to a specific `line` and `column`. Cursor is allowed to move out of the scrolling region only when :data:`~pyte.modes.DECOM` is reset, otherwise -- the position doesn't change. :param int line: line number to move the cursor to. :param int column: column number to move the cursor to.
5.404924
5.127125
1.054182
self.cursor.y = (line or 1) - 1

# If origin mode (DECOM) is set, line numbers are relative to the top
# scrolling margin.
if mo.DECOM in self.mode:
    self.cursor.y += self.margins.top

    # FIXME: should we also restrict the cursor to the scrolling
    # region?

self.ensure_vbounds()
def cursor_to_line(self, line=None)
Move cursor to a specific line in the current column. :param int line: line number to move the cursor to.
10.433176
12.44341
0.83845
self.dirty.update(range(self.lines))
for y in range(self.lines):
    for x in range(self.columns):
        self.buffer[y][x] = self.buffer[y][x]._replace(data="E")
def alignment_display(self)
Fills screen with uppercase E's for screen focus and alignment.
5.243856
3.736251
1.403508
replace = {}

# Fast path for resetting all attributes.
if not attrs or attrs == (0, ):
    self.cursor.attrs = self.default_char
    return

attrs = list(reversed(attrs))
while attrs:
    attr = attrs.pop()
    if attr == 0:
        # Reset all attributes.
        replace.update(self.default_char._asdict())
    elif attr in g.FG_ANSI:
        replace["fg"] = g.FG_ANSI[attr]
    elif attr in g.BG_ANSI:
        replace["bg"] = g.BG_ANSI[attr]
    elif attr in g.TEXT:
        attr = g.TEXT[attr]
        replace[attr[1:]] = attr.startswith("+")
    elif attr in g.FG_AIXTERM:
        replace.update(fg=g.FG_AIXTERM[attr], bold=True)
    elif attr in g.BG_AIXTERM:
        replace.update(bg=g.BG_AIXTERM[attr], bold=True)
    elif attr in (g.FG_256, g.BG_256):
        key = "fg" if attr == g.FG_256 else "bg"
        try:
            n = attrs.pop()
            if n == 5:    # 256.
                m = attrs.pop()
                replace[key] = g.FG_BG_256[m]
            elif n == 2:  # 24bit.
                # This is somewhat non-standard but is nonetheless
                # supported in quite a few terminals. See discussion
                # here https://gist.github.com/XVilka/8346728.
                replace[key] = "{0:02x}{1:02x}{2:02x}".format(
                    attrs.pop(), attrs.pop(), attrs.pop())
        except IndexError:
            pass

self.cursor.attrs = self.cursor.attrs._replace(**replace)
def select_graphic_rendition(self, *attrs)
Set display attributes. :param list attrs: a list of display attributes to set.
3.213616
3.272947
0.981872
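The 256-colour / 24-bit branch above consumes a variable number of parameters from the SGR list; a stand-alone sketch of just that sub-parser (the function name is hypothetical, mirroring the `n == 5` / `n == 2` branches):

```python
def parse_color_params(params):
    """Parse the parameters following a 38 (fg) / 48 (bg) introducer."""
    params = list(reversed(params))
    n = params.pop()
    if n == 5:   # 256-colour: a single palette index follows.
        return ("indexed", params.pop())
    if n == 2:   # 24-bit: r;g;b components follow.
        return "{0:02x}{1:02x}{2:02x}".format(
            params.pop(), params.pop(), params.pop())
    return None  # Unknown colour-space introducer.

print(parse_color_params([2, 255, 128, 0]))  # ff8000
```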
# We only implement "primary" DA which is the only DA request VT102
# understood, see ``VT102ID`` in ``linux/drivers/tty/vt.c``.
if mode == 0 and not kwargs.get("private"):
    self.write_process_input(ctrl.CSI + "?6c")
def report_device_attributes(self, mode=0, **kwargs)
Report terminal identity. .. versionadded:: 0.5.0 .. versionchanged:: 0.7.0 If ``private`` keyword argument is set, the method does nothing. This behaviour is consistent with VT220 manual.
25.332228
19.776926
1.280898
if mode == 5:    # Request for terminal status.
    self.write_process_input(ctrl.CSI + "0n")
elif mode == 6:  # Request for cursor position.
    x = self.cursor.x + 1
    y = self.cursor.y + 1

    # "Origin mode (DECOM) selects line numbering."
    if mo.DECOM in self.mode:
        y -= self.margins.top
    self.write_process_input(ctrl.CSI + "{0};{1}R".format(y, x))
def report_device_status(self, mode)
Report terminal status or cursor position. :param int mode: if 5 -- terminal status, 6 -- cursor position, otherwise a noop. .. versionadded:: 0.5.0
6.191528
5.435065
1.139182
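The cursor-position branch above converts 0-based screen coordinates into a 1-based CPR reply, adjusting for origin mode; a stand-alone sketch (the function name is hypothetical):

```python
def cursor_position_report(x, y, origin_top=None):
    # 0-based cursor -> 1-based CPR reply; with DECOM set, the line
    # number is reported relative to the top scrolling margin.
    CSI = "\x1b["
    y1, x1 = y + 1, x + 1
    if origin_top is not None:
        y1 -= origin_top
    return CSI + "{0};{1}R".format(y1, x1)

print(repr(cursor_position_report(0, 0)))  # '\x1b[1;1R'
```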
if event not in ["prev_page", "next_page"]:
    while self.history.position < self.history.size:
        self.next_page()
def before_event(self, event)
Ensure a screen is at the bottom of the history buffer. :param str event: event name, for example ``"linefeed"``.
7.630309
7.933999
0.961723
if event in ["prev_page", "next_page"]:
    for line in self.buffer.values():
        # Iterate over a snapshot of the keys, since we pop while
        # iterating.
        for x in list(line):
            if x > self.columns:
                line.pop(x)

# If we're at the bottom of the history buffer and `DECTCEM` mode is
# set -- show the cursor.
self.cursor.hidden = not (
    self.history.position == self.history.size and
    mo.DECTCEM in self.mode
)
def after_event(self, event)
Ensure all lines on a screen have proper width (:attr:`columns`). Extra characters are truncated, missing characters are filled with whitespace. :param str event: event name, for example ``"linefeed"``.
8.722448
8.357073
1.043721
super(HistoryScreen, self).erase_in_display(how, *args, **kwargs)

if how == 3:
    self._reset_history()
def erase_in_display(self, how=0, *args, **kwargs)
Overloaded to reset history state.
4.757756
3.352547
1.419147
top, bottom = self.margins or Margins(0, self.lines - 1)
if self.cursor.y == bottom:
    self.history.top.append(self.buffer[top])

super(HistoryScreen, self).index()
def index(self)
Overloaded to update top history with the removed lines.
9.998446
7.768658
1.287023
if self.history.position > self.lines and self.history.top:
    mid = min(len(self.history.top),
              int(math.ceil(self.lines * self.history.ratio)))

    self.history.bottom.extendleft(
        self.buffer[y]
        for y in range(self.lines - 1, self.lines - mid - 1, -1))
    self.history = self.history \
        ._replace(position=self.history.position - mid)

    for y in range(self.lines - 1, mid - 1, -1):
        self.buffer[y] = self.buffer[y - mid]
    for y in range(mid - 1, -1, -1):
        self.buffer[y] = self.history.top.pop()

    self.dirty = set(range(self.lines))
def prev_page(self)
Move the screen page up through the history buffer.

Page size is defined by ``history.ratio``, so for instance
``ratio = .5`` means that half the screen is restored from
history on page switch.
3.304693
3.064532
1.078368
if self.history.position < self.history.size and self.history.bottom:
    mid = min(len(self.history.bottom),
              int(math.ceil(self.lines * self.history.ratio)))

    self.history.top.extend(self.buffer[y] for y in range(mid))
    self.history = self.history \
        ._replace(position=self.history.position + mid)

    for y in range(self.lines - mid):
        self.buffer[y] = self.buffer[y + mid]
    for y in range(self.lines - mid, self.lines):
        self.buffer[y] = self.history.bottom.popleft()

    self.dirty = set(range(self.lines))
def next_page(self)
Move the screen page down through the history buffer.
3.766107
3.548038
1.061462
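The paging arithmetic in ``prev_page``/``next_page`` above can be sketched with plain lists and deques. This is an illustrative model of the data movement only, not the screen's actual buffer types; the function name and parameters are assumptions for the sketch.

```python
from collections import deque
import math

def prev_page(buffer, top, bottom, lines, ratio):
    """Restore `mid` lines from `top` history onto the screen, pushing the
    bottom `mid` screen lines onto `bottom` history. Returns `mid`."""
    mid = min(len(top), int(math.ceil(lines * ratio)))
    for _ in range(mid):
        bottom.appendleft(buffer.pop())   # bottom screen line -> bottom history
        buffer.insert(0, top.pop())       # newest top-history line -> screen top
    return mid

top = deque(["a", "b", "c"])   # history above the screen, oldest first
bottom = deque()               # history below the screen
buffer = ["x", "y", "z"]       # visible screen lines, top to bottom
mid = prev_page(buffer, top, bottom, lines=3, ratio=0.5)
```

With ``ratio = 0.5`` and three screen lines, two lines are restored per page switch, matching the "half the screen" behaviour the docstring describes.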
if self.listener is not None:
    warnings.warn("As of version 0.6.0 the listener queue is "
                  "restricted to a single element. Existing "
                  "listener {0} will be replaced."
                  .format(self.listener), DeprecationWarning)

if self.strict:
    for event in self.events:
        if not hasattr(screen, event):
            raise TypeError("{0} is missing {1}".format(screen, event))

self.listener = screen
self._parser = None
self._initialize_parser()
def attach(self, screen)
Adds a given screen to the listener queue.

:param pyte.screens.Screen screen: a screen to attach to.
5.325392
4.963192
1.072977
send = self._send_to_parser
draw = self.listener.draw
match_text = self._text_pattern.match
taking_plain_text = self._taking_plain_text

length = len(data)
offset = 0
while offset < length:
    if taking_plain_text:
        match = match_text(data, offset)
        if match:
            start, offset = match.span()
            draw(data[start:offset])
        else:
            taking_plain_text = False
    else:
        taking_plain_text = send(data[offset:offset + 1])
        offset += 1

self._taking_plain_text = taking_plain_text
def feed(self, data)
Consume some data and advance the state as necessary.

:param str data: a blob of data to feed from.
3.470231
3.666207
0.946545
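The fast path in ``feed`` above can be illustrated in isolation: runs of printable text are consumed with a single regex match and drawn in one call, while other bytes are fed to the parser one at a time. The pattern and function below are illustrative sketches, not pyte's actual ``_text_pattern`` or API.

```python
import re

# Illustrative stand-in for the "plain text" pattern: anything that is not a
# C0 control character.
text_pattern = re.compile(r"[^\x00-\x1f]+")

def split_stream(data):
    """Return a list of ("draw", text) and ("ctrl", char) events."""
    events = []
    offset = 0
    while offset < len(data):
        match = text_pattern.match(data, offset)
        if match:
            # A run of printable text is emitted as one draw event.
            start, offset = match.span()
            events.append(("draw", data[start:offset]))
        else:
            # Control characters are handled one byte at a time.
            events.append(("ctrl", data[offset]))
            offset += 1
    return events

events = split_stream("ab\ncd")
```

Batching the printable runs is the whole point of the fast path: one ``draw`` call per run instead of one per character.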
global is_shutting_down
is_shutting_down = True
for task in app["websockets"]:
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
async def on_shutdown(app)
Closes all WS connections on shutdown.
3.334846
2.853949
1.168503
D = len(V[0])
N = len(V)
result = 0.0
for v, w in zip(V, W):
    result += sum([(v[i] - w[i])**2.0 for i in range(D)])
return np.sqrt(result / N)
def rmsd(V, W)
Calculate Root-mean-square deviation from two sets of vectors V and W.

Parameters
----------
V : array
    (N,D) matrix, where N is points and D is dimension.
W : array
    (N,D) matrix, where N is points and D is dimension.

Returns
-------
rmsd : float
    Root-mean-square deviation between the two vectors
2.256982
2.971889
0.759443
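The element-wise loop above can also be written as a single vectorized NumPy expression. The function name here is illustrative, not from the source:

```python
import numpy as np

def rmsd_vectorized(V, W):
    """Root-mean-square deviation between paired rows of V and W."""
    V = np.asarray(V, dtype=float)
    W = np.asarray(W, dtype=float)
    # Mean of squared row-wise distances, then the square root.
    return np.sqrt(np.mean(np.sum((V - W) ** 2, axis=1)))

# Two points in 2D, each offset by (1, 0): every row-wise distance is 1,
# so the RMSD is exactly 1.
a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = np.array([[1.0, 0.0], [2.0, 1.0]])
```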
if translate:
    Q = Q - centroid(Q)
    P = P - centroid(P)

P = kabsch_rotate(P, Q)
return rmsd(P, Q)
def kabsch_rmsd(P, Q, translate=False)
Rotate matrix P unto Q using Kabsch algorithm and calculate the RMSD.

Parameters
----------
P : array
    (N,D) matrix, where N is points and D is dimension.
Q : array
    (N,D) matrix, where N is points and D is dimension.
translate : bool
    Use centroids to translate vector P and Q unto each other.

Returns
-------
rmsd : float
    root-mean squared deviation
3.144368
4.382146
0.717541
U = kabsch(P, Q)

# Rotate P
P = np.dot(P, U)
return P
def kabsch_rotate(P, Q)
Rotate matrix P unto matrix Q using Kabsch algorithm.

Parameters
----------
P : array
    (N,D) matrix, where N is points and D is dimension.
Q : array
    (N,D) matrix, where N is points and D is dimension.

Returns
-------
P : array
    (N,D) matrix, where N is points and D is dimension, rotated
4.486374
10.689507
0.419699
# Computation of the covariance matrix
C = np.dot(np.transpose(P), Q)

# Computation of the optimal rotation matrix
# This can be done using singular value decomposition (SVD)
# Getting the sign of the det(V)*(W) to decide
# whether we need to correct our rotation matrix to ensure a
# right-handed coordinate system.
# And finally calculating the optimal rotation matrix U
# see http://en.wikipedia.org/wiki/Kabsch_algorithm
V, S, W = np.linalg.svd(C)
d = (np.linalg.det(V) * np.linalg.det(W)) < 0.0

if d:
    S[-1] = -S[-1]
    V[:, -1] = -V[:, -1]

# Create Rotation matrix U
U = np.dot(V, W)

return U
def kabsch(P, Q)
Using the Kabsch algorithm with two sets of paired points P and Q, centered
around the centroid. Each vector set is represented as an NxD matrix, where
D is the dimension of the space.

The algorithm works in three steps:
- a centroid translation of P and Q (assumed done before this function call)
- the computation of a covariance matrix C
- computation of the optimal rotation matrix U

For more info see http://en.wikipedia.org/wiki/Kabsch_algorithm

Parameters
----------
P : array
    (N,D) matrix, where N is points and D is dimension.
Q : array
    (N,D) matrix, where N is points and D is dimension.

Returns
-------
U : matrix
    Rotation matrix (D,D)
2.9927
2.902829
1.03096
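The three steps the docstring lists can be exercised end to end: if Q is an exact rotation of a centered, full-rank P, the SVD-based procedure recovers that rotation. This is a self-contained sketch of the same technique (the function name is illustrative):

```python
import numpy as np

def kabsch_sketch(P, Q):
    """Optimal rotation U such that P @ U best matches Q (both centered)."""
    # Covariance matrix of the two centered point sets.
    C = P.T @ Q
    V, S, W = np.linalg.svd(C)
    # Correct for a possible reflection (left-handed coordinate system).
    if np.linalg.det(V) * np.linalg.det(W) < 0.0:
        V[:, -1] = -V[:, -1]
    return V @ W

# A centered, non-planar point set (centroid is the origin) rotated by
# 90 degrees about the z axis; Kabsch should recover R exactly.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -1.0, -1.0]])
Q = P @ R
U = kabsch_sketch(P, Q)
```

Because the data fit is exact and P has full rank, the recovered U equals R and P @ U reproduces Q to machine precision.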
rot = quaternion_rotate(P, Q)
P = np.dot(P, rot)
return rmsd(P, Q)
def quaternion_rmsd(P, Q)
Rotate matrix P unto Q and calculate the RMSD
based on doi:10.1016/1049-9660(91)90036-O

Parameters
----------
P : array
    (N,D) matrix, where N is points and D is dimension.
Q : array
    (N,D) matrix, where N is points and D is dimension.

Returns
-------
rmsd : float
4.180674
5.131698
0.814676
Wt_r = makeW(*r).T
Q_r = makeQ(*r)
rot = Wt_r.dot(Q_r)[:3, :3]
return rot
def quaternion_transform(r)
Get optimal rotation.

Note: translation will be zero when the centroids of each molecule are
the same.
6.033924
7.079361
0.852326
W = np.asarray([
    [r4, r3, -r2, r1],
    [-r3, r4, r1, r2],
    [r2, -r1, r4, r3],
    [-r1, -r2, -r3, r4]])
return W
def makeW(r1, r2, r3, r4=0)
Matrix involved in quaternion rotation
1.930695
1.76936
1.091182
Q = np.asarray([
    [r4, -r3, r2, r1],
    [r3, r4, -r1, r2],
    [-r2, r1, r4, r3],
    [-r1, -r2, -r3, r4]])
return Q
def makeQ(r1, r2, r3, r4=0)
Matrix involved in quaternion rotation
1.893034
1.721604
1.099576
N = X.shape[0]
W = np.asarray([makeW(*Y[k]) for k in range(N)])
Q = np.asarray([makeQ(*X[k]) for k in range(N)])
Qt_dot_W = np.asarray([np.dot(Q[k].T, W[k]) for k in range(N)])
W_minus_Q = np.asarray([W[k] - Q[k] for k in range(N)])
A = np.sum(Qt_dot_W, axis=0)
eigen = np.linalg.eigh(A)
r = eigen[1][:, eigen[0].argmax()]
rot = quaternion_transform(r)
return rot
def quaternion_rotate(X, Y)
Calculate the rotation

Parameters
----------
X : array
    (N,D) matrix, where N is points and D is dimension.
Y : array
    (N,D) matrix, where N is points and D is dimension.

Returns
-------
rot : matrix
    Rotation matrix (D,D)
2.905195
2.99611
0.969655
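Two quick sanity checks of the ``makeW``/``makeQ``/``quaternion_transform`` machinery above, sketched as self-contained copies: the identity quaternion (0, 0, 0, 1) must yield the identity rotation, and any unit quaternion must yield an orthogonal matrix.

```python
import numpy as np

def makeW(r1, r2, r3, r4=0):
    # Left-multiplication matrix of the quaternion (r1, r2, r3, r4).
    return np.asarray([
        [r4, r3, -r2, r1],
        [-r3, r4, r1, r2],
        [r2, -r1, r4, r3],
        [-r1, -r2, -r3, r4]])

def makeQ(r1, r2, r3, r4=0):
    # Right-multiplication matrix of the quaternion (r1, r2, r3, r4).
    return np.asarray([
        [r4, -r3, r2, r1],
        [r3, r4, -r1, r2],
        [-r2, r1, r4, r3],
        [-r1, -r2, -r3, r4]])

def quaternion_transform(r):
    # 3x3 rotation block of W(r)^T Q(r).
    return makeW(*r).T.dot(makeQ(*r))[:3, :3]

# Identity quaternion -> identity rotation.
rot_identity = quaternion_transform([0.0, 0.0, 0.0, 1.0])

# Any normalized quaternion -> orthogonal rotation matrix.
r = np.array([1.0, 2.0, 3.0, 4.0])
r /= np.linalg.norm(r)
rot = quaternion_transform(r)
```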
# Find unique atoms
unique_atoms = np.unique(p_atoms)

# generate full view from q shape to fill in atom view on the fly
view_reorder = np.zeros(q_atoms.shape, dtype=int)

for atom in unique_atoms:
    p_atom_idx, = np.where(p_atoms == atom)
    q_atom_idx, = np.where(q_atoms == atom)

    A_coord = p_coord[p_atom_idx]
    B_coord = q_coord[q_atom_idx]

    # Calculate distance from each atom to centroid
    A_norms = np.linalg.norm(A_coord, axis=1)
    B_norms = np.linalg.norm(B_coord, axis=1)

    reorder_indices_A = np.argsort(A_norms)
    reorder_indices_B = np.argsort(B_norms)

    # Project the order of P onto Q
    translator = np.argsort(reorder_indices_A)
    view = reorder_indices_B[translator]
    view_reorder[p_atom_idx] = q_atom_idx[view]

return view_reorder
def reorder_distance(p_atoms, q_atoms, p_coord, q_coord)
Re-orders the input atom list and xyz coordinates by atom type and then by
distance of each atom from the centroid.

Parameters
----------
atoms : array
    (N,1) matrix, where N is points holding the atoms' names
coord : array
    (N,D) matrix, where N is points and D is dimension

Returns
-------
atoms_reordered : array
    (N,1) matrix, where N is points holding the ordered atoms' names
coords_reordered : array
    (N,D) matrix, where N is points and D is dimension (rows re-ordered)
2.802602
2.876475
0.974318
# should be kabsch here i think
distances = cdist(A, B, 'euclidean')

# Perform Hungarian analysis on distance matrix between atoms of 1st
# structure and trial structure
indices_a, indices_b = linear_sum_assignment(distances)

return indices_b
def hungarian(A, B)
Hungarian reordering.

Assume A and B are coordinates for atoms of SAME type only.
11.475031
11.142329
1.029859
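For intuition, the optimal assignment ``linear_sum_assignment`` returns on a distance matrix can be checked against an exhaustive search on a tiny cost matrix. This brute-force sketch is for illustration only (the names are assumptions); the real code should keep SciPy's Hungarian solver, which is polynomial rather than factorial:

```python
import itertools
import numpy as np

def brute_force_assignment(cost):
    """Exhaustively find the column permutation minimizing the total cost."""
    n = cost.shape[0]
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        total = sum(cost[i, j] for i, j in enumerate(perm))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return np.array(best_perm), best_cost

# Distances between two 3-atom sets; the optimal matching pairs
# row 0 with column 1, row 1 with column 0, and row 2 with column 2.
cost = np.array([[4.0, 1.0, 9.0],
                 [2.0, 8.0, 7.0],
                 [6.0, 5.0, 3.0]])
perm, total = brute_force_assignment(cost)
```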
c = [0] * n
yield elements
i = 0
while i < n:
    if c[i] < i:
        if i % 2 == 0:
            elements[0], elements[i] = elements[i], elements[0]
        else:
            elements[c[i]], elements[i] = elements[i], elements[c[i]]
        yield elements
        c[i] += 1
        i = 0
    else:
        c[i] = 0
        i += 1
def generate_permutations(elements, n)
Heap's algorithm for generating all n! permutations in a list
https://en.wikipedia.org/wiki/Heap%27s_algorithm
1.983996
1.956863
1.013865
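A quick sanity check on Heap's algorithm as written above: it must yield all n! orderings exactly once. Note that the generator yields the same list object mutated in place, so callers have to copy each permutation. Self-contained copy for illustration:

```python
import math

def generate_permutations(elements, n):
    """Heap's algorithm: yields all n! permutations of `elements` in place."""
    c = [0] * n
    yield elements
    i = 0
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                elements[0], elements[i] = elements[i], elements[0]
            else:
                elements[c[i]], elements[i] = elements[i], elements[c[i]]
            yield elements
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1

# Copy each yielded list, since the generator reuses one list object.
perms = [list(p) for p in generate_permutations([0, 1, 2], 3)]
```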
rmsd_min = np.inf
view_min = None

# Sets initial ordering for row indices to [0, 1, 2, ..., len(A)], used in
# brute-force method
num_atoms = A.shape[0]
initial_order = list(range(num_atoms))

for reorder_indices in generate_permutations(initial_order, num_atoms):
    # Re-order the atom array and coordinate matrix
    coords_ordered = B[reorder_indices]

    # Calculate the RMSD between structure 1 and the permuted structure 2
    rmsd_temp = kabsch_rmsd(A, coords_ordered)

    # Replaces the atoms and coordinates with the current structure if the
    # RMSD is lower
    if rmsd_temp < rmsd_min:
        rmsd_min = rmsd_temp
        view_min = copy.deepcopy(reorder_indices)

return view_min
def brute_permutation(A, B)
Re-orders the input atom list and xyz coordinates using the brute force
method of permuting all rows of the input coordinates.

Parameters
----------
A : array
    (N,D) matrix, where N is points and D is dimension
B : array
    (N,D) matrix, where N is points and D is dimension

Returns
-------
view : array
    (N,1) matrix, reordered view of B projected to A
4.62026
4.359204
1.059886
# Find unique atoms
unique_atoms = np.unique(p_atoms)

# generate full view from q shape to fill in atom view on the fly
view_reorder = np.zeros(q_atoms.shape, dtype=int)
view_reorder -= 1

for atom in unique_atoms:
    p_atom_idx, = np.where(p_atoms == atom)
    q_atom_idx, = np.where(q_atoms == atom)

    A_coord = p_coord[p_atom_idx]
    B_coord = q_coord[q_atom_idx]

    view = brute_permutation(A_coord, B_coord)
    view_reorder[p_atom_idx] = q_atom_idx[view]

return view_reorder
def reorder_brute(p_atoms, q_atoms, p_coord, q_coord)
Re-orders the input atom list and xyz coordinates using all permutations of
rows (using optimized column results).

Parameters
----------
p_atoms : array
    (N,1) matrix, where N is points holding the atoms' names
q_atoms : array
    (N,1) matrix, where N is points holding the atoms' names
p_coord : array
    (N,D) matrix, where N is points and D is dimension
q_coord : array
    (N,D) matrix, where N is points and D is dimension

Returns
-------
view_reorder : array
    (N,1) matrix, reordered indexes of atom alignment based on the
    coordinates of the atoms
3.174826
3.037962
1.045051
min_rmsd = np.inf
min_swap = None
min_reflection = None
min_review = None
tmp_review = None
swap_mask = [1, -1, -1, 1, -1, 1]
reflection_mask = [1, -1, -1, -1, 1, 1, 1, -1]

for swap, i in zip(AXIS_SWAPS, swap_mask):
    for reflection, j in zip(AXIS_REFLECTIONS, reflection_mask):
        if keep_stereo and i * j == -1:
            continue  # skip enantiomers

        tmp_atoms = copy.copy(q_atoms)
        tmp_coord = copy.deepcopy(q_coord)
        tmp_coord = tmp_coord[:, swap]
        tmp_coord = np.dot(tmp_coord, np.diag(reflection))
        tmp_coord -= centroid(tmp_coord)

        # Reorder
        if reorder_method is not None:
            tmp_review = reorder_method(p_atoms, tmp_atoms, p_coord, tmp_coord)
            tmp_coord = tmp_coord[tmp_review]
            tmp_atoms = tmp_atoms[tmp_review]

        # Rotation
        if rotation_method is None:
            this_rmsd = rmsd(p_coord, tmp_coord)
        else:
            this_rmsd = rotation_method(p_coord, tmp_coord)

        if this_rmsd < min_rmsd:
            min_rmsd = this_rmsd
            min_swap = swap
            min_reflection = reflection
            min_review = tmp_review

if not (p_atoms == q_atoms[min_review]).all():
    print("error: Not aligned")
    quit()

return min_rmsd, min_swap, min_reflection, min_review
def check_reflections(p_atoms, q_atoms, p_coord, q_coord, reorder_method=reorder_hungarian, rotation_method=kabsch_rmsd, keep_stereo=False)
Minimize RMSD using reflection planes for molecule P and Q.

Warning: This will affect stereo-chemistry.

Parameters
----------
p_atoms : array
    (N,1) matrix, where N is points holding the atoms' names
q_atoms : array
    (N,1) matrix, where N is points holding the atoms' names
p_coord : array
    (N,D) matrix, where N is points and D is dimension
q_coord : array
    (N,D) matrix, where N is points and D is dimension

Returns
-------
min_rmsd
min_swap
min_reflection
min_review
2.57875
2.468999
1.044452
N, D = V.shape
fmt = "{:2s}" + (" {:15." + str(decimals) + "f}") * 3

out = list()
out += [str(N)]
out += [title]

for i in range(N):
    atom = atoms[i]
    atom = atom[0].upper() + atom[1:]
    out += [fmt.format(atom, V[i, 0], V[i, 1], V[i, 2])]

return "\n".join(out)
def set_coordinates(atoms, V, title="", decimals=8)
Format coordinates V with corresponding atoms as a string in XYZ format.

Parameters
----------
atoms : list
    List of atomic types
V : array
    (N,3) matrix of atomic coordinates
title : string (optional)
    Title of molecule
decimals : int (optional)
    number of decimals for the coordinates

Return
------
output : str
    Molecule in XYZ format
2.792078
3.101023
0.900373
print(set_coordinates(atoms, V, title=title))
return
def print_coordinates(atoms, V, title="")
Print coordinates V with corresponding atoms to stdout in XYZ format.

Parameters
----------
atoms : list
    List of element types
V : array
    (N,3) matrix of atomic coordinates
title : string (optional)
    Title of molecule
9.877104
12.839799
0.769257
if fmt == "xyz": get_func = get_coordinates_xyz elif fmt == "pdb": get_func = get_coordinates_pdb else: exit("Could not recognize file format: {:s}".format(fmt)) return get_func(filename)
def get_coordinates(filename, fmt)
Get coordinates from filename in format fmt. Supports XYZ and PDB. Parameters ---------- filename : string Filename to read fmt : string Format of filename. Either xyz or pdb. Returns ------- atoms : list List of atomic types V : array (N,3) where N is number of atoms
3.092142
3.580817
0.86353
# PDB files tend to be a bit of a mess. The x, y and z coordinates
# are supposed to be in column 31-38, 39-46 and 47-54, but this is
# not always the case.
# Because of this the first three columns containing a decimal are used.
# Since the format doesn't require a space between columns, we use the
# above column indices as a fallback.
x_column = None
V = list()

# Same with atoms and atom naming.
# The most robust way to do this is probably
# to assume that the atomtype is given in column 3.
atoms = list()

with open(filename, 'r') as f:
    lines = f.readlines()
    for line in lines:
        if line.startswith("TER") or line.startswith("END"):
            break
        if line.startswith("ATOM"):
            tokens = line.split()
            # Try to get the atomtype
            try:
                atom = tokens[2][0]
                if atom in ("H", "C", "N", "O", "S", "P"):
                    atoms.append(atom)
                else:
                    # e.g. 1HD1
                    atom = tokens[2][1]
                    if atom == "H":
                        atoms.append(atom)
                    else:
                        raise Exception
            except Exception:
                exit("error: Parsing atomtype for the following line: \n{0:s}".format(line))

            if x_column is None:
                try:
                    # look for x column
                    for i, x in enumerate(tokens):
                        if "." in x and "." in tokens[i + 1] and "." in tokens[i + 2]:
                            x_column = i
                            break
                except IndexError:
                    exit("error: Parsing coordinates for the following line: \n{0:s}".format(line))

            # Try to read the coordinates
            try:
                V.append(np.asarray(tokens[x_column:x_column + 3], dtype=float))
            except ValueError:
                # If that doesn't work, use hardcoded indices
                try:
                    x = line[30:38]
                    y = line[38:46]
                    z = line[46:54]
                    V.append(np.asarray([x, y, z], dtype=float))
                except ValueError:
                    exit("error: Parsing input for the following line: \n{0:s}".format(line))

V = np.asarray(V)
atoms = np.asarray(atoms)
assert V.shape[0] == atoms.size
return atoms, V
def get_coordinates_pdb(filename)
Get coordinates from the first chain in a pdb file and return a vectorset
with all the coordinates.

Parameters
----------
filename : string
    Filename to read

Returns
-------
atoms : list
    List of atomic types
V : array
    (N,3) where N is number of atoms
3.374658
3.334298
1.012104
f = open(filename, 'r')
V = list()
atoms = list()
n_atoms = 0

# Read the first line to obtain the number of atoms to read
try:
    n_atoms = int(f.readline())
except ValueError:
    exit("error: Could not obtain the number of atoms in the .xyz file.")

# Skip the title line
f.readline()

# Use the number of atoms to not read beyond the end of a file
for lines_read, line in enumerate(f):
    if lines_read == n_atoms:
        break

    atom = re.findall(r'[a-zA-Z]+', line)[0]
    atom = atom.upper()

    numbers = re.findall(r'[-]?\d+\.\d*(?:[Ee][-\+]\d+)?', line)
    numbers = [float(number) for number in numbers]

    # The numbers are not valid unless we obtain exactly three
    if len(numbers) >= 3:
        V.append(np.array(numbers)[:3])
        atoms.append(atom)
    else:
        exit("Reading the .xyz file failed in line {0}. Please check the format.".format(lines_read + 2))

f.close()
atoms = np.array(atoms)
V = np.array(V)
return atoms, V
def get_coordinates_xyz(filename)
Get coordinates from filename and return a vectorset with all the
coordinates, in XYZ format.

Parameters
----------
filename : string
    Filename to read

Returns
-------
atoms : list
    List of atomic types
V : array
    (N,3) where N is number of atoms
3.192504
3.109896
1.026563
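The writer (``set_coordinates``) and reader (``get_coordinates_xyz``) above are two halves of the same layout: an atom count line, a title line, then one "symbol x y z" line per atom. A minimal round-trip sketch of that layout, with illustrative function names not taken from the source:

```python
def to_xyz(atoms, coords, title=""):
    """Serialize atoms and coordinates to an XYZ-format string."""
    lines = [str(len(atoms)), title]
    for atom, (x, y, z) in zip(atoms, coords):
        lines.append("{:2s} {:15.8f} {:15.8f} {:15.8f}".format(atom, x, y, z))
    return "\n".join(lines)

def from_xyz(text):
    """Parse an XYZ-format string back into atoms and coordinates."""
    lines = text.splitlines()
    n_atoms = int(lines[0])      # first line: atom count
    atoms, coords = [], []       # second line (title) is skipped
    for line in lines[2:2 + n_atoms]:
        tokens = line.split()
        atoms.append(tokens[0])
        coords.append([float(t) for t in tokens[1:4]])
    return atoms, coords

atoms = ["C", "O"]
coords = [[0.0, 0.0, 0.0], [1.16, 0.0, 0.0]]
text = to_xyz(atoms, coords, title="carbon monoxide")
back_atoms, back_coords = from_xyz(text)
```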
string_parts = []

if self.data_stream:
    string_parts.append('data stream: {0:s}'.format(self.data_stream))
if self.inode is not None:
    string_parts.append('inode: {0:d}'.format(self.inode))
if self.location is not None:
    string_parts.append('location: {0:s}'.format(self.location))

return self._GetComparable(sub_comparable_string=', '.join(string_parts))
def comparable(self)
str: comparable representation of the path specification.
2.819988
2.492711
1.131294
format_specification = specification.FormatSpecification(
    self.type_indicator)

# TODO: add support for signature chains so that we add the 'BZ' at
# offset 0.

# BZIP2 compressed stream signature.
format_specification.AddNewSignature(b'\x31\x41\x59\x26\x53\x59', offset=4)

return format_specification
def GetFormatSpecification(self)
Retrieves the format specification.

Returns:
  FormatSpecification: format specification or None if the format cannot
      be defined by a specification object.
12.075977
12.307755
0.981168
location = getattr(self.path_spec, 'location', None)

if location and location.startswith(self._file_system.PATH_SEPARATOR):
    # The zip_info filename does not have the leading path separator
    # as the location string does.
    zip_path = location[1:]

    # Set of top level sub directories that have been yielded.
    processed_directories = set()

    zip_file = self._file_system.GetZipFile()
    for zip_info in zip_file.infolist():
        path = getattr(zip_info, 'filename', None)
        if path is not None and not isinstance(path, py2to3.UNICODE_TYPE):
            try:
                path = path.decode(self._file_system.encoding)
            except UnicodeDecodeError:
                path = None

        if not path or not path.startswith(zip_path):
            continue

        # Ignore the directory itself.
        if path == zip_path:
            continue

        path_segment, suffix = self._file_system.GetPathSegmentAndSuffix(
            zip_path, path)
        if not path_segment:
            continue

        # Sometimes the ZIP file lacks directories, therefore we will
        # provide virtual ones.
        if suffix:
            path_spec_location = self._file_system.JoinPath([
                location, path_segment])
            is_directory = True
        else:
            path_spec_location = self._file_system.JoinPath([path])
            is_directory = path.endswith('/')

        if is_directory:
            if path_spec_location in processed_directories:
                continue
            processed_directories.add(path_spec_location)
            # Restore / at end path to indicate a directory.
            path_spec_location += self._file_system.PATH_SEPARATOR

        yield zip_path_spec.ZipPathSpec(
            location=path_spec_location, parent=self.path_spec.parent)
def _EntriesGenerator(self)
Retrieves directory entries.

Since a directory can contain a vast number of entries using
a generator is more memory efficient.

Yields:
  ZipPathSpec: a path specification.
3.271665
3.150082
1.038597
if self.entry_type != definitions.FILE_ENTRY_TYPE_DIRECTORY:
    return None

return ZipDirectory(self._file_system, self.path_spec)
def _GetDirectory(self)
Retrieves a directory.

Returns:
  ZipDirectory: a directory or None if not available.
7.072517
4.902534
1.442625
stat_object = super(ZipFileEntry, self)._GetStat()

if self._zip_info is not None:
    # File data stat information.
    stat_object.size = getattr(self._zip_info, 'file_size', None)

    # Ownership and permissions stat information.
    if self._external_attributes != 0:
        if self._creator_system == self._CREATOR_SYSTEM_UNIX:
            st_mode = self._external_attributes >> 16
            stat_object.mode = st_mode & 0x0fff

# Other stat information.
# zip_info.compress_type
# zip_info.comment
# zip_info.extra
# zip_info.create_version
# zip_info.extract_version
# zip_info.flag_bits
# zip_info.volume
# zip_info.internal_attr
# zip_info.compress_size

return stat_object
def _GetStat(self)
Retrieves information about the file entry.

Returns:
  VFSStat: a stat object.
3.545521
3.510841
1.009878
if self._directory is None:
    self._directory = self._GetDirectory()

zip_file = self._file_system.GetZipFile()
if self._directory and zip_file:
    for path_spec in self._directory.entries:
        location = getattr(path_spec, 'location', None)
        if location is None:
            continue

        kwargs = {}
        try:
            kwargs['zip_info'] = zip_file.getinfo(location[1:])
        except KeyError:
            kwargs['is_virtual'] = True

        yield ZipFileEntry(
            self._resolver_context, self._file_system, path_spec, **kwargs)
def _GetSubFileEntries(self)
Retrieves sub file entries.

Yields:
  ZipFileEntry: a sub file entry.
2.967717
2.724393
1.089313
path = getattr(self.path_spec, 'location', None)
if path is not None and not isinstance(path, py2to3.UNICODE_TYPE):
    try:
        path = path.decode(self._file_system.encoding)
    except UnicodeDecodeError:
        path = None

return self._file_system.BasenamePath(path)
def name(self)
str: name of the file entry, without the full path.
3.518484
3.063402
1.148554
if self._zip_info is None:
    return None

time_elements = getattr(self._zip_info, 'date_time', None)
return dfdatetime_time_elements.TimeElements(time_elements)
def modification_time(self)
dfdatetime.DateTimeValues: modification time or None if not available.
6.33353
3.812967
1.66105
location = getattr(self.path_spec, 'location', None)
if location is None:
    return None

parent_location = self._file_system.DirnamePath(location)
if parent_location is None:
    return None

parent_path_spec = getattr(self.path_spec, 'parent', None)
if parent_location == '':
    parent_location = self._file_system.PATH_SEPARATOR
    is_root = True
    is_virtual = True
else:
    is_root = False
    is_virtual = False

path_spec = zip_path_spec.ZipPathSpec(
    location=parent_location, parent=parent_path_spec)
return ZipFileEntry(
    self._resolver_context, self._file_system, path_spec,
    is_root=is_root, is_virtual=is_virtual)
def GetParentFileEntry(self)
Retrieves the parent file entry.

Returns:
  ZipFileEntry: parent file entry or None if not available.
2.245843
2.076884
1.081352
if not self._zip_info:
    location = getattr(self.path_spec, 'location', None)
    if location is None:
        raise errors.PathSpecError('Path specification missing location.')

    if not location.startswith(self._file_system.LOCATION_ROOT):
        raise errors.PathSpecError('Invalid location in path specification.')

    if len(location) == 1:
        return None

    zip_file = self._file_system.GetZipFile()
    try:
        self._zip_info = zip_file.getinfo(location[1:])
    except KeyError:
        pass

return self._zip_info
def GetZipInfo(self)
Retrieves the ZIP info object.

Returns:
  zipfile.ZipInfo: a ZIP info object or None if not available.

Raises:
  PathSpecError: if the path specification is incorrect.
2.450241
2.324339
1.054167
stat_object = super(FVDEFileEntry, self)._GetStat()
stat_object.size = self._fvde_volume.get_size()
return stat_object
def _GetStat(self)
Retrieves information about the file entry.

Returns:
  VFSStat: a stat object.
9.592112
8.147826
1.17726
if not scan_node or not scan_node.path_spec:
    raise errors.ScannerError('Invalid scan node.')

volume_system = apfs_volume_system.APFSVolumeSystem()
volume_system.Open(scan_node.path_spec)

volume_identifiers = self._source_scanner.GetVolumeIdentifiers(
    volume_system)
if not volume_identifiers:
    return []

if len(volume_identifiers) > 1:
    if not self._mediator:
        raise errors.ScannerError(
            'Unable to proceed. APFS volumes found but no mediator to '
            'determine how they should be used.')

    try:
        volume_identifiers = self._mediator.GetAPFSVolumeIdentifiers(
            volume_system, volume_identifiers)
    except KeyboardInterrupt:
        raise errors.UserAbort('File system scan aborted.')

return self._NormalizedVolumeIdentifiers(
    volume_system, volume_identifiers, prefix='apfs')
def _GetAPFSVolumeIdentifiers(self, scan_node)
Determines the APFS volume identifiers.

Args:
  scan_node (SourceScanNode): scan node.

Returns:
  list[str]: APFS volume identifiers.

Raises:
  ScannerError: if the format of or within the source is not supported
      or the scan node is invalid.
  UserAbort: if the user requested to abort.
3.006446
2.901901
1.036026
if not scan_node or not scan_node.path_spec:
    raise errors.ScannerError('Invalid scan node.')

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(scan_node.path_spec)

volume_identifiers = self._source_scanner.GetVolumeIdentifiers(
    volume_system)
if not volume_identifiers:
    return []

if len(volume_identifiers) == 1:
    return volume_identifiers

if not self._mediator:
    raise errors.ScannerError(
        'Unable to proceed. Partitions found but no mediator to determine '
        'how they should be used.')

try:
    volume_identifiers = self._mediator.GetPartitionIdentifiers(
        volume_system, volume_identifiers)
except KeyboardInterrupt:
    raise errors.UserAbort('File system scan aborted.')

return self._NormalizedVolumeIdentifiers(
    volume_system, volume_identifiers, prefix='p')
def _GetTSKPartitionIdentifiers(self, scan_node)
Determines the TSK partition identifiers.

Args:
  scan_node (SourceScanNode): scan node.

Returns:
  list[str]: TSK partition identifiers.

Raises:
  ScannerError: if the format of or within the source is not supported or
      the scan node is invalid or if the volume for a specific identifier
      cannot be retrieved.
  UserAbort: if the user requested to abort.
3.09919
3.035539
1.020968
if not scan_node or not scan_node.path_spec:
    raise errors.ScannerError('Invalid scan node.')

volume_system = vshadow_volume_system.VShadowVolumeSystem()
volume_system.Open(scan_node.path_spec)

volume_identifiers = self._source_scanner.GetVolumeIdentifiers(
    volume_system)
if not volume_identifiers:
    return []

if not self._mediator:
    raise errors.ScannerError(
        'Unable to proceed. VSS stores found but no mediator to determine '
        'how they should be used.')

try:
    volume_identifiers = self._mediator.GetVSSStoreIdentifiers(
        volume_system, volume_identifiers)
except KeyboardInterrupt:
    raise errors.UserAbort('File system scan aborted.')

return self._NormalizedVolumeIdentifiers(
    volume_system, volume_identifiers, prefix='vss')
def _GetVSSStoreIdentifiers(self, scan_node)
Determines the VSS store identifiers.

Args:
  scan_node (SourceScanNode): scan node.

Returns:
  list[str]: VSS store identifiers.

Raises:
  ScannerError: if the format of the scan node is invalid or no mediator
      is provided and VSS store identifiers are found.
  UserAbort: if the user requested to abort.
3.440593
3.078172
1.117739
normalized_volume_identifiers = []
for volume_identifier in volume_identifiers:
    if isinstance(volume_identifier, int):
        volume_identifier = '{0:s}{1:d}'.format(prefix, volume_identifier)

    elif not volume_identifier.startswith(prefix):
        try:
            volume_identifier = int(volume_identifier, 10)
            volume_identifier = '{0:s}{1:d}'.format(prefix, volume_identifier)
        except (TypeError, ValueError):
            pass

    try:
        volume = volume_system.GetVolumeByIdentifier(volume_identifier)
    except KeyError:
        volume = None

    if not volume:
        raise errors.ScannerError(
            'Volume missing for identifier: {0:s}.'.format(volume_identifier))

    normalized_volume_identifiers.append(volume_identifier)

return normalized_volume_identifiers
def _NormalizedVolumeIdentifiers( self, volume_system, volume_identifiers, prefix='v')
Normalizes volume identifiers.

Args:
  volume_system (VolumeSystem): volume system.
  volume_identifiers (list[int|str]): allowed volume identifiers, formatted
      as an integer or string with prefix.
  prefix (Optional[str]): volume identifier prefix.

Returns:
  list[str]: volume identifiers with prefix.

Raises:
  ScannerError: if the volume identifier is not supported or no volume
      could be found that corresponds with the identifier.
1.82296
1.819319
1.002001
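The prefixing rule in ``_NormalizedVolumeIdentifiers`` can be isolated into a small sketch: integers and bare digit strings get the prefix attached, already-prefixed strings pass through. The standalone function name is an assumption for illustration; the real method additionally validates each identifier against the volume system.

```python
def normalize_identifier(identifier, prefix="v"):
    """Sketch of the prefixing rule for volume identifiers."""
    if isinstance(identifier, int):
        return "{0:s}{1:d}".format(prefix, identifier)
    if not identifier.startswith(prefix):
        try:
            # A bare digit string like "2" also gets the prefix.
            return "{0:s}{1:d}".format(prefix, int(identifier, 10))
        except (TypeError, ValueError):
            pass
    return identifier

p1 = normalize_identifier(1, prefix="p")
vss2 = normalize_identifier("2", prefix="vss")
apfs1 = normalize_identifier("apfs1", prefix="apfs")
```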
if not scan_node or not scan_node.path_spec:
    raise errors.ScannerError('Invalid or missing scan node.')

credentials = credentials_manager.CredentialsManager.GetCredentials(
    scan_node.path_spec)
if not credentials:
    raise errors.ScannerError('Missing credentials for scan node.')

if not self._mediator:
    raise errors.ScannerError(
        'Unable to proceed. Encrypted volume found but no mediator to '
        'determine how it should be unlocked.')

if self._mediator.UnlockEncryptedVolume(
        self._source_scanner, scan_context, scan_node, credentials):
    self._source_scanner.Scan(
        scan_context, scan_path_spec=scan_node.path_spec)
def _ScanEncryptedVolume(self, scan_context, scan_node)
Scans an encrypted volume scan node for volume and file systems.

Args:
  scan_context (SourceScannerContext): source scanner context.
  scan_node (SourceScanNode): volume scan node.

Raises:
  ScannerError: if the format of or within the source is not supported,
      the scan node is invalid, there are no credentials defined for the
      format or no mediator is provided and a locked scan node was found,
      e.g. an encrypted volume.
3.222453
3.019677
1.067152
if not scan_node or not scan_node.path_spec:
    raise errors.ScannerError('Invalid or missing file system scan node.')

base_path_specs.append(scan_node.path_spec)
def _ScanFileSystem(self, scan_node, base_path_specs)
Scans a file system scan node for file systems.

Args:
  scan_node (SourceScanNode): file system scan node.
  base_path_specs (list[PathSpec]): file system base path specifications.

Raises:
  ScannerError: if the scan node is invalid.
3.519789
3.368532
1.044903
if not scan_node or not scan_node.path_spec:
    raise errors.ScannerError('Invalid or missing scan node.')

if scan_context.IsLockedScanNode(scan_node.path_spec):
    # The source scanner found a locked volume and we need a credential to
    # unlock it.
    self._ScanEncryptedVolume(scan_context, scan_node)

    if scan_context.IsLockedScanNode(scan_node.path_spec):
        return

if scan_node.IsVolumeSystemRoot():
    self._ScanVolumeSystemRoot(scan_context, scan_node, base_path_specs)

elif scan_node.IsFileSystem():
    self._ScanFileSystem(scan_node, base_path_specs)

elif scan_node.type_indicator == definitions.TYPE_INDICATOR_VSHADOW:
    # TODO: look into building VSS store on demand.

    # We "optimize" here for user experience, alternatively we could scan for
    # a file system instead of hard coding a TSK child path specification.
    path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location='/',
        parent=scan_node.path_spec)
    base_path_specs.append(path_spec)

else:
    for sub_scan_node in scan_node.sub_nodes:
        self._ScanVolume(scan_context, sub_scan_node, base_path_specs)
def _ScanVolume(self, scan_context, scan_node, base_path_specs)
Scans a volume scan node for volume and file systems.

Args:
  scan_context (SourceScannerContext): source scanner context.
  scan_node (SourceScanNode): volume scan node.
  base_path_specs (list[PathSpec]): file system base path specifications.

Raises:
  ScannerError: if the format of or within the source is not supported or
      the scan node is invalid.
3.146539
3.142617
1.001248
if not scan_node or not scan_node.path_spec: raise errors.ScannerError('Invalid scan node.') if scan_node.type_indicator == definitions.TYPE_INDICATOR_APFS_CONTAINER: volume_identifiers = self._GetAPFSVolumeIdentifiers(scan_node) elif scan_node.type_indicator == definitions.TYPE_INDICATOR_VSHADOW: volume_identifiers = self._GetVSSStoreIdentifiers(scan_node) # Process VSS stores (snapshots) starting with the most recent one. volume_identifiers.reverse() else: raise errors.ScannerError( 'Unsupported volume system type: {0:s}.'.format( scan_node.type_indicator)) for volume_identifier in volume_identifiers: location = '/{0:s}'.format(volume_identifier) sub_scan_node = scan_node.GetSubNodeByLocation(location) if not sub_scan_node: raise errors.ScannerError( 'Scan node missing for volume identifier: {0:s}.'.format( volume_identifier)) self._ScanVolume(scan_context, sub_scan_node, base_path_specs)
def _ScanVolumeSystemRoot(self, scan_context, scan_node, base_path_specs)
Scans a volume system root scan node for volume and file systems. Args: scan_context (SourceScannerContext): source scanner context. scan_node (SourceScanNode): volume system root scan node. base_path_specs (list[PathSpec]): file system base path specifications. Raises: ScannerError: if the scan node is invalid, the scan node type is not supported or if a sub scan node cannot be retrieved.
2.120546
2.119101
1.000682
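The `_ScanVolumeSystemRoot` code above processes VSS stores (snapshots) starting with the most recent one by reversing the identifier list before building per-volume sub node locations. A minimal sketch of that ordering, with made-up identifiers:

```python
# Hypothetical VSS store identifiers as _GetVSSStoreIdentifiers might
# return them, oldest snapshot first.
volume_identifiers = ['vss1', 'vss2', 'vss3']

# Process VSS stores (snapshots) starting with the most recent one.
volume_identifiers.reverse()

# Each identifier becomes a sub scan node location such as '/vss3'.
locations = ['/{0:s}'.format(identifier) for identifier in volume_identifiers]
```

After the reversal, the most recent snapshot ('/vss3') is scanned first.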
if not source_path:
  raise errors.ScannerError('Invalid source path.')

# Note that os.path.exists() does not support Windows device paths.
if (not source_path.startswith('\\\\.\\') and
    not os.path.exists(source_path)):
  raise errors.ScannerError(
      'No such device, file or directory: {0:s}.'.format(source_path))

scan_context = source_scanner.SourceScannerContext()
scan_context.OpenSourcePath(source_path)

try:
  self._source_scanner.Scan(scan_context)
except (ValueError, errors.BackEndError) as exception:
  raise errors.ScannerError(
      'Unable to scan source with error: {0!s}'.format(exception))

self._source_path = source_path
self._source_type = scan_context.source_type

if self._source_type not in [
    definitions.SOURCE_TYPE_STORAGE_MEDIA_DEVICE,
    definitions.SOURCE_TYPE_STORAGE_MEDIA_IMAGE]:
  scan_node = scan_context.GetRootScanNode()
  return [scan_node.path_spec]

# Get the first node where we need to decide what to process.
scan_node = scan_context.GetRootScanNode()
while len(scan_node.sub_nodes) == 1:
  scan_node = scan_node.sub_nodes[0]

base_path_specs = []
if scan_node.type_indicator != definitions.TYPE_INDICATOR_TSK_PARTITION:
  self._ScanVolume(scan_context, scan_node, base_path_specs)

else:
  # Determine which partition needs to be processed.
  partition_identifiers = self._GetTSKPartitionIdentifiers(scan_node)
  for partition_identifier in partition_identifiers:
    location = '/{0:s}'.format(partition_identifier)
    sub_scan_node = scan_node.GetSubNodeByLocation(location)
    self._ScanVolume(scan_context, sub_scan_node, base_path_specs)

return base_path_specs
def GetBasePathSpecs(self, source_path)
Determines the base path specifications.

Args:
  source_path (str): source path.

Returns:
  list[PathSpec]: path specifications.

Raises:
  ScannerError: if the source path does not exist, or if the source path
      is not a file or directory, or if the format of or within the source
      file is not supported.
2.374848
2.285548
1.039071
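`GetBasePathSpecs` descends past single-child scan nodes to find the first node where a processing decision is actually needed. A self-contained sketch of that descent, using a stand-in node class (`FakeScanNode` and the node names are hypothetical, not part of the scanner API):

```python
class FakeScanNode(object):
  """Stand-in for SourceScanNode, used only for illustration."""

  def __init__(self, name, sub_nodes=None):
    self.name = name
    self.sub_nodes = sub_nodes or []

# A chain image -> partition table -> partition, where only the last
# node has multiple children and requires a decision.
leaf_a = FakeScanNode('file_system_a')
leaf_b = FakeScanNode('file_system_b')
partition = FakeScanNode('partition', [leaf_a, leaf_b])
root = FakeScanNode('image', [FakeScanNode('table', [partition])])

# Get the first node where we need to decide what to process.
scan_node = root
while len(scan_node.sub_nodes) == 1:
  scan_node = scan_node.sub_nodes[0]
```

The loop stops at the first node with zero or multiple sub nodes, here the partition with two candidate file systems.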
if not scan_node or not scan_node.path_spec: raise errors.ScannerError('Invalid or missing file system scan node.') file_system = resolver.Resolver.OpenFileSystem(scan_node.path_spec) if not file_system: return try: path_resolver = windows_path_resolver.WindowsPathResolver( file_system, scan_node.path_spec.parent) if self._ScanFileSystemForWindowsDirectory(path_resolver): base_path_specs.append(scan_node.path_spec) finally: file_system.Close()
def _ScanFileSystem(self, scan_node, base_path_specs)
Scans a file system scan node for file systems. This method checks if the file system contains a known Windows directory. Args: scan_node (SourceScanNode): file system scan node. base_path_specs (list[PathSpec]): file system base path specifications. Raises: ScannerError: if the scan node is invalid.
2.699178
2.530191
1.066788
result = False for windows_path in self._WINDOWS_DIRECTORIES: windows_path_spec = path_resolver.ResolvePath(windows_path) result = windows_path_spec is not None if result: self._windows_directory = windows_path break return result
def _ScanFileSystemForWindowsDirectory(self, path_resolver)
Scans a file system for a known Windows directory. Args: path_resolver (WindowsPathResolver): Windows path resolver. Returns: bool: True if a known Windows directory was found.
3.490393
3.756427
0.929179
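The probe loop in `_ScanFileSystemForWindowsDirectory` tries each known Windows directory in turn and stops at the first hit. It can be sketched with a stub resolver; `FakePathResolver`, `candidate_directories` and the directory names below are illustrative stand-ins, not the module's actual values:

```python
class FakePathResolver(object):
  """Stand-in for WindowsPathResolver; only knows a fixed set of paths."""

  def __init__(self, existing_paths):
    self._existing_paths = set(existing_paths)

  def ResolvePath(self, path):
    # A real resolver returns a path specification; a string is enough here.
    if path in self._existing_paths:
      return path
    return None

# Candidate directory names are illustrative, not the module's actual list.
candidate_directories = ['\\Windows', '\\WINNT', '\\WTSRV']

def scan_for_windows_directory(path_resolver):
  """Returns the first known Windows directory found, or None."""
  for windows_path in candidate_directories:
    if path_resolver.ResolvePath(windows_path) is not None:
      return windows_path
  return None

found = scan_for_windows_directory(FakePathResolver(['\\WINNT']))
```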
path_spec = self._path_resolver.ResolvePath(windows_path) if path_spec is None: return None return self._file_system.GetFileObjectByPathSpec(path_spec)
def OpenFile(self, windows_path)
Opens the file specified by the Windows path.

Args:
  windows_path (str): Windows path to the file.

Returns:
  FileIO: file-like object or None if the file does not exist.
3.313993
3.526081
0.939852
windows_path_specs = self.GetBasePathSpecs(source_path) if (not windows_path_specs or self._source_type == definitions.SOURCE_TYPE_FILE): return False file_system_path_spec = windows_path_specs[0] self._file_system = resolver.Resolver.OpenFileSystem(file_system_path_spec) if file_system_path_spec.type_indicator == definitions.TYPE_INDICATOR_OS: mount_point = file_system_path_spec else: mount_point = file_system_path_spec.parent self._path_resolver = windows_path_resolver.WindowsPathResolver( self._file_system, mount_point) # The source is a directory or single volume storage media image. if not self._windows_directory: self._ScanFileSystemForWindowsDirectory(self._path_resolver) if not self._windows_directory: return False self._path_resolver.SetEnvironmentVariable( 'SystemRoot', self._windows_directory) self._path_resolver.SetEnvironmentVariable( 'WinDir', self._windows_directory) return True
def ScanForWindowsVolume(self, source_path)
Scans for a Windows volume.

Args:
  source_path (str): source path.

Returns:
  bool: True if a Windows volume was found.

Raises:
  ScannerError: if the source path does not exist, or if the source path
      is not a file or directory, or if the format of or within the source
      file is not supported.
2.716804
2.767559
0.981661
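`ScanForWindowsVolume` picks the path resolver's mount point from the type indicator of the first base path specification: an operating system path specification is its own mount point, anything else mounts at its parent. A sketch with a stand-in path specification class (the class and helper names are hypothetical):

```python
class FakePathSpec(object):
  """Stand-in for a dfVFS path specification."""

  def __init__(self, type_indicator, parent=None):
    self.type_indicator = type_indicator
    self.parent = parent

TYPE_INDICATOR_OS = 'OS'

def get_mount_point(path_spec):
  """Picks the mount point the way ScanForWindowsVolume does."""
  if path_spec.type_indicator == TYPE_INDICATOR_OS:
    return path_spec
  return path_spec.parent

# A file system inside an image: the TSK spec mounts at its OS parent.
os_spec = FakePathSpec(TYPE_INDICATOR_OS)
tsk_spec = FakePathSpec('TSK', parent=os_spec)
```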
if definitions.FORMAT_CATEGORY_ARCHIVE in format_categories: cls._archive_remainder_list = None cls._archive_scanner = None cls._archive_store = None if definitions.FORMAT_CATEGORY_COMPRESSED_STREAM in format_categories: cls._compressed_stream_remainder_list = None cls._compressed_stream_scanner = None cls._compressed_stream_store = None if definitions.FORMAT_CATEGORY_FILE_SYSTEM in format_categories: cls._file_system_remainder_list = None cls._file_system_scanner = None cls._file_system_store = None if definitions.FORMAT_CATEGORY_STORAGE_MEDIA_IMAGE in format_categories: cls._storage_media_image_remainder_list = None cls._storage_media_image_scanner = None cls._storage_media_image_store = None if definitions.FORMAT_CATEGORY_VOLUME_SYSTEM in format_categories: cls._volume_system_remainder_list = None cls._volume_system_scanner = None cls._volume_system_store = None
def _FlushCache(cls, format_categories)
Flushes the cached objects for the specified format categories. Args: format_categories (set[str]): format categories.
1.756878
1.71983
1.021542
signature_scanner = pysigscan.scanner() signature_scanner.set_scan_buffer_size(cls._SCAN_BUFFER_SIZE) for format_specification in specification_store.specifications: for signature in format_specification.signatures: pattern_offset = signature.offset if pattern_offset is None: signature_flags = pysigscan.signature_flags.NO_OFFSET elif pattern_offset < 0: pattern_offset *= -1 signature_flags = pysigscan.signature_flags.RELATIVE_FROM_END else: signature_flags = pysigscan.signature_flags.RELATIVE_FROM_START signature_scanner.add_signature( signature.identifier, pattern_offset, signature.pattern, signature_flags) return signature_scanner
def _GetSignatureScanner(cls, specification_store)
Initializes a signature scanner based on a specification store. Args: specification_store (FormatSpecificationStore): specification store. Returns: pysigscan.scanner: signature scanner.
2.789685
2.755229
1.012506
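The offset handling in `_GetSignatureScanner` maps a signature's pattern offset to pysigscan flags: no offset, relative from end (negative offsets are negated), or relative from start. A pure-Python sketch of that mapping, with the flag values replaced by plain strings so it runs without the pysigscan bindings:

```python
# Names mirror pysigscan.signature_flags; plain constants keep the
# sketch self-contained.
NO_OFFSET = 'NO_OFFSET'
RELATIVE_FROM_END = 'RELATIVE_FROM_END'
RELATIVE_FROM_START = 'RELATIVE_FROM_START'

def normalize_signature_offset(pattern_offset):
  """Returns (offset, flags) the way _GetSignatureScanner derives them."""
  if pattern_offset is None:
    return None, NO_OFFSET
  if pattern_offset < 0:
    # Negative offsets count back from the end of the data.
    return -pattern_offset, RELATIVE_FROM_END
  return pattern_offset, RELATIVE_FROM_START
```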
specification_store = specification.FormatSpecificationStore() remainder_list = [] for analyzer_helper in iter(cls._analyzer_helpers.values()): if not analyzer_helper.IsEnabled(): continue if format_category in analyzer_helper.format_categories: format_specification = analyzer_helper.GetFormatSpecification() if format_specification is not None: specification_store.AddSpecification(format_specification) else: remainder_list.append(analyzer_helper) return specification_store, remainder_list
def _GetSpecificationStore(cls, format_category)
Retrieves the specification store for the specified format category.

Args:
  format_category (str): format category.

Returns:
  tuple[FormatSpecificationStore, list[AnalyzerHelper]]: a format
      specification store and remaining analyzer helpers that do not have
      a format specification.
3.150068
2.568574
1.226388
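`_GetSpecificationStore` splits the enabled analyzer helpers into those that publish a format specification (which feed the signature scanner) and a remainder list that is scanned individually. The partitioning can be sketched with stand-in helper objects (`FakeHelper` and the helper names are illustrative):

```python
class FakeHelper(object):
  """Stand-in for an AnalyzerHelper; names are illustrative."""

  def __init__(self, name, enabled=True, specification=None):
    self.name = name
    self._enabled = enabled
    self._specification = specification

  def IsEnabled(self):
    return self._enabled

  def GetFormatSpecification(self):
    return self._specification

helpers = [
    FakeHelper('zip', specification='zip-signature'),
    FakeHelper('disabled', enabled=False, specification='unused'),
    FakeHelper('no_signature')]

specifications = []
remainder_list = []
for helper in helpers:
  if not helper.IsEnabled():
    continue
  specification = helper.GetFormatSpecification()
  if specification is not None:
    specifications.append(specification)
  else:
    remainder_list.append(helper)
```

Disabled helpers are skipped entirely; only enabled helpers without a specification end up in the remainder list.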
type_indicator_list = [] file_object = resolver.Resolver.OpenFileObject( path_spec, resolver_context=resolver_context) scan_state = pysigscan.scan_state() try: signature_scanner.scan_file_object(scan_state, file_object) for scan_result in iter(scan_state.scan_results): format_specification = specification_store.GetSpecificationBySignature( scan_result.identifier) if format_specification.identifier not in type_indicator_list: type_indicator_list.append(format_specification.identifier) for analyzer_helper in remainder_list: result = analyzer_helper.AnalyzeFileObject(file_object) if result is not None: type_indicator_list.append(result) finally: file_object.close() return type_indicator_list
def _GetTypeIndicators( cls, signature_scanner, specification_store, remainder_list, path_spec, resolver_context=None)
Determines if a file contains supported format types.

Args:
  signature_scanner (pysigscan.scanner): signature scanner.
  specification_store (FormatSpecificationStore): specification store.
  remainder_list (list[AnalyzerHelper]): remaining analyzer helpers that do
      not have a format specification.
  path_spec (PathSpec): path specification.
  resolver_context (Optional[Context]): resolver context, where None
      represents the built-in context which is not multi process safe.

Returns:
  list[str]: supported format type indicators.
2.732823
2.717406
1.005674
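Because several signatures can belong to the same format specification, `_GetTypeIndicators` deduplicates identifiers while keeping first-seen order. The core of that loop, with made-up scan result identifiers:

```python
# Hypothetical scan result identifiers; multiple signatures of the same
# format can match, so duplicates are possible.
scan_result_identifiers = ['zip', 'tar', 'zip', 'gzip']

type_indicator_list = []
for identifier in scan_result_identifiers:
  if identifier not in type_indicator_list:
    type_indicator_list.append(identifier)
```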
if analyzer_helper.type_indicator not in cls._analyzer_helpers: raise KeyError( 'Analyzer helper object not set for type indicator: {0:s}.'.format( analyzer_helper.type_indicator)) analyzer_helper = cls._analyzer_helpers[analyzer_helper.type_indicator] cls._FlushCache(analyzer_helper.format_categories) del cls._analyzer_helpers[analyzer_helper.type_indicator]
def DeregisterHelper(cls, analyzer_helper)
Deregisters a format analyzer helper. Args: analyzer_helper (AnalyzerHelper): analyzer helper. Raises: KeyError: if analyzer helper object is not set for the corresponding type indicator.
3.383931
2.49309
1.357324
if (cls._archive_remainder_list is None or cls._archive_store is None): specification_store, remainder_list = cls._GetSpecificationStore( definitions.FORMAT_CATEGORY_ARCHIVE) cls._archive_remainder_list = remainder_list cls._archive_store = specification_store if cls._archive_scanner is None: cls._archive_scanner = cls._GetSignatureScanner(cls._archive_store) return cls._GetTypeIndicators( cls._archive_scanner, cls._archive_store, cls._archive_remainder_list, path_spec, resolver_context=resolver_context)
def GetArchiveTypeIndicators(cls, path_spec, resolver_context=None)
Determines if a file contains supported archive types.

Args:
  path_spec (PathSpec): path specification.
  resolver_context (Optional[Context]): resolver context, where None
      represents the built-in context which is not multi process safe.

Returns:
  list[str]: supported format type indicators.
3.708283
4.196292
0.883705
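`GetArchiveTypeIndicators`, like its compressed stream and file system siblings, builds the specification store and signature scanner lazily on first use and caches them on the class. The caching pattern, reduced to a sketch with hypothetical names:

```python
_cached_scanner = None

def _build_scanner():
  """Stands in for the expensive _GetSignatureScanner construction."""
  return object()

def get_scanner():
  """Builds the scanner on first call and reuses the cached instance."""
  global _cached_scanner
  if _cached_scanner is None:
    _cached_scanner = _build_scanner()
  return _cached_scanner

first = get_scanner()
second = get_scanner()
```

The cache is dropped again by `_FlushCache` when a helper for the category is registered or deregistered.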
if (cls._compressed_stream_remainder_list is None or cls._compressed_stream_store is None): specification_store, remainder_list = cls._GetSpecificationStore( definitions.FORMAT_CATEGORY_COMPRESSED_STREAM) cls._compressed_stream_remainder_list = remainder_list cls._compressed_stream_store = specification_store if cls._compressed_stream_scanner is None: cls._compressed_stream_scanner = cls._GetSignatureScanner( cls._compressed_stream_store) return cls._GetTypeIndicators( cls._compressed_stream_scanner, cls._compressed_stream_store, cls._compressed_stream_remainder_list, path_spec, resolver_context=resolver_context)
def GetCompressedStreamTypeIndicators(cls, path_spec, resolver_context=None)
Determines if a file contains supported compressed stream types.

Args:
  path_spec (PathSpec): path specification.
  resolver_context (Optional[Context]): resolver context, where None
      represents the built-in context which is not multi process safe.

Returns:
  list[str]: supported format type indicators.
3.083316
3.345134
0.921732
if (cls._file_system_remainder_list is None or cls._file_system_store is None): specification_store, remainder_list = cls._GetSpecificationStore( definitions.FORMAT_CATEGORY_FILE_SYSTEM) cls._file_system_remainder_list = remainder_list cls._file_system_store = specification_store if cls._file_system_scanner is None: cls._file_system_scanner = cls._GetSignatureScanner( cls._file_system_store) return cls._GetTypeIndicators( cls._file_system_scanner, cls._file_system_store, cls._file_system_remainder_list, path_spec, resolver_context=resolver_context)
def GetFileSystemTypeIndicators(cls, path_spec, resolver_context=None)
Determines if a file contains supported file system types.

Args:
  path_spec (PathSpec): path specification.
  resolver_context (Optional[Context]): resolver context, where None
      represents the built-in context which is not multi process safe.

Returns:
  list[str]: supported format type indicators.
3.138577
3.447787
0.910316