code | signature | docstring | loss_without_docstring | loss_with_docstring | factor |
---|---|---|---|---|---|
string | string | string | float64 | float64 | float64 |
if (_np.shape(W)[0] == 1):
if W[0,0] < epsilon:
raise _ZeroRankError(
'All eigenvalues are smaller than %g, rank reduction would discard all dimensions.' % epsilon)
Winv = 1./W[0,0]
else:
sm, Vm = spd_eig(W, epsilon=epsilon, method=method)
Winv = _np.dot(Vm, _np.diag(1.0 / sm)).dot(Vm.T)
# return inverse
return Winv | def spd_inv(W, epsilon=1e-10, method='QR') | Compute the matrix inverse of the symmetric positive-definite matrix :math:`W`
by first reducing W to a low-rank approximation that is truly spd
(Moore-Penrose inverse).
Parameters
----------
W : ndarray((m,m), dtype=float)
Symmetric positive-definite (spd) matrix.
epsilon : float
Truncation parameter. Eigenvalues with norms smaller than this cutoff will
be removed.
method : str
Method to perform the decomposition of :math:`W` before inverting. Options are:
* 'QR': QR-based robust eigenvalue decomposition of W
* 'schur': Schur decomposition of W
Returns
-------
Winv : ndarray((m, m))
the Moore-Penrose inverse of the symmetric positive-definite matrix :math:`W` | 6.122593 | 6.345891 | 0.964812 |
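The truncated-eigendecomposition idea described above can be reproduced with plain NumPy. The following is an illustrative stand-alone sketch (a hypothetical re-implementation using `numpy.linalg.eigh` in place of the robust `spd_eig` path, not the library routine itself):

```python
import numpy as np

def spd_pinv_sketch(W, epsilon=1e-10):
    """Moore-Penrose inverse of an spd matrix via truncated eigendecomposition (illustrative)."""
    s, V = np.linalg.eigh(W)      # ascending eigenvalues, eigenvectors in columns
    keep = s > epsilon            # drop numerically zero / negative directions
    s, V = s[keep], V[:, keep]
    return V @ np.diag(1.0 / s) @ V.T

# quick self-check on a well-conditioned spd matrix
A = np.random.rand(5, 5)
W = A @ A.T + 1e-2 * np.eye(5)
assert np.allclose(W @ spd_pinv_sketch(W), np.eye(5), atol=1e-8)
```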
if _np.shape(W)[0] == 1:
if W[0,0] < epsilon:
raise _ZeroRankError(
'All eigenvalues are smaller than %g, rank reduction would discard all dimensions.' % epsilon)
Winv = 1./_np.sqrt(W[0, 0])
sm = _np.ones(1)
else:
sm, Vm = spd_eig(W, epsilon=epsilon, method=method)
Winv = _np.dot(Vm, _np.diag(1.0 / _np.sqrt(sm))).dot(Vm.T)
# return inverse square root
if return_rank:
return Winv, sm.shape[0]
else:
return Winv | def spd_inv_sqrt(W, epsilon=1e-10, method='QR', return_rank=False) | Computes :math:`W^{-1/2}` of the symmetric positive-definite matrix :math:`W`
by first reducing W to a low-rank approximation that is truly spd.
Parameters
----------
W : ndarray((m,m), dtype=float)
Symmetric positive-definite (spd) matrix.
epsilon : float
Truncation parameter. Eigenvalues with norms smaller than this cutoff will
be removed.
method : str
Method to perform the decomposition of :math:`W` before inverting. Options are:
* 'QR': QR-based robust eigenvalue decomposition of W
* 'schur': Schur decomposition of W
Returns
-------
Winv : ndarray((m, m))
:math:`W^{-1/2}` after reduction of W to a low-rank spd matrix | 4.371351 | 4.546438 | 0.961489 |
if (_np.shape(W)[0] == 1):
if W[0,0] < epsilon:
raise _ZeroRankError(
'All eigenvalues are smaller than %g, rank reduction would discard all dimensions.' % epsilon)
L = 1./_np.sqrt(W[0,0])
else:
sm, Vm = spd_eig(W, epsilon=epsilon, method=method, canonical_signs=canonical_signs)
L = _np.dot(Vm, _np.diag(1.0/_np.sqrt(sm)))
# return split
return L | def spd_inv_split(W, epsilon=1e-10, method='QR', canonical_signs=False) | Compute :math:`W^{-1} = L L^T` of the symmetric positive-definite matrix :math:`W`
by first reducing W to a low-rank approximation that is truly spd.
Parameters
----------
W : ndarray((m,m), dtype=float)
Symmetric positive-definite (spd) matrix.
epsilon : float
Truncation parameter. Eigenvalues with norms smaller than this cutoff will
be removed.
method : str
Method to perform the decomposition of :math:`W` before inverting. Options are:
* 'QR': QR-based robust eigenvalue decomposition of W
* 'schur': Schur decomposition of W
canonical_signs : boolean, default = False
Fix signs in L, s.t. the largest element in every row of L is positive.
Returns
-------
L : ndarray((n, r))
Matrix :math:`L` from the decomposition :math:`W^{-1} = L L^T`. | 4.932445 | 5.128781 | 0.961719 |
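The identity :math:`W^{-1} = L L^T` stated above can be checked numerically in a few lines. The sketch below is a simplification under an assumption: a plain `numpy.linalg.eigh` on a well-conditioned matrix stands in for the robust, rank-truncating `spd_eig`.

```python
import numpy as np

A = np.random.rand(4, 4)
W = A @ A.T + np.eye(4)              # well-conditioned spd matrix
s, V = np.linalg.eigh(W)
L = V @ np.diag(1.0 / np.sqrt(s))    # the split returned by spd_inv_split in this case
assert np.allclose(L @ L.T, np.linalg.inv(W))
```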
L = spd_inv_split(C0, epsilon=epsilon, method=method, canonical_signs=True)
Ct_trans = _np.dot(_np.dot(L.T, Ct), L)
# solve the symmetric eigenvalue problem in the new basis
if _np.allclose(Ct.T, Ct):
from scipy.linalg import eigh
l, R_trans = eigh(Ct_trans)
else:
from scipy.linalg import eig
l, R_trans = eig(Ct_trans)
# sort eigenpairs
l, R_trans = sort_by_norm(l, R_trans)
# transform the eigenvectors back to the old basis
R = _np.dot(L, R_trans)
# Change signs of eigenvectors:
if sign_maxelement:
for j in range(R.shape[1]):
imax = _np.argmax(_np.abs(R[:, j]))
R[:, j] *= _np.sign(R[imax, j])
# return result
return l, R | def eig_corr(C0, Ct, epsilon=1e-10, method='QR', sign_maxelement=False) | r""" Solve generalized eigenvalue problem with correlation matrices C0 and Ct
Numerically robust solution of a generalized Hermitian (symmetric) eigenvalue
problem of the form
.. math::
\mathbf{C}_t \mathbf{r}_i = \mathbf{C}_0 \mathbf{r}_i l_i
Computes :math:`m` dominant eigenvalues :math:`l_i` and eigenvectors
:math:`\mathbf{r}_i`, where :math:`m` is the numerical rank of the problem.
This is done by first conducting a Schur decomposition of the symmetric
positive matrix :math:`\mathbf{C}_0`, then truncating its spectrum to
retain only eigenvalues that are numerically greater than zero, then using
this decomposition to define an ordinary eigenvalue problem for
:math:`\mathbf{C}_t` of size :math:`m`, and then solving this eigenvalue
problem.
Parameters
----------
C0 : ndarray (n,n)
time-instantaneous correlation matrix. Must be symmetric positive definite
Ct : ndarray (n,n)
time-lagged correlation matrix. Must be symmetric
epsilon : float
eigenvalue norm cutoff. Eigenvalues of C0 with norms <= epsilon will be
cut off. The remaining number of eigenvalues defines the size of
the output.
method : str
Method to perform the decomposition of :math:`\mathbf{C}_0` before inverting. Options are:
* 'QR': QR-based robust eigenvalue decomposition of :math:`\mathbf{C}_0`
* 'schur': Schur decomposition of :math:`\mathbf{C}_0`
sign_maxelement : bool
If True, re-scale each eigenvector such that its entry with maximal absolute value
is positive.
Returns
-------
l : ndarray (m)
The first m generalized eigenvalues, sorted by descending norm
R : ndarray (n,m)
The first m generalized eigenvectors, as a column matrix. | 3.288053 | 3.364638 | 0.977238 |
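When :math:`\mathbf{C}_0` is strictly positive definite, the result can be cross-checked against SciPy's generalized symmetric solver. Below is a minimal sketch with toy matrices, assuming only `scipy.linalg.eigh`; it skips the rank-reducing whitening path used above.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
C0 = B @ B.T + np.eye(4)     # symmetric positive definite
Ct = 0.5 * (B + B.T)         # symmetric

# generalized problem Ct r = C0 r l
l, R = eigh(Ct, C0)
order = np.argsort(np.abs(l))[::-1]   # sort by descending norm, as eig_corr does
l, R = l[order], R[:, order]
```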
if len(args) < 1:
raise ValueError('need at least one argument')
elif len(args) == 1:
return args[0]
elif len(args) == 2:
return np.dot(args[0], args[1])
else:
return np.dot(args[0], mdot(*args[1:])) | def mdot(*args) | Computes a matrix product of multiple ndarrays
This is a convenience function to avoid constructs such as np.dot(A, np.dot(B, np.dot(C, D))) and instead
use mdot(A, B, C, D).
Parameters
----------
*args : an arbitrarily long list of ndarrays that must be compatible for multiplication,
i.e. args[i].shape[1] = args[i+1].shape[0]. | 1.778931 | 1.715572 | 1.036932 |
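A quick usage sketch; note that `numpy.linalg.multi_dot` offers the same convenience (and additionally optimizes the multiplication order), so the recursive helper above is mainly about readability.

```python
import numpy as np
from functools import reduce

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
C = np.arange(8.0).reshape(4, 2)

chained = reduce(np.dot, (A, B, C))   # same product mdot(A, B, C) computes
assert np.allclose(chained, np.linalg.multi_dot([A, B, C]))
```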
assert len(M.shape) == 2, 'M is not a matrix'
assert M.shape[0] == M.shape[1], 'M is not quadratic'
if scipy.sparse.issparse(M):
C_cc = M.tocsr()
else:
C_cc = M
C_cc = C_cc[sel, :]
if scipy.sparse.issparse(M):
C_cc = C_cc.tocsc()
C_cc = C_cc[:, sel]
if scipy.sparse.issparse(M):
return C_cc.tocoo()
else:
return C_cc | def submatrix(M, sel) | Returns a submatrix of the quadratic matrix M, given by the selected columns and row
Parameters
----------
M : ndarray(n,n)
symmetric matrix
sel : int-array
selection of rows and columns. Element i,j will be selected if both are in sel.
Returns
-------
S : ndarray(m,m)
submatrix with m=len(sel) | 2.377483 | 2.480908 | 0.958312 |
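For a dense matrix the selection above reduces to plain fancy indexing; this toy example illustrates the behaviour (the same index set is applied to rows and columns):

```python
import numpy as np

M = np.arange(16).reshape(4, 4)
sel = np.array([0, 2])
S = M[sel, :][:, sel]     # dense-path result of submatrix(M, sel)
# S == [[ 0,  2],
#       [ 8, 10]]
```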
# norms
evnorms = np.abs(evals)
# sort
I = np.argsort(evnorms)[::-1]
# permute
evals2 = evals[I]
evecs2 = evecs[:, I]
# done
return evals2, evecs2 | def _sort_by_norm(evals, evecs) | Sorts the eigenvalues and eigenvectors by descending norm of the eigenvalues
Parameters
----------
evals: ndarray(n)
eigenvalues
evecs: ndarray(n,n)
eigenvectors in a column matrix
Returns
-------
(evals, evecs) : ndarray(m), ndarray(n,m)
the sorted eigenvalues and eigenvectors | 3.072494 | 3.411066 | 0.900743 |
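The sorting convention (descending absolute value, eigenvector columns permuted along with the eigenvalues) can be seen in this small sketch:

```python
import numpy as np

evals = np.array([0.2, -0.9, 0.5])
evecs = np.eye(3)
order = np.argsort(np.abs(evals))[::-1]
evals_sorted, evecs_sorted = evals[order], evecs[:, order]
# evals_sorted == [-0.9, 0.5, 0.2]; column j of evecs_sorted belongs to evals_sorted[j]
```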
# !! PART OF ORIGINAL DOCSTRING INCOMPATIBLE WITH CLASS INTERFACE !!
# Example
# -------
# We set up multiple stationary models, one for a reference (ground)
# state, and two for biased states, and group them in a
# MultiStationaryModel.
# >>> from pyemma.thermo import StationaryModel, MEMM
# >>> m_1 = StationaryModel(f=[1.0, 0], label='biased 1')
# >>> m_2 = StationaryModel(f=[2.0, 0], label='biased 2')
# >>> m_mult = MEMM([m_1, m_2], [0, 0], label='unbiased')
# Compute the stationary distribution for the two biased models
# >>> m_mult.meval('stationary_distribution')
# [array([ 0.73105858, 0.26894142]), array([ 0.88079708, 0.11920292])]
# We set up multiple Markov state models for different temperatures
# and group them in a MultiStationaryModel.
# >>> import numpy as np
# >>> from pyemma.msm import MSM
# >>> from pyemma.thermo import MEMM
# >>> b = 20 # transition barrier in kJ / mol
# >>> temps = np.arange(300, 500, 25) # temperatures 300 to 500 K
# >>> p_trans = [np.exp(- b / kT) for kT in 0.00831*temps ]
# >>> # build MSMs for different temperatures
# >>> msms = [MSM(P=np.array([[1.0-p, p], [p, 1.0-p]])) for p in p_trans]
# >>> # build Multi-MSM
# >>> msm_mult = MEMM(pi=msms[0].stationary_distribution, label='300 K', models=msms)
# Compute the timescales and see how they decay with temperature
# Greetings to Arrhenius.
# >>> np.hstack(msm_mult.meval('timescales'))
# array([ 1523.83827932, 821.88040004, 484.06386176, 305.87880068,
# 204.64109413, 143.49286817, 104.62539128, 78.83331598])
# !! END OF INCOMPATIBLE PART !!
return [_call_member(M, f, *args, **kw) for M in self.models] | def meval(self, f, *args, **kw) | Evaluates the given function call for all models
Returns the results of the calls in a list | 4.910733 | 4.88465 | 1.00534 |
if isinstance(X, np.ndarray):
if X.ndim == 2:
mapped = self._transform_array(X)
return mapped
else:
raise TypeError('Input has the wrong shape: %s with %i'
' dimensions. Expecting a matrix (2 dimensions)'
% (str(X.shape), X.ndim))
elif isinstance(X, (list, tuple)):
out = []
for x in X:
mapped = self._transform_array(x)
out.append(mapped)
return out
else:
raise TypeError('Input has the wrong type: %s '
'. Either accepting numpy arrays of dimension 2 '
'or lists of such arrays' % (str(type(X)))) | def transform(self, X) | r"""Maps the input data through the transformer to correspondingly
shaped output data array/list.
Parameters
----------
X : ndarray(T, n) or list of ndarray(T_i, n)
The input data, where T is the number of time steps and n is the
number of dimensions.
If a list is provided, the number of time steps is allowed to vary,
but the number of dimensions is required to be consistent.
Returns
-------
Y : ndarray(T, d) or list of ndarray(T_i, d)
The mapped data, where T is the number of time steps of the input
data and d is the output dimension of this transformer. If called
with a list of trajectories, Y will also be a corresponding list of
trajectories | 3.53283 | 3.587924 | 0.984645 |
M = K.shape[0] - 1
# Compute right and left eigenvectors:
l, U = scl.eig(K.T)
l, U = sort_by_norm(l, U)
# Extract the eigenvector for eigenvalue one and normalize:
u = np.real(U[:, 0])
v = np.zeros(M+1)
v[M] = 1.0
u = u / np.dot(u, v)
return u | def _compute_u(K) | Estimate an approximation of the ratio of stationary over empirical distribution from the basis.
Parameters
----------
K : ndarray(M+1, M+1)
time-lagged correlation matrix for the whitened and padded data set.
Returns
-------
u : ndarray(M,)
coefficients of the ratio stationary / empirical dist. from the whitened and expanded basis. | 4.293242 | 4.573685 | 0.938683 |
'Koopman operator on the modified basis (PC|1)'
self._check_estimated()
if not self._estimation_finished:
self._finish_estimation()
return self._K | def K_pc_1(self) | Koopman operator on the modified basis (PC|1) | 15.396499 | 6.370402 | 2.41688 |
'weights in the input basis'
self._check_estimated()
u_mod = self.u_pc_1
N = self._R.shape[0]
u_input = np.zeros(N+1)
u_input[0:N] = self._R.dot(u_mod[0:-1]) # in input basis
u_input[N] = u_mod[-1] - self.mean.dot(self._R.dot(u_mod[0:-1]))
return u_input | def u(self) | weights in the input basis | 5.433008 | 4.729064 | 1.148855 |
'weights in the input basis (encapsulated in an object)'
self._check_estimated()
u_input = self.u
return _KoopmanWeights(u_input[0:-1], u_input[-1]) | def weights(self) | weights in the input basis (encapsulated in an object) | 18.409477 | 9.214987 | 1.997776 |
'weighting transformation'
self._check_estimated()
if not self._estimation_finished:
self._finish_estimation()
return self._R | def R(self) | weighting transformation | 14.081746 | 8.469083 | 1.662724 |
try:
attr = getattr(obj, name)
except AttributeError as e:
if failfast:
raise e
else:
return None
try:
if inspect.ismethod(attr): # call function
return attr(*args, **kwargs)
elif isinstance(attr, property): # call property
return attr.fget(obj)  # evaluate the property getter on the object
else: # now it's an Attribute, so we can just return its value
return attr
except Exception as e:
if failfast:
raise e
else:
return None | def _call_member(obj, name, failfast=True, *args, **kwargs) | Calls the specified method, property or attribute of the given object
Parameters
----------
obj : object
The object that will be used
name : str
Name of method, property or attribute
failfast : bool
If True, will raise an exception when trying a method that doesn't exist. If False, will simply return None
in that case
args : list, optional, default=[]
Arguments to be passed to the method (if any)
kwargs: dict | 2.950366 | 3.058767 | 0.96456 |
# run estimation
model = None
try: # catch any exception
estimator.estimate(X, **params)
model = estimator.model
except KeyboardInterrupt:
# we want to be able to interactively interrupt the worker, no matter of failfast=False.
raise
except:
e = sys.exc_info()[1]
if isinstance(estimator, Loggable):
estimator.logger.warning("Ignored error during estimation: %s" % e)
if failfast:
raise # re-raise
elif return_exceptions:
model = e
else:
pass # just return model=None
# deal with results
res = []
# deal with result
if evaluate is None: # we want full models
res.append(model)
# we want to evaluate function(s) of the model
elif _types.is_iterable(evaluate):
values = [] # the function values the model
for ieval, name in enumerate(evaluate):
# get method/attribute name and arguments to be evaluated
#name = evaluate[ieval]
args = ()
if evaluate_args is not None:
args = evaluate_args[ieval]
# wrap single arguments in an iterable again to pass them.
if _types.is_string(args):
args = (args, )
# evaluate
try:
# try calling method/property/attribute
value = _call_member(estimator.model, name, failfast, *args)
# couldn't find method/property/attribute
except AttributeError as e:
if failfast:
raise e # raise an AttributeError
else:
value = None # we just ignore it and return None
values.append(value)
# if we only have one value, unpack it
if len(values) == 1:
values = values[0]
res.append(values)
else:
raise ValueError('Invalid setting for evaluate: ' + str(evaluate))
if len(res) == 1:
res = res[0]
return res | def _estimate_param_scan_worker(estimator, params, X, evaluate, evaluate_args,
failfast, return_exceptions) | Method that runs estimation for several parameter settings.
Defined as a worker for parallelization | 4.583011 | 4.636336 | 0.988498 |
# set params
if params:
self.set_params(**params)
self._model = self._estimate(X)
# ensure _estimate returned something
assert self._model is not None
self._estimated = True
return self | def estimate(self, X, **params) | Estimates the model given the data X
Parameters
----------
X : object
A reference to the data from which the model will be estimated
params : dict
New estimation parameter values. The parameters must that have been
announced in the __init__ method of this estimator. The present
settings will overwrite the settings of parameters given in the
__init__ method, i.e. the parameter values after this call will be
those that have been used for this estimation. Use this option if
only one or a few parameters change with respect to
the __init__ settings for this run, and if you don't need to
remember the original settings of these changed parameters.
Returns
-------
estimator : object
The estimated estimator with the model being available. | 6.064334 | 8.458148 | 0.716981 |
signal.signal(SIGNAL_STACKTRACE, signal.SIG_IGN)
signal.signal(SIGNAL_PDB, signal.SIG_IGN) | def unregister_signal_handlers() | set signal handlers to default | 4.568816 | 4.188223 | 1.090872 |
strategy = strategy.lower()
if strategy == 'random':
return SelectionStrategyRandom(oasis_obj, strategy, nsel=nsel, neig=neig)
elif strategy == 'oasis':
return SelectionStrategyOasis(oasis_obj, strategy, nsel=nsel, neig=neig)
elif strategy == 'spectral-oasis':
return SelectionStrategySpectralOasis(oasis_obj, strategy, nsel=nsel, neig=neig)
else:
raise ValueError('Selected strategy is unknown: '+str(strategy)) | def selection_strategy(oasis_obj, strategy='spectral-oasis', nsel=1, neig=None) | Factory for selection strategy object
Returns
-------
selstr : SelectionStrategy
Selection strategy object | 1.870425 | 2.128595 | 0.878714 |
# err_i = sum_j R_{k,ij} A_{k,ji} - d_i
self._err = np.sum(np.multiply(self._R_k, self._C_k.T), axis=0) - self._d | def _compute_error(self) | Evaluate the absolute error of the Nystroem approximation for each column | 7.266706 | 6.594658 | 1.101908 |
self._selection_strategy = selection_strategy(self, strategy, nsel, neig) | def set_selection_strategy(self, strategy='spectral-oasis', nsel=1, neig=None) | Defines the column selection strategy
Parameters
----------
strategy : str
One of the following strategies to select new columns:
random : randomly choose from non-selected columns
oasis : maximal approximation error in the diagonal of :math:`A`
spectral-oasis : selects the nsel columns that are most distanced in the oASIS-error-scaled dominant eigenspace
nsel : int
number of columns to be selected in each round
neig : int or None, optional, default None
Number of eigenvalues to be optimized by the selection process.
If None, use the whole available eigenspace | 4.878883 | 6.478417 | 0.753098 |
# compute R_k and W_k_inv
Wk = self._C_k[self._columns, :]
self._W_k_inv = np.linalg.pinv(Wk)
self._R_k = np.dot(self._W_k_inv, self._C_k.T) | def update_inverse(self) | Recomputes W_k_inv and R_k given the current column selection
When computed, the block matrix inverse W_k_inv will be updated. This is useful when you want to compute
eigenvalues or get an approximation for the full matrix or individual columns.
Calling this function is not strictly necessary, but then you rely on the fact that the updates did not
accumulate large errors. That depends very much on how columns were added. Adding columns with very small
Schur complement causes accumulation of errors and is more likely to make it necessary to update the inverse. | 4.269194 | 3.174486 | 1.344846 |
# convenience access
k = self._k
d = self._d
R = self._R_k
Winv = self._W_k_inv
b_new = col[self._columns][:, None]
d_new = d[icol]
q_new = R[:, icol][:, None]
# calculate R_new
schur_complement = d_new - np.dot(b_new.T, q_new) # Schur complement
if np.isclose(schur_complement, 0):
return False
# otherwise complete the update
s_new = 1./schur_complement
qC = np.dot(b_new.T, R)
# update Winv
Winv_new = np.zeros((k+1, k+1))
Winv_new[0:k, 0:k] = Winv+s_new*np.dot(q_new, q_new.T)
Winv_new[0:k, k] = -s_new*q_new[0:k, 0]
Winv_new[k, 0:k] = -s_new*q_new[0:k, 0].T
Winv_new[k, k] = s_new
R_new = np.vstack((R + s_new * np.dot(q_new, (qC - col.T)), s_new*(-qC + col.T)))
# forcing known structure on R_new
sel_new = np.append(self._columns, icol)
R_new[:, sel_new] = np.eye(k+1)
# update Winv
self._W_k_inv = Winv_new
# update R
self._R_k = R_new
# update C0_k
self._C_k = np.hstack((self._C_k, col[:, None]))
# update number of selected columns
self._k += 1
# add column to present selection
self._columns = np.append(self._columns, icol)
# update error
if update_error:
self._compute_error()
# exit with success
return True | def add_column(self, col, icol, update_error=True) | Attempts to add a single column of :math:`A` to the Nystroem approximation and updates the local matrices
Parameters
----------
col : ndarray((N,), dtype=float)
new column of :math:`A`
icol : int
index of new column within :math:`A`
update_error : bool, optional, default = True
If True, the absolute and relative approximation error will be updated after adding the column.
If False, then not.
Return
------
success : bool
True if the new column was added to the approximation. False if not. | 3.283329 | 3.291654 | 0.997471 |
added = []
for (i, c) in enumerate(columns_new):
if self.add_column(C_k_new[:, i], c, update_error=False):
added.append(c)
# update error only once
self._compute_error()
# return the columns that were successfully added
return np.array(added) | def add_columns(self, C_k_new, columns_new) | r""" Attempts to add a set of new columns of :math:`A` to the Nystroem approximation and updates the local matrices
Parameters
----------
C_k_new : ndarray((N,k), dtype=float)
:math:`k` new columns of :math:`A`
columns_new : int
indices of new columns within :math:`A`, in the same order as the C_k_new columns
Return
------
cols_added : ndarray of int
Columns that were added successfully. Columns are only added when their Schur complement exceeds 0,
which is normally true for columns that were not yet added, but the Schur complement may become 0 even
for new columns as a result of numerical cancellation errors. | 5.103094 | 5.014937 | 1.017579 |
return np.dot(self._C_k, self._R_k[:, i]) | def approximate_column(self, i) | r""" Computes the Nystroem approximation of column :math:`i` of the matrix :math:`A \in \mathbb{R}^{n \times n}`. | 17.112984 | 14.197552 | 1.205348 |
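The column-based Nystroem approximation these methods maintain incrementally can be written in closed form as :math:`A \approx C_k W_k^{+} C_k^T`, with :math:`C_k` the selected columns and :math:`W_k` their intersection with the selected rows. A self-contained NumPy sketch of that formula (illustrative only, not the incremental update used above):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6))
A = X @ X.T                                  # rank-6 symmetric psd target
cols = np.array([0, 5, 10, 15, 20, 25])      # selected column indices
C_k = A[:, cols]
W_k = A[np.ix_(cols, cols)]
A_nystroem = C_k @ np.linalg.pinv(W_k) @ C_k.T
# exact here because the selected columns span the (rank-6) range of A
assert np.allclose(A, A_nystroem, atol=1e-6)
```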
# decompose the selected-rows block W_k = C_k[columns, :] via the robust spd decomposition
Wk = self._C_k[self._columns, :]
L0 = spd_inv_split(Wk, epsilon=epsilon)
L = np.dot(self._C_k, L0)
return L | def approximate_cholesky(self, epsilon=1e-6) | r""" Compute low-rank approximation to the Cholesky decomposition of target matrix.
The decomposition will be conducted while ensuring that the spectrum of `A_k^{-1}` is positive.
Parameters
----------
epsilon : float, optional, default 1e-6
Cutoff for eigenvalue norms. If negative eigenvalues occur, with norms larger than epsilon, the largest
negative eigenvalue norm will be used instead of epsilon, i.e. a band including all negative eigenvalues
will be cut off.
Returns
-------
L : ndarray((n,m), dtype=float)
Cholesky matrix such that `A \approx L L^{\top}`. Number of columns :math:`m` is at most the number of columns
used in the Nystroem approximation, but may be smaller depending on epsilon. | 15.623005 | 16.82412 | 0.928608 |
L = self.approximate_cholesky(epsilon=epsilon)
LL = np.dot(L.T, L)
s, V = np.linalg.eigh(LL)
# sort
s, V = sort_by_norm(s, V)
# back-transform eigenvectors
Linv = np.linalg.pinv(L.T)
V = np.dot(Linv, V)
# normalize eigenvectors
ncol = V.shape[1]
for i in range(ncol):
if not np.allclose(V[:, i], 0):
V[:, i] /= np.sqrt(np.dot(V[:, i], V[:, i]))
return s, V | def approximate_eig(self, epsilon=1e-6) | Compute low-rank approximation of the eigenvalue decomposition of target matrix.
If spd is True, the decomposition will be conducted while ensuring that the spectrum of `A_k^{-1}` is positive.
Parameters
----------
epsilon : float, optional, default 1e-6
Cutoff for eigenvalue norms. If negative eigenvalues occur, with norms larger than epsilon, the largest
negative eigenvalue norm will be used instead of epsilon, i.e. a band including all negative eigenvalues
will be cut off.
Returns
-------
s : ndarray((m,), dtype=float)
approximated eigenvalues. Number of eigenvalues returned is at most the number of columns used in the
Nystroem approximation, but may be smaller depending on epsilon.
W : ndarray((n,m), dtype=float)
approximated eigenvectors in columns. Number of eigenvectors returned is at most the number of columns
used in the Nystroem approximation, but may be smaller depending on epsilon. | 2.704237 | 2.759692 | 0.979905 |
err = self._oasis_obj.error
if np.allclose(err, 0):
return None
nsel = self._check_nsel()
if nsel is None:
return None
return self._select(nsel, err) | def select(self) | Selects next column indexes according to defined strategy
Returns
-------
cols : ndarray((nsel,), dtype=int)
selected columns | 7.477191 | 6.611041 | 1.131016 |
if not hasattr(self, '_n_jobs'):
self._n_jobs = get_n_jobs(logger=getattr(self, 'logger'))
return self._n_jobs | def n_jobs(self) | Returns number of jobs/threads to use during assignment of data.
Returns
-------
If None it will return the setting of 'PYEMMA_NJOBS' or
'SLURM_CPUS_ON_NODE' environment variable. If none of these environment variables exist,
the number of processors /or cores is returned.
Notes
-----
This setting will effectively be multiplied by the number of threads used by NumPy for
algorithms which use multiple processes. So take care if you choose this manually. | 3.679451 | 4.027583 | 0.913563 |
if name not in self._parent:
raise KeyError('model "{}" not present'.format(name))
del self._parent[name]
if self._current_model_group == name:
self._current_model_group = None | def delete(self, name) | deletes model with given name | 4.150004 | 3.929052 | 1.056235 |
if name not in self._parent:
raise KeyError('model "{}" not present'.format(name))
self._current_model_group = name | def select_model(self, name) | choose an existing model | 7.422061 | 7.229014 | 1.026704 |
f = self._parent
return {name: {a: f[name].attrs[a]
for a in H5File.stored_attributes}
for name in f.keys()} | def models_descriptive(self) | list all stored models in given file.
Returns
-------
dict: {model_name: {'repr': 'string representation', 'created': 'human readable date', ...}} | 10.785359 | 13.964381 | 0.772348 |
from pyemma import config
# no value yet, obtain from config
if not hasattr(self, "_show_progress"):
val = config.show_progress_bars
self._show_progress = val
# config disabled progress?
elif not config.show_progress_bars:
return False
return self._show_progress | def show_progress(self) | whether to show the progress of heavy calculations on this object. | 7.979916 | 7.342548 | 1.086805 |
if not self.show_progress:
return
if tqdm_args is None:
tqdm_args = {}
if not isinstance(amount_of_work, Integral):
raise ValueError('amount_of_work has to be of integer type. But is {}'.format(type(amount_of_work)))
# if there is not enough work to justify the overhead of a progress bar, don't create one.
if amount_of_work <= ProgressReporterMixin._pg_threshold:
pg = None
else:
args = dict(total=amount_of_work, desc=description, dynamic_ncols=True, **tqdm_args)
if _attached_to_ipy_notebook_with_widgets():
from .notebook import my_tqdm_notebook
pg = my_tqdm_notebook(leave=False, **args)
else:
import tqdm
pg = tqdm.tqdm(leave=True, **args)
self._prog_rep_progressbars[stage] = pg
self._prog_rep_descriptions[stage] = description
assert stage in self._prog_rep_progressbars
assert stage in self._prog_rep_descriptions | def _progress_register(self, amount_of_work, description='', stage=0, tqdm_args=None) | Registers a progress which can be reported/displayed via a progress bar.
Parameters
----------
amount_of_work : int
Amount of steps the underlying algorithm has to perform.
description : str, optional
This string will be displayed in the progress bar widget.
stage : int, optional, default=0
If the algorithm has multiple different stages (eg. calculate means
in the first pass over the data, calculate covariances in the second),
one needs to estimate different times of arrival. | 3.767807 | 3.88283 | 0.970377 |
self.__check_stage_registered(stage)
self._prog_rep_descriptions[stage] = description
if self._prog_rep_progressbars[stage]:
self._prog_rep_progressbars[stage].set_description(description, refresh=False) | def _progress_set_description(self, stage, description) | set description of an already existing progress | 4.215308 | 4.116882 | 1.023908 |
if not self.show_progress:
return
self.__check_stage_registered(stage)
if not self._prog_rep_progressbars[stage]:
return
pg = self._prog_rep_progressbars[stage]
pg.update(int(numerator_increment)) | def _progress_update(self, numerator_increment, stage=0, show_eta=True, **kw) | Updates the progress. Will update progress bars or other progress output.
Parameters
----------
numerator_increment : int
increment of the partial work done in the current stage
stage : int, nonnegative, default=0
Current stage of the algorithm, 0 or greater | 5.501966 | 6.068234 | 0.906683 |
if not self.show_progress:
return
self.__check_stage_registered(stage)
if not self._prog_rep_progressbars[stage]:
return
pg = self._prog_rep_progressbars[stage]
pg.desc = description
increment = int(pg.total - pg.n)
if increment > 0:
pg.update(increment)
pg.refresh(nolock=True)
pg.close()
self._prog_rep_progressbars.pop(stage, None)
self._prog_rep_descriptions.pop(stage, None)
self._prog_rep_callbacks.pop(stage, None) | def _progress_force_finish(self, stage=0, description=None) | forcefully finish the progress for given stage | 3.688817 | 3.567989 | 1.033864 |
if not isinstance(xyzall, _np.ndarray):
raise ValueError('Input data has to be a numpy array. Did you concatenate your data?')
if xyzall.shape[1] > 50 and not ignore_dim_warning:
raise RuntimeError('This function is only useful for fewer than 50 dimensions. Turn off this warning '
'at your own risk with ignore_dim_warning=True.')
if feature_labels is not None:
if not isinstance(feature_labels, list):
from pyemma.coordinates.data.featurization.featurizer import MDFeaturizer as _MDFeaturizer
if isinstance(feature_labels, _MDFeaturizer):
feature_labels = feature_labels.describe()
else:
raise ValueError('feature_labels must be a list of feature labels, '
'a pyemma featurizer object or None.')
if not xyzall.shape[1] == len(feature_labels):
raise ValueError('feature_labels must have the same dimension as the input data xyzall.')
# make nice plots if user does not decide on color and transparency
if 'color' not in kwargs.keys():
kwargs['color'] = 'b'
if 'alpha' not in kwargs.keys():
kwargs['alpha'] = .25
import matplotlib.pyplot as _plt
# check input
if ax is None:
fig, ax = _plt.subplots()
else:
fig = ax.get_figure()
hist_offset = -.2
for h, coordinate in enumerate(reversed(xyzall.T)):
hist, edges = _np.histogram(coordinate, bins=n_bins)
if not ylog:
y = hist / hist.max()
else:
y = _np.zeros_like(hist) + _np.NaN
pos_idx = hist > 0
y[pos_idx] = _np.log(hist[pos_idx]) / _np.log(hist[pos_idx]).max()
ax.fill_between(edges[:-1], y + h + hist_offset, y2=h + hist_offset, **kwargs)
ax.axhline(y=h + hist_offset, xmin=0, xmax=1, color='k', linewidth=.2)
ax.set_ylim(hist_offset, h + hist_offset + 1)
# formatting
if feature_labels is None:
feature_labels = [str(n) for n in range(xyzall.shape[1])]
ax.set_ylabel('Feature histograms')
ax.set_yticks(_np.array(range(len(feature_labels))) + .3)
ax.set_yticklabels(feature_labels[::-1])
ax.set_xlabel('Feature values')
# save
if outfile is not None:
fig.savefig(outfile)
return fig, ax | def plot_feature_histograms(xyzall,
feature_labels=None,
ax=None,
ylog=False,
outfile=None,
n_bins=50,
ignore_dim_warning=False,
**kwargs) | r"""Feature histogram plot
Parameters
----------
xyzall : np.ndarray(T, d)
(Concatenated list of) input features; containing time series data to be plotted.
Array of T data points in d dimensions (features).
feature_labels : iterable of str or pyemma.Featurizer, optional, default=None
Labels of histogrammed features, defaults to feature index.
ax : matplotlib.Axes object, optional, default=None.
The ax to plot to; if ax=None, a new ax (and fig) is created.
ylog : boolean, default=False
If True, plot logarithm of histogram values.
n_bins : int, default=50
Number of bins the histogram uses.
outfile : str, default=None
If not None, saves plot to this file.
ignore_dim_warning : boolean, default=False
Enable plotting for more than 50 dimensions (on your own risk).
**kwargs: kwargs passed to pyplot.fill_between. See the doc of pyplot for options.
Returns
-------
fig : matplotlib.Figure object
The figure in which the used ax resides.
ax : matplotlib.Axes object
The ax in which the histograms were plotted. | 2.902224 | 2.786741 | 1.04144 |
old_state = self.in_memory
if not old_state and op_in_mem:
self._map_to_memory()
elif not op_in_mem and old_state:
self._clear_in_memory() | def in_memory(self, op_in_mem) | r"""
If set to True, the output will be stored in memory. | 4.556857 | 4.488949 | 1.015128 |
self._mapping_to_mem_active = True
try:
self._Y = self.get_output(stride=stride)
from pyemma.coordinates.data import DataInMemory
self._Y_source = DataInMemory(self._Y)
finally:
self._mapping_to_mem_active = False
self._in_memory = True | def _map_to_memory(self, stride=1) | r"""Maps results to memory. Will be stored in attribute :attr:`_Y`. | 6.902261 | 6.248493 | 1.104628 |
if dim is None or (isinstance(dim, float) and dim == 1.0):
return min(rank0, rankt)
if isinstance(dim, float):
return np.searchsorted(VAMPModel._cumvar(singular_values), dim) + 1
else:
return np.min([rank0, rankt, dim]) | def _dimension(rank0, rankt, dim, singular_values) | output dimension | 5.125017 | 4.88249 | 1.049673 |
if self.C00 is None: # no data yet
if isinstance(self.dim, int): # return user choice
warnings.warn('Returning user-input for dimension, since this model has not yet been estimated.')
return self.dim
raise RuntimeError('Please call set_model_params prior using this method.')
if not self._svd_performed:
self._diagonalize()
return self._dimension(self._rank0, self._rankt, self.dim, self.singular_values) | def dimension(self) | output dimension | 11.90655 | 11.303652 | 1.053337 |
L0 = spd_inv_split(self.C00, epsilon=self.epsilon)
self._rank0 = L0.shape[1] if L0.ndim == 2 else 1
Lt = spd_inv_split(self.Ctt, epsilon=self.epsilon)
self._rankt = Lt.shape[1] if Lt.ndim == 2 else 1
W = np.dot(L0.T, self.C0t).dot(Lt)
from scipy.linalg import svd
A, s, BT = svd(W, compute_uv=True, lapack_driver='gesvd')
self._singular_values = s
# don't pass any values in the argument list that call _diagonalize again!!!
m = VAMPModel._dimension(self._rank0, self._rankt, self.dim, self._singular_values)
U = np.dot(L0, A[:, :m])
V = np.dot(Lt, BT[:m, :].T)
# scale vectors
if self.scaling is not None:
U *= s[np.newaxis, 0:m] # scaled left singular functions induce a kinetic map
V *= s[np.newaxis, 0:m] # scaled right singular functions induce a kinetic map wrt. backward propagator
self._U = U
self._V = V
self._svd_performed = True | def _diagonalize(self) | Performs SVD on covariance matrices and save left, right singular vectors and values in the model.
Parameters
----------
scaling : None or string, default=None
Scaling to be applied to the VAMP modes upon transformation
* None: no scaling will be applied, variance of the singular
functions is 1
* 'kinetic map' or 'km': singular functions are scaled by
singular value. Note that only the left singular functions
induce a kinetic map. | 5.324825 | 4.38531 | 1.214241 |
# TODO: implement for TICA too
if test_model is None:
test_model = self
Uk = self.U[:, 0:self.dimension()]
Vk = self.V[:, 0:self.dimension()]
res = None
if score_method == 'VAMP1' or score_method == 'VAMP2':
A = spd_inv_sqrt(Uk.T.dot(test_model.C00).dot(Uk))
B = Uk.T.dot(test_model.C0t).dot(Vk)
C = spd_inv_sqrt(Vk.T.dot(test_model.Ctt).dot(Vk))
ABC = mdot(A, B, C)
if score_method == 'VAMP1':
res = np.linalg.norm(ABC, ord='nuc')
elif score_method == 'VAMP2':
res = np.linalg.norm(ABC, ord='fro')**2
elif score_method == 'VAMPE':
Sk = np.diag(self.singular_values[0:self.dimension()])
res = np.trace(2.0 * mdot(Vk, Sk, Uk.T, test_model.C0t) - mdot(Vk, Sk, Uk.T, test_model.C00, Uk, Sk, Vk.T, test_model.Ctt))
else:
raise ValueError('"score" should be one of VAMP1, VAMP2 or VAMPE')
# add the contribution (+1) of the constant singular functions to the result
assert res is not None
return res + 1 | def score(self, test_model=None, score_method='VAMP2') | Compute the VAMP score for this model or the cross-validation score between self and a second model.
Parameters
----------
test_model : VAMPModel, optional, default=None
If `test_model` is not None, this method computes the cross-validation score
between self and `test_model`. It is assumed that self was estimated from
the "training" data and `test_model` was estimated from the "test" data. The
score is computed for one realization of self and `test_model`. Estimation
of the average cross-validation score and partitioning of data into test and
training part is not performed by this method.
If `test_model` is None, this method computes the VAMP score for the model
contained in self.
score_method : str, optional, default='VAMP2'
Available scores are based on the variational approach for Markov processes [1]_:
* 'VAMP1' Sum of singular values of the half-weighted Koopman matrix [1]_ .
If the model is reversible, this is equal to the sum of
Koopman matrix eigenvalues, also called Rayleigh quotient [1]_.
* 'VAMP2' Sum of squared singular values of the half-weighted Koopman matrix [1]_ .
If the model is reversible, this is equal to the kinetic variance [2]_ .
* 'VAMPE' Approximation error of the estimated Koopman operator with respect to
the true Koopman operator up to an additive constant [1]_ .
Returns
-------
score : float
If `test_model` is not None, returns the cross-validation VAMP score between
self and `test_model`. Otherwise return the selected VAMP-score of self.
References
----------
.. [1] Wu, H. and Noe, F. 2017. Variational approach for learning Markov processes from time series data.
arXiv:1707.04659v1
.. [2] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation.
J. Chem. Theory. Comput. doi:10.1021/acs.jctc.5b00553 | 3.490217 | 3.585696 | 0.973373 |
import functools, numpy as np
if array.ndim == 1:
shape = (array.shape[0], 1)
else:
# hold first dimension, multiply the rest
shape = (array.shape[0], functools.reduce(lambda x, y: x * y, array.shape[1:]))
if not dry:
array = np.reshape(array, shape)
return array, shape | def _reshape(self, array, dry=False) | reshape given array to 2d. If dry is True, the actual reshaping is not performed.
returns tuple (array, shape_2d) | 3.231917 | 2.87535 | 1.124008 |
dws = _DWS()
us_data = dws.us_sample(
ntherm=ntherm, us_fc=us_fc, us_length=us_length, md_length=md_length, nmd=nmd)
us_data.update(centers=dws.centers)
return us_data | def get_umbrella_sampling_data(ntherm=11, us_fc=20.0, us_length=500, md_length=1000, nmd=20) | Continuous MCMC process in an asymmetric double well potential using umbrella sampling.
Parameters
----------
ntherm: int, optional, default=11
Number of umbrella states.
us_fc: double, optional, default=20.0
Force constant in kT/length^2 for each umbrella.
us_length: int, optional, default=500
Length in steps of each umbrella trajectory.
md_length: int, optional, default=1000
Length in steps of each unbiased trajectory.
nmd: int, optional, default=20
Number of unbiased trajectories.
Returns
-------
dict - keys shown below in brackets
Trajectory data from umbrella sampling (us_trajs) and unbiased (md_trajs) MCMC runs and
their discretised counterparts (us_dtrajs + md_dtrajs + centers). The umbrella sampling
parameters (us_centers + us_force_constants) are in the same order as the umbrella sampling
trajectories. Energies are given in kT, lengths in arbitrary units. | 3.745337 | 4.171662 | 0.897805 |
dws = _DWS()
mt_data = dws.mt_sample(
kt0=kt0, kt1=kt1, length0=length0, length1=length1, n0=n0, n1=n1)
mt_data.update(centers=dws.centers)
return mt_data | def get_multi_temperature_data(kt0=1.0, kt1=5.0, length0=10000, length1=10000, n0=10, n1=10) | Continuous MCMC process in an asymmetric double well potential at multiple temperatures.
Parameters
----------
kt0: double, optional, default=1.0
Temperature in kT for the first thermodynamic state.
kt1: double, optional, default=5.0
Temperature in kT for the second thermodynamic state.
length0: int, optional, default=10000
Trajectory length in steps for the first thermodynamic state.
length1: int, optional, default=10000
Trajectory length in steps for the second thermodynamic state.
n0: int, optional, default=10
Number of trajectories in the first thermodynamic state.
n1: int, optional, default=10
Number of trajectories in the second thermodynamic state.
Returns
-------
dict - keys shown below in brackets
Trajectory (trajs), energy (energy_trajs), and temperature (temp_trajs) data from the MCMC
runs as well as the discretised version (dtrajs + centers). Energies and temperatures are
given in kT, lengths in arbitrary units. | 3.988498 | 4.845522 | 0.823131 |
from .potentials import PrinzModel
pw = PrinzModel(dt, kT, mass=mass, damping=damping)
import warnings
import numpy as np
with warnings.catch_warnings(record=True) as w:
trajs = [pw.sample(x0, nstep, nskip=nskip) for _ in range(ntraj)]
if not np.all(tuple(np.isfinite(x) for x in trajs)):
raise RuntimeError('integrator detected invalid values in output. If you used a high temperature value (kT),'
' try decreasing the integration time step dt.')
return trajs | def get_quadwell_data(ntraj=10, nstep=10000, x0=0., nskip=1, dt=0.001, kT=1.0, mass=1.0, damping=1.0) | r""" Performs a Brownian dynamics simulation in the Prinz potential (quad well).
Parameters
----------
ntraj: int, default=10
how many realizations will be computed
nstep: int, default=10000
number of time steps
x0: float, default 0
starting point for sampling
nskip: int, default=1
number of integrator steps
dt: float, default=0.001
time step size
kT: float, default=1.0
temperature factor
mass: float, default=1.0
mass
damping: float, default=1.0
damping factor of integrator
Returns
-------
trajectories : list of ndarray
realizations of the Brownian diffusion in the quadwell potential. | 5.567806 | 5.348135 | 1.041074 |
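The samplers above integrate overdamped Langevin (Brownian) dynamics. As a rough, self-contained illustration of what such an integrator does, here is an Euler-Maruyama sketch in a generic quartic double well; the potential :math:`V(x) = x^4 - 2x^2` is an assumption chosen for illustration and is not the Prinz potential used by the library.

```python
import numpy as np

def brownian_sample_sketch(x0=0.0, nstep=10000, dt=1e-3, kT=1.0, damping=1.0, seed=0):
    """Euler-Maruyama integration of dx = -V'(x)/damping dt + sqrt(2 kT dt / damping) dW."""
    rng = np.random.default_rng(seed)
    grad = lambda x: 4.0 * x**3 - 4.0 * x   # V(x) = x**4 - 2*x**2 (assumed toy potential)
    traj = np.empty(nstep)
    x = x0
    for t in range(nstep):
        x += -grad(x) / damping * dt + np.sqrt(2.0 * kT * dt / damping) * rng.normal()
        traj[t] = x
    return traj

trajs = [brownian_sample_sketch(seed=i) for i in range(10)]   # 10 realizations
```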
extensions = ["%s%s" % (x, suffix) for x in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y']]
if num == 0:
return "0%s" % extensions[0]
else:
n_bytes = float(abs(num))
place = int(math.floor(math.log(n_bytes, 1024)))
return "%.1f%s" % (np.sign(num) * (n_bytes / 1024** place), extensions[place]) | def bytes_to_string(num, suffix='B') | Returns the size of num (bytes) in a human readable form up to Yottabytes (YB).
:param num: The size of interest in bytes.
:param suffix: A suffix, default 'B' for 'bytes'.
:return: a human readable representation of a size in bytes | 2.653489 | 2.730047 | 0.971957 |
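Two doctest-style examples of the formatting implied by the implementation above (following the convention used for `string_to_bytes` below):

```python
>>> bytes_to_string(0)
'0B'
>>> bytes_to_string(1536)
'1.5KB'
```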
if string == '0':
return 0
import re
match = re.match(r'(\d+\.?\d?)\s?([bBkKmMgGtTpPeEzZyY])?(\D?)', string)
if not match:
raise RuntimeError('"{}" does not match "[integer] [suffix]"'.format(string))
if match.group(3):
raise RuntimeError('unknown suffix: "{}"'.format(match.group(3)))
value = float(match.group(1))
if match.group(2) is None:
return int(value)
suffix = match.group(2).upper()
extensions = ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y']
x = extensions.index(suffix)
value *= 1024**x
return int(value) | def string_to_bytes(string) | Returns the amount of bytes in a human readable form up to Yottabytes (YB).
:param string: integer with suffix (b, k, m, g, t, p, e, z, y)
:return: amount of bytes in string representation
>>> string_to_bytes('1024')
1024
>>> string_to_bytes('1024k')
1048576
>>> string_to_bytes('4 G')
4294967296
>>> string_to_bytes('4.5g')
4831838208
>>> try:
... string_to_bytes('1x')
... except RuntimeError as re:
... assert 'unknown suffix' in str(re) | 3.10781 | 2.738618 | 1.13481 |
res = TimeUnit(self)
res._factor = self._factor * factor
res._unit = self._unit
return res | def get_scaled(self, factor) | Get a new time unit, scaled by the given factor | 7.472986 | 6.49289 | 1.150949 |
if self._unit == self._UNIT_STEP:
return times, 'step' # nothing to do
m = np.mean(times)
mult = 1.0
cur_unit = self._unit
# numbers are too small. Making them larger and reducing the unit:
if (m < 0.001):
while mult*m < 0.001 and cur_unit >= 0:
mult *= 1000
cur_unit -= 1
return mult*times, self._unit_names[cur_unit]
# numbers are too large. Making them smaller and increasing the unit:
if (m > 1000):
while mult*m > 1000 and cur_unit <= 5:
mult /= 1000
cur_unit += 1
return mult*times, self._unit_names[cur_unit]
# nothing to do
return times, self._unit | def rescale_around1(self, times) | Suggests a rescaling factor and new physical time unit to balance the given time multiples around 1.
Parameters
----------
times : float array
array of times in multiple of the present elementary unit | 3.147119 | 3.172415 | 0.992026 |
from .h5file import H5File
with H5File(filename, mode='r') as f:
return f.models_descriptive | def list_models(filename) | Lists all models in given filename.
Parameters
----------
filename: str
path to filename, where the model has been stored.
Returns
-------
obj: dict
A mapping by name and a comprehensive description like this:
{model_name: {'repr': 'string representation', 'created': 'human readable date', ...}} | 6.276388 | 7.666292 | 0.818699 |
if not is_iterable(l):
return False
return all(is_int(value) for value in l) | def is_iterable_of_int(l) | r""" Checks if l is iterable and contains only integral types | 5.252733 | 5.36026 | 0.97994 |
if not is_iterable(l):
return False
return all(is_float(value) for value in l) | def is_iterable_of_float(l) | r""" Checks if l is iterable and contains only floating point types | 5.195973 | 4.761707 | 1.0912 |
if isinstance(l, np.ndarray):
if l.ndim == 1 and (l.dtype.kind == 'i' or l.dtype.kind == 'u'):
return True
return False | def is_int_vector(l) | r"""Checks if l is a numpy array of integers | 2.83163 | 2.724355 | 1.039376 |
if isinstance(l, np.ndarray):
if l.ndim == 2 and (l.dtype == bool):
return True
return False | def is_bool_matrix(l) | r"""Checks if l is a 2D numpy array of bools | 3.947834 | 3.5034 | 1.126858 |
if isinstance(l, np.ndarray):
if l.dtype.kind == 'f':
return True
return False | def is_float_array(l) | r"""Checks if l is a numpy array of floats (any dimension | 3.898977 | 5.215244 | 0.747612 |
if isinstance(dtrajs, list):
# elements are ints? then wrap into a list
if is_list_of_int(dtrajs):
return [np.array(dtrajs, dtype=int)]
else:
for i, dtraj in enumerate(dtrajs):
dtrajs[i] = ensure_dtraj(dtraj)
return dtrajs
else:
return [ensure_dtraj(dtrajs)] | def ensure_dtraj_list(dtrajs) | r"""Makes sure that dtrajs is a list of discrete trajectories (array of int) | 2.986731 | 2.813048 | 1.061742 |
if is_int_vector(I):
return I
elif is_int(I):
return np.array([I])
elif is_list_of_int(I):
return np.array(I)
elif is_tuple_of_int(I):
return np.array(I)
elif isinstance(I, set):
if require_order:
raise TypeError('Argument is an unordered set, but I require an ordered array of integers')
else:
lI = list(I)
if is_list_of_int(lI):
return np.array(lI)
else:
raise TypeError('Argument is not of a type that is convertible to an array of integers.') | def ensure_int_vector(I, require_order = False) | Checks if the argument can be converted to an array of ints and does that.
Parameters
----------
I: int or iterable of int
require_order : bool
If False (default), an unordered set is accepted. If True, a set is not accepted.
Returns
-------
arr : ndarray(n)
numpy array with the integers contained in the argument | 2.479995 | 2.410671 | 1.028757 |
if F is None:
return F
else:
return ensure_int_vector(F, require_order = require_order) | def ensure_int_vector_or_None(F, require_order = False) | Ensures that F is either None, or a numpy array of integers
If F is already either None or a numpy array of integers, F is returned (not copied!)
Otherwise, checks if the argument can be converted to an array of integers and does that.
Parameters
----------
F: None, int, or iterable of int
Returns
-------
arr : ndarray(n)
numpy array with the integers contained in the argument | 2.551673 | 3.336725 | 0.764724 |
if is_float_vector(F):
return F
elif is_float(F):
return np.array([F])
elif is_iterable_of_float(F):
return np.array(F)
elif isinstance(F, set):
if require_order:
raise TypeError('Argument is an unordered set, but I require an ordered array of floats')
else:
lF = list(F)
if is_list_of_float(lF):
return np.array(lF)
else:
raise TypeError('Argument is not of a type that is convertible to an array of floats.') | def ensure_float_vector(F, require_order = False) | Ensures that F is a numpy array of floats
If F is already a numpy array of floats, F is returned (not copied!)
Otherwise, checks if the argument can be converted to an array of floats and does that.
Parameters
----------
F: float, or iterable of float
require_order : bool
If False (default), an unordered set is accepted. If True, a set is not accepted.
Returns
-------
arr : ndarray(n)
numpy array with the floats contained in the argument | 2.936409 | 2.728852 | 1.07606 |
if F is None:
return F
else:
return ensure_float_vector(F, require_order = require_order) | def ensure_float_vector_or_None(F, require_order = False) | Ensures that F is either None, or a numpy array of floats
If F is already either None or a numpy array of floats, F is returned (no copied!)
Otherwise, checks if the argument can be converted to an array of floats and does that.
Parameters
----------
F: float, list of float or 1D-ndarray of float
Returns
-------
arr : ndarray(n)
numpy array with the floats contained in the argument | 2.485166 | 3.185152 | 0.780235 |
if isinstance(x, np.ndarray):
if x.dtype.kind == 'f':
return x
elif x.dtype.kind == 'i':
return x.astype(default)
else:
raise TypeError('x is of type '+str(x.dtype)+' that cannot be converted to float')
else:
raise TypeError('x is not an array') | def ensure_dtype_float(x, default=np.float64) | r"""Makes sure that x is type of float | 2.667672 | 2.448416 | 1.08955 |
try:
if shape is not None:
if not np.array_equal(np.shape(A), shape):
raise AssertionError('Expected shape '+str(shape)+' but given array has shape '+str(np.shape(A)))
if uniform is not None:
shapearr = np.array(np.shape(A))
is_uniform = np.count_nonzero(shapearr-shapearr[0]) == 0
if uniform and not is_uniform:
raise AssertionError('Given array is not uniform \n'+str(shapearr))
elif not uniform and is_uniform:
raise AssertionError('Given array is not nonuniform: \n'+str(shapearr))
if size is not None:
if not np.size(A) == size:
raise AssertionError('Expected size '+str(size)+' but given array has size '+str(np.size(A)))
if ndim is not None:
if not ndim == np.ndim(A):
raise AssertionError('Expected shape '+str(ndim)+' but given array has shape '+str(np.ndim(A)))
if dtype is not None:
# now we must create an array if we don't have one yet
if not isinstance(A, (np.ndarray)) and not scisp.issparse(A):
A = np.array(A)
if not np.dtype(dtype) == A.dtype:
raise AssertionError('Expected data type '+str(dtype)+' but given array has data type '+str(A.dtype))
if kind is not None:
# now we must create an array if we don't have one yet
if not isinstance(A, (np.ndarray)) and not scisp.issparse(A):
A = np.array(A)
if kind == 'numeric':
if not (A.dtype.kind == 'i' or A.dtype.kind == 'f'):
raise AssertionError('Expected numerical data, but given array has data kind '+str(A.dtype.kind))
elif not A.dtype.kind == kind:
raise AssertionError('Expected data kind '+str(kind)
+' but given array has data kind '+str(A.dtype.kind))
except Exception as ex:
if isinstance(ex, AssertionError):
raise ex
else: # other exception raised in the test code above
print('Found exception: ',ex)
raise AssertionError('Given argument is not an array of the expected shape or type:\n'+
'arg = '+str(A)+'\ntype = '+str(type(A))) | def assert_array(A, shape=None, uniform=None, ndim=None, size=None, dtype=None, kind=None) | r""" Asserts whether the given array or sparse matrix has the given properties
Parameters
----------
A : ndarray, scipy.sparse matrix or array-like
the array under investigation
shape : shape, optional, default=None
asserts if the array has the requested shape. Be careful with vectors
because this will distinguish between row vectors (1,n), column vectors
(n,1) and arrays (n,). If you want to be less specific, consider using
size
uniform : None | True | False
if not None, asserts whether the array dimensions are uniform (e.g.
square for a ndim=2 array) (True), or not uniform (False).
size : int, optional, default=None
asserts if the arrays has the requested number of elements
ndim : int, optional, default=None
asserts if the array has the requested dimension
dtype : type, optional, default=None
asserts if the array data has the requested data type. This check is
strong, e.g. int and int64 are not equal. If you want a weaker check,
consider the kind option
kind : string, optional, default=None
Checks if the array data is of the specified kind. Options include 'i'
for integer types, 'f' for float types Check numpy.dtype.kind for
possible options. An additional option is 'numeric' for either integer
or float.
Raises
------
AssertionError
If assertions has failed | 2.276297 | 2.214519 | 1.027897 |
if not isinstance(A, np.ndarray):
try:
A = np.array(A)
except:
raise AssertionError('Given argument cannot be converted to an ndarray:\n'+str(A))
assert_array(A, shape=shape, uniform=uniform, ndim=ndim, size=size, dtype=dtype, kind=kind)
return A | def ensure_ndarray(A, shape=None, uniform=None, ndim=None, size=None, dtype=None, kind=None) | r""" Ensures A is an ndarray and does an assert_array with the given parameters
Returns
-------
A : ndarray
If A is already an ndarray, it is just returned. Otherwise this is an independent copy as an ndarray | 2.808672 | 2.581268 | 1.088098 |
if not isinstance(A, np.ndarray) and not scisp.issparse(A):
try:
A = np.array(A)
except:
raise AssertionError('Given argument cannot be converted to an ndarray:\n'+str(A))
assert_array(A, shape=shape, uniform=uniform, ndim=ndim, size=size, dtype=dtype, kind=kind)
return A | def ensure_ndarray_or_sparse(A, shape=None, uniform=None, ndim=None, size=None, dtype=None, kind=None) | r""" Ensures A is an ndarray or a scipy sparse matrix and does an assert_array with the given parameters
Returns
-------
A : ndarray
If A is already an ndarray, it is just returned. Otherwise this is an independent copy as an ndarray | 3.17735 | 2.871676 | 1.106444 |
if A is not None:
return ensure_ndarray(A, shape=shape, uniform=uniform, ndim=ndim, size=size, dtype=dtype, kind=kind)
else:
return None | def ensure_ndarray_or_None(A, shape=None, uniform=None, ndim=None, size=None, dtype=None, kind=None) | r""" Ensures A is None or an ndarray and does an assert_array with the given parameters | 2.291649 | 2.505029 | 0.914819 |
if is_float_matrix(traj) or is_bool_matrix(traj):
return traj
elif is_float_vector(traj):
return traj[:,None]
else:
try:
arr = np.array(traj)
arr = ensure_dtype_float(arr)
if is_float_matrix(arr):
return arr
if is_float_vector(arr):
return arr[:,None]
else:
raise TypeError('Argument traj cannot be cast into a two-dimensional array. Check type.')
except:
raise TypeError('Argument traj is not a trajectory - only float-arrays or list of float-arrays are allowed. Types is %s' % type(traj)) | def ensure_traj(traj) | r"""Makes sure that traj is a trajectory (array of float) | 4.416813 | 4.205876 | 1.050153 |
at = topology.atom(index)
if topology.n_chains > 1:
return "%s %i %s %i %i" % (at.residue.name, at.residue.resSeq, at.name, at.index, at.residue.chain.index )
else:
return "%s %i %s %i" % (at.residue.name, at.residue.resSeq, at.name, at.index) | def _describe_atom(topology, index) | Returns a string describing the given atom
:param topology:
:param index:
:return: | 2.516281 | 2.605757 | 0.965662 |
if traj_a is None and traj_b is None:
return True
if traj_a is None and traj_b is not None:
return False
if traj_a is not None and traj_b is None:
return False
equal_top = traj_a.top == traj_b.top
xyz_close = np.allclose(traj_a.xyz, traj_b.xyz)
equal_time = np.all(traj_a.time == traj_b.time)
equal_unitcell_angles = np.array_equal(traj_a.unitcell_angles, traj_b.unitcell_angles)
equal_unitcell_lengths = np.array_equal(traj_a.unitcell_lengths, traj_b.unitcell_lengths)
return np.all([equal_top, xyz_close, equal_time, equal_unitcell_angles, equal_unitcell_lengths])
----------
traj_a, traj_b: mdtraj.Trajectory | 1.779126 | 1.744576 | 1.019805 |
if is_iterable_of_int(indices1):
MDlogger.warning('The 1D arrays input for %s have been sorted, and '
'index duplicates have been eliminated.\n'
'Check the output of describe() to see the actual order of the features' % fname)
# Eliminate duplicates and sort
indices1 = np.unique(indices1)
# Intra-group distances
if indices2 is None:
atom_pairs = combinations(indices1, 2)
# Inter-group distances
elif is_iterable_of_int(indices2):
# Eliminate duplicates and sort
indices2 = np.unique(indices2)
# Eliminate duplicates between indices1 and indices2
uniqs = np.in1d(indices2, indices1, invert=True)
indices2 = indices2[uniqs]
atom_pairs = product(indices1, indices2)
else:
atom_pairs = indices1
return atom_pairs | def _parse_pairwise_input(indices1, indices2, MDlogger, fname='') | r"""For input of pairwise type (distances, inverse distances, contacts) checks the
type of input the user gave and reformats it so that :py:func:`DistanceFeature`,
:py:func:`InverseDistanceFeature`, and ContactFeature can work.
In case the input isn't already a list of distances, this function will:
- sort the indices1 array
- check for duplicates within the indices1 array
- sort the indices2 array
- check for duplicates within the indices2 array
- check for duplicates between the indices1 and indices2 array
- if indices2 is None, produce a list of pairs of indices in indices1, or
- if indices2 is not None, produce a list of pairs of (i,j) where i comes from indices1, and j from indices2 | 4.525552 | 4.352436 | 1.039775 |
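A minimal standalone sketch of the pairing logic described above (names and the simplified handling are illustrative, not pyemma's actual implementation):

import numpy as np
from itertools import combinations, product

def pairwise_pairs_sketch(indices1, indices2=None):
    # sort and de-duplicate the first index group
    indices1 = np.unique(indices1)
    if indices2 is None:
        # intra-group distances: all unordered pairs within indices1
        return np.array(list(combinations(indices1, 2)))
    # inter-group distances: de-duplicate indices2 and drop overlap with indices1
    indices2 = np.unique(indices2)
    indices2 = indices2[np.in1d(indices2, indices1, invert=True)]
    return np.array(list(product(indices1, indices2)))

print(pairwise_pairs_sketch([3, 1, 1, 2]))       # pairs within one group
print(pairwise_pairs_sketch([0, 1], [1, 2, 3]))  # pairs between two groups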
r
assert isinstance(group_definitions, list), "group_definitions has to be of type list, not %s"%type(group_definitions)
# Handle the special case of just one group
if len(group_definitions) == 1:
group_pairs = np.array([0,0], ndmin=2)
# Sort the elements within each group
parsed_group_definitions = []
for igroup in group_definitions:
assert np.ndim(igroup) == 1, "The elements of the groups definition have to be of dim 1, not %u"%np.ndim(igroup)
parsed_group_definitions.append(np.unique(igroup))
# Check for group duplicates
for ii, igroup in enumerate(parsed_group_definitions[:-1]):
for jj, jgroup in enumerate(parsed_group_definitions[ii+1:]):
if len(igroup) == len(jgroup):
assert not np.allclose(igroup, jgroup), "Some group definitions appear to be duplicated, e.g %u and %u"%(ii,ii+jj+1)
# Create and/or check the pair-list
if is_string(group_pairs):
if group_pairs == 'all':
parsed_group_pairs = combinations(np.arange(len(group_definitions)), 2)
else:
assert isinstance(group_pairs, np.ndarray)
assert group_pairs.shape[1] == 2
assert group_pairs.max() < len(parsed_group_definitions), "Cannot ask for group nr. %u if group_definitions only " \
"contains %u groups"%(group_pairs.max(), len(parsed_group_definitions))
assert group_pairs.min() >= 0, "Group pairs contains negative group indices"
parsed_group_pairs = np.zeros_like(group_pairs, dtype='int')
for ii, ipair in enumerate(group_pairs):
if ipair[0] == ipair[1]:
MDlogger.warning("%s will compute the mindist of group %u with itself. Is this wanted? "%(mname, ipair[0]))
parsed_group_pairs[ii, :] = np.sort(ipair)
# Create the large list of distances that will be computed, and an array containing group identifiers
# of the distances that actually characterize a pair of groups
distance_pairs = []
group_membership = np.zeros_like(parsed_group_pairs)
b = 0
for ii, pair in enumerate(parsed_group_pairs):
if pair[0] != pair[1]:
distance_pairs.append(product(parsed_group_definitions[pair[0]],
parsed_group_definitions[pair[1]]))
else:
parsed = parsed_group_definitions[pair[0]]
distance_pairs.append(combinations(parsed, 2))
group_membership[ii, :] = [b, b + len(distance_pairs[ii])]
b += len(distance_pairs[ii])
return parsed_group_definitions, parsed_group_pairs, np.vstack(distance_pairs), group_membership | def _parse_groupwise_input(group_definitions, group_pairs, MDlogger, mname='') | r"""For input of group type (add_group_mindist), prepare the array of pairs of indices
and groups so that :py:func:`MinDistanceFeature` can work
This function will:
- check the input types
- sort the 1D arrays of each entry of group_definitions
- check for duplicates within each group_definition
- produce the list of pairs for all needed distances
- produce a list that maps each entry in the pairlist to a given group of distances
Returns
--------
parsed_group_definitions: list
List of of 1D arrays containing sorted, unique atom indices
parsed_group_pairs: numpy.ndarray
(N,2)-numpy array containing pairs of indices that represent pairs
of groups for which the inter-group distance-pairs will be generated
distance_pairs: numpy.ndarray
(M,2)-numpy array with all the distance-pairs needed (regardless of their group)
group_membership: numpy.ndarray
(N,2)-numpy array mapping each pair in distance_pairs to their associated group pair | 3.142042 | 2.982668 | 1.053433 |
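To illustrate what group_membership is used for downstream (this usage is an assumption for illustration, not taken from the source): given an array holding all computed distances, the per-group-pair minimum distance can be read off from the [start, end) column slices stored in group_membership.

import numpy as np

def group_mindist_sketch(distances, group_membership):
    # distances: (T, M) array, one column per entry of distance_pairs
    # group_membership: (N, 2) array of [start, end) column slices, one row per group pair
    return np.hstack([distances[:, b:e].min(axis=1, keepdims=True)
                      for b, e in group_membership])

# two group pairs: columns 0-2 belong to the first pair, columns 3-4 to the second
distances = np.array([[0.5, 0.2, 0.9, 1.0, 0.7],
                      [0.3, 0.6, 0.4, 0.8, 1.2]])
print(group_mindist_sketch(distances, np.array([[0, 3], [3, 5]])))
# -> [[0.2 0.7]
#     [0.3 0.8]]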
r
atoms_in_residues = []
if subset_of_atom_idxs is None:
subset_of_atom_idxs = np.arange(top.n_atoms)
special_residues = []
for rr in top.residues:
if rr.index in residue_idxs:
toappend = np.array([aa.index for aa in rr.atoms if aa.index in subset_of_atom_idxs])
if len(toappend) == 0:
special_residues.append(rr)
if fallback_to_full_residue:
toappend = np.array([aa.index for aa in rr.atoms])
atoms_in_residues.append(toappend)
# Any special cases?
if len(special_residues) != 0 and hasattr(MDlogger, 'warning'):
if fallback_to_full_residue:
msg = 'the full residue'
else:
msg = 'empty lists'
MDlogger.warning("These residues yielded no atoms in the subset and were returned as %s: %s " % (
msg, ''.join(['%s, ' % rr for rr in special_residues])[:-2]))
return atoms_in_residues | def _atoms_in_residues(top, residue_idxs, subset_of_atom_idxs=None, fallback_to_full_residue=True, MDlogger=None) | r"""Returns a list of ndarrays containing the atom indices in each residue of :obj:`residue_idxs`
:param top: mdtraj.Topology
:param residue_idxs: list or ndarray (ndim=1) of integers
:param subset_of_atom_idxs : iterable of atom_idxs to which the selection has to be restricted. If None, all atoms considered
:param fallback_to_full_residue : it is possible that some residues don't yield any atoms with some subsets. Take
all atoms in that case. If False, then [] is returned for that residue
:param MDlogger: If provided, a warning will be issued when falling back to full residue
:return: list of length len(residue_idxs) of ndarrays (ndim=1) containing the atom indices in each residue of residue_idxs | 2.791684 | 2.745406 | 1.016857
if isinstance(arr, np.ndarray) or hasattr(arr, 'data'):
# numpy array or sparse matrix with .data attribute
data = arr.data if sparse.issparse(arr) else arr
return data.flat[0], data.flat[-1]
else:
# Sparse matrices without .data attribute. Only dok_matrix at
# the time of writing, in this case indexing is fast
return arr[0, 0], arr[-1, -1] | def _first_and_last_element(arr) | Returns first and last element of numpy array or sparse matrix. | 5.223238 | 4.344646 | 1.202224 |
estimator_type = type(estimator)
# XXX: not handling dictionaries
if estimator_type in (list, tuple, set, frozenset):
return estimator_type([clone(e, safe=safe) for e in estimator])
elif not hasattr(estimator, 'get_params'):
if not safe:
return copy.deepcopy(estimator)
else:
raise TypeError("Cannot clone object '%s' (type %s): "
"it does not seem to be a scikit-learn estimator "
"as it does not implement a 'get_params' methods."
% (repr(estimator), type(estimator)))
# TODO: this is a brute force method to make things work for parameter studies. #1135
# But this can potentially use a lot of memory in case of large input data, which is also copied then.
# we need a way to distinguish input parameters from derived model parameters, which is currently only ensured for
# estimators in the coordinates package.
if hasattr(estimator, '_estimated') and estimator._estimated:
return copy.deepcopy(estimator)
klass = estimator.__class__
new_object_params = estimator.get_params(deep=False)
for name, param in new_object_params.items():
new_object_params[name] = clone(param, safe=False)
new_object = klass(**new_object_params)
params_set = new_object.get_params(deep=False)
# quick sanity check of the parameters of the clone
for name in new_object_params:
param1 = new_object_params[name]
param2 = params_set[name]
if param1 is param2:
# this should always happen
continue
if isinstance(param1, np.ndarray):
# For most ndarrays, we do not test for complete equality
if not isinstance(param2, type(param1)):
equality_test = False
elif (param1.ndim > 0
and param1.shape[0] > 0
and isinstance(param2, np.ndarray)
and param2.ndim > 0
and param2.shape[0] > 0):
equality_test = (
param1.shape == param2.shape
and param1.dtype == param2.dtype
and (_first_and_last_element(param1) ==
_first_and_last_element(param2))
)
else:
equality_test = np.all(param1 == param2)
elif sparse.issparse(param1):
# For sparse matrices equality doesn't work
if not sparse.issparse(param2):
equality_test = False
elif param1.size == 0 or param2.size == 0:
equality_test = (
param1.__class__ == param2.__class__
and param1.size == 0
and param2.size == 0
)
else:
equality_test = (
param1.__class__ == param2.__class__
and (_first_and_last_element(param1) ==
_first_and_last_element(param2))
and param1.nnz == param2.nnz
and param1.shape == param2.shape
)
else:
# fall back on standard equality
equality_test = param1 == param2
if equality_test:
warnings.warn("Estimator %s modifies parameters in __init__."
" This behavior is deprecated as of 0.18 and "
"support for this behavior will be removed in 0.20."
% type(estimator).__name__, DeprecationWarning)
else:
raise RuntimeError('Cannot clone object %s, as the constructor '
'does not seem to set parameter %s' %
(estimator, name))
return new_object | def clone(estimator, safe=True) | Constructs a new estimator with the same parameters.
Clone does a deep copy of the model in an estimator
without actually copying attached data. It yields a new estimator
with the same parameters that has not been fit on any data.
Parameters
----------
estimator : estimator object, or list, tuple or set of objects
The estimator or group of estimators to be cloned
safe : boolean, optional
If safe is false, clone will fall back to a deepcopy on objects
that are not estimators. | 3.11491 | 3.189445 | 0.976631 |
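As a usage illustration (ToyEstimator is a hypothetical class invented here, and the clone function above is assumed to be in scope): any object exposing its constructor parameters via get_params can be copied into a fresh, unfitted instance.

class ToyEstimator(object):
    """Hypothetical estimator following the get_params convention."""
    def __init__(self, alpha=1.0, n_iter=10):
        self.alpha = alpha
        self.n_iter = n_iter

    def get_params(self, deep=True):
        # report constructor parameters only, as clone() expects
        return {'alpha': self.alpha, 'n_iter': self.n_iter}

original = ToyEstimator(alpha=0.5)
fresh = clone(original)
assert fresh is not original and fresh.alpha == 0.5 and fresh.n_iter == 10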
# fetch the constructor or the original constructor before
# deprecation wrapping if any
init = getattr(cls.__init__, 'deprecated_original', cls.__init__)
if init is object.__init__:
# No explicit constructor to introspect
return []
# introspect the constructor arguments to find the model parameters
# to represent
args, varargs, kw, default = getargspec_no_self(init)
if varargs is not None:
raise RuntimeError("scikit-learn estimators should always "
"specify their parameters in the signature"
" of their __init__ (no varargs)."
" %s doesn't follow this convention."
% (cls, ))
args.sort()
return args | def _get_param_names(cls) | Get parameter names for the estimator | 1.851543 | 1.756995 | 1.053813 |
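A rough modern equivalent of this constructor introspection, using inspect.signature instead of the getargspec-based helper used here (a sketch, not the library's code):

import inspect

def constructor_param_names(cls):
    # collect the explicit __init__ argument names, excluding 'self' and **kwargs
    if cls.__init__ is object.__init__:
        return []
    names = []
    for name, param in inspect.signature(cls.__init__).parameters.items():
        if name == 'self':
            continue
        if param.kind is inspect.Parameter.VAR_POSITIONAL:
            raise RuntimeError('estimators should not use *args in __init__')
        if param.kind is not inspect.Parameter.VAR_KEYWORD:
            names.append(name)
    return sorted(names)

class Example(object):
    def __init__(self, lag=1, reversible=True):
        pass

print(constructor_param_names(Example))   # -> ['lag', 'reversible']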
return RunningCovar(compute_XX=xx, compute_XY=xy, compute_YY=yy, sparse_mode=sparse_mode, modify_data=modify_data,
remove_mean=remove_mean, symmetrize=symmetrize, column_selection=column_selection,
diag_only=diag_only, nsave=nsave) | def running_covar(xx=True, xy=False, yy=False, remove_mean=False, symmetrize=False, sparse_mode='auto',
modify_data=False, column_selection=None, diag_only=False, nsave=5) | Returns a running covariance estimator
Returns an estimator object that can be fed chunks of X and Y data, and
that can generate on-the-fly estimates of mean, covariance, running sum
and second moment matrix.
Parameters
----------
xx : bool
Estimate the covariance of X
xy : bool
Estimate the cross-covariance of X and Y
yy : bool
Estimate the covariance of Y
remove_mean : bool
Remove the data mean in the covariance estimation
symmetrize : bool
Use symmetric estimates with sum defined by sum_t x_t + y_t and
second moment matrices defined by X'X + Y'Y and Y'X + X'Y.
modify_data : bool
If remove_mean=True, the mean will be removed in the input data,
without creating an independent copy. This option is faster but should
only be selected if the input data is not used elsewhere.
sparse_mode : str
one of:
* 'dense' : always use dense mode
* 'sparse' : always use sparse mode if possible
* 'auto' : automatic
column_selection: ndarray(k, dtype=int) or None
Indices of those columns that are to be computed. If None, all columns are computed.
diag_only: bool
If True, the computation is restricted to the diagonal entries (autocorrelations) only.
nsave : int
Depth of Moment storage. Moments computed from each chunk will be
combined with Moments of similar statistical weight using the pairwise
combination algorithm described in [1]_.
References
----------
.. [1] http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf | 1.769069 | 2.191986 | 0.807062 |
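The essence of what such an estimator computes can be sketched with plain numpy (ignoring weights, sparsity, the time-lagged matrices and the pairwise merging tree): accumulate the frame count, the column sums and the raw second moment chunk by chunk, then assemble mean and covariance at the end.

import numpy as np

def chunked_covariance(chunks, bessel=True):
    w, s, M = 0, 0.0, 0.0
    for X in chunks:
        w += X.shape[0]          # number of frames seen so far
        s = s + X.sum(axis=0)    # running column sums
        M = M + X.T.dot(X)       # running raw second moment
    mean = s / w
    C = M - w * np.outer(mean, mean)   # remove the mean contribution
    return C / (w - 1) if bessel else C / w

rng = np.random.RandomState(42)
data = rng.randn(1000, 3)
C = chunked_covariance(np.array_split(data, 7))
assert np.allclose(C, np.cov(data, rowvar=False))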
w1 = self.w
w2 = other.w
w = w1 + w2
# TODO: fix this div by zero error
q = w2 / w1
dsx = q * self.sx - other.sx
dsy = q * self.sy - other.sy
# update
self.w = w1 + w2
self.sx = self.sx + other.sx
self.sy = self.sy + other.sy
#
if mean_free:
if len(self.Mxy.shape) == 1: # diagonal only
d = dsx*dsy
else:
d = np.outer(dsx, dsy)
self.Mxy += other.Mxy + (w1 / (w2 * w)) * d
else:
self.Mxy += other.Mxy
return self | def combine(self, other, mean_free=False) | References
----------
[1] http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf | 3.397341 | 3.376307 | 1.00623 |
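A small numerical check of this pairwise update (a self-contained sketch for the mean-free, symmetric case with unit frame weights): combining the moments of two data blocks must reproduce the moments of the concatenated data.

import numpy as np

def block_moments(X):
    # weight, column sum and mean-free second moment of one block
    w = X.shape[0]
    s = X.sum(axis=0)
    Xc = X - s / w
    return w, s, Xc.T.dot(Xc)

def combine_moments(m1, m2):
    # pairwise update as in the reference above (Chan/Golub/LeVeque)
    w1, s1, M1 = m1
    w2, s2, M2 = m2
    w = w1 + w2
    ds = (w2 / w1) * s1 - s2
    M = M1 + M2 + (w1 / (w2 * w)) * np.outer(ds, ds)
    return w, s1 + s2, M

rng = np.random.RandomState(0)
A, B = rng.randn(400, 2), rng.randn(250, 2)
w, s, M = combine_moments(block_moments(A), block_moments(B))
_, _, M_ref = block_moments(np.vstack([A, B]))
assert w == 650 and np.allclose(M, M_ref)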
if bessel:
return self.Mxy/ (self.w-1)
else:
return self.Mxy / self.w | def covar(self, bessel=True) | Return covariance matrix:
Parameters:
-----------
bessel : bool, optional, default=True
Use Bessel's correction in order to
obtain an unbiased estimator of sample covariances. | 6.337626 | 7.601199 | 0.833767 |
if len(self.storage) < 2:
return False
return self.storage[-2].w <= self.storage[-1].w * self.rtol | def _can_merge_tail(self) | Checks if the two last list elements can be merged | 6.272077 | 5.515038 | 1.137268 |
if len(self.storage) == self.nsave: # merge if we must
# print 'must merge'
self.storage[-1].combine(moments, mean_free=self.remove_mean)
else: # append otherwise
# print 'append'
self.storage.append(moments)
# merge if possible
while self._can_merge_tail():
# print 'merge: ',self.storage
M = self.storage.pop()
# print 'pop last: ',self.storage
self.storage[-1].combine(M, mean_free=self.remove_mean) | def store(self, moments) | Store object X with weight w | 5.215712 | 5.113053 | 1.020078 |
# check input
T = X.shape[0]
if Y is not None:
assert Y.shape[0] == T, 'X and Y must have equal length'
# Weights cannot be used for compute_YY:
if weights is not None and self.compute_YY:
raise ValueError('Use of weights is not implemented for compute_YY==True')
if weights is not None:
# Convert to array of length T if weights is a single number:
if isinstance(weights, numbers.Real):
weights = weights * np.ones(T, dtype=float)
# Check appropriate length if weights is an array:
elif isinstance(weights, np.ndarray):
if len(weights) != T:
raise ValueError('weights and X must have equal length. Was {} and {} respectively.'.format(len(weights), len(X)))
else:
raise TypeError('weights is of type %s, must be a number or ndarray' % (type(weights)))
# estimate and add to storage
if self.compute_XX and not self.compute_XY and not self.compute_YY:
w, s_X, C_XX = moments_XX(X, remove_mean=self.remove_mean, weights=weights, sparse_mode=self.sparse_mode,
modify_data=self.modify_data, column_selection=self.column_selection,
diag_only=self.diag_only)
if self.column_selection is not None:
s_Xk = s_X[self.column_selection]
else:
s_Xk = s_X
self.storage_XX.store(Moments(w, s_X, s_Xk, C_XX))
elif self.compute_XX and self.compute_XY and not self.compute_YY:
assert Y is not None
w, s_X, s_Y, C_XX, C_XY = moments_XXXY(X, Y, remove_mean=self.remove_mean, symmetrize=self.symmetrize,
weights=weights, sparse_mode=self.sparse_mode, modify_data=self.modify_data,
column_selection=self.column_selection, diag_only=self.diag_only)
# make copy in order to get independently mergeable moments
if self.column_selection is not None:
s_Xk = s_X[self.column_selection]
s_Yk = s_Y[self.column_selection]
else:
s_Xk = s_X
s_Yk = s_Y
self.storage_XX.store(Moments(w, s_X, s_Xk, C_XX))
self.storage_XY.store(Moments(w, s_X, s_Yk, C_XY))
else: # compute block
assert Y is not None
assert not self.symmetrize
w, s, C = moments_block(X, Y, remove_mean=self.remove_mean,
sparse_mode=self.sparse_mode, modify_data=self.modify_data,
column_selection=self.column_selection, diag_only=self.diag_only)
# make copy in order to get independently mergeable moments
if self.column_selection is not None:
s0k = s[0][self.column_selection]
s1k = s[1][self.column_selection]
else:
s0k = s[0]
s1k = s[1]
if self.compute_XX:
self.storage_XX.store(Moments(w, s[0], s0k, C[0][0]))
if self.compute_XY:
self.storage_XY.store(Moments(w, s[0], s1k, C[0][1]))
self.storage_YY.store(Moments(w, s[1], s1k, C[1][1])) | def add(self, X, Y=None, weights=None) | Add trajectory to estimate.
Parameters
----------
X : ndarray(T, N)
array of N time series.
Y : ndarray(T, N)
array of N time series, usually time shifted version of X.
weights : None or float or ndarray(T, ):
weights assigned to each trajectory point. If None, all data points have weight one. If float,
the same weight will be given to all data points. If ndarray, each data point is assigned a separate
weight. | 2.143998 | 2.132326 | 1.005474 |
# get the reference HMM submodel
ref = super(SampledHMSM, self).submodel(states=states, obs=obs)
# get the sample submodels
samples_sub = [sample.submodel(states=states, obs=obs) for sample in self.samples]
# new model
return SampledHMSM(samples_sub, ref=ref, conf=self.conf) | def submodel(self, states=None, obs=None) | Returns a HMM with restricted state space
Parameters
----------
states : None or int-array
Hidden states to restrict the model to (if not None).
obs : None, str or int-array
Observed states to restrict the model to (if not None).
Returns
-------
hmm : HMM
The restricted HMM. | 4.553115 | 5.025855 | 0.905938 |
r
# determine lag times
lags = [1]
# build default lag list
lag = 1.0
import decimal
while lag <= maxlag:
lag = lag*multiplier
# round half up, like python 2
lag = int(decimal.Decimal(lag).quantize(decimal.Decimal('1'),
rounding=decimal.ROUND_HALF_UP))
if lag <= maxlag:
ilag = int(lag)
lags.append(ilag)
# always include the maximal requested lag time.
if maxlag not in lags:
lags.append(maxlag)
return np.array(lags) | def _generate_lags(maxlag, multiplier) | r"""Generate a set of lag times starting from 1 to maxlag,
using the given multiplier between successive lags | 4.618973 | 4.725589 | 0.977438 |
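The resulting spacing is roughly geometric. An equivalent sketch (not bit-for-bit identical to the half-up rounding above) that conveys the same idea:

import numpy as np

def geometric_lags(maxlag, multiplier):
    # integer powers of the multiplier, rounded, de-duplicated and capped at maxlag
    n = int(np.ceil(np.log(maxlag) / np.log(multiplier))) + 2
    lags = np.unique(np.round(multiplier ** np.arange(n)).astype(int))
    lags = lags[(lags >= 1) & (lags <= maxlag)]
    if lags[-1] != maxlag:
        lags = np.append(lags, maxlag)
    return lags

print(geometric_lags(100, 1.5))   # e.g. [1 2 3 5 8 11 17 26 38 58 86 100]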
from itertools import combinations as _combinations, chain
from scipy.special import comb
count = comb(len(seq), k, exact=True)
res = np.fromiter(chain.from_iterable(_combinations(seq, k)),
int, count=count*k)
return res.reshape(-1, k) | def combinations(seq, k) | Return j length subsequences of elements from the input iterable.
This version uses Numpy/Scipy and should be preferred over itertools. It avoids
the creation of all intermediate Python objects.
Examples
--------
>>> import numpy as np
>>> from itertools import combinations as iter_comb
>>> x = np.arange(3)
>>> c1 = combinations(x, 2)
>>> print(c1)
[[0 1]
[0 2]
[1 2]]
>>> c2 = np.array(tuple(iter_comb(x, 2)))
>>> print(c2)
[[0 1]
[0 2]
[1 2]] | 3.353716 | 4.549182 | 0.737213 |
arrays = [np.asarray(x) for x in arrays]
shape = (len(x) for x in arrays)
dtype = arrays[0].dtype
ix = np.indices(shape)
ix = ix.reshape(len(arrays), -1).T
out = np.empty_like(ix, dtype=dtype)
for n, _ in enumerate(arrays):
out[:, n] = arrays[n][ix[:, n]]
return out | def product(*arrays) | Generate a cartesian product of input arrays.
Parameters
----------
arrays : list of array-like
1-D arrays to form the cartesian product of.
Returns
-------
out : ndarray
2-D array of shape (M, len(arrays)) containing cartesian products
formed of input arrays. | 2.224886 | 2.977682 | 0.747187 |
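For completeness, a short usage example of the product function above (assuming it is in scope):

import numpy as np

a = np.array([1, 2])
b = np.array([3, 4, 5])
print(product(a, b))
# [[1 3]
#  [1 4]
#  [1 5]
#  [2 3]
#  [2 4]
#  [2 5]]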
r
r = np.linalg.norm(rvec) - rcut
rr = r ** 2
if r < 0.0:
return -2.5 * rr
return 0.5 * (r - 2.0) * rr | def folding_model_energy(rvec, rcut) | r"""computes the potential energy at point rvec | 5.093396 | 4.507348 | 1.130021 |
r
rnorm = np.linalg.norm(rvec)
if rnorm == 0.0:
return np.zeros(rvec.shape)
r = rnorm - rcut
if r < 0.0:
return -5.0 * r * rvec / rnorm
return (1.5 * r - 2.0) * rvec / rnorm | def folding_model_gradient(rvec, rcut) | r"""computes the potential's gradient at point rvec | 3.229798 | 2.920881 | 1.105762 |
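A quick consistency check between the two functions above (assuming both are in scope), done here at a point inside the cutoff radius: the analytic gradient should agree with a central finite difference of the energy.

import numpy as np

def numerical_gradient(f, x, h=1e-6):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

rvec = np.array([1.0, -0.5, 2.0, 0.3, -1.2])   # |rvec| < rcut, i.e. inside the well
rcut = 3.0
g_analytic = folding_model_gradient(rvec, rcut)
g_numeric = numerical_gradient(lambda r: folding_model_energy(r, rcut), rvec)
assert np.allclose(g_analytic, g_numeric, atol=1e-5)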
r
adw = AsymmetricDoubleWell(dt, kT, mass=mass, damping=damping)
return adw.sample(x0, nstep, nskip=nskip) | def get_asymmetric_double_well_data(nstep, x0=0., nskip=1, dt=0.01, kT=10.0, mass=1.0, damping=1.0) | r"""wrapper for the asymmetric double well generator | 4.680533 | 4.554443 | 1.027685 |
r
fm = FoldingModel(dt, kT, mass=mass, damping=damping, rcut=rcut)
return fm.sample(rvec0, nstep, nskip=nskip) | def get_folding_model_data(
nstep, rvec0=np.zeros((5)), nskip=1, dt=0.01, kT=10.0, mass=1.0, damping=1.0, rcut=3.0) | r"""wrapper for the folding model generator | 4.091492 | 3.795293 | 1.078044 |
r
pw = PrinzModel(dt, kT, mass=mass, damping=damping)
return pw.sample(x0, nstep, nskip=nskip) | def get_prinz_pot(nstep, x0=0., nskip=1, dt=0.01, kT=10.0, mass=1.0, damping=1.0) | r"""wrapper for the Prinz model generator | 5.692517 | 4.909613 | 1.159464 |
r
return x - self.coeff_A * self.gradient(x) \
+ self.coeff_B * np.random.normal(size=self.dim) | def step(self, x) | r"""perform a single Brownian dynamics step | 7.50632 | 7.358619 | 1.020072 |
r
x = np.zeros(shape=(nsteps + 1,))
x[0] = x0
for t in range(nsteps):
q = x[t]
for s in range(nskip):
q = self.step(q)
x[t + 1] = q
return x | def sample(self, x0, nsteps, nskip=1) | r"""generate nsteps sample points | 2.773977 | 2.803383 | 0.98951 |
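The step/sample pattern above is plain overdamped Langevin (Brownian) dynamics. A self-contained sketch for a harmonic potential; the forms of coeff_A and coeff_B below are the usual Euler–Maruyama choices and are an assumption here, not read off the source:

import numpy as np

def brownian_sample(gradient, x0, nsteps, nskip=1, dt=0.01, kT=1.0, mass=1.0, damping=1.0, seed=0):
    rng = np.random.RandomState(seed)
    coeff_A = dt / (mass * damping)                        # drift prefactor (assumed form)
    coeff_B = np.sqrt(2.0 * kT * dt / (mass * damping))    # noise amplitude (assumed form)
    x = np.zeros(nsteps + 1)
    x[0] = x0
    for t in range(nsteps):
        q = x[t]
        for _ in range(nskip):
            q = q - coeff_A * gradient(q) + coeff_B * rng.normal()
        x[t + 1] = q
    return x

# harmonic potential V(x) = x^2 / 2, so V'(x) = x; equilibrium variance should be ~ kT
traj = brownian_sample(lambda x: x, x0=1.0, nsteps=50000)
print(traj.mean(), traj.var())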
r
rvec = np.zeros(shape=(nsteps + 1, self.dim))
rvec[0, :] = rvec0[:]
for t in range(nsteps):
q = rvec[t, :]
for s in range(nskip):
q = self.step(q)
rvec[t + 1, :] = q[:]
return rvec | def sample(self, rvec0, nsteps, nskip=1) | r"""generate nsteps sample points | 2.510869 | 2.676871 | 0.937987 |
# set arrow properties
dist = _sqrt(
((x2 - x1) / float(Dx))**2 + ((y2 - y1) / float(Dy))**2)
arrow_curvature *= 0.075 # standard scale
rad = arrow_curvature / (dist)
tail_width = width
head_width = max(0.5, 2 * width)
head_length = head_width
self.ax.annotate(
"", xy=(x2, y2), xycoords='data', xytext=(x1, y1), textcoords='data',
arrowprops=dict(
arrowstyle='simple,head_length=%f,head_width=%f,tail_width=%f' % (
head_length, head_width, tail_width),
color=color, shrinkA=shrinkA, shrinkB=shrinkB, patchA=patchA, patchB=patchB,
connectionstyle="arc3,rad=%f" % -rad),
zorder=0)
# weighted center position
center = _np.array([0.55 * x1 + 0.45 * x2, 0.55 * y1 + 0.45 * y2])
v = _np.array([x2 - x1, y2 - y1]) # 1->2 vector
vabs = _np.abs(v)
vnorm = _np.array([v[1], -v[0]]) # orthogonal vector
vnorm = _np.divide(vnorm, _np.linalg.norm(vnorm)) # normalize
# cross product to determine the direction into which vnorm points
z = _np.cross(v, vnorm)
if z < 0:
vnorm *= -1
offset = 0.5 * arrow_curvature * \
((vabs[0] / (vabs[0] + vabs[1]))
* Dx + (vabs[1] / (vabs[0] + vabs[1])) * Dy)
ptext = center + offset * vnorm
self.ax.text(
ptext[0], ptext[1], label, size=arrow_label_size,
horizontalalignment='center', verticalalignment='center', zorder=1) | def _draw_arrow(
self, x1, y1, x2, y2, Dx, Dy, label="", width=1.0, arrow_curvature=1.0, color="grey",
patchA=None, patchB=None, shrinkA=0, shrinkB=0, arrow_label_size=None) | Draws a slightly curved arrow from (x1,y1) to (x2,y2).
Will allow the given patches at start and end. | 2.518489 | 2.533221 | 0.994184
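Stripped of the label placement and curvature bookkeeping, the underlying matplotlib call looks like this (standard matplotlib API, simplified from the method above):

import matplotlib
matplotlib.use('Agg')   # headless backend for this example
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.annotate(
    "", xy=(0.8, 0.8), xycoords='data', xytext=(0.2, 0.2), textcoords='data',
    arrowprops=dict(
        arrowstyle='simple,head_length=1.0,head_width=1.0,tail_width=0.5',
        color='grey', connectionstyle='arc3,rad=-0.2'))
ax.text(0.45, 0.55, '0.7', horizontalalignment='center', verticalalignment='center')
fig.savefig('arrow_example.png')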
initpos = None
holddim = None
# both coordinates fixed: nothing to optimize
if self.xpos is not None and self.ypos is not None:
return _np.array([self.xpos, self.ypos]), 0
elif self.xpos is not None:
y = _np.random.random(len(self.xpos))
initpos = _np.vstack((self.xpos, y)).T
holddim = 0
elif self.ypos is not None:
x = _np.zeros_like(self.ypos)
initpos = _np.vstack((x, self.ypos)).T
holddim = 1
from pyemma.plots._ext.fruchterman_reingold import _fruchterman_reingold
best_pos = _fruchterman_reingold(G, pos=initpos, dim=2, hold_dim=holddim)
# rescale fixed to user settings and balance the other coordinate
if self.xpos is not None:
# rescale x to fixed value
best_pos[:, 0] *= (_np.max(self.xpos) - _np.min(self.xpos)
) / (_np.max(best_pos[:, 0]) - _np.min(best_pos[:, 0]))
best_pos[:, 0] += _np.min(self.xpos) - _np.min(best_pos[:, 0])
# rescale y to balance
if _np.max(best_pos[:, 1]) - _np.min(best_pos[:, 1]) > 0.01:
best_pos[:, 1] *= (_np.max(self.xpos) - _np.min(self.xpos)
) / (_np.max(best_pos[:, 1]) - _np.min(best_pos[:, 1]))
if self.ypos is not None:
best_pos[:, 1] *= (_np.max(self.ypos) - _np.min(self.ypos)
) / (_np.max(best_pos[:, 1]) - _np.min(best_pos[:, 1]))
best_pos[:, 1] += _np.min(self.ypos) - _np.min(best_pos[:, 1])
# rescale x to balance
if _np.max(best_pos[:, 0]) - _np.min(best_pos[:, 0]) > 0.01:
best_pos[:, 0] *= (_np.max(self.ypos) - _np.min(self.ypos)
) / (_np.max(best_pos[:, 0]) - _np.min(best_pos[:, 0]))
return best_pos | def _find_best_positions(self, G) | Finds the best positions for the nodes of the given graph (given as an adjacency matrix)
by minimizing a network potential. | 1.89808 | 1.883428 | 1.007779
assert hasattr(class_with_globalize_methods, 'active_set')
assert hasattr(class_with_globalize_methods, 'nstates_full')
for name, method in class_with_globalize_methods.__dict__.copy().items():
if isinstance(method, property) and hasattr(method.fget, '_map_to_full_state_def_arg'):
default_value = method.fget._map_to_full_state_def_arg
axis = method.fget._map_to_full_state_along_axis
new_getter = _wrap_to_full_state(name, default_value, axis)
alias_to_full_state_inst = property(new_getter)
elif hasattr(method, '_map_to_full_state_def_arg'):
default_value = method._map_to_full_state_def_arg
axis = method._map_to_full_state_along_axis
alias_to_full_state_inst = _wrap_to_full_state(name, default_value, axis)
else:
continue
name += "_full_state"
setattr(class_with_globalize_methods, name, alias_to_full_state_inst)
return class_with_globalize_methods | def add_full_state_methods(class_with_globalize_methods) | class decorator to create "_full_state" methods/properties on the class (so they
are valid for all instances created from this class).
Parameters
----------
class_with_globalize_methods | 2.40007 | 2.486447 | 0.965261 |
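A reduced sketch of the same class-decorator pattern (hypothetical names, not the library's implementation): methods tagged with a marker attribute get a wrapped '<name>_full_state' twin that expands results from the active set to the full state space.

import numpy as np
from functools import wraps

def map_to_full_state(func):
    func._map_to_full_state = True
    return func

def add_full_state_methods_sketch(cls):
    for name, method in list(cls.__dict__.items()):
        if not getattr(method, '_map_to_full_state', False):
            continue
        def make_wrapper(m):
            @wraps(m)
            def wrapper(self, *args, **kwargs):
                full = np.full(self.nstates_full, np.nan)
                full[self.active_set] = m(self, *args, **kwargs)
                return full
            return wrapper
        setattr(cls, name + '_full_state', make_wrapper(method))
    return cls

@add_full_state_methods_sketch
class Model(object):
    def __init__(self):
        self.active_set = np.array([0, 2])
        self.nstates_full = 4

    @map_to_full_state
    def populations(self):
        return np.array([0.7, 0.3])

print(Model().populations_full_state())   # -> [0.7 nan 0.3 nan]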
if X is None:
return None
from pyemma._ext.variational.estimators.covar_c._covartools import (variable_cols_double,
variable_cols_float,
variable_cols_int,
variable_cols_long,
variable_cols_char)
# prepare column array
cols = numpy.zeros(X.shape[1], dtype=numpy.bool, order='C')
if X.dtype == numpy.float64:
completed = variable_cols_double(cols, X, tol, min_constant)
elif X.dtype == numpy.float32:
completed = variable_cols_float(cols, X, tol, min_constant)
elif X.dtype == numpy.int32:
completed = variable_cols_int(cols, X, 0, min_constant)
elif X.dtype == numpy.int64:
completed = variable_cols_long(cols, X, 0, min_constant)
elif X.dtype == numpy.bool:
completed = variable_cols_char(cols, X, 0, min_constant)
else:
raise TypeError('unsupported type of X: %s' % X.dtype)
# if interrupted, return all ones. Otherwise return the variable columns as bool array
if completed == 0:
return numpy.ones_like(cols, dtype=numpy.bool)
return cols | def variable_cols(X, tol=0.0, min_constant=0) | Evaluates which columns are constant (0) or variable (1)
Parameters
----------
X : ndarray
Matrix whose columns will be checked for constant or variable.
tol : float
Tolerance for float-matrices. When set to 0, only columns with exactly
equal values are considered constant. When set to a positive value,
columns where all elements differ from the first element of that
column by at most tol are considered constant.
min_constant : int
Minimal number of constant columns to resume operation. If at one
point the number of constant columns drops below min_constant, the
computation will stop and all columns will be assumed to be variable.
In this case, an all-True array will be returned.
Returns
-------
variable : bool-array
Array with number of elements equal to the columns. True: column is
variable / non-constant. False: column is constant. | 2.721506 | 2.799466 | 0.972152 |
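For dense float arrays the same classification can be written in pure numpy (without the C extension and without the min_constant early exit):

import numpy as np

def variable_cols_dense(X, tol=0.0):
    # a column is variable if any element deviates from that column's first row by more than tol
    return np.any(np.abs(X - X[0]) > tol, axis=0)

X = np.array([[1.0, 2.0, 0.0],
              [1.0, 2.5, 0.0],
              [1.0, 2.0, 1e-9]])
print(variable_cols_dense(X))            # [False  True  True]
print(variable_cols_dense(X, tol=1e-6))  # [False  True False]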
r
if connectivity=='post_hoc_RE' or connectivity=='BAR_variance':
raise Exception('Connectivity type %s not supported for dTRAM data.'%connectivity)
state_counts = _np.maximum(count_matrices.sum(axis=1), count_matrices.sum(axis=2))
return _compute_csets(
connectivity, state_counts, count_matrices, None, None, None, nn=nn, callback=callback) | def compute_csets_dTRAM(connectivity, count_matrices, nn=None, callback=None) | r"""
Computes the largest connected sets for dTRAM data.
Parameters
----------
connectivity : string
one 'reversible_pathways', 'neighbors', 'summed_count_matrix' or None.
Selects the algorithm for measuring overlap between thermodynamic
and Markov states.
* 'reversible_pathways' : requires that every state in the connected set
can be reached by following a pathway of reversible transitions. A
reversible transition between two Markov states (within the same
thermodynamic state k) is a pair of Markov states that belong to the
same strongly connected component of the count matrix (from
thermodynamic state k). A pathway of reversible transitions is a list of
reversible transitions [(i_1, i_2), (i_2, i_3),..., (i_(N-2), i_(N-1)),
(i_(N-1), i_N)]. The thermodynamic state where the reversible
transitions happen, is ignored in constructing the reversible pathways.
This is equivalent to assuming that two ensembles overlap at some Markov
state whenever there exist frames from both ensembles in that Markov
state.
* 'largest' : alias for reversible_pathways
* 'neighbors' : similar to 'reversible_pathways' but with a more strict
requirement for the overlap between thermodynamic states. It is required
that every state in the connected set can be reached by following a
pathway of reversible transitions or jumping between overlapping
thermodynamic states while staying in the same Markov state. A reversible
transition between two Markov states (within the same thermodynamic
state k) is a pair of Markov states that belong to the same strongly
connected component of the count matrix (from thermodynamic state k).
It is assumed that the data comes from an Umbrella sampling simulation
and the number of the thermodynamic state matches the position of the
Umbrella along the order parameter. The overlap of thermodynamic states
k and l within Markov state n is set according to the value of nn; if
there are samples in both product-space states (k,n) and (l,n) and
|l-n|<=nn, the states are overlapping.
* 'summed_count_matrix' : all thermodynamic states are assumed to overlap.
The connected set is then computed by summing the count matrices over
all thermodynamic states and taking it's largest strongly connected set.
Not recommended!
* None : assume that everything is connected. For debugging.
count_matrices : numpy.ndarray((T, M, M))
Count matrices for all T thermodynamic states.
nn : int or None, optional
Number of neighbors that are assumed to overlap when
connectivity='neighbors'
Returns
-------
csets, projected_cset
csets : list of numpy.ndarray((M_prime_k,), dtype=int)
List indexed by thermodynamic state. Every element csets[k] is
the largest connected set at thermodynamic state k.
projected_cset : numpy.ndarray(M_prime, dtype=int)
The overall connected set. This is the union of the individual
connected sets of the thermodynamic states. | 7.420557 | 9.977421 | 0.743735 |
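A sketch of the 'summed_count_matrix' strategy described above, using scipy (helper name and example data are illustrative): sum the count matrices over all thermodynamic states and keep the largest strongly connected component.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def largest_connected_set_summed(count_matrices):
    C = np.asarray(count_matrices).sum(axis=0)          # (M, M) summed counts
    n_comp, labels = connected_components(csr_matrix(C > 0), directed=True, connection='strong')
    sizes = np.bincount(labels)
    return np.where(labels == np.argmax(sizes))[0]       # states in the largest component

# two thermodynamic states, three Markov states; state 2 is only ever entered, never left
C_k = np.array([[[5, 2, 0], [3, 4, 1], [0, 0, 0]],
                [[1, 1, 0], [2, 2, 0], [0, 0, 0]]])
print(largest_connected_set_summed(C_k))   # -> [0 1]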