code | signature | docstring | loss_without_docstring | loss_with_docstring | factor
string | string | string | float64 | float64 | float64
---|---|---|---|---|---
'''
The distance matrix contains lengths of shortest paths between all
pairs of nodes. An entry (u,v) represents the length of shortest path
from node u to node v. The average shortest path length is the
characteristic path length of the network.
Parameters
----------
G : NxN np.ndarray
Directed/undirected connection-length matrix.
NB G is not the adjacency (weight) matrix. See below.
Returns
-------
D : NxN np.ndarray
distance (shortest weighted path) matrix
B : NxN np.ndarray
matrix of number of edges in shortest weighted path
Notes
-----
The input matrix must be a connection-length matrix, typically
obtained via a mapping from weight to length. For instance, in a
weighted correlation network higher correlations are more naturally
interpreted as shorter distances and the input matrix should
consequently be some inverse of the connectivity matrix.
The number of edges in shortest weighted paths may in general
exceed the number of edges in shortest binary paths (i.e. shortest
paths computed on the binarized connectivity matrix), because shortest
weighted paths have the minimal weighted distance, but not necessarily
the minimal number of edges.
Lengths between disconnected nodes are set to Inf.
Lengths on the main diagonal are set to 0.
Algorithm: Dijkstra's algorithm.
'''
n = len(G)
D = np.zeros((n, n)) # distance matrix
D[np.logical_not(np.eye(n))] = np.inf
B = np.zeros((n, n)) # number of edges matrix
for u in range(n):
# distance permanence (true is temporary)
S = np.ones((n,), dtype=bool)
G1 = G.copy()
V = [u]
while True:
S[V] = 0 # distance u->V is now permanent
G1[:, V] = 0 # no in-edges as already shortest
for v in V:
W, = np.where(G1[v, :]) # neighbors of shortest nodes
td = np.array(
[D[u, W].flatten(), (D[u, v] + G1[v, W]).flatten()])
d = np.min(td, axis=0)
wi = np.argmin(td, axis=0)
D[u, W] = d # smallest of old/new path lengths
ind = W[np.where(wi == 1)] # indices of lengthened paths
# increment nr_edges for lengthened paths
B[u, ind] = B[u, v] + 1
if D[u, S].size == 0: # all nodes reached
break
minD = np.min(D[u, S])
if np.isinf(minD): # some nodes cannot be reached
break
V, = np.where(D[u, :] == minD)
return D, B | def distance_wei(G) | The distance matrix contains lengths of shortest paths between all
pairs of nodes. An entry (u,v) represents the length of shortest path
from node u to node v. The average shortest path length is the
characteristic path length of the network.
Parameters
----------
G : NxN np.ndarray
Directed/undirected connection-length matrix.
NB G is not the adjacency (weight) matrix. See below.
Returns
-------
D : NxN np.ndarray
distance (shortest weighted path) matrix
B : NxN np.ndarray
matrix of number of edges in shortest weighted path
Notes
-----
The input matrix must be a connection-length matrix, typically
obtained via a mapping from weight to length. For instance, in a
weighted correlation network higher correlations are more naturally
interpreted as shorter distances and the input matrix should
consequently be some inverse of the connectivity matrix.
The number of edges in shortest weighted paths may in general
exceed the number of edges in shortest binary paths (i.e. shortest
paths computed on the binarized connectivity matrix), because shortest
weighted paths have the minimal weighted distance, but not necessarily
the minimal number of edges.
Lengths between disconnected nodes are set to Inf.
Lengths on the main diagonal are set to 0.
Algorithm: Dijkstra's algorithm. | 5.756661 | 2.366784 | 2.432272 |
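Illustrative usage sketch (not part of the row above), assuming the distance_wei function shown here is importable; the inverse weight-to-length mapping is one common, assumed choice suggested by the Notes.
import numpy as np

# weighted connectivity matrix (higher weight = stronger connection)
W = np.array([[0., 0.9, 0.2],
              [0.9, 0., 0.5],
              [0.2, 0.5, 0.]])
# map weights to lengths so that stronger connections become shorter paths
L = np.zeros_like(W)
L[W > 0] = 1.0 / W[W > 0]
D, B = distance_wei(L)
print(D)  # shortest weighted path lengths between all node pairs
print(B)  # number of edges on each shortest weighted path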
'''
The global efficiency is the average of inverse shortest path length,
and is inversely related to the characteristic path length.
The local efficiency is the global efficiency computed on the
neighborhood of the node, and is related to the clustering coefficient.
Parameters
----------
A : NxN np.ndarray
binary undirected connection matrix
local : bool
If True, computes local efficiency instead of global efficiency.
Default value = False.
Returns
-------
Eglob : float
global efficiency, only if local=False
Eloc : Nx1 np.ndarray
local efficiency, only if local=True
'''
def distance_inv(g):
D = np.eye(len(g))
n = 1
nPATH = g.copy()
L = (nPATH != 0)
while np.any(L):
D += n * L
n += 1
nPATH = np.dot(nPATH, g)
L = (nPATH != 0) * (D == 0)
D[np.logical_not(D)] = np.inf
D = 1 / D
np.fill_diagonal(D, 0)
return D
G = binarize(G)
n = len(G) # number of nodes
if local:
E = np.zeros((n,)) # local efficiency
for u in range(n):
# V,=np.where(G[u,:]) #neighbors
# k=len(V) #degree
# if k>=2: #degree must be at least 2
# e=distance_inv(G[V].T[V])
# E[u]=np.sum(e)/(k*k-k) #local efficiency computation
# find pairs of neighbors
V, = np.where(np.logical_or(G[u, :], G[u, :].T))
# inverse distance matrix
e = distance_inv(G[np.ix_(V, V)])
# symmetrized inverse distance matrix
se = e + e.T
# symmetrized adjacency vector
sa = G[u, V] + G[V, u].T
numer = np.sum(np.outer(sa.T, sa) * se) / 2
if numer != 0:
denom = np.sum(sa)**2 - np.sum(sa * sa)
E[u] = numer / denom # local efficiency
else:
e = distance_inv(G)
E = np.sum(e) / (n * n - n) # global efficiency
return E | def efficiency_bin(G, local=False) | The global efficiency is the average of inverse shortest path length,
and is inversely related to the characteristic path length.
The local efficiency is the global efficiency computed on the
neighborhood of the node, and is related to the clustering coefficient.
Parameters
----------
A : NxN np.ndarray
binary undirected connection matrix
local : bool
If True, computes local efficiency instead of global efficiency.
Default value = False.
Returns
-------
Eglob : float
global efficiency, only if local=False
Eloc : Nx1 np.ndarray
local efficiency, only if local=True | 3.698128 | 2.630857 | 1.405674 |
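A hypothetical usage sketch for efficiency_bin, assuming the function and its binarize helper are in scope as above.
import numpy as np

# binary undirected 4-node graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
print(efficiency_bin(A))               # global efficiency (single float)
print(efficiency_bin(A, local=True))   # local efficiency, one value per node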
'''
Walks are sequences of linked nodes, that may visit a single node more
than once. This function finds the number of walks of a given length,
between any two nodes.
Parameters
----------
CIJ : NxN np.ndarray
binary directed/undirected connection matrix
Returns
-------
Wq : NxNxQ np.ndarray
Wq[i,j,q] is the number of walks from i to j of length q
twalk : int
total number of walks found
wlq : Qx1 np.ndarray
walk length distribution as a function of q
Notes
-----
Wq grows very quickly for larger N,K,q. Weights are discarded.
'''
CIJ = binarize(CIJ, copy=True)
n = len(CIJ)
Wq = np.zeros((n, n, n))
CIJpwr = CIJ.copy()
Wq[:, :, 1] = CIJ # walks of length 1
for q in range(2, n): # start at 2 so the length-1 walks are not overwritten
CIJpwr = np.dot(CIJpwr, CIJ)
Wq[:, :, q] = CIJpwr # CIJ^q, i.e. number of walks of length q
twalk = np.sum(Wq) # total number of walks
wlq = np.sum(np.sum(Wq, axis=0), axis=0)
return Wq, twalk, wlq | def findwalks(CIJ) | Walks are sequences of linked nodes, that may visit a single node more
than once. This function finds the number of walks of a given length,
between any two nodes.
Parameters
----------
CIJ : NxN np.ndarray
binary directed/undirected connection matrix
Returns
-------
Wq : NxNxQ np.ndarray
Wq[i,j,q] is the number of walks from i to j of length q
twalk : int
total number of walks found
wlq : Qx1 np.ndarray
walk length distribution as a function of q
Notes
-----
Wq grows very quickly for larger N,K,q. Weights are discarded. | 4.331817 | 1.728741 | 2.505763 |
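Minimal sketch for findwalks, assuming the function (and binarize) are available as defined above.
import numpy as np

# binary directed 3-cycle
CIJ = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]])
Wq, twalk, wlq = findwalks(CIJ)
print(Wq[:, :, 2])   # number of walks of length 2 between each pair of nodes
print(twalk, wlq)    # total walk count and per-length distribution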
'''
The binary reachability matrix describes reachability between all pairs
of nodes. An entry (u,v)=1 means that there exists a path from node u
to node v; alternatively (u,v)=0.
The distance matrix contains lengths of shortest paths between all
pairs of nodes. An entry (u,v) represents the length of shortest path
from node u to node v. The average shortest path length is the
characteristic path length of the network.
Parameters
----------
CIJ : NxN np.ndarray
binary directed/undirected connection matrix
ensure_binary : bool
Binarizes the input. Defaults to True. Disable this only for testing;
if your matrix is weighted, use distance_wei instead.
Returns
-------
R : NxN np.ndarray
binary reachability matrix
D : NxN np.ndarray
distance matrix
Notes
-----
faster but more memory intensive than "breadthdist.m".
'''
def reachdist2(CIJ, CIJpwr, R, D, n, powr, col, row):
CIJpwr = np.dot(CIJpwr, CIJ)
R = np.logical_or(R, CIJpwr != 0)
D += R
if powr <= n and np.any(R[np.ix_(row, col)] == 0):
powr += 1
R, D, powr = reachdist2(CIJ, CIJpwr, R, D, n, powr, col, row)
return R, D, powr
if ensure_binary:
CIJ = binarize(CIJ)
R = CIJ.copy()
D = CIJ.copy()
powr = 2
n = len(CIJ)
CIJpwr = CIJ.copy()
# check for vertices that have no incoming or outgoing connections
# these are ignored by reachdist
id = np.sum(CIJ, axis=0)
od = np.sum(CIJ, axis=1)
id0, = np.where(id == 0) # nothing goes in, so column(R) will be 0
od0, = np.where(od == 0) # nothing comes out, so row(R) will be 0
# use these columns and rows to check for reachability
col = list(range(n))
col = np.delete(col, id0)
row = list(range(n))
row = np.delete(row, od0)
R, D, powr = reachdist2(CIJ, CIJpwr, R, D, n, powr, col, row)
#'invert' CIJdist to get distances
D = powr - D + 1
# put inf if no path found
D[D == n + 2] = np.inf
D[:, id0] = np.inf
D[od0, :] = np.inf
return R, D | def reachdist(CIJ, ensure_binary=True) | The binary reachability matrix describes reachability between all pairs
of nodes. An entry (u,v)=1 means that there exists a path from node u
to node v; alternatively (u,v)=0.
The distance matrix contains lengths of shortest paths between all
pairs of nodes. An entry (u,v) represents the length of shortest path
from node u to node v. The average shortest path length is the
characteristic path length of the network.
Parameters
----------
CIJ : NxN np.ndarray
binary directed/undirected connection matrix
ensure_binary : bool
Binarizes the input. Defaults to True. Disable this only for testing;
if your matrix is weighted, use distance_wei instead.
Returns
-------
R : NxN np.ndarray
binary reachability matrix
D : NxN np.ndarray
distance matrix
Notes
-----
faster but more memory intensive than "breadthdist.m". | 4.127903 | 2.27784 | 1.812201 |
P = np.linalg.solve(np.diag(np.sum(adjacency, axis=1)), adjacency)
n = len(P)
D, V = np.linalg.eig(P.T)
aux = np.abs(D - 1)
index = np.where(aux == aux.min())[0]
if aux[index] > 10e-3:
raise ValueError("Cannot find eigenvalue of 1. Minimum eigenvalue " +
"value is {0}. Tolerance was ".format(aux[index]+1) +
"set at 10e-3.")
w = V[:, index].T
w = w / np.sum(w)
W = np.real(np.repeat(w, n, 0))
I = np.eye(n)
Z = np.linalg.inv(I - P + W)
mfpt = (np.repeat(np.atleast_2d(np.diag(Z)), n, 0) - Z) / W
return mfpt | def mean_first_passage_time(adjacency) | Calculates mean first passage time of `adjacency`
The first passage time from i to j is the expected number of steps it takes
a random walker starting at node i to arrive for the first time at node j.
The mean first passage time is not a symmetric measure: `mfpt(i,j)` may be
different from `mfpt(j,i)`.
Parameters
----------
adjacency : (N x N) array_like
Weighted/unweighted, direct/undirected connection weight/length array
Returns
-------
MFPT : (N x N) ndarray
Pairwise mean first passage time array
References
----------
.. [1] Goni, J., Avena-Koenigsberger, A., de Mendizabal, N. V., van den
Heuvel, M. P., Betzel, R. F., & Sporns, O. (2013). Exploring the
morphospace of communication efficiency in complex networks. PLoS One,
8(3), e58070. | 4.178525 | 4.013391 | 1.041146 |
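A small assumed example for mean_first_passage_time, using a 3-node chain so the random walk is easy to reason about.
import numpy as np

# undirected chain 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
mfpt = mean_first_passage_time(A)
print(mfpt)  # mfpt[i, j]: expected number of steps from node i to node j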
'''
Do rounding such that .5 always rounds away from zero (e.g. 2.5 -> 3,
-2.5 -> -3), rather than bankers rounding.
This is for compatibility with MATLAB functions, and ease of testing.
'''
if ((x > 0) and (x % 1 >= 0.5)) or ((x < 0) and (x % 1 > 0.5)):
return int(np.ceil(x))
else:
return int(np.floor(x)) | def teachers_round(x) | Do rounding such that .5 always rounds to 1, and not bankers rounding.
This is for compatibility with matlab functions, and ease of testing. | 5.169937 | 1.936102 | 2.670281 |
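Brief illustrative contrast with Python 3's built-in bankers rounding, assuming teachers_round is defined as above.
print(round(2.5), teachers_round(2.5))    # 2 vs 3
print(round(-2.5), teachers_round(-2.5))  # -2 vs -3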
'''
This is equivalent to np.random.choice(n, 4, replace=False)
Another fellow suggested np.random.random_sample(n).argpartition(4) which is
clever but still substantially slower.
'''
rng = get_rng(seed)
k = rng.randint(n**4)
a = k % n
b = k // n % n
c = k // n ** 2 % n
d = k // n ** 3 % n
if (a != b and a != c and a != d and b != c and b != d and c != d):
return (a, b, c, d)
else:
# the probability of finding a wrong configuration is extremely low
# unless for extremely small n. if n is extremely small the
# computational demand is not a problem.
# In my profiling it only took 0.4 seconds to include the uniqueness
# check in 1 million runs of this function so I think it is OK.
return pick_four_unique_nodes_quickly(n, rng) | def pick_four_unique_nodes_quickly(n, seed=None) | This is equivalent to np.random.choice(n, 4, replace=False)
Another fellow suggested np.random.random_sample(n).argpartition(4) which is
clever but still substantially slower. | 5.70135 | 4.014211 | 1.420292 |
'''
This is an efficient implementation of matlab's "dummyvar" command
using sparse matrices.
input: partitions, NxM array-like containing M partitions of N nodes
into <=N distinct communities
output: dummyvar, an NxR matrix containing R column variables (indicator
variables) with N entries, where R is the total number of communities
summed across each of the M partitions.
i.e.
r = sum((max(len(unique(partitions[i]))) for i in range(m)))
'''
# num_rows is not affected by partition indexes
n = np.size(cis, axis=0)
m = np.size(cis, axis=1)
r = sum(len(np.unique(cis[:, i])) for i in range(m)) # total communities across all partitions
nnz = np.prod(cis.shape)
ix = np.argsort(cis, axis=0)
# s_cis=np.sort(cis,axis=0)
# FIXME use the sorted indices to sort by row efficiently
s_cis = cis[ix][:, range(m), range(m)]
mask = np.hstack((((True,),) * m, (s_cis[:-1, :] != s_cis[1:, :]).T))
indptr, = np.where(mask.flat)
indptr = np.append(indptr, nnz)
import scipy.sparse as sp
dv = sp.csc_matrix((np.repeat((1,), nnz), ix.T.flat, indptr), shape=(n, r))
if return_sparse:
return dv
return dv.toarray()
using sparse matrices.
input: partitions, NxM array-like containing M partitions of N nodes
into <=N distinct communities
output: dummyvar, an NxR matrix containing R column variables (indicator
variables) with N entries, where R is the total number of communities
summed across each of the M partitions.
i.e.
r = sum((max(len(unique(partitions[i]))) for i in range(m))) | 6.346187 | 2.931501 | 2.164825 |
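A hypothetical input/output sketch for dummyvar (requires scipy), assuming the function above is in scope.
import numpy as np

# 4 nodes, 2 partitions, each with 2 communities
cis = np.array([[1, 1],
                [1, 2],
                [2, 2],
                [2, 1]])
print(dummyvar(cis))  # 4x4 indicator matrix: one column per community per partition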
if seed is None or seed == np.random:
return np.random.mtrand._rand
elif isinstance(seed, np.random.RandomState):
return seed
try:
rstate = np.random.RandomState(seed)
except ValueError:
rstate = np.random.RandomState(random.Random(seed).randint(0, 2**32-1))
return rstate | def get_rng(seed=None) | By default, or if `seed` is np.random, return the global RandomState
instance used by np.random.
If `seed` is a RandomState instance, return it unchanged.
Otherwise, use the passed (hashable) argument to seed a new instance
of RandomState and return it.
Parameters
----------
seed : hashable or np.random.RandomState or np.random, optional
Returns
-------
np.random.RandomState | 2.550849 | 2.594292 | 0.983254 |
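Sketch of the three seed forms get_rng accepts, assuming the function above.
import numpy as np

rng_a = get_rng(42)                            # hashable seed -> fresh RandomState
rng_b = get_rng(np.random.RandomState(42))     # RandomState -> returned unchanged
rng_c = get_rng()                              # None -> numpy's global RandomState
print(rng_a.randint(10) == rng_b.randint(10))  # True: both draw from seed 42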
'''
The assortativity coefficient is a correlation coefficient between the
degrees of all nodes on two opposite ends of a link. A positive
assortativity coefficient indicates that nodes tend to link to other
nodes with the same or similar degree.
Parameters
----------
CIJ : NxN np.ndarray
binary directed/undirected connection matrix
flag : int
0 : undirected graph; degree/degree correlation
1 : directed graph; out-degree/in-degree correlation
2 : directed graph; in-degree/out-degree correlation
3 : directed graph; out-degree/out-degree correlation
4 : directed graph; in-degree/in-degree correlation
Returns
-------
r : float
assortativity coefficient
Notes
-----
The function accepts weighted networks, but all connection
weights are ignored. The main diagonal should be empty. For flag 1
the function computes the directed assortativity described in Rubinov
and Sporns (2010) NeuroImage.
'''
if flag == 0: # undirected version
deg = degrees_und(CIJ)
i, j = np.where(np.triu(CIJ, 1) > 0)
K = len(i)
degi = deg[i]
degj = deg[j]
else: # directed version
id, od, deg = degrees_dir(CIJ)
i, j = np.where(CIJ > 0)
K = len(i)
if flag == 1:
degi = od[i]
degj = id[j]
elif flag == 2:
degi = id[i]
degj = od[j]
elif flag == 3:
degi = od[i]
degj = od[j]
elif flag == 4:
degi = id[i]
degj = id[j]
else:
raise ValueError('Flag must be 0-4')
# compute assortativity
term1 = np.sum(degi * degj) / K
term2 = np.square(np.sum(.5 * (degi + degj)) / K)
term3 = np.sum(.5 * (degi * degi + degj * degj)) / K
r = (term1 - term2) / (term3 - term2)
return r | def assortativity_bin(CIJ, flag=0) | The assortativity coefficient is a correlation coefficient between the
degrees of all nodes on two opposite ends of a link. A positive
assortativity coefficient indicates that nodes tend to link to other
nodes with the same or similar degree.
Parameters
----------
CIJ : NxN np.ndarray
binary directed/undirected connection matrix
flag : int
0 : undirected graph; degree/degree correlation
1 : directed graph; out-degree/in-degree correlation
2 : directed graph; in-degree/out-degree correlation
3 : directed graph; out-degree/out-degree correlation
4 : directed graph; in-degree/in-degree correlation
Returns
-------
r : float
assortativity coefficient
Notes
-----
The function accepts weighted networks, but all connection
weights are ignored. The main diagonal should be empty. For flag 1
the function computes the directed assortativity described in Rubinov
and Sporns (2010) NeuroImage. | 3.294679 | 1.607119 | 2.050053 |
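Illustrative sketch for assortativity_bin (assumes the function and degrees_und are importable as above).
import numpy as np

# hub node 2 is attached to a degree-1 leaf, so the coefficient comes out negative
CIJ = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
print(assortativity_bin(CIJ, flag=0))  # roughly -0.71 for this graph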
'''
The assortativity coefficient is a correlation coefficient between the
strengths (weighted degrees) of all nodes on two opposite ends of a link.
A positive assortativity coefficient indicates that nodes tend to link to
other nodes with the same or similar strength.
Parameters
----------
CIJ : NxN np.ndarray
weighted directed/undirected connection matrix
flag : int
0 : undirected graph; strength/strength correlation
1 : directed graph; out-strength/in-strength correlation
2 : directed graph; in-strength/out-strength correlation
3 : directed graph; out-strength/out-strength correlation
4 : directed graph; in-strength/in-strength correlation
Returns
-------
r : float
assortativity coefficient
Notes
-----
The main diagonal should be empty. For flag 1
the function computes the directed assortativity described in Rubinov
and Sporns (2010) NeuroImage.
'''
if flag == 0: # undirected version
str = strengths_und(CIJ)
i, j = np.where(np.triu(CIJ, 1) > 0)
K = len(i)
stri = str[i]
strj = str[j]
else:
ist, ost = strengths_dir(CIJ) # directed version
i, j = np.where(CIJ > 0)
K = len(i)
if flag == 1:
stri = ost[i]
strj = ist[j]
elif flag == 2:
stri = ist[i]
strj = ost[j]
elif flag == 3:
stri = ost[i]
strj = ost[j]
elif flag == 4:
stri = ist[i]
strj = ist[j]
else:
raise ValueError('Flag must be 0-4')
# compute assortativity
term1 = np.sum(stri * strj) / K
term2 = np.square(np.sum(.5 * (stri + strj)) / K)
term3 = np.sum(.5 * (stri * stri + strj * strj)) / K
r = (term1 - term2) / (term3 - term2)
return r | def assortativity_wei(CIJ, flag=0) | The assortativity coefficient is a correlation coefficient between the
strengths (weighted degrees) of all nodes on two opposite ends of a link.
A positive assortativity coefficient indicates that nodes tend to link to
other nodes with the same or similar strength.
Parameters
----------
CIJ : NxN np.ndarray
weighted directed/undirected connection matrix
flag : int
0 : undirected graph; strength/strength correlation
1 : directed graph; out-strength/in-strength correlation
2 : directed graph; in-strength/out-strength correlation
3 : directed graph; out-strength/out-strength correlation
4 : directed graph; in-strength/in-strengthn correlation
Returns
-------
r : float
assortativity coefficient
Notes
-----
The main diagonal should be empty. For flag 1
the function computes the directed assortativity described in Rubinov
and Sporns (2010) NeuroImage. | 3.383315 | 1.668682 | 2.027538 |
'''
The k-core is the largest subnetwork comprising nodes of degree at
least k. This function computes the k-core for a given binary directed
connection matrix by recursively peeling off nodes with degree lower
than k, until no such nodes remain.
Parameters
----------
CIJ : NxN np.ndarray
binary directed adjacency matrix
k : int
level of k-core
peel : bool
If True, additionally calculates peelorder and peellevel. Defaults to
False.
Returns
-------
CIJkcore : NxN np.ndarray
connection matrix of the k-core. This matrix only contains nodes of
degree at least k.
kn : int
size of k-core
peelorder : Nx1 np.ndarray
indices in the order in which they were peeled away during k-core
decomposition. only returned if peel is specified.
peellevel : Nx1 np.ndarray
corresponding level - nodes at the same level have been peeled
away at the same time. only returned if peel is specified
Notes
-----
'peelorder' and 'peellevel' are similar to the k-core sub-shells
described in Modha and Singh (2010).
'''
if peel:
peelorder, peellevel = ([], [])
iter = 0
CIJkcore = CIJ.copy()
while True:
id, od, deg = degrees_dir(CIJkcore) # get degrees of matrix
# find nodes with degree <k
ff, = np.where(np.logical_and(deg < k, deg > 0))
if ff.size == 0:
break # if none found -> stop
# else peel away found nodes
iter += 1
CIJkcore[ff, :] = 0
CIJkcore[:, ff] = 0
if peel:
peelorder.append(ff)
if peel:
peellevel.append(iter * np.ones((len(ff),)))
kn = np.sum(deg > 0)
if peel:
return CIJkcore, kn, peelorder, peellevel
else:
return CIJkcore, kn | def kcore_bd(CIJ, k, peel=False) | The k-core is the largest subnetwork comprising nodes of degree at
least k. This function computes the k-core for a given binary directed
connection matrix by recursively peeling off nodes with degree lower
than k, until no such nodes remain.
Parameters
----------
CIJ : NxN np.ndarray
binary directed adjacency matrix
k : int
level of k-core
peel : bool
If True, additionally calculates peelorder and peellevel. Defaults to
False.
Returns
-------
CIJkcore : NxN np.ndarray
connection matrix of the k-core. This matrix only contains nodes of
degree at least k.
kn : int
size of k-core
peelorder : Nx1 np.ndarray
indices in the order in which they were peeled away during k-core
decomposition. only returned if peel is specified.
peellevel : Nx1 np.ndarray
corresponding level - nodes at the same level have been peeled
away at the same time. only returned if peel is specified
Notes
-----
'peelorder' and 'peellevel' are similar to the k-core sub-shells
described in Modha and Singh (2010). | 4.466394 | 1.763293 | 2.532985 |
'''
The k-core is the largest subnetwork comprising nodes of degree at
least k. This function computes the k-core for a given binary
undirected connection matrix by recursively peeling off nodes with
degree lower than k, until no such nodes remain.
Parameters
----------
CIJ : NxN np.ndarray
binary undirected connection matrix
k : int
level of k-core
peel : bool
If True, additionally calculates peelorder and peellevel. Defaults to
False.
Returns
-------
CIJkcore : NxN np.ndarray
connection matrix of the k-core. This matrix only contains nodes of
degree at least k.
kn : int
size of k-core
peelorder : Nx1 np.ndarray
indices in the order in which they were peeled away during k-core
decomposition. only returned if peel is specified.
peellevel : Nx1 np.ndarray
corresponding level - nodes at the same level have been peeled
away at the same time. only returned if peel is specified
Notes
-----
'peelorder' and 'peellevel' are similar to the k-core sub-shells
described in Modha and Singh (2010).
'''
if peel:
peelorder, peellevel = ([], [])
iter = 0
CIJkcore = CIJ.copy()
while True:
deg = degrees_und(CIJkcore) # get degrees of matrix
# find nodes with degree <k
ff, = np.where(np.logical_and(deg < k, deg > 0))
if ff.size == 0:
break # if none found -> stop
# else peel away found nodes
iter += 1
CIJkcore[ff, :] = 0
CIJkcore[:, ff] = 0
if peel:
peelorder.append(ff)
if peel:
peellevel.append(iter * np.ones((len(ff),)))
kn = np.sum(deg > 0)
if peel:
return CIJkcore, kn, peelorder, peellevel
else:
return CIJkcore, kn | def kcore_bu(CIJ, k, peel=False) | The k-core is the largest subnetwork comprising nodes of degree at
least k. This function computes the k-core for a given binary
undirected connection matrix by recursively peeling off nodes with
degree lower than k, until no such nodes remain.
Parameters
----------
CIJ : NxN np.ndarray
binary undirected connection matrix
k : int
level of k-core
peel : bool
If True, additionally calculates peelorder and peellevel. Defaults to
False.
Returns
-------
CIJkcore : NxN np.ndarray
connection matrix of the k-core. This matrix only contains nodes of
degree at least k.
kn : int
size of k-core
peelorder : Nx1 np.ndarray
indices in the order in which they were peeled away during k-core
decomposition. only returned if peel is specified.
peellevel : Nx1 np.ndarray
corresponding level - nodes at the same level have been peeled
away at the same time. only returned if peel is specified
Notes
-----
'peelorder' and 'peellevel' are similar to the k-core sub-shells
described in Modha and Singh (2010). | 4.183613 | 1.665794 | 2.511483 |
'''
Local assortativity measures the extent to which nodes are connected to
nodes of similar strength. Adapted from Thedchanamoorthy et al. 2014
formula to allow weighted/signed networks.
Parameters
----------
W : NxN np.ndarray
undirected connection matrix with positive and negative weights
Returns
-------
loc_assort_pos : Nx1 np.ndarray
local assortativity from positive weights
loc_assort_neg : Nx1 np.ndarray
local assortativity from negative weights
'''
n = len(W)
np.fill_diagonal(W, 0)
r_pos = assortativity_wei(W * (W > 0))
r_neg = assortativity_wei(W * (W < 0))
str_pos, str_neg, _, _ = strengths_und_sign(W)
loc_assort_pos = np.zeros((n,))
loc_assort_neg = np.zeros((n,))
for curr_node in range(n):
jp = np.where(W[curr_node, :] > 0)
loc_assort_pos[curr_node] = np.sum(np.abs(str_pos[jp] -
str_pos[curr_node])) / str_pos[curr_node]
jn = np.where(W[curr_node, :] < 0)
loc_assort_neg[curr_node] = np.sum(np.abs(str_neg[jn] -
str_neg[curr_node])) / str_neg[curr_node]
loc_assort_pos = ((r_pos + 1) / n -
loc_assort_pos / np.sum(loc_assort_pos))
loc_assort_neg = ((r_neg + 1) / n -
loc_assort_neg / np.sum(loc_assort_neg))
return loc_assort_pos, loc_assort_neg | def local_assortativity_wu_sign(W) | Local assortativity measures the extent to which nodes are connected to
nodes of similar strength. Adapted from Thedchanamoorthy et al. 2014
formula to allow weighted/signed networks.
Parameters
----------
W : NxN np.ndarray
undirected connection matrix with positive and negative weights
Returns
-------
loc_assort_pos : Nx1 np.ndarray
local assortativity from positive weights
loc_assort_neg : Nx1 np.ndarray
local assortativity from negative weights | 2.815807 | 1.770259 | 1.590619 |
'''
The rich club coefficient, R, at level k is the fraction of edges that
connect nodes of degree k or higher out of the maximum number of edges
that such nodes might share.
Parameters
----------
CIJ : NxN np.ndarray
binary directed connection matrix
klevel : int | None
sets the maximum level at which the rich club coefficient will be
calculated. If None (default), the maximum level is set to the
maximum degree of the adjacency matrix
Returns
-------
R : Kx1 np.ndarray
vector of rich-club coefficients for levels 1 to klevel
Nk : Kx1 np.ndarray
number of nodes with degree > k
Ek : Kx1 np.ndarray
number of edges remaining in subgraph with degree > k
'''
# definition of degree as used for RC coefficients
# degree is taken to be the sum of incoming and outgoing connections
id, od, deg = degrees_dir(CIJ)
if klevel is None:
klevel = int(np.max(deg))
R = np.zeros((klevel,))
Nk = np.zeros((klevel,))
Ek = np.zeros((klevel,))
for k in range(klevel):
SmallNodes, = np.where(deg <= k + 1) # get small nodes with degree <=k
subCIJ = np.delete(CIJ, SmallNodes, axis=0)
subCIJ = np.delete(subCIJ, SmallNodes, axis=1)
Nk[k] = np.size(subCIJ, axis=1) # number of nodes with degree >k
Ek[k] = np.sum(subCIJ) # number of connections in subgraph
# unweighted rich club coefficient
R[k] = Ek[k] / (Nk[k] * (Nk[k] - 1))
return R, Nk, Ek | def rich_club_bd(CIJ, klevel=None) | The rich club coefficient, R, at level k is the fraction of edges that
connect nodes of degree k or higher out of the maximum number of edges
that such nodes might share.
Parameters
----------
CIJ : NxN np.ndarray
binary directed connection matrix
klevel : int | None
sets the maximum level at which the rich club coefficient will be
calculated. If None (default), the maximum level is set to the
maximum degree of the adjacency matrix
Returns
-------
R : Kx1 np.ndarray
vector of rich-club coefficients for levels 1 to klevel
Nk : Kx1 np.ndarray
number of nodes with degree > k
Ek : Kx1 np.ndarray
number of edges remaining in subgraph with degree > k | 3.758694 | 2.054194 | 1.829765 |
'''
The rich club coefficient, R, at level k is the fraction of edges that
connect nodes of degree k or higher out of the maximum number of edges
that such nodes might share.
Parameters
----------
CIJ : NxN np.ndarray
binary undirected connection matrix
klevel : int | None
sets the maximum level at which the rich club coefficient will be
calculated. If None (default), the maximum level is set to the
maximum degree of the adjacency matrix
Returns
-------
R : Kx1 np.ndarray
vector of rich-club coefficients for levels 1 to klevel
Nk : Kx1 np.ndarray
number of nodes with degree > k
Ek : Kx1 np.ndarray
number of edges remaining in subgraph with degree > k
'''
deg = degrees_und(CIJ) # compute degree of each node
if klevel is None:
klevel = int(np.max(deg))
R = np.zeros((klevel,))
Nk = np.zeros((klevel,))
Ek = np.zeros((klevel,))
for k in range(klevel):
SmallNodes, = np.where(deg <= k + 1) # get small nodes with degree <=k
subCIJ = np.delete(CIJ, SmallNodes, axis=0)
subCIJ = np.delete(subCIJ, SmallNodes, axis=1)
Nk[k] = np.size(subCIJ, axis=1) # number of nodes with degree >k
Ek[k] = np.sum(subCIJ) # number of connections in subgraph
# unweighted rich club coefficient
R[k] = Ek[k] / (Nk[k] * (Nk[k] - 1))
return R, Nk, Ek | def rich_club_bu(CIJ, klevel=None) | The rich club coefficient, R, at level k is the fraction of edges that
connect nodes of degree k or higher out of the maximum number of edges
that such nodes might share.
Parameters
----------
CIJ : NxN np.ndarray
binary undirected connection matrix
klevel : int | None
sets the maximum level at which the rich club coefficient will be
calculated. If None (default), the maximum level is set to the
maximum degree of the adjacency matrix
Returns
-------
R : Kx1 np.ndarray
vector of rich-club coefficients for levels 1 to klevel
Nk : Kx1 np.ndarray
number of nodes with degree > k
Ek : Kx1 np.ndarray
number of edges remaining in subgraph with degree > k | 3.270103 | 1.716236 | 1.905392 |
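Sketch for rich_club_bu, assuming it and degrees_und are available as above; klevel is capped at 2 so every level keeps at least two nodes.
import numpy as np

# nodes 0 and 2 (degree 3) form the densely connected "rich" pair
CIJ = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [1, 0, 1, 0]])
R, Nk, Ek = rich_club_bu(CIJ, klevel=2)
print(R)        # rich-club coefficient per level
print(Nk, Ek)   # surviving node and edge counts per level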
'''
Parameters
----------
CIJ : NxN np.ndarray
weighted directed connection matrix
klevel : int | None
sets the maximum level at which the rich club coefficient will be
calculated. If None (default), the maximum level is set to the
maximum degree of the adjacency matrix
Returns
-------
Rw : Kx1 np.ndarray
vector of rich-club coefficients for levels 1 to klevel
'''
nr_nodes = len(CIJ)
# degree of each node is defined here as in+out
deg = np.sum((CIJ != 0), axis=0) + np.sum((CIJ.T != 0), axis=0)
if klevel is None:
klevel = np.max(deg)
Rw = np.zeros((klevel,))
# sort the weights of the network, with the strongest connection first
wrank = np.sort(CIJ.flat)[::-1]
for k in range(klevel):
SmallNodes, = np.where(deg < k + 1)
if np.size(SmallNodes) == 0:
Rw[k] = np.nan
continue
# remove small nodes with node degree < k
cutCIJ = np.delete(
np.delete(CIJ, SmallNodes, axis=0), SmallNodes, axis=1)
# total weight of connections in subset E>r
Wr = np.sum(cutCIJ)
# total number of connections in subset E>r
Er = np.size(np.where(cutCIJ.flat != 0), axis=1)
# E>r number of connections with max weight in network
wrank_r = wrank[:Er]
# weighted rich-club coefficient
Rw[k] = Wr / np.sum(wrank_r)
return Rw | def rich_club_wd(CIJ, klevel=None) | Parameters
----------
CIJ : NxN np.ndarray
weighted directed connection matrix
klevel : int | None
sets the maximum level at which the rich club coefficient will be
calculated. If None (default), the maximum level is set to the
maximum degree of the adjacency matrix
Returns
-------
Rw : Kx1 np.ndarray
vector of rich-club coefficients for levels 1 to klevel | 4.232376 | 3.11705 | 1.357815 |
'''
The s-core is the largest subnetwork comprising nodes of strength at
least s. This function computes the s-core for a given weighted
undirected connection matrix. Computation is analogous to the more
widely used k-core, but is based on node strengths instead of node
degrees.
Parameters
----------
CIJ : NxN np.ndarray
weighted undirected connection matrix
s : float
level of s-core. Note that s can take on any fractional value.
Returns
-------
CIJscore : NxN np.ndarray
connection matrix of the s-core. This matrix contains only nodes with
a strength of at least s.
sn : int
size of s-core
'''
CIJscore = CIJ.copy()
while True:
str = strengths_und(CIJscore) # get strengths of matrix
# find nodes with strength <s
ff, = np.where(np.logical_and(str < s, str > 0))
if ff.size == 0:
break # if none found -> stop
# else peel away found nodes
CIJscore[ff, :] = 0
CIJscore[:, ff] = 0
sn = np.sum(str > 0)
return CIJscore, sn | def score_wu(CIJ, s) | The s-core is the largest subnetwork comprising nodes of strength at
least s. This function computes the s-core for a given weighted
undirected connection matrix. Computation is analogous to the more
widely used k-core, but is based on node strengths instead of node
degrees.
Parameters
----------
CIJ : NxN np.ndarray
weighted undirected connection matrix
s : float
level of s-core. Note that s can take on any fractional value.
Returns
-------
CIJscore : NxN np.ndarray
connection matrix of the s-core. This matrix contains only nodes with
a strength of at least s.
sn : int
size of s-core | 5.627903 | 2.071018 | 2.717457 |
try:
return list(array).index(self.pad_value)
except ValueError:
return len(array) | def find_pad_index(self, array) | Find padding index.
Args:
array (list): integer list.
Returns:
idx: padding index.
Examples:
>>> array = [1, 2, 0]
>>> self.find_pad_index(array)
2 | 3.433372 | 4.777613 | 0.718638 |
lens = [self.find_pad_index(row) for row in y]
return lens | def get_length(self, y) | Get true length of y.
Args:
y (list): padded list.
Returns:
lens: true length of y.
Examples:
>>> y = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
>>> self.get_length(y)
[1, 2, 3] | 11.838353 | 18.766808 | 0.630813 |
y = [[self.id2label[idx] for idx in row[:l]]
for row, l in zip(y, lens)]
return y | def convert_idx_to_name(self, y, lens) | Convert label index to name.
Args:
y (list): label index list.
lens (list): true length of y.
Returns:
y: label name list.
Examples:
>>> # assumes that id2label = {1: 'B-LOC', 2: 'I-LOC'}
>>> y = [[1, 0, 0], [1, 2, 0], [1, 1, 1]]
>>> lens = [1, 2, 3]
>>> self.convert_idx_to_name(y, lens)
[['B-LOC'], ['B-LOC', 'I-LOC'], ['B-LOC', 'B-LOC', 'B-LOC']] | 4.016612 | 6.884893 | 0.583395 |
y_pred = self.model.predict_on_batch(X)
# reduce dimension.
y_true = np.argmax(y, -1)
y_pred = np.argmax(y_pred, -1)
lens = self.get_length(y_true)
y_true = self.convert_idx_to_name(y_true, lens)
y_pred = self.convert_idx_to_name(y_pred, lens)
return y_true, y_pred | def predict(self, X, y) | Predict sequences.
Args:
X (list): input data.
y (list): tags.
Returns:
y_true: true sequences.
y_pred: predicted sequences. | 2.607158 | 2.689478 | 0.969392 |
score = f1_score(y_true, y_pred)
print(' - f1: {:04.2f}'.format(score * 100))
print(classification_report(y_true, y_pred, digits=4))
return score | def score(self, y_true, y_pred) | Calculate f1 score.
Args:
y_true (list): true sequences.
y_pred (list): predicted sequences.
Returns:
score: f1 score. | 2.631655 | 3.435667 | 0.765981 |
# for nested list
if any(isinstance(s, list) for s in seq):
seq = [item for sublist in seq for item in sublist + ['O']]
prev_tag = 'O'
prev_type = ''
begin_offset = 0
chunks = []
for i, chunk in enumerate(seq + ['O']):
if suffix:
tag = chunk[-1]
type_ = chunk.split('-')[0]
else:
tag = chunk[0]
type_ = chunk.split('-')[-1]
if end_of_chunk(prev_tag, tag, prev_type, type_):
chunks.append((prev_type, begin_offset, i-1))
if start_of_chunk(prev_tag, tag, prev_type, type_):
begin_offset = i
prev_tag = tag
prev_type = type_
return chunks | def get_entities(seq, suffix=False) | Gets entities from sequence.
Args:
seq (list): sequence of labels.
Returns:
list: list of (chunk_type, chunk_start, chunk_end).
Example:
>>> from seqeval.metrics.sequence_labeling import get_entities
>>> seq = ['B-PER', 'I-PER', 'O', 'B-LOC']
>>> get_entities(seq)
[('PER', 0, 1), ('LOC', 3, 3)] | 2.192916 | 2.096879 | 1.0458 |
chunk_end = False
if prev_tag == 'E': chunk_end = True
if prev_tag == 'S': chunk_end = True
if prev_tag == 'B' and tag == 'B': chunk_end = True
if prev_tag == 'B' and tag == 'S': chunk_end = True
if prev_tag == 'B' and tag == 'O': chunk_end = True
if prev_tag == 'I' and tag == 'B': chunk_end = True
if prev_tag == 'I' and tag == 'S': chunk_end = True
if prev_tag == 'I' and tag == 'O': chunk_end = True
if prev_tag != 'O' and prev_tag != '.' and prev_type != type_:
chunk_end = True
return chunk_end | def end_of_chunk(prev_tag, tag, prev_type, type_) | Checks if a chunk ended between the previous and current word.
Args:
prev_tag: previous chunk tag.
tag: current chunk tag.
prev_type: previous type.
type_: current type.
Returns:
chunk_end: boolean. | 1.536627 | 1.618354 | 0.9495 |
chunk_start = False
if tag == 'B': chunk_start = True
if tag == 'S': chunk_start = True
if prev_tag == 'E' and tag == 'E': chunk_start = True
if prev_tag == 'E' and tag == 'I': chunk_start = True
if prev_tag == 'S' and tag == 'E': chunk_start = True
if prev_tag == 'S' and tag == 'I': chunk_start = True
if prev_tag == 'O' and tag == 'E': chunk_start = True
if prev_tag == 'O' and tag == 'I': chunk_start = True
if tag != 'O' and tag != '.' and prev_type != type_:
chunk_start = True
return chunk_start | def start_of_chunk(prev_tag, tag, prev_type, type_) | Checks if a chunk started between the previous and current word.
Args:
prev_tag: previous chunk tag.
tag: current chunk tag.
prev_type: previous type.
type_: current type.
Returns:
chunk_start: boolean. | 1.680201 | 1.776091 | 0.946011 |
true_entities = set(get_entities(y_true, suffix))
pred_entities = set(get_entities(y_pred, suffix))
nb_correct = len(true_entities & pred_entities)
nb_pred = len(pred_entities)
nb_true = len(true_entities)
p = nb_correct / nb_pred if nb_pred > 0 else 0
r = nb_correct / nb_true if nb_true > 0 else 0
score = 2 * p * r / (p + r) if p + r > 0 else 0
return score | def f1_score(y_true, y_pred, average='micro', suffix=False) | Compute the F1 score.
The F1 score can be interpreted as a weighted average of the precision and
recall, where an F1 score reaches its best value at 1 and worst score at 0.
The relative contribution of precision and recall to the F1 score are
equal. The formula for the F1 score is::
F1 = 2 * (precision * recall) / (precision + recall)
Args:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
score : float.
Example:
>>> from seqeval.metrics import f1_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> f1_score(y_true, y_pred)
0.50 | 1.569039 | 2.096663 | 0.748351 |
if any(isinstance(s, list) for s in y_true):
y_true = [item for sublist in y_true for item in sublist]
y_pred = [item for sublist in y_pred for item in sublist]
nb_correct = sum(y_t==y_p for y_t, y_p in zip(y_true, y_pred))
nb_true = len(y_true)
score = nb_correct / nb_true
return score | def accuracy_score(y_true, y_pred) | Accuracy classification score.
In multilabel classification, this function computes subset accuracy:
the set of labels predicted for a sample must *exactly* match the
corresponding set of labels in y_true.
Args:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
score : float.
Example:
>>> from seqeval.metrics import accuracy_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> accuracy_score(y_true, y_pred)
0.80 | 1.852974 | 2.14585 | 0.863515 |
true_entities = set(get_entities(y_true, suffix))
pred_entities = set(get_entities(y_pred, suffix))
nb_correct = len(true_entities & pred_entities)
nb_pred = len(pred_entities)
score = nb_correct / nb_pred if nb_pred > 0 else 0
return score | def precision_score(y_true, y_pred, average='micro', suffix=False) | Compute the precision.
The precision is the ratio ``tp / (tp + fp)`` where ``tp`` is the number of
true positives and ``fp`` the number of false positives. The precision is
intuitively the ability of the classifier not to label as positive a sample.
The best value is 1 and the worst value is 0.
Args:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
score : float.
Example:
>>> from seqeval.metrics import precision_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> precision_score(y_true, y_pred)
0.50 | 2.043265 | 3.226726 | 0.633232 |
true_entities = set(get_entities(y_true, suffix))
pred_entities = set(get_entities(y_pred, suffix))
nb_correct = len(true_entities & pred_entities)
nb_true = len(true_entities)
score = nb_correct / nb_true if nb_true > 0 else 0
return score | def recall_score(y_true, y_pred, average='micro', suffix=False) | Compute the recall.
The recall is the ratio ``tp / (tp + fn)`` where ``tp`` is the number of
true positives and ``fn`` the number of false negatives. The recall is
intuitively the ability of the classifier to find all the positive samples.
The best value is 1 and the worst value is 0.
Args:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
score : float.
Example:
>>> from seqeval.metrics import recall_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> recall_score(y_true, y_pred)
0.50 | 2.061201 | 3.229084 | 0.638324 |
performance_dict = dict()
if any(isinstance(s, list) for s in y_true):
y_true = [item for sublist in y_true for item in sublist]
y_pred = [item for sublist in y_pred for item in sublist]
performance_dict['TP'] = sum(y_t == y_p for y_t, y_p in zip(y_true, y_pred)
if ((y_t != 'O') or (y_p != 'O')))
performance_dict['FP'] = sum(y_t != y_p for y_t, y_p in zip(y_true, y_pred))
performance_dict['FN'] = sum(((y_t != 'O') and (y_p == 'O'))
for y_t, y_p in zip(y_true, y_pred))
performance_dict['TN'] = sum((y_t == y_p == 'O')
for y_t, y_p in zip(y_true, y_pred))
return performance_dict
Args:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
performance_dict : dict
Example:
>>> from seqeval.metrics import performance_measure
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'O', 'B-ORG'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> performance_measure(y_true, y_pred)
{'TP': 3, 'FP': 3, 'FN': 1, 'TN': 4} | 1.653895 | 1.681199 | 0.98376 |
true_entities = set(get_entities(y_true, suffix))
pred_entities = set(get_entities(y_pred, suffix))
name_width = 0
d1 = defaultdict(set)
d2 = defaultdict(set)
for e in true_entities:
d1[e[0]].add((e[1], e[2]))
name_width = max(name_width, len(e[0]))
for e in pred_entities:
d2[e[0]].add((e[1], e[2]))
last_line_heading = 'macro avg'
width = max(name_width, len(last_line_heading), digits)
headers = ["precision", "recall", "f1-score", "support"]
head_fmt = u'{:>{width}s} ' + u' {:>9}' * len(headers)
report = head_fmt.format(u'', *headers, width=width)
report += u'\n\n'
row_fmt = u'{:>{width}s} ' + u' {:>9.{digits}f}' * 3 + u' {:>9}\n'
ps, rs, f1s, s = [], [], [], []
for type_name, true_entities in d1.items():
pred_entities = d2[type_name]
nb_correct = len(true_entities & pred_entities)
nb_pred = len(pred_entities)
nb_true = len(true_entities)
p = nb_correct / nb_pred if nb_pred > 0 else 0
r = nb_correct / nb_true if nb_true > 0 else 0
f1 = 2 * p * r / (p + r) if p + r > 0 else 0
report += row_fmt.format(*[type_name, p, r, f1, nb_true], width=width, digits=digits)
ps.append(p)
rs.append(r)
f1s.append(f1)
s.append(nb_true)
report += u'\n'
# compute averages
report += row_fmt.format('micro avg',
precision_score(y_true, y_pred, suffix=suffix),
recall_score(y_true, y_pred, suffix=suffix),
f1_score(y_true, y_pred, suffix=suffix),
np.sum(s),
width=width, digits=digits)
report += row_fmt.format(last_line_heading,
np.average(ps, weights=s),
np.average(rs, weights=s),
np.average(f1s, weights=s),
np.sum(s),
width=width, digits=digits)
return report | def classification_report(y_true, y_pred, digits=2, suffix=False) | Build a text report showing the main classification metrics.
Args:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a classifier.
digits : int. Number of digits for formatting output floating point values.
Returns:
report : string. Text summary of the precision, recall, F1 score for each class.
Examples:
>>> from seqeval.metrics import classification_report
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> print(classification_report(y_true, y_pred))
precision recall f1-score support
<BLANKLINE>
MISC 0.00 0.00 0.00 1
PER 1.00 1.00 1.00 1
<BLANKLINE>
micro avg 0.50 0.50 0.50 2
macro avg 0.50 0.50 0.50 2
<BLANKLINE> | 1.631171 | 1.664434 | 0.980016 |
if isinstance(td, numbers.Real):
td = datetime.timedelta(seconds=td)
return td.total_seconds() | def _timedelta_to_seconds(td) | Convert a datetime.timedelta object into a seconds interval for
rotating file output.
:param td: datetime.timedelta
:return: time in seconds
:rtype: float | 3.383373 | 4.214242 | 0.802843 |
adapter = _LOGGERS.get(name)
if not adapter:
# NOTE(jd) Keep a reference in the `adapter' variable so it is not
# garbage-collected, since _LOGGERS only holds weak references
adapter = KeywordArgumentAdapter(logging.getLogger(name), kwargs)
_LOGGERS[name] = adapter
return adapter | def getLogger(name=None, **kwargs) | Build a logger with the given name.
:param name: The name for the logger. This is usually the module
name, ``__name__``.
:type name: string | 9.426425 | 10.516461 | 0.89635 |
root_logger = logging.getLogger(None)
# Remove all handlers
for handler in list(root_logger.handlers):
root_logger.removeHandler(handler)
# Add configured handlers
for out in outputs:
if isinstance(out, str):
out = output.preconfigured.get(out)
if out is None:
raise RuntimeError("Output {} is not available".format(out))
out.add_to_logger(root_logger)
root_logger.setLevel(level)
program_logger = logging.getLogger(program_name)
def logging_excepthook(exc_type, value, tb):
program_logger.critical(
"".join(traceback.format_exception(exc_type, value, tb)))
sys.excepthook = logging_excepthook
if capture_warnings:
logging.captureWarnings(True) | def setup(level=logging.WARNING, outputs=[output.STDERR], program_name=None,
capture_warnings=True) | Setup Python logging.
This will setup basic handlers for Python logging.
:param level: Root log level.
:param outputs: Iterable of outputs to log to.
:param program_name: The name of the program. Auto-detected if not set.
:param capture_warnings: Capture warnings from the `warnings' module. | 2.34559 | 2.589242 | 0.905899 |
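A hedged end-to-end sketch of the two logging helpers above (getLogger and setup); output.STDERR is taken from the defaults shown, and the extra keyword passed to getLogger is only an illustration of the adapter's bound arguments.
import logging

setup(level=logging.INFO, outputs=[output.STDERR], program_name="myapp")
log = getLogger(__name__, component="worker")  # kwargs become bound extra fields
log.info("service started")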
for logger, level in loggers_and_log_levels:
if isinstance(level, str):
level = level.upper()
logging.getLogger(logger).setLevel(level) | def set_default_log_levels(loggers_and_log_levels) | Set default log levels for some loggers.
:param loggers_and_log_levels: List of tuple (logger name, level). | 2.207919 | 2.734063 | 0.80756 |
swag_opts = {}
if ctx.type == 'file':
swag_opts = {
'swag.type': 'file',
'swag.data_dir': ctx.data_dir,
'swag.data_file': ctx.data_file
}
elif ctx.type == 's3':
swag_opts = {
'swag.type': 's3',
'swag.bucket_name': ctx.bucket_name,
'swag.data_file': ctx.data_file,
'swag.region': ctx.region
}
elif ctx.type == 'dynamodb':
swag_opts = {
'swag.type': 'dynamodb',
'swag.region': ctx.region
}
return SWAGManager(**parse_swag_config_options(swag_opts)) | def create_swag_from_ctx(ctx) | Creates SWAG client from the current context. | 2.044177 | 1.991229 | 1.02659 |
if not ctx.data_file:
ctx.data_file = data_file
if not ctx.data_dir:
ctx.data_dir = data_dir
ctx.type = 'file' | def file(ctx, data_dir, data_file) | Use the File SWAG Backend | 3.51649 | 3.2785 | 1.072591 |
if not ctx.data_file:
ctx.data_file = data_file
if not ctx.bucket_name:
ctx.bucket_name = bucket_name
if not ctx.region:
ctx.region = region
ctx.type = 's3' | def s3(ctx, bucket_name, data_file, region) | Use the S3 SWAG backend. | 2.281693 | 2.278301 | 1.001489 |
if ctx.namespace != 'accounts':
click.echo(
click.style('Only account data is available for listing.', fg='red')
)
return
swag = create_swag_from_ctx(ctx)
accounts = swag.get_all()
_table = [[result['name'], result.get('id')] for result in accounts]
click.echo(
tabulate(_table, headers=["Account Name", "Account Number"])
) | def list(ctx) | List SWAG account info. | 5.339768 | 4.67078 | 1.143228 |
swag = create_swag_from_ctx(ctx)
accounts = swag.get_service_enabled(name)
_table = [[result['name'], result.get('id')] for result in accounts]
click.echo(
tabulate(_table, headers=["Account Name", "Account Number"])
) | def list_service(ctx, name) | Retrieve accounts pertaining to named service. | 5.894555 | 5.120251 | 1.151224 |
if ctx.type == 'file':
if ctx.data_file:
file_path = ctx.data_file
else:
file_path = os.path.join(ctx.data_dir, ctx.namespace + '.json')
# todo make this more like alembic and determine/load versions automatically
with open(file_path, 'r') as f:
data = json.loads(f.read())
data = run_migration(data, start_version, end_version)
with open(file_path, 'w') as f:
f.write(json.dumps(data)) | def migrate(ctx, start_version, end_version) | Transition from one SWAG schema to another. | 4.053816 | 4.010408 | 1.010824 |
data = []
if ctx.type == 'file':
if ctx.data_file:
file_path = ctx.data_file
else:
file_path = os.path.join(ctx.data_dir, ctx.namespace + '.json')
with open(file_path, 'r') as f:
data = json.loads(f.read())
swag_opts = {
'swag.type': 'dynamodb'
}
swag = SWAGManager(**parse_swag_config_options(swag_opts))
for item in data:
time.sleep(2)
swag.create(item, dry_run=ctx.dry_run) | def propagate(ctx) | Transfers SWAG data from one backend to another | 3.860934 | 3.538641 | 1.091078 |
swag = create_swag_from_ctx(ctx)
data = json.loads(data.read())
for account in data:
swag.create(account, dry_run=ctx.dry_run) | def create(ctx, data) | Create a new SWAG item. | 6.227707 | 5.389903 | 1.15544 |
enabled = False if disabled else True
swag = create_swag_from_ctx(ctx)
accounts = swag.get_all(search_filter=path)
log.debug('Searching for accounts. Found: {} JMESPath: `{}`'.format(len(accounts), path))
for a in accounts:
try:
if not swag.get_service(name, search_filter="[?id=='{id}']".format(id=a['id'])):
log.info('Found an account to update. AccountName: {name} AccountNumber: {number}'.format(name=a['name'], number=a['id']))
status = []
for region in regions:
status.append(
{
'enabled': enabled,
'region': region
}
)
a['services'].append(
{
'name': name,
'status': status
}
)
swag.update(a, dry_run=ctx.dry_run)
except InvalidSWAGDataException as e:
log.warning('Found a data quality issue. AccountName: {name} AccountNumber: {number}'.format(name=a['name'], number=a['id']))
log.info('Service has been deployed to all matching accounts.') | def deploy_service(ctx, path, name, regions, disabled) | Deploys a new service JSON to multiple accounts. NAME is the service name you wish to deploy. | 4.232579 | 4.072337 | 1.039349 |
swag = create_swag_from_ctx(ctx)
for k, v in json.loads(data.read()).items():
for account in v['accounts']:
data = {
'description': 'This is an AWS owned account used for {}'.format(k),
'id': account['account_id'],
'contacts': [],
'owner': 'aws',
'provider': 'aws',
'sensitive': False,
'email': '[email protected]',
'name': k + '-' + account['region']
}
click.echo(click.style(
'Seeded Account. AccountName: {}'.format(data['name']), fg='green')
)
swag.create(data, dry_run=ctx.dry_run) | def seed_aws_data(ctx, data) | Seeds SWAG from a list of known AWS accounts. | 5.648676 | 5.337643 | 1.058272 |
swag = create_swag_from_ctx(ctx)
accounts = swag.get_all()
_ids = [result.get('id') for result in accounts]
client = boto3.client('organizations')
paginator = client.get_paginator('list_accounts')
response_iterator = paginator.paginate()
count = 0
for response in response_iterator:
for account in response['Accounts']:
if account['Id'] in _ids:
click.echo(click.style(
'Ignoring Duplicate Account. AccountId: {} already exists in SWAG'.format(account['Id']), fg='yellow')
)
continue
if account['Status'] == 'SUSPENDED':
status = 'deprecated'
else:
status = 'created'
data = {
'id': account['Id'],
'name': account['Name'],
'description': 'Account imported from AWS organization.',
'email': account['Email'],
'owner': owner,
'provider': 'aws',
'contacts': [],
'sensitive': False,
'status': [{'region': 'all', 'status': status}]
}
click.echo(click.style(
'Seeded Account. AccountName: {}'.format(data['name']), fg='green')
)
count += 1
swag.create(data, dry_run=ctx.dry_run)
click.echo('Seeded {} accounts to SWAG.'.format(count)) | def seed_aws_organization(ctx, owner) | Seeds SWAG from an AWS organziation. | 3.136531 | 3.038494 | 1.032265 |
logger.debug('Loading item from s3. Bucket: {bucket} Key: {key}'.format(
bucket=bucket,
key=data_file
))
# If the file doesn't exist, then return an empty dict:
try:
data = _get_from_s3(client, bucket, data_file)
except ClientError as ce:
if ce.response['Error']['Code'] == 'NoSuchKey':
return {}
else:
raise ce
if sys.version_info > (3,):
data = data.decode('utf-8')
return json.loads(data) | def load_file(client, bucket, data_file) | Tries to load JSON data from S3. | 2.725006 | 2.565884 | 1.062014 |
logger.debug('Writing {number_items} items to s3. Bucket: {bucket} Key: {key}'.format(
number_items=len(items),
bucket=bucket,
key=data_file
))
if not dry_run:
return _put_to_s3(client, bucket, data_file, json.dumps(items)) | def save_file(client, bucket, data_file, items, dry_run=None) | Tries to write JSON data to data file in S3. | 2.965835 | 2.824596 | 1.050003 |
logger.debug('Creating new item. Item: {item} Path: {data_file}'.format(
item=item,
data_file=self.data_file
))
items = load_file(self.client, self.bucket_name, self.data_file)
items = append_item(self.namespace, self.version, item, items)
save_file(self.client, self.bucket_name, self.data_file, items, dry_run=dry_run)
return item | def create(self, item, dry_run=None) | Creates a new item in file. | 3.412635 | 3.249219 | 1.050294 |
logger.debug('Updating item. Item: {item} Path: {data_file}'.format(
item=item,
data_file=self.data_file
))
self.delete(item, dry_run=dry_run)
return self.create(item, dry_run=dry_run) | def update(self, item, dry_run=None) | Updates item info in file. | 3.362245 | 3.107551 | 1.08196 |
logger.debug('Fetching items. Path: {data_file}'.format(
data_file=self.data_file
))
return load_file(self.client, self.bucket_name, self.data_file) | def get_all(self) | Gets all items in file. | 6.916444 | 5.905971 | 1.171094 |
logger.debug('Health Check on S3 file for: {namespace}'.format(
namespace=self.namespace
))
try:
self.client.head_object(Bucket=self.bucket_name, Key=self.data_file)
return True
except ClientError as e:
logger.debug('Error encountered with S3. Assume unhealthy')
return False
logger.debug('Deleting item. Item: {item} Table: {namespace}'.format(
item=item,
namespace=self.namespace
))
if not dry_run:
self.table.delete_item(Key={'id': item['id']})
return item | def delete(self, item, dry_run=None) | Deletes item from the DynamoDB table. | 4.194571 | 4.179071 | 1.003709 |
logger.debug('Updating item. Item: {item} Table: {namespace}'.format(
item=item,
namespace=self.namespace
))
if not dry_run:
self.table.put_item(Item=item)
return item | def update(self, item, dry_run=None) | Updates item info in file. | 4.660406 | 4.426467 | 1.05285 |
logger.debug('Fetching items. Table: {namespace}'.format(
namespace=self.namespace
))
rows = []
result = self.table.scan()
while True:
next_token = result.get('LastEvaluatedKey', None)
rows += result['Items']
if next_token:
result = self.table.scan(ExclusiveStartKey=next_token)
else:
break
return rows | def get_all(self) | Gets all items in file. | 3.477647 | 3.291455 | 1.056569 |
logger.debug('Health Check on Table: {namespace}'.format(
namespace=self.namespace
))
try:
self.get_all()
return True
except ClientError as e:
logger.exception(e)
logger.error('Error encountered with Database. Assume unhealthy')
return False | def health_check(self) | Gets a single item to determine if Dynamo is functioning. | 7.427935 | 6.193407 | 1.199329 |
options = {}
for key, val in config.items():
if key.startswith('swag.backend.'):
options[key[13:]] = val  # strip the full 'swag.backend.' prefix (13 chars), not 12
elif key.startswith('swag.'):
options[key[5:]] = val  # strip the 'swag.' prefix
if options.get('type') == 's3':
return S3OptionsSchema(strict=True).load(options).data
elif options.get('type') == 'dynamodb':
return DynamoDBOptionsSchema(strict=True).load(options).data
else:
return FileOptionsSchema(strict=True).load(options).data | def parse_swag_config_options(config) | Ensures that options passed to the backend are valid. | 2.244843 | 2.178895 | 1.030267 |
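The option-parsing helper above strips the `swag.` prefix and validates the result against a backend schema. Below is a minimal usage sketch, assuming the function is importable from `swag_client.util` (as in the published swag-client package); the bucket name is a made-up placeholder and the option keys mirror the `swag_opts` dict used in the `get_all_accounts` row further down.

```python
# Minimal sketch: turn 'swag.'-prefixed config keys into validated backend options.
# Assumes swag_client is installed; the bucket name below is a placeholder.
from swag_client.util import parse_swag_config_options

swag_opts = {
    'swag.type': 's3',
    'swag.bucket_name': 'example-swag-bucket',  # hypothetical bucket
    'swag.bucket_region': 'us-west-2',
    'swag.data_file': 'accounts.json',
    'swag.schema_version': 2,
}

options = parse_swag_config_options(swag_opts)
print(options)  # prefix-stripped options validated by S3OptionsSchema
```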
def wrapper(fn):
def deprecated_method(*args, **kargs):
warnings.warn(message, DeprecationWarning, 2)
return fn(*args, **kargs)
# TODO: use functools.wraps to preserve the wrapped function's metadata?
deprecated_method.__name__ = fn.__name__
deprecated_method.__doc__ = "%s\n\n%s" % (message, fn.__doc__)
return deprecated_method
return wrapper | def deprecated(message) | Deprecated function decorator. | 3.001265 | 2.927637 | 1.025149 |
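The `deprecated` decorator above wraps a function so that calling it emits a `DeprecationWarning`. A small self-contained sketch of how it might be used follows; the decorator body is copied from the row above so the snippet runs on its own, and `old_sum` is a made-up example function.

```python
import warnings

def deprecated(message):  # copied from the row above so this snippet is self-contained
    def wrapper(fn):
        def deprecated_method(*args, **kwargs):
            warnings.warn(message, DeprecationWarning, 2)
            return fn(*args, **kwargs)
        deprecated_method.__name__ = fn.__name__
        deprecated_method.__doc__ = "%s\n\n%s" % (message, fn.__doc__)
        return deprecated_method
    return wrapper

@deprecated("old_sum is deprecated, use the builtin sum() instead")
def old_sum(values):
    """Adds up a list of numbers."""
    return sum(values)

warnings.simplefilter('always', DeprecationWarning)
print(old_sum([1, 2, 3]))  # prints 6 and emits a DeprecationWarning
```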
for key in sub_dict.keys():
if key not in dictionary:
return False
if (type(sub_dict[key]) is not dict) and (sub_dict[key] != dictionary[key]):
return False
if (type(sub_dict[key]) is dict) and (not is_sub_dict(sub_dict[key], dictionary[key])):
return False
return True | def is_sub_dict(sub_dict, dictionary) | Legacy filter for determining if a given dict is present. | 1.585336 | 1.563143 | 1.014198 |
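The legacy `is_sub_dict` filter above returns True when every (possibly nested) key/value pair of `sub_dict` also appears in `dictionary`. A short self-contained illustration follows, using a made-up account record.

```python
def is_sub_dict(sub_dict, dictionary):  # copied from the row above
    for key in sub_dict.keys():
        if key not in dictionary:
            return False
        if (type(sub_dict[key]) is not dict) and (sub_dict[key] != dictionary[key]):
            return False
        if (type(sub_dict[key]) is dict) and (not is_sub_dict(sub_dict[key], dictionary[key])):
            return False
    return True

account = {
    'name': 'example-account',  # made-up v1-style record
    'type': 'aws',
    'metadata': {'account_number': '123456789012', 'email': 'team@example.com'},
}

print(is_sub_dict({'type': 'aws'}, account))                                   # True
print(is_sub_dict({'metadata': {'account_number': '123456789012'}}, account))  # True (nested match)
print(is_sub_dict({'type': 'gcp'}, account))                                   # False
```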
for account in get_all_accounts(bucket, region, json_path)['accounts']:
if 'aws' in account['type']:
if account['name'] == account_name:
return account
elif alias:
for a in account['alias']:
if a == account_name:
return account | def get_by_name(account_name, bucket, region='us-west-2', json_path='accounts.json', alias=None) | Given an account name, attempts to retrieve associated account info. | 2.65664 | 2.737844 | 0.97034 |
for account in get_all_accounts(bucket, region, json_path)['accounts']:
if 'aws' in account['type']:
if account['metadata']['account_number'] == account_number:
return account | def get_by_aws_account_number(account_number, bucket, region='us-west-2', json_path='accounts.json') | Given an account number (or ID), attempts to retrieve associated account info. | 2.93157 | 2.993581 | 0.979285 |
swag_opts = {
'swag.type': 's3',
'swag.bucket_name': bucket,
'swag.bucket_region': region,
'swag.data_file': json_path,
'swag.schema_version': 1
}
swag = SWAGManager(**parse_swag_config_options(swag_opts))
accounts = swag.get_all()
accounts = [account for account in accounts['accounts'] if is_sub_dict(filters, account)]
return {'accounts': accounts} | def get_all_accounts(bucket, region='us-west-2', json_path='accounts.json', **filters) | Fetches all the accounts from SWAG. | 3.77418 | 3.467073 | 1.088578 |
try:
with open(data_file, 'r', encoding='utf-8') as f:
return json.loads(f.read())
except JSONDecodeError as e:
return [] | def load_file(data_file) | Tries to load JSON from data file. | 2.78169 | 2.669593 | 1.04199 |
if dry_run:
return
with open(data_file, 'w', encoding='utf-8') as f:
if sys.version_info > (3, 0):
f.write(json.dumps(data))
else:
f.write(json.dumps(data).decode('utf-8')) | def save_file(data_file, data, dry_run=None) | Writes JSON data to data file. | 2.002866 | 1.912265 | 1.047379 |
logger.debug('Deleting item. Item: {item} Path: {data_file}'.format(
item=item,
data_file=self.data_file
))
items = load_file(self.data_file)
items = remove_item(self.namespace, self.version, item, items)
save_file(self.data_file, items, dry_run=dry_run)
return item | def delete(self, item, dry_run=None) | Deletes item in file. | 3.619984 | 3.493322 | 1.036258 |
logger.debug('Fetching items. Path: {data_file}'.format(
data_file=self.data_file
))
return load_file(self.data_file) | def get_all(self) | Gets all items in file. | 7.790462 | 6.248349 | 1.246803 |
logger.debug('Health Check on file for: {namespace}'.format(
namespace=self.namespace
))
return os.path.isfile(self.data_file) | def health_check(self) | Checks to make sure the file is there. | 10.294732 | 7.356252 | 1.399453 |
if namespace == 'accounts':
if version == 2:
schema = v2.AccountSchema(strict=True, context=context)
return schema.load(item).data
elif version == 1:
return v1.AccountSchema(strict=True).load(item).data
raise InvalidSWAGDataException('Schema version is not supported. Version: {}'.format(version))
raise InvalidSWAGDataException('Namespace not supported. Namespace: {}'.format(namespace)) | def validate(item, namespace='accounts', version=2, context=None) | Validate item against version schema.
Args:
item: data object
namespace: backend namespace
version: schema version
context: schema context object | 3.328918 | 3.399098 | 0.979354 |
self.version = kwargs['schema_version']
self.namespace = kwargs['namespace']
self.backend = get(kwargs['type'])(*args, **kwargs)
self.context = kwargs.pop('schema_context', {}) | def configure(self, *args, **kwargs) | Configures a SWAG manager. Overrides existing configuration. | 6.883242 | 6.97028 | 0.987513 |
return self.backend.create(validate(item, version=self.version, context=self.context), dry_run=dry_run) | def create(self, item, dry_run=None) | Create a new item in backend. | 6.506313 | 5.72955 | 1.135571 |
return self.backend.delete(item, dry_run=dry_run) | def delete(self, item, dry_run=None) | Delete an item in backend. | 4.070331 | 3.40309 | 1.196069 |
return self.backend.update(validate(item, version=self.version, context=self.context), dry_run=dry_run) | def update(self, item, dry_run=None) | Update an item in backend. | 6.668077 | 5.990669 | 1.113077 |
items = self.backend.get_all()
if not items:
if self.version == 1:
return {self.namespace: []}
return []
if search_filter:
items = jmespath.search(search_filter, items)
return items | def get_all(self, search_filter=None) | Fetch all data from backend. | 4.250572 | 3.996511 | 1.063571 |
if not accounts_list:
accounts = self.get_all(search_filter=search_filter)
else:
accounts = accounts_list
if self.version == 1:
accounts = accounts['accounts']
enabled = []
for account in accounts:
if self.version == 1:
account_filter = "accounts[?id=='{id}']".format(id=account['id'])
else:
account_filter = "[?id=='{id}']".format(id=account['id'])
service = self.get_service(name, search_filter=account_filter)
if self.version == 1:
if service:
service = service['enabled'] # no region information available in v1
else:
if not region:
service_filter = "status[?enabled]"
else:
service_filter = "status[?(region=='{region}' || region=='all') && enabled]".format(region=region)
service = jmespath.search(service_filter, service)
if service:
enabled.append(account)
return enabled | def get_service_enabled(self, name, accounts_list=None, search_filter=None, region=None) | Get a list of accounts where a service has been enabled. | 2.72645 | 2.644813 | 1.030867 |
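The per-region check in `get_service_enabled` above is driven by a jmespath filter over a service's `status` list. The sketch below exercises that same filter string on a made-up v2-style service record (requires the `jmespath` package).

```python
import jmespath

# Made-up service record in the v2 shape used above: a list of per-region statuses.
service = {
    'name': 's3',
    'status': [
        {'region': 'us-east-1', 'enabled': True},
        {'region': 'us-west-2', 'enabled': False},
    ],
}

def enabled_in(service, region):
    # Same filter string as used in get_service_enabled above.
    service_filter = "status[?(region=='{region}' || region=='all') && enabled]".format(region=region)
    return bool(jmespath.search(service_filter, service))

print(enabled_in(service, 'us-east-1'))  # True
print(enabled_in(service, 'us-west-2'))  # False
```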
if self.version == 1:
service_filter = "service.{name}".format(name=name)
return jmespath.search(service_filter, self.get(search_filter))
else:
service_filter = "services[?name=='{}']".format(name)
return one(jmespath.search(service_filter, self.get(search_filter))) | def get_service(self, name, search_filter) | Fetch service metadata. | 2.94136 | 2.919153 | 1.007607 |
service_filter = "services[?name=='{}'].metadata.name".format(name)
return one(jmespath.search(service_filter, self.get(search_filter))) | def get_service_name(self, name, search_filter) | Fetch account name as referenced by a particular service. | 7.048717 | 6.506659 | 1.083308 |
search_filter = "[?name=='{}']".format(name)
if alias:
if self.version == 1:
search_filter = "accounts[?name=='{name}' || contains(alias, '{name}')]".format(name=name)
elif self.version == 2:
search_filter = "[?name=='{name}' || contains(aliases, '{name}')]".format(name=name)
return self.get_all(search_filter) | def get_by_name(self, name, alias=None) | Fetch all accounts with name specified, optionally include aliases. | 3.545602 | 3.053985 | 1.160976 |
items = []
if version_start == 1 and version_end == 2:
for item in data['accounts']:
items.append(v2.upgrade(item))
if version_start == 2 and version_end == 1:
for item in data:
items.append(v2.downgrade(item))
items = {'accounts': items}
return items | def run_migration(data, version_start, version_end) | Runs migration against a data set. | 3.283743 | 3.239978 | 1.013508 |
environ = 'test'
if 'prod' in account['tags']:
environ = 'prod'
owner = 'netflix'
if not account['ours']:
owner = 'third-party'
services = []
if account['metadata'].get('s3_name'):
services.append(
dict(
name='s3',
metadata=dict(
name=account['metadata']['s3_name']
),
status=[
dict(
region='all',
enabled=True
)
]
)
)
if account['metadata'].get('cloudtrail_index'):
services.append(
dict(
name='cloudtrail',
metadata=dict(
esIndex=account['metadata']['cloudtrail_index'],
kibanaUrl=account['metadata']['cloudtrail_kibana_url']
),
status=[
dict(
region='all',
enabled=True
)
]
)
)
if account.get('bastion'):
services.append(
dict(
name='bastion',
metadata=dict(
hostname=account['bastion']
),
status=[
dict(
region='all',
enabled=True
)
]
)
)
for service in account['services'].keys():
s = dict(
name=service,
status=[
dict(
region='all',
enabled=account['services'][service].get('enabled', True)
)
]
)
if service == 'spinnaker':
s['metadata'] = {'name': account['services'][service]['name']}
if service == 'lazyfalcon':
if account['services'][service].get('owner'):
s['metadata'] = {'owner': account['services'][service]['owner']}
if service == 'titus':
s['metadata'] = {'stacks': account['services'][service]['stacks']}
services.append(s)
if account['metadata'].get('project_id'):
item_id = account['metadata']['project_id']
elif account['metadata'].get('account_number'):
item_id = account['metadata']['account_number']
else:
raise Exception('No id found, are you sure this is in v1 swag format.')
status = []
if account['type'] == 'aws':
status = [
{
'region': 'us-east-1',
'status': 'ready'
},
{
'region': 'us-west-2',
'status': 'ready'
},
{
'region': 'eu-west-1',
'status': 'ready'
},
{
'region': 'us-east-2',
'status': 'in-active'
},
{
'region': 'us-west-1',
'status': 'in-active'
},
{
'region': 'ca-central-1',
'status': 'in-active'
},
{
'region': 'ap-south-1',
'status': 'in-active'
},
{
'region': 'ap-northeast-2',
'status': 'in-active'
},
{
'region': 'ap-northeast-1',
'status': 'in-active'
},
{
'region': 'ap-southeast-1',
'status': 'in-active'
},
{
'region': 'ap-southeast-2',
'status': 'in-active'
},
{
'region': 'eu-west-2',
'status': 'in-active'
},
{
'region': 'eu-central-1',
'status': 'in-active'
},
{
'region': 'sa-east-1',
'status': 'in-active'
},
]
return dict(
id=item_id,
email=account['metadata'].get('email'),
name=account['name'],
contacts=account['owners'],
provider=account['type'],
status=status,
tags=list(set(account['tags'])),
environment=environ,
description=account['description'],
sensitive=account['cmc_required'],
owner=owner,
aliases=account['alias'],
services=services,
account_status=account['account_status']
) | def upgrade(account) | Transforms data from a v1 format to a v2 format | 2.135894 | 2.116345 | 1.009237 |
d_account = dict(schema_version=1, metadata={'email': account['email']},
tags=list(set([account['environment']] + account.get('tags', []))))
v1_services = {}
for service in account.get('services', []):
if service['name'] == 's3':
if service['metadata'].get('name'):
d_account['metadata']['s3_name'] = service['metadata']['name']
elif service['name'] == 'cloudtrail':
d_account['metadata']['cloudtrail_index'] = service['metadata']['esIndex']
d_account['metadata']['cloudtrail_kibana_url'] = service['metadata']['kibanaUrl']
elif service['name'] == 'bastion':
d_account['bastion'] = service['metadata']['hostname']
elif service['name'] == 'titus':
v1_services['titus'] = {
'stacks': service['metadata']['stacks'],
'enabled': service['status'][0]['enabled']
}
elif service['name'] == 'spinnaker':
v1_services['spinnaker'] = {
'name': service['metadata'].get('name', account["name"]),
'enabled': service['status'][0]['enabled']
}
elif service['name'] == 'awwwdit':
v1_services['awwwdit'] = {
'enabled': service['status'][0]['enabled']
}
elif service['name'] == 'security_monkey':
v1_services['security_monkey'] = {
'enabled': service['status'][0]['enabled']
}
elif service['name'] == 'poseidon':
v1_services['poseidon'] = {
'enabled': service['status'][0]['enabled']
}
elif service['name'] == 'rolliepollie':
v1_services['rolliepollie'] = {
'enabled': service['status'][0]['enabled']
}
elif service['name'] == 'lazyfalcon':
owner = None
if service.get('metadata'):
if service['metadata'].get('owner'):
owner = service['metadata']['owner']
v1_services['lazyfalcon'] = {
'enabled': service['status'][0]['enabled'],
'owner': owner
}
if account['provider'] == 'aws':
d_account['metadata']['account_number'] = account['id']
elif account['provider'] == 'gcp':
d_account['metadata']['project_id'] = account['id']
d_account['id'] = account['provider'] + '-' + account['id']
d_account['cmc_required'] = account['sensitive']
d_account['name'] = account['name']
d_account['alias'] = account['aliases']
d_account['description'] = account['description']
d_account['owners'] = account['contacts']
d_account['type'] = account['provider']
d_account['ours'] = True if account['owner'] == 'netflix' else False
d_account['netflix'] = True if account['owner'] == 'netflix' else False
d_account['services'] = v1_services
d_account['account_status'] = account['account_status']
return d_account | def downgrade(account) | Transforms data from v2 format to a v1 format | 2.515836 | 2.459906 | 1.022737 |
fields_to_validate = ['type', 'environment', 'owner']
for field in fields_to_validate:
value = data.get(field)
allowed_values = self.context.get(field)
if allowed_values and value not in allowed_values:
raise ValidationError('Must be one of {}'.format(allowed_values), field_names=field) | def validate_type(self, data) | Performs field validation against the schema context
if values have been provided to SWAGManager via the
swag.schema_context config object.
If the schema context for a given field is empty, then
we assume any value is valid for the given schema field. | 3.40011 | 3.136919 | 1.083901 |
deleted_status = 'deleted'
region_status = data.get('status')
account_status = data.get('account_status')
for region in region_status:
if region['status'] != deleted_status and account_status == deleted_status:
raise ValidationError('Account Status cannot be "deleted" if a region is not "deleted"') | def validate_account_status(self, data) | Performs field validation for account_status. If any
region is not deleted, account_status cannot be deleted | 3.825084 | 3.089755 | 1.237989 |
region_schema = RegionSchema()
supplied_regions = data.get('regions', {})
for region in supplied_regions.keys():
result = region_schema.validate(supplied_regions[region])
if len(result.keys()) > 0:
raise ValidationError(result) | def validate_regions_schema(self, data) | Performs field validation for regions. This should be
a dict with region names as the key and RegionSchema as the value | 3.045683 | 2.62948 | 1.158284 |
kwargs = {}
if coord.units.is_time_reference():
kwargs['value_format'] = get_date_format(coord)
else:
kwargs['unit'] = str(coord.units)
return Dimension(coord.name(), **kwargs) | def coord_to_dimension(coord) | Converts an iris coordinate to a HoloViews dimension. | 4.559172 | 3.746581 | 1.216889 |
import iris
order = {'T': -2, 'Z': -1, 'X': 1, 'Y': 2}
axis = iris.util.guess_coord_axis(coord)
return (order.get(axis, 0), coord and coord.name()) | def sort_coords(coord) | Sorts a list of DimCoords trying to ensure that
dates and pressure levels appear first and the
longitude and latitude appear last in the correct
order. | 5.079932 | 4.784752 | 1.061692 |
dim = dataset.get_dimension(dim, strict=True)
if dim in dataset.vdims:
coord_names = [c.name() for c in dataset.data.dim_coords]
data = dataset.data.copy().data
data = cls.canonicalize(dataset, data, coord_names)
return data.T.flatten() if flat else data
elif expanded:
data = cls.coords(dataset, dim.name, expanded=True)
return data.T.flatten() if flat else data
else:
return cls.coords(dataset, dim.name, ordered=True) | def values(cls, dataset, dim, expanded=True, flat=True, compute=True) | Returns an array of the values along the supplied dimension. | 3.713135 | 3.677017 | 1.009823 |
import iris
if not isinstance(dims, list): dims = [dims]
dims = [dataset.get_dimension(d, strict=True) for d in dims]
constraints = [d.name for d in dims]
slice_dims = [d for d in dataset.kdims if d not in dims]
# Update the kwargs appropriately for Element group types
group_kwargs = {}
group_type = dict if group_type == 'raw' else group_type
if issubclass(group_type, Element):
group_kwargs.update(util.get_param_values(dataset))
group_kwargs['kdims'] = slice_dims
group_kwargs.update(kwargs)
drop_dim = any(d not in group_kwargs['kdims'] for d in slice_dims)
unique_coords = product(*[cls.values(dataset, d, expanded=False)
for d in dims])
data = []
for key in unique_coords:
constraint = iris.Constraint(**dict(zip(constraints, key)))
extracted = dataset.data.extract(constraint)
if drop_dim:
extracted = group_type(extracted, kdims=slice_dims,
vdims=dataset.vdims).columns()
cube = group_type(extracted, **group_kwargs)
data.append((key, cube))
if issubclass(container_type, NdMapping):
with item_check(False), sorted_context(False):
return container_type(data, kdims=dims)
else:
return container_type(data) | def groupby(cls, dataset, dims, container_type=HoloMap, group_type=None, **kwargs) | Groups the data by one or more dimensions returning a container
indexed by the grouped dimensions containing slices of the
cube wrapped in the group_type. This makes it very easy to
break up a high-dimensional dataset into smaller viewable chunks. | 3.864412 | 3.936878 | 0.981593 |
import iris
from iris.experimental.equalise_cubes import equalise_attributes
cubes = []
for c, cube in datasets.items():
cube = cube.copy()
cube.add_aux_coord(iris.coords.DimCoord([c], var_name=dim.name))
cubes.append(cube)
cubes = iris.cube.CubeList(cubes)
equalise_attributes(cubes)
return cubes.merge_cube() | def concat_dim(cls, datasets, dim, vdims) | Concatenates datasets along one dimension | 2.475118 | 2.539071 | 0.974812 |
dim = dataset.get_dimension(dimension, strict=True)
values = dataset.dimension_values(dim.name, False)
return (np.nanmin(values), np.nanmax(values)) | def range(cls, dataset, dimension) | Computes the range along a particular dimension. | 3.453307 | 3.1659 | 1.090782 |
new_dataset = dataset.data.copy()
for name, new_dim in dimensions.items():
if name == new_dataset.name():
new_dataset.rename(new_dim.name)
for coord in new_dataset.dim_coords:
if name == coord.name():
coord.rename(new_dim.name)
return new_dataset | def redim(cls, dataset, dimensions) | Rename coords on the Cube. | 3.26163 | 2.852873 | 1.143279 |
return np.prod([len(d.points) for d in dataset.data.coords(dim_coords=True)], dtype=np.intp) | def length(cls, dataset) | Returns the total number of samples in the dataset. | 9.589006 | 9.10297 | 1.053393 |
if not vdim:
raise Exception("Cannot add key dimension to a dense representation.")
raise NotImplementedError | def add_dimension(cls, columns, dimension, dim_pos, values, vdim) | Adding value dimensions not currently supported by iris interface.
Adding key dimensions not possible on dense interfaces. | 23.55876 | 9.202007 | 2.560176 |
import iris
def get_slicer(start, end):
def slicer(cell):
return start <= cell.point < end
return slicer
constraint_kwargs = {}
for dim, constraint in selection.items():
if isinstance(constraint, slice):
constraint = (constraint.start, constraint.stop)
if isinstance(constraint, tuple):
if constraint == (None, None):
continue
constraint = get_slicer(*constraint)
dim = dataset.get_dimension(dim, strict=True)
constraint_kwargs[dim.name] = constraint
return iris.Constraint(**constraint_kwargs) | def select_to_constraint(cls, dataset, selection) | Transform a selection dictionary to an iris Constraint. | 3.148003 | 2.956843 | 1.06465 |
import iris
constraint = cls.select_to_constraint(dataset, selection)
pre_dim_coords = [c.name() for c in dataset.data.dim_coords]
indexed = cls.indexed(dataset, selection)
extracted = dataset.data.extract(constraint)
if indexed and not extracted.dim_coords:
return extracted.data.item()
post_dim_coords = [c.name() for c in extracted.dim_coords]
dropped = [c for c in pre_dim_coords if c not in post_dim_coords]
for d in dropped:
extracted = iris.util.new_axis(extracted, d)
return extracted | def select(cls, dataset, selection_mask=None, **selection) | Apply a selection to the data. | 3.989912 | 3.869428 | 1.031138 |
geotype = getattr(gv_element, type(element).__name__, None)
if crs is None or geotype is None or isinstance(element, _Element):
return element
return geotype(element, crs=crs) | def convert_to_geotype(element, crs=None) | Converts a HoloViews element type to the equivalent GeoViews
element if given a coordinate reference system. | 4.838383 | 4.3176 | 1.120619 |
crss = [crs for crs in element.traverse(lambda x: x.crs, [_Element])
if crs is not None]
if not crss:
return {}
crs = crss[0]
if any(crs != ocrs for ocrs in crss[1:]):
raise ValueError('Cannot %s Elements in different '
'coordinate reference systems.'
% type(op).__name__)
return {'crs': crs} | def find_crs(op, element) | Traverses the supplied object looking for coordinate reference
systems (crs). If multiple clashing reference systems are found
it will throw an error. | 4.951342 | 4.861199 | 1.018543 |
return element.map(lambda x: convert_to_geotype(x, kwargs.get('crs')), Element) | def add_crs(op, element, **kwargs) | Converts any elements in the input to their equivalent geotypes
if given a coordinate reference system. | 13.727289 | 9.112581 | 1.506411 |
if isinstance(element, (Overlay, NdOverlay)):
return any(element.traverse(is_geographic, [_Element]))
if kdims:
kdims = [element.get_dimension(d) for d in kdims]
else:
kdims = element.kdims
if len(kdims) != 2 and not isinstance(element, (Graph, Nodes)):
return False
if isinstance(element.data, geographic_types) or isinstance(element, (WMTS, Feature)):
return True
elif isinstance(element, _Element):
return kdims == element.kdims and element.crs
else:
return False | def is_geographic(element, kdims=None) | Utility to determine whether the supplied element optionally
a subset of its key dimensions represent a geographic coordinate
system. | 4.420888 | 4.676623 | 0.945316 |
feature = self.data
if scale is not None:
feature = feature.with_scale(scale)
if bounds:
extent = (bounds[0], bounds[2], bounds[1], bounds[3])
else:
extent = None
geoms = [g for g in feature.intersecting_geometries(extent) if g is not None]
if not as_element:
return geoms
elif not geoms or 'Polygon' in geoms[0].geom_type:
return Polygons(geoms, crs=feature.crs)
elif 'Point' in geoms[0].geom_type:
return Points(geoms, crs=feature.crs)
else:
return Path(geoms, crs=feature.crs) | def geoms(self, scale=None, bounds=None, as_element=True) | Returns the geometries held by the Feature.
Parameters
----------
scale: str
Scale of the geometry to return expressed as string.
Available scales depends on the Feature type.
NaturalEarthFeature:
'10m', '50m', '110m'
GSHHSFeature:
'auto', 'coarse', 'low', 'intermediate', 'high', 'full'
bounds: tuple
Tuple of a bounding region to query for geometries in
as_element: boolean
Whether to wrap the geometries in an element
Returns
-------
geometries: Polygons/Path
Polygons or Path object wrapping around returned geometries | 2.58093 | 2.616573 | 0.986378 |