code
string
signature
string
docstring
string
loss_without_docstring
float64
loss_with_docstring
float64
factor
float64
try:
    assert G.graph["family"] == "chimera"
    m = G.graph["columns"]
    n = G.graph["rows"]
    t = G.graph["tile"]
    coordinates = G.graph["labels"] == "coordinate"
except (AssertionError, KeyError):  # a bare except would also mask unrelated errors
    raise ValueError("Target chimera graph needs to have columns, rows, "
                     "tile, and label attributes to be able to identify faulty qubits.")

perfect_graph = chimera_graph(m, n, t, coordinates=coordinates)

draw_yield(G, chimera_layout(perfect_graph), perfect_graph, **kwargs)
def draw_chimera_yield(G, **kwargs)
Draws the given graph G with highlighted faults, according to layout. Parameters ---------- G : NetworkX graph The graph to be parsed for faults unused_color : tuple or color string (optional, default (0.9,0.9,0.9,1.0)) The color to use for nodes and edges of G which are not faults. If unused_color is None, these nodes and edges will not be shown at all. fault_color : tuple or color string (optional, default (1.0,0.0,0.0,1.0)) A color to represent nodes absent from the graph G. Colors should be length-4 tuples of floats between 0 and 1 inclusive. fault_shape : string, optional (default='x') The shape of the fault nodes. Specification is as matplotlib.scatter marker, one of 'so^>v<dph8'. fault_style : string, optional (default='dashed') Edge fault line style (solid|dashed|dotted,dashdot) kwargs : optional keywords See networkx.draw_networkx() for a description of optional keywords, with the exception of the `pos` parameter which is not used by this function. If `linear_biases` or `quadratic_biases` are provided, any provided `node_color` or `edge_color` arguments are ignored.
6.212608
6.161531
1.00829
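A minimal usage sketch (not part of the record above), assuming dwave_networkx is importable as dnx and matplotlib is installed; the two deleted qubits are arbitrary, chosen only to create faults to highlight.

import matplotlib.pyplot as plt
import dwave_networkx as dnx

# a 2x2 Chimera graph with two qubits knocked out to simulate faults
G = dnx.chimera_graph(2, 2, 4)
G.remove_nodes_from([3, 12])

# missing qubits are drawn as red 'x' markers, missing couplers as dashed lines
dnx.draw_chimera_yield(G)
plt.show()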
if G is None:
    raise ValueError("Expected NetworkX graph!")

# finding the maximum clique in a graph is equivalent to finding
# the maximum independent set in the complementary graph
complement_G = nx.complement(G)

return dnx.maximum_independent_set(complement_G, sampler, lagrange, **sampler_args)
def maximum_clique(G, sampler=None, lagrange=2.0, **sampler_args)
Returns an approximate maximum clique. A clique in an undirected graph G = (V, E) is a subset of the vertex set `C \subseteq V` such that for every two vertices in C there exists an edge connecting the two. This is equivalent to saying that the subgraph induced by C is complete (in some cases, the term clique may also refer to the subgraph). A maximum clique is a clique of the largest possible size in a given graph. This function works by finding the maximum independent set of the complement graph of the given graph G, which is equivalent to finding a maximum clique. It defines a QUBO with ground states corresponding to a maximum weighted independent set and uses the sampler to sample from it. Parameters ---------- G : NetworkX graph The graph on which to find a maximum clique. sampler A binary quadratic model sampler. A sampler is a process that samples from low energy states in models defined by an Ising equation or a Quadratic Unconstrained Binary Optimization Problem (QUBO). A sampler is expected to have a 'sample_qubo' and 'sample_ising' method. A sampler is expected to return an iterable of samples, in order of increasing energy. If no sampler is provided, one must be provided using the `set_default_sampler` function. lagrange : optional (default 2) Lagrange parameter to weight constraints (no edges within set) versus objective (largest set possible). sampler_args Additional keyword parameters are passed to the sampler. Returns ------- clique_nodes : list List of nodes that form a maximum clique, as determined by the given sampler. Notes ----- Samplers by their nature may not return the optimal solution. This function does not attempt to confirm the quality of the returned sample. References ---------- `Maximum Clique on Wikipedia <https://en.wikipedia.org/wiki/Maximum_clique(graph_theory)>`_ `Independent Set on Wikipedia <https://en.wikipedia.org/wiki/Independent_set_(graph_theory)>`_ `QUBO on Wikipedia <https://en.wikipedia.org/wiki/Quadratic_unconstrained_binary_optimization>`_ .. [AL] Lucas, A. (2014). Ising formulations of many NP problems. Frontiers in Physics, Volume 2, Article 5.
4.816532
5.300198
0.908746
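A usage sketch for maximum_clique, assuming the dimod package is installed; its brute-force ExactSolver enumerates every state, so on a small graph the lowest-energy sample is exact.

import dimod
import networkx as nx
import dwave_networkx as dnx

# a 5-cycle plus one chord; the unique maximum clique is the triangle {0, 1, 2}
G = nx.cycle_graph(5)
G.add_edge(0, 2)

clique = dnx.maximum_clique(G, dimod.ExactSolver())
print(sorted(clique))  # [0, 1, 2]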
# if the nodes are orderable, we want the lowest-order one.
try:
    nlist = sorted(G.nodes)
except TypeError:
    nlist = G.nodes()

n_nodes = len(nlist)

# create the object that will store the indices
chimera_indices = {}

# ok, let's first check for the simple cases
if n_nodes == 0:
    return chimera_indices
elif n_nodes == 1:
    raise DWaveNetworkXException(
        'Singleton graphs are not Chimera-structured')
elif n_nodes == 2:
    return {nlist[0]: (0, 0, 0, 0), nlist[1]: (0, 0, 1, 0)}

# next, let's get the bicoloring of the graph; this raises an exception if the graph is
# not bipartite
coloring = color(G)

# we want the color of the node to be the u term in the Chimera-index, so we want the
# first node in nlist to be color 0
if coloring[nlist[0]] == 1:
    coloring = {v: 1 - coloring[v] for v in coloring}

# we also want the diameter of the graph
# claim: diameter(G) == m + n for |G| > 2
dia = diameter(G)

# we have already handled the |G| <= 2 case, so we know, for diameter == 2, that the Chimera
# graph is a single tile
if dia == 2:
    shore_indices = [0, 0]

    for v in nlist:
        u = coloring[v]
        chimera_indices[v] = (0, 0, u, shore_indices[u])
        shore_indices[u] += 1

    return chimera_indices

# NB: max degree == shore size <==> one tile
raise Exception('not yet implemented for Chimera graphs with more than one tile')
def find_chimera_indices(G)
Attempts to determine the Chimera indices of the nodes in graph G. See the `chimera_graph()` function for a definition of a Chimera graph and Chimera indices. Parameters ---------- G : NetworkX graph Should be a single-tile Chimera graph. Returns ------- chimera_indices : dict A dict of the form {node: (i, j, u, k), ...} where (i, j, u, k) is a 4-tuple of integer Chimera indices. Examples -------- >>> G = dnx.chimera_graph(1, 1, 4) >>> chimera_indices = dnx.find_chimera_indices(G) >>> G = nx.Graph() >>> G.add_edges_from([(0, 2), (1, 2), (1, 3), (0, 3)]) >>> chimera_indices = dnx.find_chimera_indices(G) >>> nx.set_node_attributes(G, chimera_indices, 'chimera_index')
4.968252
4.657345
1.066756
if n is None:
    n = m

if t is None:
    t = 4

index_flip = m > n
if index_flip:
    m, n = n, m

def chimeraI(m0, n0, k0, l0):
    if index_flip:
        return m*2*t*n0 + 2*t*m0 + t*(1-k0) + l0
    else:
        return n*2*t*m0 + 2*t*n0 + t*k0 + l0

order = []

for n_i in range(n):
    for t_i in range(t):
        for m_i in range(m):
            order.append(chimeraI(m_i, n_i, 0, t_i))

for n_i in range(n):
    for m_i in range(m):
        for t_i in range(t):
            order.append(chimeraI(m_i, n_i, 1, t_i))

return order
def chimera_elimination_order(m, n=None, t=None)
Provides a variable elimination order for a Chimera graph. A graph defined by chimera_graph(m,n,t) has treewidth max(m,n)*t. This function outputs a variable elimination order inducing a tree decomposition of that width. Parameters ---------- m : int Number of rows in the Chimera lattice. n : int (optional, default m) Number of columns in the Chimera lattice. t : int (optional, default 4) Size of the shore within each Chimera tile. Returns ------- order : list An elimination order that induces the treewidth of chimera_graph(m,n,t). Examples -------- >>> order = dnx.chimera_elimination_order(1, 1, 4) # a single Chimera tile
2.344289
2.465364
0.95089
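A quick check of the treewidth claim in the docstring, assuming dwave_networkx: pairing the order with elimination_order_width (a later record) should reproduce max(m, n)*t.

import dwave_networkx as dnx

G = dnx.chimera_graph(2, 2, 4)
order = dnx.chimera_elimination_order(2, 2, 4)

# the induced width should equal max(2, 2) * 4 == 8
print(dnx.elimination_order_width(G, order))  # 8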
i, j, u, k = q
m, n, t = self.args
return ((n*i + j)*2 + u)*t + k
def int(self, q)
Converts the chimera_index `q` into a linear_index Parameters ---------- q : tuple The chimera_index node label Returns ------- r : int The linear_index node label corresponding to q
12.661422
11.345951
1.115942
m, n, t = self.args

r, k = divmod(r, t)
r, u = divmod(r, 2)
i, j = divmod(r, n)

return i, j, u, k
def tuple(self, r)
Converts the linear_index `r` into a chimera_index Parameters ---------- r : int The linear_index node label Returns ------- q : tuple The chimera_index node label corresponding to r
5.416209
5.429136
0.997619
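A round-trip sketch for the two converters above, assuming they are the int/tuple methods of dwave_networkx's chimera_coordinates helper, whose self.args holds (m, n, t).

import dwave_networkx as dnx

coords = dnx.chimera_coordinates(2, 2, 4)  # m=2, n=2, t=4

# (i, j, u, k) -> r via r = ((n*i + j)*2 + u)*t + k
r = coords.int((1, 0, 1, 3))
print(r)  # ((2*1 + 0)*2 + 1)*4 + 3 == 23

# and back again via repeated divmod
print(coords.tuple(23))  # (1, 0, 1, 3)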
m, n, t = self.args
return (((n*i + j)*2 + u)*t + k for (i, j, u, k) in qlist)
def ints(self, qlist)
Converts a sequence of chimera_index node labels into linear_index node labels, preserving order Parameters ---------- qlist : sequence of tuples The chimera_index node labels Returns ------- rlist : iterable of ints The linear_index node labels corresponding to qlist
11.682414
11.337182
1.030451
m, n, t = self.args

for r in rlist:
    r, k = divmod(r, t)
    r, u = divmod(r, 2)
    i, j = divmod(r, n)
    yield i, j, u, k
def tuples(self, rlist)
Converts a sequence of linear_index node labels into chimera_index node labels, preserving order Parameters ---------- rlist : sequence of ints The linear_index node labels Returns ------- qlist : iterable of tuples The chimera_index node labels corresponding to rlist
5.192184
5.489171
0.945896
h = {v: 0.0 for v in S}
J = {}
for u, v, data in S.edges(data=True):
    try:
        J[(u, v)] = -1. * data['sign']
    except KeyError:
        raise ValueError(("graph should be a signed social graph, "
                          "each edge should have a 'sign' attr"))

return h, J
def structural_imbalance_ising(S)
Construct the Ising problem to calculate the structural imbalance of a signed social network. A signed social network graph is a graph whose signed edges represent friendly/hostile interactions between nodes. A signed social network is considered balanced if it can be cleanly divided into two factions, where all relations within a faction are friendly, and all relations between factions are hostile. The measure of imbalance or frustration is the minimum number of edges that violate this rule. Parameters ---------- S : NetworkX graph A social graph on which each edge has a 'sign' attribute with a numeric value. Returns ------- h : dict The linear biases of the Ising problem. Each variable in the Ising problem represents a node in the signed social network. The solution that minimizes the Ising problem will assign each variable a value, either -1 or 1. This bi-coloring defines the factions. J : dict The quadratic biases of the Ising problem. Raises ------ ValueError If any edge does not have a 'sign' attribute. Examples -------- >>> import dimod >>> from dwave_networkx.algorithms.social import structural_imbalance_ising ... >>> S = nx.Graph() >>> S.add_edge('Alice', 'Bob', sign=1) # Alice and Bob are friendly >>> S.add_edge('Alice', 'Eve', sign=-1) # Alice and Eve are hostile >>> S.add_edge('Bob', 'Eve', sign=-1) # Bob and Eve are hostile ... >>> h, J = structural_imbalance_ising(S) >>> h # doctest: +SKIP {'Alice': 0.0, 'Bob': 0.0, 'Eve': 0.0} >>> J # doctest: +SKIP {('Alice', 'Bob'): -1.0, ('Alice', 'Eve'): 1.0, ('Bob', 'Eve'): 1.0}
5.194139
4.364497
1.190089
return all(u in G[v] for u, v in itertools.combinations(G[n], 2))
def is_simplicial(G, n)
Determines whether a node n in G is simplicial. Parameters ---------- G : NetworkX graph The graph on which to check whether node n is simplicial. n : node A node in graph G. Returns ------- is_simplicial : bool True if its neighbors form a clique. Examples -------- This example checks whether node 0 is simplicial for two graphs: G, a single Chimera unit cell, which is bipartite, and K_5, the :math:`K_5` complete graph. >>> import dwave_networkx as dnx >>> import networkx as nx >>> G = dnx.chimera_graph(1, 1, 4) >>> K_5 = nx.complete_graph(5) >>> dnx.is_simplicial(G, 0) False >>> dnx.is_simplicial(K_5, 0) True
4.477506
9.210176
0.486148
for w in G[n]:
    if all(u in G[v] for u, v in itertools.combinations(G[n], 2) if u != w and v != w):
        return True
return False
def is_almost_simplicial(G, n)
Determines whether a node n in G is almost simplicial. Parameters ---------- G : NetworkX graph The graph on which to check whether node n is almost simplicial. n : node A node in graph G. Returns ------- is_almost_simplicial : bool True if all but one of its neighbors induce a clique Examples -------- This example checks whether node 0 is simplicial or almost simplicial for a :math:`K_5` complete graph with one edge removed. >>> import dwave_networkx as dnx >>> import networkx as nx >>> K_5 = nx.complete_graph(5) >>> K_5.remove_edge(1,3) >>> dnx.is_simplicial(K_5, 0) False >>> dnx.is_almost_simplicial(K_5, 0) True
3.15807
5.053149
0.624971
# we need only deal with the adjacency structure of G. We will also
# be manipulating it directly so let's go ahead and make a new one
adj = {v: set(G[v]) for v in G}

lb = 0  # lower bound on treewidth
while len(adj) > 1:

    # get the node with the smallest degree
    v = min(adj, key=lambda v: len(adj[v]))

    # find the vertex u such that the degree of u is minimal in the neighborhood of v
    neighbors = adj[v]

    if not neighbors:
        # if v is a singleton, then we can just delete it
        del adj[v]
        continue

    def neighborhood_degree(u):
        Gu = adj[u]
        return sum(w in Gu for w in neighbors)

    u = min(neighbors, key=neighborhood_degree)

    # update the lower bound
    new_lb = len(adj[v])
    if new_lb > lb:
        lb = new_lb

    # contract the edge between u, v
    adj[v] = adj[v].union(n for n in adj[u] if n != v)
    for n in adj[v]:
        adj[n].add(v)
    for n in adj[u]:
        adj[n].discard(u)
    del adj[u]

return lb
def minor_min_width(G)
Computes a lower bound for the treewidth of graph G. Parameters ---------- G : NetworkX graph The graph on which to compute a lower bound on the treewidth. Returns ------- lb : int A lower bound on the treewidth. Examples -------- This example computes a lower bound for the treewidth of the :math:`K_7` complete graph. >>> import dwave_networkx as dnx >>> import networkx as nx >>> K_7 = nx.complete_graph(7) >>> dnx.minor_min_width(K_7) 6 References ---------- Based on the algorithm presented in [GD]_
3.8398
3.854792
0.996111
# we need only deal with the adjacency structure of G. We will also
# be manipulating it directly so let's go ahead and make a new one
adj = {v: set(G[v]) for v in G}

num_nodes = len(adj)

# preallocate the return values
order = [0] * num_nodes
upper_bound = 0

for i in range(num_nodes):
    # get the node that adds the fewest number of edges when eliminated from the graph
    v = min(adj, key=lambda x: _min_fill_needed_edges(adj, x))

    # if the number of neighbours of v is higher than upper_bound, update
    dv = len(adj[v])
    if dv > upper_bound:
        upper_bound = dv

    # make v simplicial by making its neighborhood a clique then remove the
    # node
    _elim_adj(adj, v)
    order[i] = v

return upper_bound, order
def min_fill_heuristic(G)
Computes an upper bound on the treewidth of graph G based on the min-fill heuristic for the elimination ordering. Parameters ---------- G : NetworkX graph The graph on which to compute an upper bound for the treewidth. Returns ------- treewidth_upper_bound : int An upper bound on the treewidth of the graph G. order : list An elimination order that induces the treewidth. Examples -------- This example computes an upper bound for the treewidth of the :math:`K_4` complete graph. >>> import dwave_networkx as dnx >>> import networkx as nx >>> K_4 = nx.complete_graph(4) >>> tw, order = dnx.min_fill_heuristic(K_4) References ---------- Based on the algorithm presented in [GD]_
6.030385
6.273484
0.96125
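A sanity-check sketch, assuming dwave_networkx: the min-fill upper bound can be bracketed against the minor_min_width lower bound from an earlier record. The Petersen graph has treewidth 4, so both bounds must straddle it; the exact values printed may vary by graph.

import networkx as nx
import dwave_networkx as dnx

G = nx.petersen_graph()  # treewidth 4

lb = dnx.minor_min_width(G)
ub, order = dnx.min_fill_heuristic(G)

assert lb <= 4 <= ub  # valid bounds always bracket the true treewidth
print(lb, ub)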
# we need only deal with the adjacency structure of G. We will also
# be manipulating it directly so let's go ahead and make a new one
adj = {v: set(G[v]) for v in G}

num_nodes = len(adj)

# preallocate the return values
order = [0] * num_nodes
upper_bound = 0

for i in range(num_nodes):
    # get the node with the smallest degree. We add random() which picks a value
    # in the range [0., 1.). This is ok because the lens are all integers. By
    # adding a small random value, we randomize which node is chosen without affecting
    # correctness.
    v = min(adj, key=lambda u: len(adj[u]) + random())

    # if the number of neighbours of v is higher than upper_bound, update
    dv = len(adj[v])
    if dv > upper_bound:
        upper_bound = dv

    # make v simplicial by making its neighborhood a clique then remove the
    # node
    _elim_adj(adj, v)
    order[i] = v

return upper_bound, order
def min_width_heuristic(G)
Computes an upper bound on the treewidth of graph G based on the min-width heuristic for the elimination ordering. Parameters ---------- G : NetworkX graph The graph on which to compute an upper bound for the treewidth. Returns ------- treewidth_upper_bound : int An upper bound on the treewidth of the graph G. order : list An elimination order that induces the treewidth. Examples -------- This example computes an upper bound for the treewidth of the :math:`K_4` complete graph. >>> import dwave_networkx as dnx >>> import networkx as nx >>> K_4 = nx.complete_graph(4) >>> tw, order = dnx.min_width_heuristic(K_4) References ---------- Based on the algorithm presented in [GD]_
6.757953
6.956438
0.971467
# we need only deal with the adjacency structure of G. We will also
# be manipulating it directly so let's go ahead and make a new one
adj = {v: set(G[v]) for v in G}

num_nodes = len(adj)

# preallocate the return values
order = [0] * num_nodes
upper_bound = 0

# we will need to track the nodes and how many labelled neighbors
# each node has
labelled_neighbors = {v: 0 for v in adj}

# working backwards
for i in range(num_nodes):
    # pick the node with the most labelled neighbors
    v = max(labelled_neighbors, key=lambda u: labelled_neighbors[u] + random())
    del labelled_neighbors[v]

    # increment all of its neighbors
    for u in adj[v]:
        if u in labelled_neighbors:
            labelled_neighbors[u] += 1

    order[-(i + 1)] = v

for v in order:
    # if the number of neighbours of v is higher than upper_bound, update
    dv = len(adj[v])
    if dv > upper_bound:
        upper_bound = dv

    # make v simplicial by making its neighborhood a clique then remove the node
    # add v to order
    _elim_adj(adj, v)

return upper_bound, order
def max_cardinality_heuristic(G)
Computes an upper bound on the treewidth of graph G based on the max-cardinality heuristic for the elimination ordering. Parameters ---------- G : NetworkX graph The graph on which to compute an upper bound for the treewidth. Returns ------- treewidth_upper_bound : int An upper bound on the treewidth of the graph G. order : list An elimination order that induces the treewidth. Examples -------- This example computes an upper bound for the treewidth of the :math:`K_4` complete graph. >>> import dwave_networkx as dnx >>> import networkx as nx >>> K_4 = nx.complete_graph(4) >>> dnx.max_cardinality_heuristic(K_4) (3, [3, 1, 0, 2]) References ---------- Based on the algorithm presented in [GD]_
4.83842
4.810787
1.005744
neighbors = adj[n]
new_edges = set()
for u, v in itertools.combinations(neighbors, 2):
    if v not in adj[u]:
        adj[u].add(v)
        adj[v].add(u)
        new_edges.add((u, v))
        new_edges.add((v, u))

for v in neighbors:
    adj[v].discard(n)
del adj[n]
return new_edges
def _elim_adj(adj, n)
Eliminates a variable, acting on the adjacency dict of G, returning the set of edges that were added. Parameters ---------- adj : dict A dict of the form {v: neighbors, ...} where v are vertices in a graph and neighbors is a set. n : node The variable to eliminate. Returns ------- new_edges : set The set of edges that were added by eliminating n.
2.112292
2.318691
0.910985
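An illustrative sketch of the helper above on a plain adjacency dict, assuming _elim_adj is defined as shown: eliminating the middle node of a three-node path adds a fill edge between its two neighbors.

# the path 0 - 1 - 2 as an adjacency dict
adj = {0: {1}, 1: {0, 2}, 2: {1}}

new_edges = _elim_adj(adj, 1)
print(adj)        # {0: {2}, 2: {0}} -- the neighborhood of 1 is now a clique
print(new_edges)  # {(0, 2), (2, 0)} -- fill edges are recorded in both orientations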
# we need only deal with the adjacency structure of G. We will also
# be manipulating it directly so let's go ahead and make a new one
adj = {v: set(G[v]) for v in G}

treewidth = 0

for v in order:

    # get the degree of the eliminated variable
    try:
        dv = len(adj[v])
    except KeyError:
        raise ValueError('{} is in order but not in G'.format(v))

    # the treewidth is the max of the current treewidth and the degree
    if dv > treewidth:
        treewidth = dv

    # eliminate v by making it simplicial (acts on adj in place)
    _elim_adj(adj, v)

# if adj is not empty, then order did not include all of the nodes in G.
if adj:
    raise ValueError('not all nodes in G were in order')

return treewidth
def elimination_order_width(G, order)
Calculates the width of the tree decomposition induced by a variable elimination order. Parameters ---------- G : NetworkX graph The graph on which to compute the width of the tree decomposition. order : list The elimination order. Must be a list of all of the variables in G. Returns ------- treewidth : int The width of the tree decomposition induced by order. Examples -------- This example computes the width of the tree decomposition for the :math:`K_4` complete graph induced by an elimination order found through the min-width heuristic. >>> import dwave_networkx as dnx >>> import networkx as nx >>> K_4 = nx.complete_graph(4) >>> dnx.min_width_heuristic(K_4) (3, [1, 2, 0, 3]) >>> dnx.elimination_order_width(K_4, [1, 2, 0, 3]) 3
6.067363
6.417487
0.945442
# empty graphs have treewidth 0 and the nodes can be eliminated in
# any order
if not any(G[v] for v in G):
    return 0, list(G)

# variable names are chosen to match the paper

# our order will be stored in vector x, named to be consistent with
# the paper
x = []  # the partial order

f = minor_min_width(G)  # our current lower bound guess, f(s) in the paper
g = 0  # g(s) in the paper

# we need the best current update we can find.
ub, order = min_fill_heuristic(G)

# if the user has provided an upperbound or an elimination order, check those against
# our current best guess
if elimination_order is not None:
    upperbound = elimination_order_width(G, elimination_order)
    if upperbound <= ub:
        ub, order = upperbound, elimination_order

if treewidth_upperbound is not None and treewidth_upperbound < ub:
    # in this case the order might never be found
    ub, order = treewidth_upperbound, []

# best found encodes the ub and the order
best_found = ub, order

# if our upper bound is the same as f, then we are done! Otherwise begin the
# algorithm
assert f <= ub, "Logic error"
if f < ub:
    # we need only deal with the adjacency structure of G. We will also
    # be manipulating it directly so let's go ahead and make a new one
    adj = {v: set(G[v]) for v in G}

    best_found = _branch_and_bound(adj, x, g, f, best_found)

return best_found
def treewidth_branch_and_bound(G, elimination_order=None, treewidth_upperbound=None)
Computes the treewidth of graph G and a corresponding perfect elimination ordering. Parameters ---------- G : NetworkX graph The graph on which to compute the treewidth and perfect elimination ordering. elimination_order: list (optional, Default None) An elimination order used as an initial best-known order. If a good order is provided, it may speed up computation. If not provided, the initial order is generated using the min-fill heuristic. treewidth_upperbound : int (optional, Default None) An upper bound on the treewidth. Note that using this parameter can result in no returned order. Returns ------- treewidth : int The treewidth of graph G. order : list An elimination order that induces the treewidth. Examples -------- This example computes the treewidth for the :math:`K_7` complete graph using an optionally provided elimination order (a sequential ordering of the nodes, arbitrarily chosen). >>> import dwave_networkx as dnx >>> import networkx as nx >>> K_7 = nx.complete_graph(7) >>> dnx.treewidth_branch_and_bound(K_7, [0, 1, 2, 3, 4, 5, 6]) (6, [0, 1, 2, 3, 4, 5, 6]) References ---------- .. [GD] Gogate & Dechter, "A Complete Anytime Algorithm for Treewidth", https://arxiv.org/abs/1207.4109
6.55627
6.799875
0.964175
as_list = set()
as_nodes = {v for v in adj if len(adj[v]) <= f and is_almost_simplicial(adj, v)}
while as_nodes:
    as_list.update(as_nodes)  # fix: union() returns a new set, so its result was silently discarded
    for n in as_nodes:
        # update g and f
        dv = len(adj[n])
        if dv > g:
            g = dv
        if g > f:
            f = g

        # eliminate v
        x.append(n)
        _elim_adj(adj, n)

    # see if we have any more simplicial nodes
    as_nodes = {v for v in adj if len(adj[v]) <= f and is_almost_simplicial(adj, v)}

return g, f, as_list
def _graph_reduction(adj, x, g, f)
we can go ahead and remove any simplicial or almost-simplicial vertices from adj.
3.731144
3.25979
1.144596
new_edges = set()
for u, v in itertools.combinations(adj, 2):
    if u in adj[v]:
        # already an edge
        continue

    if len(adj[u].intersection(adj[v])) > ub:
        new_edges.add((u, v))

while new_edges:
    for u, v in new_edges:
        adj[u].add(v)
        adj[v].add(u)

    new_edges = set()
    for u, v in itertools.combinations(adj, 2):
        if u in adj[v]:
            continue

        if len(adj[u].intersection(adj[v])) > ub:
            new_edges.add((u, v))
def _theorem5p4(adj, ub)
By Theorem 5.4, if any two vertices have ub + 1 common neighbors then we can add an edge between them.
1.847451
1.726842
1.069843
pruning_set = set()

def _prune(x):
    if len(x) <= 2:
        return False
    # this is faster than tuple(x[-3:])
    key = (tuple(x[:-2]), x[-2], x[-1])
    return key in pruning_set

def _explored(x):
    if len(x) >= 3:
        prunable = (tuple(x[:-2]), x[-1], x[-2])
        pruning_set.add(prunable)

return _prune, _explored
def _theorem6p1()
See Theorem 6.1 in paper.
3.817793
3.535603
1.079814
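A behavioral sketch of the closure pair above: once a partial elimination order is marked explored, the same order with its last two entries swapped is pruned, since eliminating a then b leaves the same graph as eliminating b then a.

_prune, _explored = _theorem6p1()

_explored([0, 1, 2])      # records ((0,), 2, 1), i.e. the tail-swapped key
print(_prune([0, 1, 2]))  # False
print(_prune([0, 2, 1]))  # True -- the permuted order need not be revisited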
pruning_set2 = set()

def _prune2(x, a, nbrs_a):
    frozen_nbrs_a = frozenset(nbrs_a)
    for i in range(len(x)):
        key = (tuple(x[0:i]), a, frozen_nbrs_a)
        if key in pruning_set2:
            return True
    return False

def _explored2(x, a, nbrs_a):
    prunable = (tuple(x), a, frozenset(nbrs_a))  # (s,a,N(a))
    pruning_set2.add(prunable)
    return prunable

def _finished2(prunable):
    pruning_set2.remove(prunable)

return _prune2, _explored2, _finished2
def _theorem6p2()
See Theorem 6.2 in paper. Prunes (x,...,a) when (x,a) is explored and a has the same neighbour set in both graphs.
3.184651
2.707938
1.176043
pruning_set3 = set()

def _prune3(x, as_list, b):
    for a in as_list:
        key = (tuple(x), a, b)  # (s,a,b) with (s,a) explored
        if key in pruning_set3:
            return True
    return False

def _explored3(x, a, as_list):
    for b in as_list:
        prunable = (tuple(x), a, b)  # (s,a,b) with (s,a) explored
        pruning_set3.add(prunable)

return _prune3, _explored3
def _theorem6p3()
See Theorem 6.3 in paper. Prunes (s,b) when (s,a) is explored, b (almost) simplicial in (s,a), and a (almost) simplicial in (s,b)
3.541794
2.928783
1.209306
pruning_set4 = list()

def _prune4(edges_b):
    for edges_a in pruning_set4:
        if edges_a.issubset(edges_b):
            return True
    return False

def _explored4(edges_a):
    pruning_set4.append(edges_a)  # (s,E_a) with (s,a) explored

return _prune4, _explored4
def _theorem6p4()
See Theorem 6.4 in paper. Let E(x) denote the edges added when eliminating x (edges_x below). Prunes (s,b) when (s,a) is explored and E(a) is a subset of E(b). For this theorem we only record E(a) rather than (s,E(a)) because we only need to check for pruning in the same s context (i.e. the same level of recursion).
5.87558
3.637833
1.615132
try:
    import matplotlib.pyplot as plt
    import matplotlib as mpl
except ImportError:
    raise ImportError("Matplotlib and numpy required for draw_chimera()")

nodelist = G.nodes()
edgelist = G.edges()

faults_nodelist = perfect_graph.nodes() - nodelist
faults_edgelist = perfect_graph.edges() - edgelist

# To avoid matplotlib.pyplot.scatter warnings for single tuples, create
# lists of colors from given colors.
faults_node_color = [fault_color for v in faults_nodelist]
faults_edge_color = [fault_color for v in faults_edgelist]

# Draw faults with different style and shape
draw(perfect_graph, layout,
     nodelist=faults_nodelist, edgelist=faults_edgelist,
     node_color=faults_node_color, edge_color=faults_edge_color,
     style=fault_style, node_shape=fault_shape,
     **kwargs)

# Draw rest of graph
if unused_color is not None:
    if nodelist is None:
        nodelist = G.nodes() - faults_nodelist
    if edgelist is None:
        edgelist = G.edges() - faults_edgelist

    unused_node_color = [unused_color for v in nodelist]
    unused_edge_color = [unused_color for v in edgelist]

    draw(perfect_graph, layout,
         nodelist=nodelist, edgelist=edgelist,
         node_color=unused_node_color, edge_color=unused_edge_color,
         **kwargs)
def draw_yield(G, layout, perfect_graph, unused_color=(0.9,0.9,0.9,1.0), fault_color=(1.0,0.0,0.0,1.0), fault_shape='x', fault_style='dashed', **kwargs)
Draws the given graph G with highlighted faults, according to layout. Parameters ---------- G : NetworkX graph The graph to be parsed for faults layout : dict A dict of coordinates associated with each node in perfect_graph. Should be of the form {node: coordinate, ...}. Coordinates will be treated as vectors, and should all have the same length. perfect_graph : NetworkX graph The graph to be drawn with highlighted faults unused_color : tuple or color string (optional, default (0.9,0.9,0.9,1.0)) The color to use for nodes and edges of G which are not faults. If unused_color is None, these nodes and edges will not be shown at all. fault_color : tuple or color string (optional, default (1.0,0.0,0.0,1.0)) A color to represent nodes absent from the graph G. Colors should be length-4 tuples of floats between 0 and 1 inclusive. fault_shape : string, optional (default='x') The shape of the fault nodes. Specification is as matplotlib.scatter marker, one of 'so^>v<dph8'. fault_style : string, optional (default='dashed') Edge fault line style (solid|dashed|dotted,dashdot) kwargs : optional keywords See networkx.draw_networkx() for a description of optional keywords, with the exception of the `pos` parameter which is not used by this function. If `linear_biases` or `quadratic_biases` are provided, any provided `node_color` or `edge_color` arguments are ignored.
2.380604
2.294664
1.037452
# if we already know the chromatic number, then we don't need to
# disincentivize any colors.
if chi_lb == chi_ub:
    return {}

# we might need to use some of the colors, so we want to disincentivize
# them in increasing amounts, linearly.
scaling = magnitude / (chi_ub - chi_lb)

# build the QUBO
Q = {}
for v in x_vars:
    for f, color in enumerate(range(chi_lb, chi_ub)):
        idx = x_vars[v][color]
        Q[(idx, idx)] = (f + 1) * scaling

return Q
def _minimum_coloring_qubo(x_vars, chi_lb, chi_ub, magnitude=1.)
We want to disincentivize unneeded colors. Generates the QUBO that does that.
4.978868
4.131921
1.204977
Q = {}
for u, v in G.edges:
    if u not in x_vars or v not in x_vars:
        continue
    for color in x_vars[u]:
        if color in x_vars[v]:
            Q[(x_vars[u][color], x_vars[v][color])] = 1.

return Q
def _vertex_different_colors_qubo(G, x_vars)
For each vertex, it should not have the same color as any of its neighbors. Generates the QUBO to enforce this constraint. Notes ----- Does not enforce each node having a single color. Ground energy is 0, infeasible gap is 1.
2.299231
2.582537
0.890299
Q = {}
for v in x_vars:
    for color in x_vars[v]:
        idx = x_vars[v][color]
        Q[(idx, idx)] = -1

    for color0, color1 in itertools.combinations(x_vars[v], 2):
        idx0 = x_vars[v][color0]
        idx1 = x_vars[v][color1]
        Q[(idx0, idx1)] = 2

return Q
def _vertex_one_color_qubo(x_vars)
For each vertex, it should have exactly one color. Generates the QUBO to enforce this constraint. Notes ----- Does not enforce neighboring vertices having different colors. Ground energy is -1 * |G|, infeasible gap is 1.
2.157793
2.39104
0.902449
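A brute-force sanity check of the stated ground energy and infeasible gap, assuming _vertex_one_color_qubo as defined above, for a single vertex with three candidate colors.

import itertools

x_vars = {'v': {0: 0, 1: 1, 2: 2}}  # one vertex, three colors, QUBO variables 0..2
Q = _vertex_one_color_qubo(x_vars)

def energy(assignment):
    return sum(bias * assignment[i] * assignment[j] for (i, j), bias in Q.items())

energies = sorted(energy(a) for a in itertools.product((0, 1), repeat=3))
print(energies)  # [-1, -1, -1, 0, 0, 0, 0, 3] -- ground energy -1*|G|, gap 1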
# find a random maximal clique and give each node in it a unique color
v = next(iter(G))
clique = [v]
for u in G[v]:
    if all(w in G[u] for w in clique):
        clique.append(u)

partial_coloring = {v: c for c, v in enumerate(clique)}
chi_lb = len(partial_coloring)  # lower bound for the chromatic number

# now for each uncolored node determine the possible colors
possible_colors = {v: set(range(chi_ub)) for v in G if v not in partial_coloring}

for v, color in iteritems(partial_coloring):
    for u in G[v]:
        if u in possible_colors:
            possible_colors[u].discard(color)

# TODO: there is more here that can be done. For instance some nodes now
# might only have one possible color. Or there might only be one node
# remaining to color
return partial_coloring, possible_colors, chi_lb
def _partial_precolor(G, chi_ub)
In order to reduce the number of variables in the QUBO, we want to color as many nodes as possible without affecting the min vertex coloring. Without loss of generality, we can choose a single maximal clique and color each node in it uniquely. Returns ------- partial_coloring : dict A dict describing a partial coloring of the nodes of G. Of the form {node: color, ...}. possible_colors : dict A dict giving the possible colors for each node in G not already colored. Of the form {node: set([color, ...]), ...}. chi_lb : int A lower bound on the chromatic number chi. Notes ----- partial_coloring.keys() and possible_colors.keys() should be disjoint.
4.27354
3.482102
1.227288
trailing, leading = next(iter(G.edges))
start_node = trailing

# travel around the graph, checking that each node has degree exactly two
# also track how many nodes were visited
n_visited = 1
while leading != start_node:
    neighbors = G[leading]

    if len(neighbors) != 2:
        return False

    node1, node2 = neighbors

    if node1 == trailing:
        trailing, leading = leading, node2
    else:
        trailing, leading = leading, node1

    n_visited += 1

# if we haven't visited all of the nodes, then it is not a connected cycle
return n_visited == len(G)
def is_cycle(G)
Determines whether the given graph is a cycle or circular graph. A cycle graph or circular graph is a graph that consists of a single cycle. https://en.wikipedia.org/wiki/Cycle_graph Parameters ---------- G : NetworkX graph Returns ------- is_cycle : bool True if the graph consists of a single cycle.
4.310565
5.308339
0.812036
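A usage sketch assuming is_cycle as defined above: a 5-cycle passes, while a path and a disconnected pair of triangles do not.

import networkx as nx

print(is_cycle(nx.cycle_graph(5)))  # True
print(is_cycle(nx.path_graph(5)))   # False -- the endpoints have degree 1
print(is_cycle(nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))))  # False -- two cycles, not one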
return all(coloring[u] != coloring[v] for u, v in G.edges)
def is_vertex_coloring(G, coloring)
Determines whether the given coloring is a vertex coloring of graph G. Parameters ---------- G : NetworkX graph The graph on which the vertex coloring is applied. coloring : dict A coloring of the nodes of G. Should be a dict of the form {node: color, ...}. Returns ------- is_vertex_coloring : bool True if the given coloring defines a vertex coloring; that is, no two adjacent vertices share a color. Example ------- This example checks two colorings for a graph, G, of a single Chimera unit cell. The first uses one color (0) for the four horizontal qubits and another (1) for the four vertical qubits, in which case there are no adjacencies; the second coloring swaps the color of one node. >>> G = dnx.chimera_graph(1,1,4) >>> colors = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1} >>> dnx.is_vertex_coloring(G, colors) True >>> colors[4]=0 >>> dnx.is_vertex_coloring(G, colors) False
2.894404
6.739141
0.429492
# the maximum degree
delta = max(G.degree(node) for node in G)

# use the maximum degree to determine the infeasible gaps
A = 1.
if delta == 2:
    B = .75
else:
    B = .75 * A / (delta - 2.)  # we want A > (delta - 2) * B

# each edge in G gets a variable, so let's create those
edge_mapping = _edge_mapping(G)

# build the QUBO
Q = _maximal_matching_qubo(G, edge_mapping, magnitude=B)
Qm = _matching_qubo(G, edge_mapping, magnitude=A)
for edge, bias in Qm.items():
    if edge not in Q:
        Q[edge] = bias
    else:
        Q[edge] += bias

# use the sampler to find low energy states
response = sampler.sample_qubo(Q, **sampler_args)

# we want the lowest energy sample
sample = next(iter(response))

# the matching are the edges that are 1 in the sample
return set(edge for edge in G.edges if sample[edge_mapping[edge]] > 0)
def maximal_matching(G, sampler=None, **sampler_args)
Finds an approximate maximal matching. Defines a QUBO with ground states corresponding to a maximal matching and uses the sampler to sample from it. A matching is a subset of edges in which no node occurs more than once. A maximal matching is one in which no edges from G can be added without violating the matching rule. Parameters ---------- G : NetworkX graph The graph on which to find a maximal matching. sampler A binary quadratic model sampler. A sampler is a process that samples from low energy states in models defined by an Ising equation or a Quadratic Unconstrained Binary Optimization Problem (QUBO). A sampler is expected to have a 'sample_qubo' and 'sample_ising' method. A sampler is expected to return an iterable of samples, in order of increasing energy. If no sampler is provided, one must be provided using the `set_default_sampler` function. sampler_args Additional keyword parameters are passed to the sampler. Returns ------- matching : set A maximal matching of the graph. Notes ----- Samplers by their nature may not return the optimal solution. This function does not attempt to confirm the quality of the returned sample. References ---------- `Matching on Wikipedia <https://en.wikipedia.org/wiki/Matching_(graph_theory)>`_ `QUBO on Wikipedia <https://en.wikipedia.org/wiki/Quadratic_unconstrained_binary_optimization>`_ Based on the formulation presented in [AL]_
4.594186
4.52348
1.015631
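A usage sketch for maximal_matching, assuming dimod's brute-force ExactSolver; the result can be validated with is_maximal_matching from the next record.

import dimod
import networkx as nx
import dwave_networkx as dnx

G = nx.path_graph(5)  # edges (0,1), (1,2), (2,3), (3,4)

matching = dnx.maximal_matching(G, dimod.ExactSolver())
print(matching)  # e.g. {(0, 1), (2, 3)}
print(dnx.is_maximal_matching(G, matching))  # True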
touched_nodes = set().union(*matching)

# first check if a matching
if len(touched_nodes) != len(matching) * 2:
    return False

# now for each edge, check that at least one of its variables is
# already in the matching
for (u, v) in G.edges:
    if u not in touched_nodes and v not in touched_nodes:
        return False

return True
def is_maximal_matching(G, matching)
Determines whether the given set of edges is a maximal matching. A matching is a subset of edges in which no node occurs more than once. The cardinality of a matching is the number of matched edges. A maximal matching is one where one cannot add any more edges without violating the matching rule. Parameters ---------- G : NetworkX graph The graph on which to check the maximal matching. matching : iterable An iterable of edges. Returns ------- is_maximal_matching : bool True if the given edges are a maximal matching. Example ------- This example checks two sets of edges, both derived from a single Chimera unit cell, for a matching. The first set (a matching) is a subset of the second, which was found using the `min_maximal_matching()` function. >>> import dwave_networkx as dnx >>> G = dnx.chimera_graph(1, 1, 4) >>> dnx.is_matching({(0, 4), (2, 7)}) True >>> dnx.is_maximal_matching(G,{(0, 4), (2, 7)}) False >>> dnx.is_maximal_matching(G,{(0, 4), (1, 5), (2, 7), (3, 6)}) True
4.198218
5.642886
0.743984
edge_mapping = {edge: idx for idx, edge in enumerate(G.edges)}
edge_mapping.update({(e1, e0): idx for (e0, e1), idx in edge_mapping.items()})
return edge_mapping
def _edge_mapping(G)
Assigns a variable for each edge in G. (u, v) and (v, u) map to the same variable.
2.439237
2.736082
0.891507
Q = {}

# for each node n in G, define a variable y_n to be 1 when n has a colored edge
# and 0 otherwise.

# for each edge (u, v) in the graph we want to enforce y_u OR y_v. This is because
# if both y_u == 0 and y_v == 0, then we could add (u, v) to the matching.
for (u, v) in G.edges:
    # 1 - y_v - y_u + y_v*y_u

    # for each edge connected to u
    for edge in G.edges(u):
        x = edge_mapping[edge]
        if (x, x) not in Q:
            Q[(x, x)] = -1 * magnitude
        else:
            Q[(x, x)] -= magnitude

    # for each edge connected to v
    for edge in G.edges(v):
        x = edge_mapping[edge]
        if (x, x) not in Q:
            Q[(x, x)] = -1 * magnitude
        else:
            Q[(x, x)] -= magnitude

    for e0 in G.edges(v):
        x0 = edge_mapping[e0]
        for e1 in G.edges(u):
            x1 = edge_mapping[e1]

            if x0 < x1:
                if (x0, x1) not in Q:
                    Q[(x0, x1)] = magnitude
                else:
                    Q[(x0, x1)] += magnitude
            else:
                if (x1, x0) not in Q:
                    Q[(x1, x0)] = magnitude
                else:
                    Q[(x1, x0)] += magnitude

return Q
def _maximal_matching_qubo(G, edge_mapping, magnitude=1.)
Generates a QUBO that when combined with one as generated by _matching_qubo, induces a maximal matching on the given graph G. The variables in the QUBO are the edges, as given by edge_mapping. ground_energy = -1 * magnitude * |edges| infeasible_gap >= magnitude
2.371689
2.354353
1.007363
Q = {}

# We wish to enforce the behavior that no node has two colored edges
for node in G:
    # for each pair of edges that contain node
    for edge0, edge1 in itertools.combinations(G.edges(node), 2):
        v0 = edge_mapping[edge0]
        v1 = edge_mapping[edge1]

        # penalize both being True
        Q[(v0, v1)] = magnitude

return Q
def _matching_qubo(G, edge_mapping, magnitude=1.)
Generates a QUBO that induces a matching on the given graph G. The variables in the QUBO are the edges, as given by edge_mapping. ground_energy = 0 infeasible_gap = magnitude
5.828958
5.575002
1.045553
adj = G.adj

if t is None:
    if hasattr(G, 'edges'):
        num_edges = len(G.edges)
    else:
        num_edges = len(G.quadratic)
    t = _chimera_shore_size(adj, num_edges)

chimera_indices = {}

row = col = 0

root = min(adj, key=lambda v: len(adj[v]))
horiz, verti = rooted_tile(adj, root, t)

while len(chimera_indices) < len(adj):
    new_indices = {}

    if row == 0:
        # if we're in the 0th row, we can assign the horizontal randomly
        for si, v in enumerate(horiz):
            new_indices[v] = (row, col, 0, si)
    else:
        # we need to match the row above
        for v in horiz:
            north = [u for u in adj[v] if u in chimera_indices]
            assert len(north) == 1
            i, j, u, si = chimera_indices[north[0]]
            assert i == row - 1 and j == col and u == 0
            new_indices[v] = (row, col, 0, si)

    if col == 0:
        # if we're in the 0th col, we can assign the vertical randomly
        for si, v in enumerate(verti):
            new_indices[v] = (row, col, 1, si)
    else:
        # we need to match the column to the east
        for v in verti:
            east = [u for u in adj[v] if u in chimera_indices]
            assert len(east) == 1
            i, j, u, si = chimera_indices[east[0]]
            assert i == row and j == col - 1 and u == 1
            new_indices[v] = (row, col, 1, si)

    chimera_indices.update(new_indices)

    # get the next root
    root_neighbours = [v for v in adj[root] if v not in chimera_indices]
    if len(root_neighbours) == 1:
        # we can increment the row
        root = root_neighbours[0]
        horiz, verti = rooted_tile(adj, root, t)
        row += 1
    else:
        # need to go back to row 0, and increment the column
        assert not root_neighbours  # should be empty

        # we want (0, col, 1, 0), we could cache this, but for now let's just go look for it
        # the slow way
        vert_root = [v for v in chimera_indices if chimera_indices[v] == (0, col, 1, 0)][0]

        vert_root_neighbours = [v for v in adj[vert_root] if v not in chimera_indices]

        if vert_root_neighbours:
            verti, horiz = rooted_tile(adj, vert_root_neighbours[0], t)
            root = next(iter(horiz))

            row = 0
            col += 1

return chimera_indices
def canonical_chimera_labeling(G, t=None)
Returns a mapping from the labels of G to chimera-indexed labeling. Parameters ---------- G : NetworkX graph A Chimera-structured graph. t : int (optional, default 4) Size of the shore within each Chimera tile. Returns ------- chimera_indices: dict A mapping from the current labels to a 4-tuple of Chimera indices.
2.548948
2.512883
1.014352
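A relabeling sketch assuming dwave_networkx: obscure the integer labels of a Chimera graph, then recover a Chimera index for every node with canonical_chimera_labeling.

import networkx as nx
import dwave_networkx as dnx

C = dnx.chimera_graph(2, 2, 4)
G = nx.relabel_nodes(C, {v: 100 + v for v in C})  # hide the original labels

chimera_indices = dnx.canonical_chimera_labeling(G)
print(chimera_indices[100])  # a 4-tuple (i, j, u, k)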
# Get a QUBO representation of the problem
Q = maximum_weighted_independent_set_qubo(G, weight, lagrange)

# use the sampler to find low energy states
response = sampler.sample_qubo(Q, **sampler_args)

# we want the lowest energy sample
sample = next(iter(response))

# nodes that are spin up or true are exactly the ones in S.
return [node for node in sample if sample[node] > 0]
def maximum_weighted_independent_set(G, weight=None, sampler=None, lagrange=2.0, **sampler_args)
Returns an approximate maximum weighted independent set. Defines a QUBO with ground states corresponding to a maximum weighted independent set and uses the sampler to sample from it. An independent set is a set of nodes such that the subgraph of G induced by these nodes contains no edges. A maximum independent set is an independent set of maximum total node weight. Parameters ---------- G : NetworkX graph The graph on which to find a maximum weighted independent set. weight : string, optional (default None) If None, every node has equal weight. If a string, use this node attribute as the node weight. A node without this attribute is assumed to have max weight. sampler A binary quadratic model sampler. A sampler is a process that samples from low energy states in models defined by an Ising equation or a Quadratic Unconstrained Binary Optimization Problem (QUBO). A sampler is expected to have a 'sample_qubo' and 'sample_ising' method. A sampler is expected to return an iterable of samples, in order of increasing energy. If no sampler is provided, one must be provided using the `set_default_sampler` function. lagrange : optional (default 2) Lagrange parameter to weight constraints (no edges within set) versus objective (largest set possible). sampler_args Additional keyword parameters are passed to the sampler. Returns ------- indep_nodes : list List of nodes that form a maximum weighted independent set, as determined by the given sampler. Notes ----- Samplers by their nature may not return the optimal solution. This function does not attempt to confirm the quality of the returned sample. References ---------- `Independent Set on Wikipedia <https://en.wikipedia.org/wiki/Independent_set_(graph_theory)>`_ `QUBO on Wikipedia <https://en.wikipedia.org/wiki/Quadratic_unconstrained_binary_optimization>`_ .. [AL] Lucas, A. (2014). Ising formulations of many NP problems. Frontiers in Physics, Volume 2, Article 5.
5.358132
6.81252
0.786513
return maximum_weighted_independent_set(G, None, sampler, lagrange, **sampler_args)
def maximum_independent_set(G, sampler=None, lagrange=2.0, **sampler_args)
Returns an approximate maximum independent set. Defines a QUBO with ground states corresponding to a maximum independent set and uses the sampler to sample from it. An independent set is a set of nodes such that the subgraph of G induced by these nodes contains no edges. A maximum independent set is an independent set of largest possible size. Parameters ---------- G : NetworkX graph The graph on which to find a maximum independent set. sampler A binary quadratic model sampler. A sampler is a process that samples from low energy states in models defined by an Ising equation or a Quadratic Unconstrained Binary Optimization Problem (QUBO). A sampler is expected to have a 'sample_qubo' and 'sample_ising' method. A sampler is expected to return an iterable of samples, in order of increasing energy. If no sampler is provided, one must be provided using the `set_default_sampler` function. lagrange : optional (default 2) Lagrange parameter to weight constraints (no edges within set) versus objective (largest set possible). sampler_args Additional keyword parameters are passed to the sampler. Returns ------- indep_nodes : list List of nodes that form a maximum independent set, as determined by the given sampler. Example ------- This example uses a sampler from `dimod <https://github.com/dwavesystems/dimod>`_ to find a maximum independent set for a graph of a Chimera unit cell created using the `chimera_graph()` function. >>> import dimod >>> sampler = dimod.SimulatedAnnealingSampler() >>> G = dnx.chimera_graph(1, 1, 4) >>> indep_nodes = dnx.maximum_independent_set(G, sampler) Notes ----- Samplers by their nature may not return the optimal solution. This function does not attempt to confirm the quality of the returned sample. References ---------- `Independent Set on Wikipedia <https://en.wikipedia.org/wiki/Independent_set_(graph_theory)>`_ `QUBO on Wikipedia <https://en.wikipedia.org/wiki/Quadratic_unconstrained_binary_optimization>`_ .. [AL] Lucas, A. (2014). Ising formulations of many NP problems. Frontiers in Physics, Volume 2, Article 5.
3.946811
9.113861
0.433056
# empty QUBO for an empty graph
if not G:
    return {}

# We assume that the sampler can handle an unstructured QUBO problem, so let's set one up.
# Let us define the largest independent set to be S.
# For each node n in the graph, we assign a boolean variable v_n, where v_n = 1 when n
# is in S and v_n = 0 otherwise.
# We call the matrix defining our QUBO problem Q.
# On the diagonal, we assign the linear bias for each node to be the negative of its weight.
# This means that each node is biased towards being in S. Weights are scaled to a maximum of 1.
# Negative weights are considered 0.
# On the off diagonal, we assign the off-diagonal terms of Q to be the Lagrange multiplier
# (2 by default). Thus, if both nodes are in S, the overall energy is increased by that amount.
cost = dict(G.nodes(data=weight, default=1))
scale = max(cost.values())
Q = {(node, node): min(-cost[node] / scale, 0.0) for node in G}
Q.update({edge: lagrange for edge in G.edges})

return Q
def maximum_weighted_independent_set_qubo(G, weight=None, lagrange=2.0)
Return the QUBO with ground states corresponding to a maximum weighted independent set. Parameters ---------- G : NetworkX graph weight : string, optional (default None) If None, every node has equal weight. If a string, use this node attribute as the node weight. A node without this attribute is assumed to have max weight. lagrange : optional (default 2) Lagrange parameter to weight constraints (no edges within set) versus objective (largest set possible). Returns ------- QUBO : dict The QUBO with ground states corresponding to a maximum weighted independent set. Examples -------- >>> from dwave_networkx.algorithms.independent_set import maximum_weighted_independent_set_qubo ... >>> G = nx.path_graph(3) >>> Q = maximum_weighted_independent_set_qubo(G, weight='weight', lagrange=2.0) >>> Q[(0, 0)] -1.0 >>> Q[(1, 1)] -1.0 >>> Q[(0, 1)] 2.0
6.35246
6.419474
0.989561
indep_nodes = set(maximum_weighted_independent_set(G, weight, sampler, **sampler_args))
return [v for v in G if v not in indep_nodes]
def min_weighted_vertex_cover(G, weight=None, sampler=None, **sampler_args)
Returns an approximate minimum weighted vertex cover. Defines a QUBO with ground states corresponding to a minimum weighted vertex cover and uses the sampler to sample from it. A vertex cover is a set of vertices such that each edge of the graph is incident with at least one vertex in the set. A minimum weighted vertex cover is the vertex cover of minimum total node weight. Parameters ---------- G : NetworkX graph weight : string, optional (default None) If None, every node has equal weight. If a string, use this node attribute as the node weight. A node without this attribute is assumed to have max weight. sampler A binary quadratic model sampler. A sampler is a process that samples from low energy states in models defined by an Ising equation or a Quadratic Unconstrained Binary Optimization Problem (QUBO). A sampler is expected to have a 'sample_qubo' and 'sample_ising' method. A sampler is expected to return an iterable of samples, in order of increasing energy. If no sampler is provided, one must be provided using the `set_default_sampler` function. sampler_args Additional keyword parameters are passed to the sampler. Returns ------- vertex_cover : list List of nodes that form the minimum weighted vertex cover, as determined by the given sampler. Notes ----- Samplers by their nature may not return the optimal solution. This function does not attempt to confirm the quality of the returned sample. https://en.wikipedia.org/wiki/Vertex_cover https://en.wikipedia.org/wiki/Quadratic_unconstrained_binary_optimization References ---------- Based on the formulation presented in [AL]_
4.13534
6.08091
0.680053
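A usage sketch for min_weighted_vertex_cover, assuming dimod's ExactSolver; on a star graph the hub alone covers every edge.

import dimod
import networkx as nx
import dwave_networkx as dnx

G = nx.star_graph(4)  # hub 0 connected to leaves 1..4

cover = dnx.min_weighted_vertex_cover(G, sampler=dimod.ExactSolver())
print(cover)  # [0]
print(dnx.is_vertex_cover(G, cover))  # True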
cover = set(vertex_cover)
return all(u in cover or v in cover for u, v in G.edges)
def is_vertex_cover(G, vertex_cover)
Determines whether the given set of vertices is a vertex cover of graph G. A vertex cover is a set of vertices such that each edge of the graph is incident with at least one vertex in the set. Parameters ---------- G : NetworkX graph The graph on which to check the vertex cover. vertex_cover : Iterable of nodes. Returns ------- is_cover : bool True if the given iterable forms a vertex cover. Examples -------- This example checks two covers for a graph, G, of a single Chimera unit cell. The first uses the set of the four horizontal qubits, which do constitute a cover; the second set removes one node. >>> import dwave_networkx as dnx >>> G = dnx.chimera_graph(1, 1, 4) >>> cover = [0, 1, 2, 3] >>> dnx.is_vertex_cover(G,cover) True >>> cover = [0, 1, 2] >>> dnx.is_vertex_cover(G,cover) False
3.754996
10.764668
0.348826
if not isinstance(G, nx.Graph) or G.graph.get("family") != "pegasus":
    raise ValueError("G must be generated by dwave_networkx.pegasus_graph")

if G.graph.get('labels') == 'nice':
    m = 3 * (G.graph['rows'] - 1)
    c_coords = chimera_node_placer_2d(m, m, 4, scale=scale, center=center, dim=dim)

    def xy_coords(t, y, x, u, k):
        return c_coords(3 * y + 2 - t, 3 * x + t, u, k)

    pos = {v: xy_coords(*v) for v in G.nodes()}
else:
    xy_coords = pegasus_node_placer_2d(G, scale, center, dim, crosses=crosses)

    if G.graph.get('labels') == 'coordinate':
        pos = {v: xy_coords(*v) for v in G.nodes()}
    elif G.graph.get('data'):
        pos = {v: xy_coords(*dat['pegasus_index']) for v, dat in G.nodes(data=True)}
    else:
        m = G.graph.get('rows')
        coord = pegasus_coordinates(m)
        pos = {v: xy_coords(*coord.tuple(v)) for v in G.nodes()}

return pos
def pegasus_layout(G, scale=1., center=None, dim=2, crosses=False)
Positions the nodes of graph G in a Pegasus topology. NumPy (http://scipy.org) is required for this function. Parameters ---------- G : NetworkX graph Should be a Pegasus graph or a subgraph of a Pegasus graph. This should be the product of dwave_networkx.pegasus_graph scale : float (default 1.) Scale factor. When scale = 1, all positions fit within [0, 1] on the x-axis and [-1, 0] on the y-axis. center : None or array (default None) Coordinates of the top left corner. dim : int (default 2) Number of dimensions. When dim > 2, all extra dimensions are set to 0. crosses: boolean (optional, default False) If crosses is True, K_4,4 subgraphs are shown in a cross rather than L configuration. Ignored if G was defined with nice_coordinates=True. Returns ------- pos : dict A dictionary of positions keyed by node. Examples -------- >>> G = dnx.pegasus_graph(1) >>> pos = dnx.pegasus_layout(G)
3.630651
3.618965
1.003229
import numpy as np

m = G.graph.get('rows')
h_offsets = G.graph.get("horizontal_offsets")
v_offsets = G.graph.get("vertical_offsets")
tile_width = G.graph.get("tile")
tile_center = tile_width / 2 - .5

# want the entire plot to fill in [0, 1] when scale=1
scale /= m * tile_width

if center is None:
    center = np.zeros(dim)
else:
    center = np.asarray(center)

paddims = dim - 2
if paddims < 0:
    raise ValueError("layout must have at least two dimensions")

if len(center) != dim:
    raise ValueError("length of center coordinates must match dimension of layout")

if crosses:
    # adjustment for crosses
    cross_shift = 2.
else:
    cross_shift = 0.

def _xy_coords(u, w, k, z):
    # orientation, major perpendicular offset, minor perpendicular offset, parallel offset
    if k % 2:
        p = -.1
    else:
        p = .1

    if u:
        xy = np.array([z * tile_width + h_offsets[k] + tile_center,
                       -tile_width * w - k - p + cross_shift])
    else:
        xy = np.array([tile_width * w + k + p + cross_shift,
                       -z * tile_width - v_offsets[k] - tile_center])

    # convention for Pegasus-lattice pictures is to invert the y-axis
    return np.hstack((xy * scale, np.zeros(paddims))) + center

return _xy_coords
def pegasus_node_placer_2d(G, scale=1., center=None, dim=2, crosses=False)
Generates a function that converts Pegasus indices to x, y coordinates for a plot. Parameters ---------- G : NetworkX graph Should be a Pegasus graph or a subgraph of a Pegasus graph. This should be the product of dwave_networkx.pegasus_graph scale : float (default 1.) Scale factor. When scale = 1, all positions fit within [0, 1] on the x-axis and [-1, 0] on the y-axis. center : None or array (default None) Coordinates of the top left corner. dim : int (default 2) Number of dimensions. When dim > 2, all extra dimensions are set to 0. crosses: boolean (optional, default False) If crosses is True, K_4,4 subgraphs are shown in a cross rather than L configuration. Returns ------- xy_coords : function A function that maps a Pegasus index (u, w, k, z) in a Pegasus lattice to x,y coordinates such as used by a plot.
4.709238
4.456603
1.056688
draw_qubit_graph(G, pegasus_layout(G, crosses=crosses), **kwargs)
def draw_pegasus(G, crosses=False, **kwargs)
Draws graph G in a Pegasus topology. If `linear_biases` and/or `quadratic_biases` are provided, these are visualized on the plot. Parameters ---------- G : NetworkX graph Should be a Pegasus graph or a subgraph of a Pegasus graph, a product of dwave_networkx.pegasus_graph. linear_biases : dict (optional, default {}) A dict of biases associated with each node in G. Should be of form {node: bias, ...}. Each bias should be numeric. quadratic_biases : dict (optional, default {}) A dict of biases associated with each edge in G. Should be of form {edge: bias, ...}. Each bias should be numeric. Self-loop edges (i.e., :math:`i=j`) are treated as linear biases. crosses: boolean (optional, default False) If crosses is True, K_4,4 subgraphs are shown in a cross rather than L configuration. Ignored if G was defined with nice_coordinates=True. kwargs : optional keywords See networkx.draw_networkx() for a description of optional keywords, with the exception of the `pos` parameter which is not used by this function. If `linear_biases` or `quadratic_biases` are provided, any provided `node_color` or `edge_color` arguments are ignored. Examples -------- >>> # Plot a Pegasus graph with size parameter 2 >>> import networkx as nx >>> import dwave_networkx as dnx >>> import matplotlib.pyplot as plt >>> G = dnx.pegasus_graph(2) >>> dnx.draw_pegasus(G) >>> plt.show()
6.260893
19.080513
0.32813
crosses = kwargs.pop("crosses", False)
draw_embedding(G, pegasus_layout(G, crosses=crosses), *args, **kwargs)
def draw_pegasus_embedding(G, *args, **kwargs)
Draws an embedding onto the Pegasus graph G, according to layout.

If interaction_edges is not None, then only display the couplers in
that list. If embedded_graph is not None, then only display the
couplers between chains with intended couplings according to
embedded_graph.

Parameters
----------
G : NetworkX graph
    Should be a Pegasus graph or a subgraph of a Pegasus graph.
    This should be the product of dwave_networkx.pegasus_graph

emb : dict
    A dict of chains associated with each node in G. Should be
    of the form {node: chain, ...}. Chains should be iterables
    of qubit labels (qubits are nodes in G).

embedded_graph : NetworkX graph (optional, default None)
    A graph which contains all keys of emb as nodes. If specified,
    edges of G will be considered interactions if and only if their
    endpoints belong to two chains of emb whose keys are connected
    by an edge in embedded_graph.

interaction_edges : list (optional, default None)
    A list of edges which will be used as interactions.

show_labels: boolean (optional, default False)
    If show_labels is True, then each chain in emb is labelled with its key.

chain_color : dict (optional, default None)
    A dict of colors associated with each key in emb. Should be
    of the form {node: rgba_color, ...}. Colors should be length-4
    tuples of floats between 0 and 1 inclusive. If chain_color is None,
    each chain will be assigned a different color.

unused_color : tuple (optional, default (0.9,0.9,0.9,1.0))
    The color to use for nodes and edges of G which are not involved
    in chains, and edges which are neither chain edges nor interactions.
    If unused_color is None, these nodes and edges will not be shown at all.

crosses: boolean (optional, default False)
    If crosses is True, K_4,4 subgraphs are shown in a cross
    rather than L configuration. Ignored if G was defined with
    nice_coordinates=True.

kwargs : optional keywords
    See networkx.draw_networkx() for a description of optional keywords,
    with the exception of the `pos` parameter which is not used by this
    function. If `linear_biases` or `quadratic_biases` are provided,
    any provided `node_color` or `edge_color` arguments are ignored.
4.829201
8.54279
0.565296
try:
    assert(G.graph["family"] == "pegasus")
    m = G.graph['columns']
    offset_lists = (G.graph['vertical_offsets'],
                    G.graph['horizontal_offsets'])
    coordinates = G.graph["labels"] == "coordinate"
    # Can't interpret fabric_only from graph attributes
except:
    raise ValueError("Target pegasus graph needs to have columns, "
                     "vertical_offsets, horizontal_offsets, and labels "
                     "attributes to be able to identify faulty qubits.")

perfect_graph = pegasus_graph(m, offset_lists=offset_lists,
                              coordinates=coordinates)
draw_yield(G, pegasus_layout(perfect_graph), perfect_graph, **kwargs)
def draw_pegasus_yield(G, **kwargs)
Draws the given graph G with highlighted faults, according to layout.

Parameters
----------
G : NetworkX graph
    The graph to be parsed for faults

unused_color : tuple or color string (optional, default (0.9,0.9,0.9,1.0))
    The color to use for nodes and edges of G which are not faults.
    If unused_color is None, these nodes and edges will not be shown at all.

fault_color : tuple or color string (optional, default (1.0,0.0,0.0,1.0))
    A color to represent nodes absent from the graph G. Colors should be
    length-4 tuples of floats between 0 and 1 inclusive.

fault_shape : string, optional (default='x')
    The shape of the fault nodes. Specification is as matplotlib.scatter
    marker, one of 'so^>v<dph8'.

fault_style : string, optional (default='dashed')
    Edge fault line style (solid|dashed|dotted|dashdot)

kwargs : optional keywords
    See networkx.draw_networkx() for a description of optional keywords,
    with the exception of the `pos` parameter which is not used by this
    function. If `linear_biases` or `quadratic_biases` are provided,
    any provided `node_color` or `edge_color` arguments are ignored.
10.34702
10.19131
1.015279
# check for infinity first: inf is itself a float, so the numeric
# branch below would otherwise return it unchanged
if duration == inf or duration == -inf:
    msg = "Can't convert infinite duration to number"
    raise ValueError(msg)
elif isinstance(duration, (int, float)):
    # `long` (Python 2 only) is intentionally omitted; on Python 3,
    # int already covers arbitrary-precision integers
    return duration
elif isinstance(duration, datetime.timedelta):
    if units == 'seconds':
        return duration.total_seconds()
    else:
        msg = 'unit "%s" is not supported' % units
        raise NotImplementedError(msg)
else:
    msg = 'duration is an unknown type (%s)' % duration
    raise TypeError(msg)
def duration_to_number(duration, units='seconds')
If duration is already a numeric type, then just return duration. If duration is a timedelta, return a duration in seconds. TODO: allow for multiple types of units.
2.728137
2.755527
0.99006
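A minimal usage sketch for duration_to_number; the values are illustrative:

import datetime

# a timedelta is converted to its total length in seconds
assert duration_to_number(datetime.timedelta(minutes=2)) == 120.0

# numeric durations pass through unchanged
assert duration_to_number(42) == 42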
list_of_pairs = []
if len(args) == 0:
    return []
if any(isinstance(arg, (list, tuple)) for arg in args):
    # Domain([[1, 4]])
    # Domain([(1, 4)])
    # Domain([(1, 4), (5, 8)])
    # Domain([[1, 4], [5, 8]])
    if len(args) == 1 and \
            any(isinstance(arg, (list, tuple)) for arg in args[0]):
        for item in args[0]:
            list_of_pairs.append(list(item))
    else:
        # Domain([1, 4])
        # Domain((1, 4))
        # Domain((1, 4), (5, 8))
        # Domain([1, 4], [5, 8])
        for item in args:
            list_of_pairs.append(list(item))
else:
    # Domain(1, 2)
    if len(args) == 2:
        list_of_pairs.append(list(args))
    else:
        # include the offending arguments in the message (the
        # original format string was missing its placeholder)
        msg = "The argument type is invalid: {}".format(args)
        raise TypeError(msg)
return list_of_pairs
def convert_args_to_list(args)
Convert all iterable pairs of inputs into a list of lists
2.113234
2.03356
1.03918
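A sketch of the call shapes the comments above enumerate, passing the tuple that a caller's *args would collect (the Domain(...) caller is hypothetical):

# Domain((1, 4))          -> args == ((1, 4),)
assert convert_args_to_list(((1, 4),)) == [[1, 4]]

# Domain((1, 4), (5, 8))  -> args == ((1, 4), (5, 8))
assert convert_args_to_list(((1, 4), (5, 8))) == [[1, 4], [5, 8]]

# Domain(1, 2)            -> args == (1, 2)
assert convert_args_to_list((1, 2)) == [[1, 2]]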
def done(a, b, inclusive_end): if inclusive_end: return a <= b else: return a < b current = start_dt while done(current, end_dt, inclusive_end): yield current current += datetime.timedelta(**{unit: n_units})
def datetime_range(start_dt, end_dt, unit, n_units=1, inclusive_end=False)
A range of datetimes/dates.
2.789676
2.857368
0.97631
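A minimal sketch of datetime_range; the end point is excluded unless inclusive_end=True:

import datetime

start = datetime.datetime(2016, 5, 6)
end = datetime.datetime(2016, 5, 6, 12)

# every 6 hours, end excluded by default
list(datetime_range(start, end, 'hours', n_units=6))
# -> [datetime.datetime(2016, 5, 6, 0, 0), datetime.datetime(2016, 5, 6, 6, 0)]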
if unit == 'years':
    new_year = dt.year - (dt.year - 1) % n_units
    return datetime.datetime(new_year, 1, 1, 0, 0, 0)
elif unit == 'months':
    new_month = dt.month - (dt.month - 1) % n_units
    return datetime.datetime(dt.year, new_month, 1, 0, 0, 0)
elif unit == 'weeks':
    _, isoweek, _ = dt.isocalendar()
    new_week = isoweek - (isoweek - 1) % n_units
    return datetime.datetime.strptime(
        "%d %02d 1" % (dt.year, new_week), "%Y %W %w")
elif unit == 'days':
    # days are 1-based (like months), so subtract 1 before taking the
    # modulus; otherwise flooring day 1 could produce an invalid day 0
    new_day = dt.day - (dt.day - 1) % n_units
    return datetime.datetime(dt.year, dt.month, new_day, 0, 0, 0)
elif unit == 'hours':
    new_hour = dt.hour - dt.hour % n_units
    return datetime.datetime(dt.year, dt.month, dt.day, new_hour, 0, 0)
elif unit == 'minutes':
    new_minute = dt.minute - dt.minute % n_units
    return datetime.datetime(dt.year, dt.month, dt.day,
                             dt.hour, new_minute, 0)
elif unit == 'seconds':
    new_second = dt.second - dt.second % n_units
    return datetime.datetime(dt.year, dt.month, dt.day,
                             dt.hour, dt.minute, new_second)
else:
    msg = 'Unknown unit type {}'.format(unit)
    raise ValueError(msg)
def floor_datetime(dt, unit, n_units=1)
Floor a datetime to the nearest n units. For example, flooring
2016-05-06 (at any time of day) to the nearest three months gives
2016-04-01. Flooring 2016-05-06 11:45:06 to the nearest fifteen
minutes gives 2016-05-06 11:45:00.
1.383961
1.389812
0.99579
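A sketch mirroring the docstring's examples:

import datetime

# nearest fifteen minutes
floor_datetime(datetime.datetime(2016, 5, 6, 11, 45, 6), 'minutes', 15)
# -> datetime.datetime(2016, 5, 6, 11, 45)

# nearest three months
floor_datetime(datetime.datetime(2016, 5, 6), 'months', 3)
# -> datetime.datetime(2016, 4, 1, 0, 0)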
it = iter(iterable) a = next(it, None) for b in it: yield (a, b) a = b
def pairwise(iterable)
Given an iterable `p1, p2, p3, ...`, iterate through pairwise tuples `(p1, p2), (p2, p3), ...`
2.59731
4.141635
0.627122
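A minimal usage sketch:

assert list(pairwise([1, 2, 3, 4])) == [(1, 2), (2, 3), (3, 4)]
assert list(pairwise([])) == []  # an empty iterable yields nothing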
_self = self._discard_value(None) if not _self.total(): return None weighted_sum = sum( key * value for key, value in iteritems(_self) ) return weighted_sum / float(_self.total())
def mean(self)
Mean of the distribution.
8.508545
7.987054
1.065292
_self = self._discard_value(None) if not _self.total(): return 0.0 mean = _self.mean() weighted_central_moment = sum( count * (value - mean)**2 for value, count in iteritems(_self) ) return weighted_central_moment / float(_self.total())
def variance(self)
Variance of the distribution.
6.391283
5.985155
1.067856
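A hedged sketch of mean() and variance() together, using the Histogram.from_dict constructor that appears elsewhere in this document (the data values are illustrative):

# a histogram of the observations [1, 2, 2, 3]
h = Histogram.from_dict({1: 1, 2: 2, 3: 1})

h.mean()      # -> 2.0, the count-weighted average
h.variance()  # -> 0.5, the count-weighted second central moment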
total = self.total() result = Histogram() for value, count in iteritems(self): try: result[value] = count / float(total) except UnorderableElements as e: result = Histogram.from_dict(dict(result), key=hash) result[value] = count / float(total) return result
def normalized(self)
Return a normalized version of the histogram where the values sum to one.
5.283498
4.284123
1.233274
total = float(self.total())

smallest_observed_count = min(itervalues(self))
if smallest_count is None:
    smallest_count = smallest_observed_count
else:
    smallest_count = min(smallest_count, smallest_observed_count)

beta = alpha * smallest_count

debug_plot = []
cumulative_sum = 0.0
inverse = sortedcontainers.SortedDict()
for value, count in iteritems(self):
    debug_plot.append((cumulative_sum / total, value))
    inverse[(cumulative_sum + beta) / total] = value
    cumulative_sum += count
    inverse[(cumulative_sum - beta) / total] = value
    debug_plot.append((cumulative_sum / total, value))

# get maximum and minimum q values
q_min = inverse.iloc[0]
q_max = inverse.iloc[-1]

# this stuff is helpful for debugging -- keep it in here
# for i, j in debug_plot:
#     print i, j
# print ''
# for i, j in inverse.iteritems():
#     print i, j
# print ''

def function(q):
    if q < 0.0 or q > 1.0:
        msg = 'invalid quantile %s, need `0 <= q <= 1`' % q
        raise ValueError(msg)
    elif q < q_min:
        q = q_min
    elif q > q_max:
        q = q_max

    # when beta > 0, interpolate linearly between the steps of
    # the inverse cumulative distribution
    if beta > 0:
        if q in inverse:
            result = inverse[q]
        else:
            previous_index = inverse.bisect_left(q) - 1
            x1 = inverse.iloc[previous_index]
            x2 = inverse.iloc[previous_index + 1]
            y1 = inverse[x1]
            y2 = inverse[x2]
            result = (y2 - y1) * (q - x1) / float(x2 - x1) + y1
    else:
        if q in inverse:
            previous_index = inverse.bisect_left(q) - 1
            x1 = inverse.iloc[previous_index]
            x2 = inverse.iloc[previous_index + 1]
            y1 = inverse[x1]
            y2 = inverse[x2]
            result = 0.5 * (y1 + y2)
        else:
            previous_index = inverse.bisect_left(q) - 1
            x1 = inverse.iloc[previous_index]
            result = inverse[x1]

    return float(result)

return function
def _quantile_function(self, alpha=0.5, smallest_count=None)
Return a function that returns the quantile values for this histogram.
2.383647
2.348609
1.014919
try: getter = self.getter_functions[interpolate] except KeyError: msg = ( "unknown value '{}' for interpolate, " "valid values are in [{}]" ).format(interpolate, ', '.join(self.getter_functions)) raise ValueError(msg) else: return getter(time)
def get(self, time, interpolate='previous')
Get the value of the time series, even in-between measured values.
3.595359
3.759254
0.956402
# the `compact and` guard is redundant: if compact were False the
# previous clause would already have short-circuited
if (len(self) == 0) or (not compact) or (self.get(time) != value):
    self._d[time] = value
def set(self, time, value, compact=False)
Set the value for the time series. If compact is True, only set the value if it's different from what it would be anyway.
5.079412
4.802305
1.057703
# for each interval to render for i, (s, e, v) in enumerate(self.iterperiods(start, end)): # look at all intervals included in the current interval # (always at least 1) if i == 0: # if the first, set initial value to new value of range self.set(s, value, compact) else: # otherwise, remove intermediate key del self[s] # finish by setting the end of the interval to the previous value self.set(end, v, compact)
def set_interval(self, start, end, value, compact=False)
Set the value for the time series on an interval. If compact is True, only set the value if it's different from what it would be anyway.
6.989696
7.376656
0.947543
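A hedged sketch of set_interval on a TimeSeries like the one used throughout this document (the constructor and item assignment are assumptions consistent with the other methods shown):

ts = TimeSeries(default=0)
ts[0] = 1
ts[5] = 2
ts[10] = 3

# overwrite [2, 8) with a single value; the intermediate point at
# t=5 is deleted and t=8 restores the previous value
ts.set_interval(2, 8, 9)
# measurements are now approximately: 0 -> 1, 2 -> 9, 8 -> 2, 10 -> 3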
previous_value = object() redundant = [] for time, value in self: if value == previous_value: redundant.append(time) previous_value = value for time in redundant: del self[time]
def compact(self)
Convert this instance to a compact version: the value will be the same at all times, but repeated measurements are discarded.
4.190723
3.686874
1.13666
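A sketch of compact() under the same TimeSeries assumptions as above:

ts = TimeSeries(default=0)
ts[0] = 1
ts[3] = 1  # repeats the current value
ts[6] = 2

ts.compact()
# the redundant measurement at t=3 is dropped; ts.get(t) is
# unchanged for every t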
result = TimeSeries(default=False if self.default is None else True) for t, v in self: result[t] = False if v is None else True return result
def exists(self)
Return a TimeSeries that is False wherever this series has a None value and True wherever it has any other value
7.982057
5.924057
1.347397
for s, e, v in self.iterperiods(start, end): try: del self._d[s] except KeyError: pass
def remove_points_from_interval(self, start, end)
Allow removal of all points from the time series within an interval [start:end].
6.608163
6.07707
1.087393
# tee the original iterator into n identical iterators streams = tee(iter(self), n) # advance the "cursor" on each iterator by an increasing # offset, e.g. if n=3: # # [a, b, c, d, e, f, ..., w, x, y, z] # first cursor --> * # second cursor --> * # third cursor --> * for stream_index, stream in enumerate(streams): for i in range(stream_index): next(stream) # now, zip the offset streams back together to yield tuples, # in the n=3 example it would yield: # (a, b, c), (b, c, d), ..., (w, x, y), (x, y, z) for intervals in zip(*streams): yield intervals
def iterintervals(self, n=2)
Iterate over groups of `n` consecutive measurement points in the time series.
6.215186
6.504601
0.955506
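A sketch of the n=3 case described in the comments above, assuming iteration over the series yields (time, value) pairs in time order:

ts = TimeSeries(default=None)
for t, v in [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]:
    ts[t] = v

list(ts.iterintervals(n=3))
# -> [((1, 'a'), (2, 'b'), (3, 'c')), ((2, 'b'), (3, 'c'), (4, 'd'))]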
start, end, mask = \ self._check_boundaries(start, end, allow_infinite=False) value_function = self._value_function(value) # get start index and value start_index = self._d.bisect_right(start) if start_index: start_value = self._d[self._d.iloc[start_index - 1]] else: start_value = self.default # get last index before end of time span end_index = self._d.bisect_right(end) interval_t0, interval_value = start, start_value for interval_t1 in self._d.islice(start_index, end_index): if value_function(interval_t0, interval_t1, interval_value): yield interval_t0, interval_t1, interval_value # set start point to the end of this interval for next # iteration interval_t0 = interval_t1 interval_value = self[interval_t0] # yield the time, duration, and value of the final period if interval_t0 < end: if value_function(interval_t0, end, interval_value): yield interval_t0, end, interval_value
def iterperiods(self, start=None, end=None, value=None)
This iterates over the periods (optionally, within a given time span) and yields (interval start, interval end, value) tuples. TODO: add mask argument here.
3.286462
3.022781
1.087231
start, end, mask = \ self._check_boundaries(start, end, allow_infinite=True) result = TimeSeries(default=self.default) for t0, t1, value in self.iterperiods(start, end): result[t0] = value result[t1] = self[t1] return result
def slice(self, start, end)
Return an equivalent TimeSeries that only has points between `start` and `end` (always starting at `start`)
5.172989
4.59072
1.126836
start, end, mask = self._check_boundaries(start, end) sampling_period = \ self._check_regularization(start, end, sampling_period) result = [] current_time = start while current_time <= end: value = self.get(current_time, interpolate=interpolate) result.append((current_time, value)) current_time += sampling_period return result
def sample(self, sampling_period, start=None, end=None, interpolate='previous')
Sampling at regular time periods.
3.214232
3.209038
1.001619
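A hedged sketch of sample(); the exact boundary handling depends on _check_boundaries and _check_regularization, but the intent is:

ts = TimeSeries(default=0)
ts[0] = 1
ts[3] = 5

# regular samples with the default 'previous' interpolation
ts.sample(sampling_period=2, start=0, end=6)
# -> [(0, 1), (2, 1), (4, 5), (6, 5)]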
start, end, mask = self._check_boundaries(start, end)

# default to sampling_period if not given
if window_size is None:
    window_size = sampling_period

sampling_period = \
    self._check_regularization(start, end, sampling_period)

# convert the window to a timedelta if the times are datetimes
full_window = window_size * 1.  # convert to float if int or do nothing
half_window = full_window / 2.  # divide by 2
if (isinstance(start, datetime.datetime) and
        not isinstance(full_window, datetime.timedelta)):
    half_window = datetime.timedelta(seconds=half_window)
    full_window = datetime.timedelta(seconds=full_window)

result = []
current_time = start
while current_time <= end:

    if placement == 'center':
        window_start = current_time - half_window
        window_end = current_time + half_window
    elif placement == 'left':
        window_start = current_time
        window_end = current_time + full_window
    elif placement == 'right':
        window_start = current_time - full_window
        window_end = current_time
    else:
        msg = 'unknown placement "{}"'.format(placement)
        raise ValueError(msg)

    # calculate mean over window and add (t, v) tuple to list
    try:
        mean = self.mean(window_start, window_end)
    except TypeError as e:
        if 'NoneType' in str(e):
            mean = None
        else:
            raise e
    result.append((current_time, mean))

    current_time += sampling_period

# convert to a pandas Series if pandas=True
if pandas:

    try:
        import pandas as pd
    except ImportError:
        msg = "can't have pandas=True if pandas is not installed"
        raise ImportError(msg)

    result = pd.Series(
        [v for t, v in result],
        index=[t for t, v in result],
    )

return result
def moving_average(self, sampling_period, window_size=None, start=None, end=None, placement='center', pandas=False)
Averaging over regular intervals
2.444999
2.435901
1.003735
return self.distribution(start=start, end=end, mask=mask).mean()
def mean(self, start=None, end=None, mask=None)
This calculates the average value of the time series over the given time range from `start` to `end`, when `mask` is truthy.
4.837308
5.6994
0.84874
start, end, mask = self._check_boundaries(start, end, mask=mask) counter = histogram.Histogram() for start, end, _ in mask.iterperiods(value=True): for t0, t1, value in self.iterperiods(start, end): duration = utils.duration_to_number( t1 - t0, units='seconds', ) try: counter[value] += duration except histogram.UnorderableElements as e: counter = histogram.Histogram.from_dict( dict(counter), key=hash) counter[value] += duration # divide by total duration if result needs to be normalized if normalized: return counter.normalized() else: return counter
def distribution(self, start=None, end=None, normalized=True, mask=None)
Calculate the distribution of values over the given time range from `start` to `end`. Args: start (orderable, optional): The lower time bound of when to calculate the distribution. By default, the first time point will be used. end (orderable, optional): The upper time bound of when to calculate the distribution. By default, the last time point will be used. normalized (bool): If True, distribution will sum to one. If False and the time values of the TimeSeries are datetimes, the units will be seconds. mask (:obj:`TimeSeries`, optional): A domain on which to calculate the distribution. Returns: :obj:`Histogram` with the results.
5.419663
5.638376
0.96121
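A hedged sketch of distribution() under the same TimeSeries assumptions as the earlier examples:

ts = TimeSeries(default='off')
ts[0] = 'on'
ts[6] = 'off'

ts.distribution(start=0, end=10)
# -> a Histogram, normalized by default: 'on' has weight 0.6
#    (6 of 10 time units) and 'off' has weight 0.4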
# just go ahead and return 0 if we already know it regardless
# of boundaries
if not self.n_measurements():
    return 0

start, end, mask = self._check_boundaries(start, end, mask=mask)

count = 0
for start, end, _ in mask.iterperiods(value=True):
    if include_end:
        end_count = self._d.bisect_right(end)
    else:
        end_count = self._d.bisect_left(end)

    if include_start:
        start_count = self._d.bisect_left(start)
    else:
        start_count = self._d.bisect_right(start)

    count += (end_count - start_count)

if normalized:
    count /= float(self.n_measurements())

return count
def n_points(self, start=-inf, end=+inf, mask=None, include_start=True, include_end=False, normalized=False)
Calculate the number of points over the given time range from `start` to `end`. Args: start (orderable, optional): The lower time bound of when to calculate the distribution. By default, start is -infinity. end (orderable, optional): The upper time bound of when to calculate the distribution. By default, the end is +infinity. mask (:obj:`TimeSeries`, optional): A domain on which to calculate the distribution. Returns: `int` with the result
3.227971
3.462922
0.932152
if not isinstance(other, TimeSeries):
    msg = "unsupported operand type(s) for +: %s and %s" % \
        (type(self), type(other))
    raise TypeError(msg)
def _check_time_series(self, other)
Function used to check the type of the argument and raise an informative error message if it's not a TimeSeries.
2.951391
2.670207
1.105304
# cast to list since this is getting iterated over several # times (causes problem if timeseries_list is a generator) timeseries_list = list(timeseries_list) # Create iterators for each timeseries and then add the first # item from each iterator onto a priority queue. The first # item to be popped will be the one with the lowest time queue = PriorityQueue() for index, timeseries in enumerate(timeseries_list): iterator = iter(timeseries) try: t, value = next(iterator) except StopIteration: pass else: queue.put((t, index, value, iterator)) # `state` keeps track of the value of the merged # TimeSeries. It starts with the default. It starts as a list # of the default value for each individual TimeSeries. state = [ts.default for ts in timeseries_list] while not queue.empty(): # get the next time with a measurement from queue t, index, next_value, iterator = queue.get() # make a copy of previous state, and modify only the value # at the index of the TimeSeries that this item came from state = list(state) state[index] = next_value yield t, state # add the next measurement from the time series to the # queue (if there is one) try: t, value = next(iterator) except StopIteration: pass else: queue.put((t, index, value, iterator))
def _iter_merge(timeseries_list)
This function uses a priority queue to efficiently yield the (time, value_list) tuples that occur from merging together many time series.
4.07407
3.886977
1.048133
# using return without an argument is the way to say "the # iterator is empty" when there is nothing to iterate over # (the more you know...) if not timeseries_list: return # for ts in timeseries_list: # if ts.is_floating(): # msg = "can't merge empty TimeSeries with no default value" # raise KeyError(msg) # This function mostly wraps _iter_merge, the main point of # this is to deal with the case of tied times, where we only # want to yield the last list of values that occurs for any # group of tied times. index, previous_t, previous_state = -1, object(), object() for index, (t, state) in enumerate(cls._iter_merge(timeseries_list)): if index > 0 and t != previous_t: yield previous_t, previous_state previous_t, previous_state = t, state # only yield final thing if there was at least one element # yielded by _iter_merge if index > -1: yield previous_t, previous_state
def iter_merge(cls, timeseries_list)
Iterate through several time series in order, yielding (time, list) tuples where list is the values of each individual TimeSeries in the list at time t.
6.242669
5.966712
1.046249
# If operation is not given then the default is the list # of defaults of all time series # If operation is given, then the default is the result of # the operation over the list of all defaults default = [ts.default for ts in ts_list] if operation: default = operation(default) result = cls(default=default) for t, merged in cls.iter_merge(ts_list): if operation is None: value = merged else: value = operation(merged) result.set(t, value, compact=compact) return result
def merge(cls, ts_list, compact=True, operation=None)
Iterate through several time series in order, yielding (time, `value`) where `value` is either the list of the values of each individual TimeSeries in the list at time t (in the same order as in ts_list) or the result of the optional `operation` applied to that list of values.
4.048718
3.541893
1.143095
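A hedged sketch of merge() with the built-in sum as the operation (TimeSeries construction as in the earlier examples):

a = TimeSeries(default=0)
a[1] = 1
b = TimeSeries(default=0)
b[2] = 2

merged = TimeSeries.merge([a, b], operation=sum)
# measurements: 1 -> 1 (1 + 0) and 2 -> 3 (1 + 2);
# the default is the sum of the defaults, 0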
result = TimeSeries(**kwargs) if isinstance(other, TimeSeries): for time, value in self: result[time] = function(value, other[time]) for time, value in other: result[time] = function(self[time], value) else: for time, value in self: result[time] = function(value, other) return result
def operation(self, other, function, **kwargs)
Calculate "elementwise" operation either between this TimeSeries and another one, i.e. operation(t) = function(self(t), other(t)) or between this timeseries and a constant: operation(t) = function(self(t), other) If it's another time series, the measurement times in the resulting TimeSeries will be the union of the sets of measurement times of the input time series. If it's a constant, the measurement times will not change.
2.161175
1.995197
1.083189
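A sketch of the constant case described above (the TimeSeries construction is an assumption consistent with the rest of this document):

a = TimeSeries(default=0)
a[1] = 2
a[4] = 3

doubled = a.operation(2, lambda x, y: x * y)
# measurements: 1 -> 4, 4 -> 6; the measurement times are unchanged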
if invert:
    def function(x, y):
        return not bool(x)
else:
    def function(x, y):
        return bool(x)
return self.operation(None, function)
def to_bool(self, invert=False)
Return the truth value of each element.
3.788418
3.322107
1.140366
if inclusive:
    def function(x, y):
        return x >= y
else:
    def function(x, y):
        return x > y
return self.operation(value, function)
def threshold(self, value, inclusive=False)
Return True if greater than the threshold value (or >= the threshold value if inclusive=True).
3.13879
3.282223
0.9563
return TimeSeries.merge( [self, other], operation=operations.ignorant_sum )
def sum(self, other)
sum(x, y) = x(t) + y(t).
22.699331
20.864416
1.087945
return self.operation(other, lambda x, y: x - y)
def difference(self, other)
difference(x, y) = x(t) - y(t).
6.802165
6.720491
1.012153
return self.operation(other, lambda x, y: x * y)
def multiply(self, other)
mul(t) = self(t) * other(t).
6.367679
5.956878
1.068962
return self.operation(other, lambda x, y: int(x and y))
def logical_and(self, other)
logical_and(t) = self(t) and other(t).
6.551274
7.300387
0.897387
return self.operation(other, lambda x, y: int(x or y))
def logical_or(self, other)
logical_or(t) = self(t) or other(t).
6.189737
7.033649
0.880018
return self.operation(other, lambda x, y: int(bool(x) ^ bool(y)))
def logical_xor(self, other)
logical_xor(t) = self(t) ^ other(t).
4.470946
4.776977
0.935936
result = [] for filename in glob.iglob(pattern): print('reading', filename, file=sys.stderr) ts = traces.TimeSeries.from_csv( filename, time_column=0, time_transform=parse_iso_datetime, value_column=1, value_transform=int, default=0, ) ts.compact() result.append(ts) return result
def read_all(pattern='data/lightbulb-*.csv')
Read all of the CSVs in a directory matching the filename pattern as TimeSeries.
3.955646
3.602389
1.098062
filename = os.path.join("traces", "__init__.py") result = None with open(filename) as stream: for line in stream: if key in line: result = line.split('=')[-1].strip().replace("'", "") # throw error if version isn't in __init__ file if result is None: raise ValueError('must define %s in %s' % (key, filename)) return result
def read_init(key)
Parse the package __init__ file to find a variable so that it's not in multiple places.
3.8416
3.620551
1.061054
dependencies = [] filepath = os.path.join('requirements', filename) with open(filepath, 'r') as stream: for line in stream: package = line.strip().split('#')[0].strip() if package and package.split(' ')[0] != '-r': dependencies.append(package) return dependencies
def read_dependencies(filename)
Read in the dependencies from the virtualenv requirements file.
2.630998
2.467468
1.066274
plugin_obj = self.__plugins[plugin["id"]] instance_obj = (self.__instances[instance["id"]] if instance is not None else None) result = pyblish.plugin.process( plugin=plugin_obj, context=self._context, instance=instance_obj, action=action) return formatting.format_result(result)
def process(self, plugin, instance=None, action=None)
Given JSON objects from client, perform actual processing Arguments: plugin (dict): JSON representation of plug-in to process instance (dict, optional): JSON representation of Instance to be processed. action (str, optional): Id of action to process
4.630403
4.463447
1.037405
self._count += 1
func = getattr(self, method)
try:
    return func(*params)
except Exception:
    # print the traceback locally, then re-raise so the RPC layer
    # still sees the original exception (bare `raise` preserves it)
    traceback.print_exc()
    raise
def _dispatch(self, method, params)
Customise exception handling
3.678734
3.408575
1.079259
if "context" in kwargs: kwargs["context"] = self._context if "instance" in kwargs: kwargs["instance"] = self.__instances[kwargs["instance"]] if "plugin" in kwargs: kwargs["plugin"] = self.__plugins[kwargs["plugin"]] pyblish.api.emit(signal, **kwargs)
def emit(self, signal, kwargs)
Trigger registered callbacks This method is triggered remotely and run locally. The keywords "instance" and "plugin" are implicitly converted to their corresponding Pyblish objects.
3.097915
2.316117
1.337547
def _validates(cls): validators[version] = cls if u"id" in cls.META_SCHEMA: meta_schemas[cls.META_SCHEMA[u"id"]] = cls return cls return _validates
def validates(version)
Register the decorated validator for a ``version`` of the specification. Registered validators and their meta schemas will be considered when parsing ``$schema`` properties' URIs. :argument str version: an identifier to use as the version's name :returns: a class decorator to decorate the validator with the version
6.586374
4.714239
1.397124
if cls is None: cls = validator_for(schema) cls.check_schema(schema) cls(schema, *args, **kwargs).validate(instance)
def validate(instance, schema, cls=None, *args, **kwargs)
Validate an instance under the given schema. >>> validate([2, 3, 4], {"maxItems" : 2}) Traceback (most recent call last): ... ValidationError: [2, 3, 4] is too long :func:`validate` will first verify that the provided schema is itself valid, since not doing so can lead to less obvious error messages and fail in less obvious or consistent ways. If you know you have a valid schema already or don't care, you might prefer using the :meth:`~IValidator.validate` method directly on a specific validator (e.g. :meth:`Draft4Validator.validate`). :argument instance: the instance to validate :argument schema: the schema to validate with :argument cls: an :class:`IValidator` class that will be used to validate the instance. If the ``cls`` argument is not provided, two things will happen in accordance with the specification. First, if the schema has a :validator:`$schema` property containing a known meta-schema [#]_ then the proper validator will be used. The specification recommends that all schemas contain :validator:`$schema` properties for this reason. If no :validator:`$schema` property is found, the default validator class is :class:`Draft4Validator`. Any other provided positional and keyword arguments will be passed on when instantiating the ``cls``. :raises: :exc:`ValidationError` if the instance is invalid :exc:`SchemaError` if the schema itself is invalid .. rubric:: Footnotes .. [#] known by a validator registered with :func:`validates`
3.048749
6.575139
0.463678
return cls(schema.get(u"id", u""), schema, *args, **kwargs)
def from_schema(cls, schema, *args, **kwargs)
Construct a resolver from a JSON schema object. :argument schema schema: the referring schema :rtype: :class:`RefResolver`
7.518102
6.590774
1.140701
fragment = fragment.lstrip(u"/") parts = unquote(fragment).split(u"/") if fragment else [] for part in parts: part = part.replace(u"~1", u"/").replace(u"~0", u"~") if isinstance(document, Sequence): # Array indexes should be turned into integers try: part = int(part) except ValueError: pass try: document = document[part] except (TypeError, LookupError): raise RefResolutionError( "Unresolvable JSON pointer: %r" % fragment ) return document
def resolve_fragment(self, document, fragment)
Resolve a ``fragment`` within the referenced ``document``.

:argument document: the referent document
:argument str fragment: a URI fragment to resolve within it
3.277275
3.29934
0.993312
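Given a RefResolver instance (e.g. one produced by from_schema above, here named `resolver` for illustration), fragments are resolved JSON-pointer style; a hedged sketch:

document = {"definitions": {"name": {"type": "string"}},
            "items": ["a", "b"]}

resolver.resolve_fragment(document, u"/definitions/name")
# -> {"type": "string"}

resolver.resolve_fragment(document, u"/items/1")  # array index becomes int
# -> "b"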
scheme = urlsplit(uri).scheme if scheme in self.handlers: result = self.handlers[scheme](uri) elif ( scheme in [u"http", u"https"] and requests and getattr(requests.Response, "json", None) is not None ): # Requests has support for detecting the correct encoding of # json over http if callable(requests.Response.json): result = requests.get(uri).json() else: result = requests.get(uri).json else: # Otherwise, pass off to urllib and assume utf-8 result = json.loads(urlopen(uri).read().decode("utf-8")) if self.cache_remote: self.store[uri] = result return result
def resolve_remote(self, uri)
Resolve a remote ``uri``. Does not check the store first, but stores the retrieved document in the store if :attr:`RefResolver.cache_remote` is True. .. note:: If the requests_ library is present, ``jsonschema`` will use it to request the remote ``uri``, so that the correct encoding is detected and used. If it isn't, or if the scheme of the ``uri`` is not ``http`` or ``https``, UTF-8 is assumed. :argument str uri: the URI to resolve :returns: the retrieved document .. _requests: http://pypi.python.org/pypi/requests/
3.782809
3.043025
1.243108
errors = dict()

for test in (test_architecture,
             test_pyqt_availability,
             test_pyblish_availability,
             test_qtconf_availability,
             test_qtconf_correctness,
             test_qt_availability):
    try:
        test()
    except Exception as e:
        errors[test] = e

if not errors:
    print("=" * 78)
    print()
    # NOTE: the original success message was elided from this snippet;
    # a generic placeholder is used so the function remains runnable
    print("  Environment is compatible ({exe})".format(exe=sys.executable))
    print()
    print("=" * 78)
    return True

print("=" * 78)
print()
print(" - Failed")
print()

for test, error in errors.items():  # iteritems() is Python 2 only
    print(test.__name__)
    print("    %s" % error)
    print()

print("=" * 78)

return False
def validate()
Validate compatibility with environment and Pyblish QML
4.023778
3.763576
1.069137
try:
    import pyblish
    import pyblish_qml
    import PyQt5

except ImportError:
    return sys.stderr.write(
        "Run this in a terminal with access to "
        "the Pyblish libraries and PyQt5.\n")

# NOTE: the original batch-file template (a raw string) was elided
# from this snippet; a minimal placeholder with the same format keys
# is sketched here so the function remains runnable
template = r"""@echo off
set PYTHONPATH={pyblish};{pyblish_qml};{PyQt5}
set PATH={python};%PATH%
python -m pyblish_qml
"""

values = {}
for lib in (pyblish, pyblish_qml, PyQt5):
    values[lib.__name__] = os.path.dirname(os.path.dirname(lib.__file__))

values["python"] = os.path.dirname(sys.executable)

with open("run.bat", "w") as f:
    print("Writing %s" % template.format(**values))
    f.write(template.format(**values))
def generate_safemode_windows()
Produce batch file to run QML in safe-mode Usage: $ python -c "import compat;compat.generate_safemode_windows()" $ run.bat
4.546669
4.27071
1.064617
python = ( _state.get("pythonExecutable") or # Support for multiple executables. next(( exe for exe in os.getenv("PYBLISH_QML_PYTHON_EXECUTABLE", "").split(os.pathsep) if os.path.isfile(exe)), None ) or # Search PATH for executables. which("python") or which("python3") ) if not python or not os.path.isfile(python): raise ValueError("Could not locate Python executable.") return python
def find_python()
Search for Python automatically
4.520513
4.443851
1.017251
pyqt5 = ( _state.get("pyqt5") or os.getenv("PYBLISH_QML_PYQT5") ) # If not registered, ask Python for it explicitly # This avoids having to expose PyQt5 on PYTHONPATH # where it may otherwise get picked up by bystanders # such as Python 2. if not pyqt5: try: path = subprocess.check_output([ python, "-c", "import PyQt5, sys;" "sys.stdout.write(PyQt5.__file__)" # Normally, the output is bytes. ], universal_newlines=True) pyqt5 = os.path.dirname(os.path.dirname(path)) except subprocess.CalledProcessError: pass return pyqt5
def find_pyqt5(python)
Search for PyQt5 automatically
5.870824
5.665767
1.036192
def is_exe(fpath):
    return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

for path in os.environ["PATH"].split(os.pathsep):
    for ext in os.getenv("PATHEXT", "").split(os.pathsep):
        fname = program + ext.lower()
        abspath = os.path.join(path.strip('"'), fname)

        if is_exe(abspath):
            return abspath

return None
def which(program)
Locate `program` in PATH Arguments: program (str): Name of program, e.g. "python"
2.168101
2.196939
0.986873
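A minimal usage sketch; the result depends on the machine's PATH and is None when nothing is found:

which("python")
# -> e.g. "/usr/bin/python" on many Unix systems, or
#    "C:\\Python39\\python.exe" on Windows (found via PATHEXT)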
def _listen():
    HEADER = "pyblish-qml:popen.request"

    for line in iter(self.popen.stdout.readline, b""):

        if six.PY3:
            line = line.decode("utf8")

        try:
            response = json.loads(line)

        except Exception:
            # This must be a regular message.
            sys.stdout.write(line)

        else:
            if (hasattr(response, "get") and
                    response.get("header") == HEADER):

                payload = response["payload"]
                args = payload["args"]

                func_name = payload["name"]

                wrapper = _state.get("dispatchWrapper",
                                     default_wrapper)

                func = getattr(self.service, func_name)
                result = wrapper(func, *args)  # blocks until finished

                # Note(marcus): This is where we wait for the host to
                # finish. Technically, we could kill the GUI at this
                # point which would make the following commands throw
                # an exception. However, no host is capable of killing
                # the GUI whilst running a command. The host is locked
                # until finished, which means we are guaranteed to
                # always respond.

                data = json.dumps({
                    "header": "pyblish-qml:popen.response",
                    "payload": result
                })

                if six.PY3:
                    data = data.encode("ascii")

                self.popen.stdin.write(data + b"\n")
                self.popen.stdin.flush()

            else:
                # In the off chance that a message
                # was successfully decoded as JSON,
                # but *wasn't* a request, just print it.
                sys.stdout.write(line)

if not self.listening:
    self._start_pulse()

    if self.modal:
        _listen()
    else:
        thread = threading.Thread(target=_listen)
        thread.daemon = True
        thread.start()

self.listening = True
def listen(self)
Listen to both stdout and stderr We'll want messages of a particular origin and format to cause QML to perform some action. Other messages are simply forwarded, as they are expected to be plain print or error messages.
5.138193
4.947649
1.038512