def split_node_rq(
node: AbstractNode,
left_edges: List[Edge],
right_edges: List[Edge],
left_name: Optional[Text] = None,
right_name: Optional[Text] = None,
edge_name: Optional[Text] = None,
) -> Tuple[AbstractNode, AbstractNode]:
"""Split a `node` using RQ (reversed QR) decomposition.
Let :math:`M` be the matrix created by
flattening `left_edges` and `right_edges` into 2 axes.
Let :math:`QR = M^*` be the QR Decomposition of :math:`M^*`.
This will split the network into 2 nodes.
The left node's tensor will be :math:`R^*` (a lower triangular matrix)
and the right node's tensor will be :math:`Q^*` (an orthonormal matrix).
Args:
node: The node you want to split.
left_edges: The edges you want connected to the new left node.
right_edges: The edges you want connected to the new right node.
left_name: The name of the new left node. If `None`, a name will be
generated automatically.
right_name: The name of the new right node. If `None`, a name will be
generated automatically.
edge_name: The name of the new `Edge` connecting the new left and
right node. If `None`, a name will be generated automatically.
Returns:
A tuple containing:
left_node:
A new node that connects to all of the `left_edges`.
Its underlying tensor is :math:`R^*`
right_node:
A new node that connects to all of the `right_edges`.
Its underlying tensor is :math:`Q^*`
Raises:
AttributeError: If `node` has no backend attribute
"""
if not hasattr(node, 'backend'):
raise AttributeError('Node {} of type {} has no `backend`'.format(
node, type(node)))
if node.axis_names and edge_name:
left_axis_names = []
right_axis_names = [edge_name]
for edge in left_edges:
left_axis_names.append(node.axis_names[edge.axis1] if edge.node1 is node
else node.axis_names[edge.axis2])
for edge in right_edges:
right_axis_names.append(node.axis_names[edge.axis1] if edge.node1 is node
else node.axis_names[edge.axis2])
left_axis_names.append(edge_name)
else:
left_axis_names = None
right_axis_names = None
backend = node.backend
transp_tensor = node.tensor_from_edge_order(left_edges + right_edges)
r, q = backend.rq(transp_tensor, len(left_edges))
left_node = Node(r,
name=left_name,
axis_names=left_axis_names,
backend=backend)
left_axes_order = [
edge.axis1 if edge.node1 is node else edge.axis2 for edge in left_edges
]
for i, edge in enumerate(left_edges):
left_node.add_edge(edge, i)
edge.update_axis(left_axes_order[i], node, i, left_node)
right_node = Node(q,
name=right_name,
axis_names=right_axis_names,
backend=backend)
right_axes_order = [
edge.axis1 if edge.node1 is node else edge.axis2 for edge in right_edges
]
for i, edge in enumerate(right_edges):
# i + 1 to account for the new edge.
right_node.add_edge(edge, i + 1)
edge.update_axis(right_axes_order[i], node, i + 1, right_node)
connect(left_node.edges[-1], right_node.edges[0], name=edge_name)
node.fresh_edges(node.axis_names)
return left_node, right_node
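
# Usage sketch (illustrative only, not part of the library source; assumes the
# public `tensornetwork` namespace `tn` re-exporting `split_node_rq` and the
# numpy backend). A rank-3 node is split into a lower-triangular part and an
# orthonormal part joined by a new bond edge.
def _example_split_node_rq():
  import numpy as np
  import tensornetwork as tn
  node = tn.Node(np.random.randn(2, 3, 4), backend="numpy")
  left, right = tn.split_node_rq(
      node, left_edges=[node[0], node[1]], right_edges=[node[2]])
  # `left` keeps the two left edges plus the new bond edge (last axis);
  # `right` keeps the new bond edge (first axis) plus the remaining edge.
  assert left.tensor.shape[:2] == (2, 3)
  assert right.tensor.shape[-1] == 4
  return left, right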

def split_node_full_svd(
node: AbstractNode,
left_edges: List[Edge],
right_edges: List[Edge],
max_singular_values: Optional[int] = None,
max_truncation_err: Optional[float] = None,
relative: Optional[bool] = False,
left_name: Optional[Text] = None,
middle_name: Optional[Text] = None,
right_name: Optional[Text] = None,
left_edge_name: Optional[Text] = None,
right_edge_name: Optional[Text] = None,
) -> Tuple[AbstractNode, AbstractNode, AbstractNode, Tensor]:
"""Split a node by doing a full singular value decomposition.
Let :math:`M` be the matrix created by
flattening `left_edges` and `right_edges` into 2 axes.
Let :math:`U S V^* = M` be the Singular Value Decomposition of :math:`M`.
The leftmost node will be the :math:`U` tensor of the SVD, the middle node is
the diagonal matrix of the singular values, ordered largest to smallest,
and the rightmost node will be the :math:`V^*` tensor of the SVD.
The singular value decomposition is truncated if `max_singular_values` or
`max_truncation_err` is not `None`.
The truncation error is the 2-norm of the vector of truncated singular
values. If only `max_truncation_err` is set, as many singular values will
be truncated as possible while maintaining:
`norm(truncated_singular_values) <= max_truncation_err`.
If `relative` is set `True` then `max_truncation_err` is understood
relative to the largest singular value.
If only `max_singular_values` is set, the number of singular values kept
will be `min(max_singular_values, number_of_singular_values)`, so that
`max(0, number_of_singular_values - max_singular_values)` are truncated.
If both `max_truncation_err` and `max_singular_values` are set,
`max_singular_values` takes priority: The truncation error may be larger
than `max_truncation_err` if required to satisfy `max_singular_values`.
Args:
node: The node you want to split.
left_edges: The edges you want connected to the new left node.
right_edges: The edges you want connected to the new right node.
max_singular_values: The maximum number of singular values to keep.
max_truncation_err: The maximum allowed truncation error.
relative: Multiply `max_truncation_err` with the largest singular value.
left_name: The name of the new left node. If None, a name will be
generated automatically.
middle_name: The name of the new center node. If `None`, a name will be
generated automatically.
right_name: The name of the new right node. If `None`, a name will be
generated automatically.
left_edge_name: The name of the new left `Edge` connecting
the new left node (:math:`U`) and the new central node (:math:`S`).
If `None`, a name will be generated automatically.
right_edge_name: The name of the new right `Edge` connecting
the new central node (:math:`S`) and the new right node (:math:`V^*`).
If `None`, a name will be generated automatically.
Returns:
A tuple containing:
left_node:
A new node created that connects to all of the `left_edges`.
Its underlying tensor is :math:`U`
singular_values_node:
A new node that has 2 edges connecting `left_node` and `right_node`.
Its underlying tensor is :math:`S`
right_node:
A new node created that connects to all of the `right_edges`.
Its underlying tensor is :math:`V^*`
truncated_singular_values:
The vector of truncated singular values.
Raises:
AttributeError: If `node` has no backend attribute
"""
if not hasattr(node, 'backend'):
raise AttributeError('Node {} of type {} has no `backend`'.format(
node, type(node)))
if node.axis_names and left_edge_name and right_edge_name:
left_axis_names = []
right_axis_names = [right_edge_name]
for edge in left_edges:
left_axis_names.append(node.axis_names[edge.axis1] if edge.node1 is node
else node.axis_names[edge.axis2])
for edge in right_edges:
right_axis_names.append(node.axis_names[edge.axis1] if edge.node1 is node
else node.axis_names[edge.axis2])
left_axis_names.append(left_edge_name)
center_axis_names = [left_edge_name, right_edge_name]
else:
left_axis_names = None
center_axis_names = None
right_axis_names = None
backend = node.backend
transp_tensor = node.tensor_from_edge_order(left_edges + right_edges)
u, s, vh, trun_vals = backend.svd(transp_tensor,
len(left_edges),
max_singular_values,
max_truncation_err,
relative=relative)
left_node = Node(u,
name=left_name,
axis_names=left_axis_names,
backend=backend)
singular_values_node = Node(backend.diagflat(s),
name=middle_name,
axis_names=center_axis_names,
backend=backend)
right_node = Node(vh,
name=right_name,
axis_names=right_axis_names,
backend=backend)
left_axes_order = [
edge.axis1 if edge.node1 is node else edge.axis2 for edge in left_edges
]
for i, edge in enumerate(left_edges):
left_node.add_edge(edge, i)
edge.update_axis(left_axes_order[i], node, i, left_node)
right_axes_order = [
edge.axis1 if edge.node1 is node else edge.axis2 for edge in right_edges
]
for i, edge in enumerate(right_edges):
# i + 1 to account for the new edge.
right_node.add_edge(edge, i + 1)
edge.update_axis(right_axes_order[i], node, i + 1, right_node)
connect(left_node.edges[-1],
singular_values_node.edges[0],
name=left_edge_name)
connect(singular_values_node.edges[1],
right_node.edges[0],
name=right_edge_name)
node.fresh_edges(node.axis_names)
return left_node, singular_values_node, right_node, trun_vals
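
# Usage sketch (illustrative only, not part of the library source; assumes the
# public `tn.split_node_full_svd` export and the numpy backend). The node is
# split into U, a diagonal singular-value node and V^*, keeping only the two
# largest singular values.
def _example_split_node_full_svd():
  import numpy as np
  import tensornetwork as tn
  node = tn.Node(np.random.randn(2, 3, 4), backend="numpy")
  u_node, s_node, vh_node, truncated = tn.split_node_full_svd(
      node,
      left_edges=[node[0], node[1]],
      right_edges=[node[2]],
      max_singular_values=2)
  # `s_node` holds a 2x2 diagonal matrix of the kept singular values;
  # `truncated` holds the discarded singular values.
  return u_node, s_node, vh_node, truncated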

def reachable(
inputs: Union[AbstractNode, Iterable[AbstractNode], Edge, Iterable[Edge]]
) -> Set[AbstractNode]:
"""Computes all nodes reachable from `node` or `edge.node1` by connected
edges.
Args:
inputs: A `AbstractNode`/`Edge` or collection of `AbstractNodes`/`Edges`
Returns:
A set of `AbstractNode` objects that can be reached from `node`
via connected edges.
Raises:
TypeError: If `inputs` contains anything other than `Edge` or `AbstractNode` objects.
"""
if isinstance(inputs, AbstractNode):
inputs = {inputs}
if isinstance(inputs, Edge):
inputs = {inputs.node1} # pytype: disable=attribute-error
processed_inputs = set()
for inp in inputs:
if isinstance(inp, AbstractNode):
processed_inputs |= {inp}
elif isinstance(inp, Edge):
processed_inputs |= {inp.node1}
else:
raise TypeError(f"input to `reachable` has to be an iterable of "
f"Nodes or Edges, got {type(inp)} instead.")
return _reachable(set(processed_inputs))
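
# Usage sketch (illustrative only, not part of the library source; assumes the
# public `tn.reachable` export). Only nodes connected to `a` by edges are
# returned; an isolated node is not.
def _example_reachable():
  import numpy as np
  import tensornetwork as tn
  a = tn.Node(np.ones((2, 2)), backend="numpy")
  b = tn.Node(np.ones((2, 2)), backend="numpy")
  c = tn.Node(np.ones((2, 2)), backend="numpy")  # never connected
  a[1] ^ b[0]
  assert tn.reachable(a) == {a, b}
  assert c not in tn.reachable(a)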

def check_correct(nodes: Iterable[AbstractNode],
check_connections: Optional[bool] = True) -> None:
"""Check if the network defined by `nodes` fulfills necessary consistency
relations.
Args:
nodes: A list of `AbstractNode` objects.
check_connections: Check if the network is connected.
Returns:
`None`
Raises:
ValueError: If the network defined by `nodes` is not
correctly structured.
"""
for node in nodes:
for i, edge in enumerate(node.edges):
if edge.node1 is not node and edge.node2 is not node:
raise ValueError("Edge '{}' does not connect to node '{}'."
"Edge's nodes: '{}', '{}'.".format(
edge, node, edge.node1, edge.node2))
is_edge_node_consistent = False
if edge.node1 is node:
if edge.axis1 == i:
is_edge_node_consistent = True
if edge.node2 is node:
if edge.axis2 == i:
is_edge_node_consistent = True
if not is_edge_node_consistent:
raise ValueError(
"Edge '{}' does not point to '{}' on the correct axis. "
"Edge axes: {}, {}. Node axis: {}.".format(edge, node, edge.axis1,
edge.axis2, i))
if check_connections:
check_connected(nodes)

def check_connected(nodes: Iterable[AbstractNode]) -> None:
"""Check if all nodes in `nodes` are connected.
Args:
nodes: A list of `nodes`.
Returns:
`None`
Raises:
ValueError: If not all nodes in `nodes` are connected.
"""
nodes = list(nodes)
if not set(nodes) <= reachable([nodes[0]]):
raise ValueError("Non-connected graph") |
Return the set of nodes connected to edges. | def get_all_nodes(edges: Iterable[Edge]) -> Set[AbstractNode]:
"""Return the set of nodes connected to edges."""
nodes = set()
for edge in edges:
if edge.node1 is not None:
nodes |= {edge.node1}
if edge.node2 is not None:
nodes |= {edge.node2}
return nodes

def get_all_edges(nodes: Iterable[AbstractNode]) -> Set[Edge]:
"""Return the set of edges of all nodes."""
edges = set()
for node in nodes:
edges |= set(node.edges)
return edges

def get_subgraph_dangling(nodes: Iterable[AbstractNode]) -> Set[Edge]:
"""Get all of the edges that are "relatively dangling" to the given nodes.
A "relatively dangling" edge is an edge that is either actually dangling
or is connected to another node that is outside of the given collection
of `nodes`.
Args:
nodes: A set of nodes.
Returns:
The set of "relatively dangling" edges.
"""
output = set()
for edge in get_all_edges(nodes):
if edge.is_dangling() or not set(edge.get_nodes()) <= set(nodes):
output.add(edge)
return output
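
# Usage sketch (illustrative only, not part of the library source; assumes
# `get_subgraph_dangling` is re-exported as `tn.get_subgraph_dangling`).
# Relative to the subgraph {a}, both the truly dangling edge a[0] and the bond
# to `b` count as dangling; relative to {a, b}, only true dangling edges do.
def _example_get_subgraph_dangling():
  import numpy as np
  import tensornetwork as tn
  a = tn.Node(np.ones((2, 2)), backend="numpy")
  b = tn.Node(np.ones((2, 2)), backend="numpy")
  bond = a[1] ^ b[0]
  assert tn.get_subgraph_dangling({a}) == {a[0], bond}
  assert tn.get_subgraph_dangling({a, b}) == {a[0], b[1]}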

def contract_trace_edges(node: AbstractNode) -> AbstractNode:
"""contract all trace edges of `node`.
Args:
node: A `AbstractNode` object.
Returns:
A new `AbstractNode` obtained from contracting all trace edges.
"""
res = node
for edge in res.edges:
if edge.is_trace():
res = contract_parallel(edge)
break
return res

def reduced_density(traced_out_edges: Iterable[Edge]) -> Tuple[dict, dict]:
"""Constructs the tensor network for a reduced density matrix, if it is pure.
The tensor network connected to `traced_out_edges` is assumed to be a pure
quantum state (a state vector). This modifies the network so that it
describes the reduced density matrix obtained by "tracing out" the specified
edges.
This is done by making a conjugate copy of the original network and
connecting each edge in `traced_out_edges` with its conjugate counterpart.
The edges in `edge_dict` corresponding to `traced_out_edges` will be the
new non-dangling edges connecting the state with its conjugate.
Args:
traced_out_edges: A list of dangling edges.
Returns:
A tuple containing:
node_dict: A dictionary mapping the nodes in the original network to
their conjugate copies.
edge_dict: A dictionary mapping edges in the original network to their
conjugate copies.
"""
if list(filter(lambda x: not x.is_dangling(), traced_out_edges)):
raise ValueError("traced_out_edges must only include dangling edges!")
# Get all reachable nodes.
old_nodes = reachable(get_all_nodes(traced_out_edges))
# Copy and conjugate all reachable nodes.
node_dict, edge_dict = copy(old_nodes, True)
for t_edge in traced_out_edges:
# Add each edge to the copied nodes as new edge.
edge_dict[t_edge] = edge_dict[t_edge] ^ t_edge
return node_dict, edge_dict
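
# Usage sketch (illustrative only, not part of the library source; assumes the
# public `tn.reduced_density` export, `tn.reachable` and the greedy contractor
# `tn.contractors.greedy`). The second site of a two-site pure state is traced
# out and the doubled network is contracted into a single-site density matrix.
def _example_reduced_density():
  import numpy as np
  import tensornetwork as tn
  psi = tn.Node(np.random.randn(2, 2) + 1j * np.random.randn(2, 2),
                backend="numpy")
  kept_edge, traced_edge = psi[0], psi[1]
  node_dict, edge_dict = tn.reduced_density([traced_edge])
  rho = tn.contractors.greedy(
      tn.reachable(psi),
      output_edge_order=[kept_edge, edge_dict[kept_edge]])
  return rho.tensor  # 2x2 (unnormalized) reduced density matrix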

def switch_backend(nodes: Iterable[AbstractNode], new_backend: Text) -> None:
"""Change the backend of the nodes.
This will convert all `node`'s tensors to the `new_backend`'s Tensor type.
Args:
nodes: iterable of nodes
new_backend (str): The new backend.
dtype (datatype): The dtype of the backend. If `None`,
a default dtype according to config.py will be chosen.
Returns:
None
"""
if new_backend == 'symmetric':
if np.all([n.backend.name == 'symmetric' for n in nodes]):
return
raise ValueError("switching to `symmetric` backend not possible")
backend = backend_factory.get_backend(new_backend)
for node in nodes:
if node.backend.name != "numpy":
raise NotImplementedError("Can only switch backends when the current "
"backend is 'numpy'. Current backend "
"is '{}'".format(node.backend))
node.tensor = backend.convert_to_tensor(node.tensor)
node.backend = backend

def get_neighbors(node: AbstractNode) -> List[AbstractNode]:
"""Get all of the neighbors that are directly connected to the given node.
Note: `node` will never be in the returned list, even if `node` has a
trace edge.
Args:
node: A node.
Returns:
All of the neighboring nodes that share an `Edge` with `node`.
"""
neighbors = []
neighbors_set = set()
for edge in node.edges:
if not edge.is_dangling() and not edge.is_trace():
if edge.node1 is node:
if edge.node2 not in neighbors_set:
neighbors.append(edge.node2)
neighbors_set.add(edge.node2)
elif edge.node1 not in neighbors_set:
neighbors.append(edge.node1)
neighbors_set.add(edge.node1)
return neighbors

def nodes_to_json(nodes: List[AbstractNode],
edge_binding: Optional[Dict[str, Union[Edge, Iterable[Edge]]]]
= None) -> str:
"""
Create a JSON string representing the Tensor Network made up of the given
nodes. Nodes and their attributes, edges and their attributes and tensor
values are included.
Tensors are serialized according to the format used by each tensor's backend.
For edges spanning included nodes and excluded nodes the edge attributes are
preserved in the serialization but the connection to the excluded node is
dropped. The original edge is not modified.
Args:
nodes: A list of nodes making up a tensor network.
edge_binding: A dictionary containing {str->edge} bindings. Edges that are
not included in the serialized network are omitted from the dictionary.
Returns:
A string representing the JSON serialized tensor network.
Raises:
TypeError: If an edge_binding dict is passed with non string keys, or non
Edge values.
"""
network_dict = {
'nodes': [],
'edges': [],
}
node_id_dict = {}
edge_id_dict = {}
# Build serialized Nodes
for i, node in enumerate(nodes):
node_id_dict[node] = i
network_dict['nodes'].append({
'id': i,
'attributes': node.to_serial_dict(),
})
edges = get_all_edges(nodes)
# Build serialized edges
for i, edge in enumerate(edges):
edge_id_dict[edge] = i
node_ids = [node_id_dict.get(n) for n in edge.get_nodes()]
attributes = edge.to_serial_dict()
attributes['axes'] = [
a if node_ids[j] is not None else None
for j, a in enumerate(attributes['axes'])
]
edge_dict = {
'id': i,
'node_ids': node_ids,
'attributes': attributes,
}
network_dict['edges'].append(edge_dict)
serial_edge_binding = _build_serial_binding(edge_binding, edge_id_dict)
if serial_edge_binding:
network_dict['edge_binding'] = serial_edge_binding
return json.dumps(network_dict)

def nodes_from_json(json_str: str) -> Tuple[List[AbstractNode],
Dict[str, Tuple[Edge]]]:
"""
Create a tensor network from a JSON string representation of a tensor network.
Args:
json_str: A string representing a JSON serialized tensor network.
Returns:
A list of nodes making up the tensor network.
A dictionary of {str -> (edge,)} bindings. All dictionary values are tuples
of Edges.
"""
network_dict = json.loads(json_str)
nodes = []
node_ids = {}
edge_lookup = {}
edge_binding = {}
for n in network_dict['nodes']:
node = Node.from_serial_dict(n['attributes'])
nodes.append(node)
node_ids[n['id']] = node
for e in network_dict['edges']:
e_nodes = [node_ids.get(n_id) for n_id in e['node_ids']]
axes = e['attributes']['axes']
edge = Edge(node1=e_nodes[0],
axis1=axes[0],
node2=e_nodes[1],
axis2=axes[1],
name=e['attributes']['name'])
edge_lookup[e['id']] = edge
for node, axis in zip(e_nodes, axes):
if node is not None:
node.add_edge(edge, axis, override=True)
for k, v in network_dict.get('edge_binding', {}).items():
for e_id in v:
edge_binding[k] = edge_binding.get(k, ()) + (edge_lookup[e_id],)
return nodes, edge_binding
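
# Usage sketch (illustrative only, not part of the library source; assumes the
# public `tn.nodes_to_json` / `tn.nodes_from_json` exports and the numpy
# backend). A small two-node network is round-tripped through JSON.
def _example_json_roundtrip():
  import numpy as np
  import tensornetwork as tn
  a = tn.Node(np.ones((2, 3)), name="a", backend="numpy")
  b = tn.Node(np.ones((3, 4)), name="b", backend="numpy")
  bond = a[1] ^ b[0]
  json_str = tn.nodes_to_json([a, b], edge_binding={"bond": bond})
  new_nodes, edge_bindings = tn.nodes_from_json(json_str)
  assert len(new_nodes) == 2
  assert "bond" in edge_bindings  # values are tuples of reconstructed edges
  return new_nodes, edge_bindings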

def redirect_edge(edge: Edge, new_node: AbstractNode,
old_node: AbstractNode) -> None:
"""
Redirect `edge` from `old_node` to `new_node`.
Routine updates `new_node` and `old_node`.
`edge` is added to `new_node`, `old_node` gets a
new Edge instead of `edge`.
Args:
edge: An Edge.
new_node: The new `Node` object.
old_node: The old `Node` object.
Returns:
None
Raises:
ValueError: if `edge` does not point to `old_node`.
"""
if not edge.is_trace():
if edge.is_dangling():
if edge.node1 is not old_node:
raise ValueError(f"edge {edge} is not pointing "
f"to old_node {old_node}")
edge.node1 = new_node
axis = edge.axis1
else:
if edge.node1 is old_node:
edge.node1 = new_node
axis = edge.axis1
elif edge.node2 is old_node:
edge.node2 = new_node
axis = edge.axis2
else:
raise ValueError(f"edge {edge} is not pointing "
f"to old_node {old_node}")
new_node.add_edge(edge, axis, True)
new_edge = Edge(old_node, axis)
old_node.add_edge(new_edge, axis, True)
else:
if edge.node1 is not old_node:
raise ValueError(f"edge {edge} is not pointing "
f"to old_node {old_node}")
edge.node1 = new_node
edge.node2 = new_node
axis1 = edge.axis1
axis2 = edge.axis2
new_node.add_edge(edge, axis1, True)
new_node.add_edge(edge, axis2, True)
new_edge = Edge(old_node, axis1, None, old_node, axis2)
old_node.add_edge(new_edge, axis1, True)
old_node.add_edge(new_edge, axis2, True)

def save_nodes(nodes: List[AbstractNode], path: Union[str, BinaryIO]) -> None:
"""Save an iterable of nodes into hdf5 format.
Args:
nodes: An iterable of connected nodes. All nodes have to connect within
`nodes`.
path: path to file where network is saved.
"""
if reachable(nodes) > set(nodes):
raise ValueError(
"Some nodes in `nodes` are connected to nodes not contained in `nodes`."
" Saving not possible.")
if len(set(nodes)) < len(list(nodes)):
raise ValueError(
'Some nodes in `nodes` appear more than once. This is not supported')
#we need to iterate twice and order matters
edges = list(get_all_edges(nodes))
nodes = list(nodes)
old_edge_names = {n: edge.name for n, edge in enumerate(edges)}
old_node_names = {n: node.name for n, node in enumerate(nodes)}
#generate unique names for nodes and edges
#for saving them
for n, node in enumerate(nodes):
node.set_name('node{}'.format(n))
for e, edge in enumerate(edges):
edge.set_name('edge{}'.format(e))
with h5py.File(path, 'w') as net_file:
nodes_group = net_file.create_group('nodes')
node_names_group = net_file.create_group('node_names')
node_names_group.create_dataset(
'names',
dtype=string_type,
data=np.array(list(old_node_names.values()), dtype=object))
edges_group = net_file.create_group('edges')
edge_names_group = net_file.create_group('edge_names')
edge_names_group.create_dataset(
'names',
dtype=string_type,
data=np.array(list(old_edge_names.values()), dtype=object))
for n, node in enumerate(nodes):
node_group = nodes_group.create_group(node.name)
node._save_node(node_group)
for edge in node.edges:
if edge.node1 == node and edge in edges:
edge_group = edges_group.create_group(edge.name)
edge._save_edge(edge_group)
edges.remove(edge)
#name edges and nodes back to their original names
for n, node in enumerate(nodes):
nodes[n].set_name(old_node_names[n])
for n, edge in enumerate(edges):
edges[n].set_name(old_edge_names[n])

def load_nodes(path: str) -> List[AbstractNode]:
"""Load a set of nodes from disk.
Args:
path: path to file where network is saved.
Returns:
An iterable of `Node` objects
"""
nodes_list = []
edges_list = []
with h5py.File(path, 'r') as net_file:
nodes = list(net_file["nodes"].keys())
node_names = {
'node{}'.format(n): v for n, v in enumerate(
net_file["node_names"]['names'].asstr(STRING_ENCODING)[()])#pylint: disable=no-member
}
edge_names = {
'edge{}'.format(n): v for n, v in enumerate(
net_file["edge_names"]['names'].asstr(STRING_ENCODING)[()])#pylint: disable=no-member
}
edges = list(net_file["edges"].keys())
for node_name in nodes:
node_data = net_file["nodes/" + node_name]
node_type = get_component(node_data['type'].asstr()[()])
nodes_list.append(node_type._load_node(node_data=node_data))
nodes_dict = {node.name: node for node in nodes_list}
for edge in edges:
edge_data = net_file["edges/" + edge]
edges_list.append(Edge._load_edge(edge_data, nodes_dict))
for edge in edges_list:
edge.set_name(edge_names[edge.name])
for node in nodes_list:
node.set_name(node_names[node.name])
return nodes_list
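
# Usage sketch (illustrative only, not part of the library source; assumes the
# public `tn.save_nodes` / `tn.load_nodes` exports, the numpy backend and an
# installed h5py). A small connected network is written to hdf5 and read back.
def _example_save_and_load_nodes():
  import os
  import tempfile
  import numpy as np
  import tensornetwork as tn
  a = tn.Node(np.random.randn(2, 3), name="a", backend="numpy")
  b = tn.Node(np.random.randn(3, 2), name="b", backend="numpy")
  a[1] ^ b[0]
  with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "network.h5")
    tn.save_nodes([a, b], path)
    loaded = tn.load_nodes(path)
  assert {n.name for n in loaded} == {"a", "b"}
  return loaded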

def from_topology(topology, tensors, backend=None):
"""Create and connect new `tn.Node`s by the given einsum-like topology.
Example:
```
a, b, c = tn.from_topology("xy,yz,zx", [a, b, c])
```
Args:
topology: A string that defines the topology. Should be like
the left side of an einsum expression.
tensors: The tensors needed to create the nodes.
Returns:
A list of Nodes.
"""
edge_dict = {}
nodes = []
split_list = topology.split(",")
if len(split_list) != len(tensors):
raise ValueError("topology and number of tensors is mismatched")
for local_axes, tensor in zip(split_list, tensors):
local_axes_list = list(local_axes)
if len(local_axes_list) != len(tensor.shape):
raise ValueError(f"{local_axes} does not match shape {tensor.shape}")
new_node = Node(tensor, axis_names=local_axes_list, backend=backend)
for c in local_axes:
if c in edge_dict:
edge_dict[c] = edge_dict[c] ^ new_node[c]
else:
edge_dict[c] = new_node[c]
nodes.append(new_node)
return nodes
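
# Usage sketch (illustrative only, not part of the library source; assumes the
# public `tn.from_topology` export and the greedy contractor
# `tn.contractors.greedy`). Every index label appears twice, so contracting
# the resulting network reproduces the full einsum "xy,yz,zx->".
def _example_from_topology():
  import numpy as np
  import tensornetwork as tn
  a = np.random.randn(2, 3)
  b = np.random.randn(3, 4)
  c = np.random.randn(4, 2)
  nodes = tn.from_topology("xy,yz,zx", [a, b, c], backend="numpy")
  result = tn.contractors.greedy(nodes)
  np.testing.assert_allclose(result.tensor,
                             np.einsum("xy,yz,zx->", a, b, c))
  return result.tensor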

def jit(fun: Callable,
backend: Union[Text, AbstractBackend] = None,
backend_argnum: Optional[int] = None,
static_argnums: Union[int, Iterable[int]] = (), device=None,
xla_backend: Optional[str] = None) -> Callable:
"""
Return a jitted or graph-compiled version of `fun`
for JAX backend. For all other backends returns `fun`.
Args:
fun: Callable
backend: The backend.
backend_argnum: Labels the argument of the decorated function which
specifies the backend.
This argument will be treated
as static in the sense of static_argnums.
If backend_argnum is specified, backend must be None.
static_argnums: Label the arguments which will be statically compiled
against.
xla_backend: Specifies the backend ('gpu', 'cpu'...) against which
XLA is to run.
donate_argnums: Labels arguments that Jit is allowed to overwrite.
args: Arguments to `fun`.
kwargs: Keyword arguments to `fun`.
Raises:
ValueError: If backend_argnum is specified but backend is not None.
If backend_argnum is specified but the corresponding
argument neither is nor labels a backend.
Returns:
Callable: jitted/graph-compiled version of `fun`, or just `fun`.
"""
argnum_mode = False
if backend_argnum is not None:
if backend is not None:
raise ValueError("backend must be None if backend_argnum is specified.")
argnum_mode = True
static_argnums = tuple(list(static_argnums) + [backend_argnum,])
if not argnum_mode:
if backend is None:
backend = backend_contextmanager.get_default_backend()
backend_obj = backends.backend_factory.get_backend(backend)
@functools.wraps(fun)
def wrapper(*args, **kwargs):
jitted = backend_obj.jit(fun, static_argnums=static_argnums,
device=device, backend=xla_backend)
return jitted(*args, **kwargs)
else:
@functools.wraps(fun)
def wrapper(*args, **kwargs):
backend = args[backend_argnum]
try:
backend_obj = backends.backend_factory.get_backend(backend)
except ValueError as error:
errstr = (f"backend_argnum={backend_argnum} was specified"
f"but the corresponding argument {args[backend_argnum]}"
f"did not specify a backend.")
raise ValueError(errstr) from error
jitted = backend_obj.jit(fun, static_argnums=static_argnums,
device=device, backend=xla_backend)
return jitted(*args, **kwargs)
return wrapper

def jittest_init(backend):
"""
Helper to initialize data for the other Jit tests.
"""
backend_obj = backends.backend_factory.get_backend(backend)
def fun(x, A, y):
return backend_obj.multiply(x, backend_obj.multiply(A, y))
x = backend_obj.randn((4,), seed=11)
y = backend_obj.randn((4,), seed=11)
A = backend_obj.randn((4, 4), seed=11)
return (x, y, A, fun)

def test_jit(backend):
"""
Tests that tn.jit gives the right answer.
"""
x, y, A, fun = jittest_init(backend)
fun_jit = tensornetwork.jit(fun, backend=backend)
res1 = fun(x, A, y)
res2 = fun_jit(x, A, y)
np.testing.assert_allclose(res1, res2)

def test_jit_ampersand(backend):
"""
Tests that tn.jit gives the right answer when used as a decorator.
"""
x, y, A, fun = jittest_init(backend)
@functools.partial(tensornetwork.jit, static_argnums=(3,), backend=backend)
def fun_jit(x, A, y, dummy):
_ = dummy
return fun(x, A, y)
res1 = fun(x, A, y)
res2 = fun_jit(x, A, y, 2)
np.testing.assert_allclose(res1, res2)

def test_jit_args(backend):
"""
Tests that tn.jit gives the right answer when given extra arguments.
"""
x, y, A, fun = jittest_init(backend)
fun_jit = tensornetwork.jit(fun, backend=backend)
res1 = fun(x, A, y)
res2 = fun_jit(x, A, y)
res3 = fun_jit(x, y=y, A=A)
np.testing.assert_allclose(res1, res2)
np.testing.assert_allclose(res1, res3)

def test_jit_backend_argnum_is_string(backend):
"""
Tests that tn.jit gives the right answer when the backend is supplied
via backend_argnum as a string.
"""
x, y, A, fun = jittest_init(backend)
@functools.partial(tensornetwork.jit, backend_argnum=3)
def fun_jit(x, A, y, the_backend):
_ = the_backend
return fun(x, A, y)
res1 = fun(x, A, y)
res2 = fun_jit(x, A, y, backend)
np.testing.assert_allclose(res1, res2)

def test_jit_backend_argnum_is_obj(backend):
"""
Tests that tn.jit gives the right answer when the backend is supplied
via backend_argnum as a backend object.
"""
x, y, A, fun = jittest_init(backend)
@functools.partial(tensornetwork.jit, backend_argnum=3)
def fun_jit(x, A, y, the_backend):
_ = the_backend
return fun(x, A, y)
res1 = fun(x, A, y)
backend_obj = backends.backend_factory.get_backend(backend)
res2 = fun_jit(x, A, y, backend_obj)
np.testing.assert_allclose(res1, res2)

def test_jit_backend_argnum_invalid(backend):
"""
Tests that tn.jit raises ValueError when backend_argnum points to something
other than a backend.
"""
x, y, A, fun = jittest_init(backend)
with pytest.raises(ValueError):
@functools.partial(tensornetwork.jit, backend_argnum=3)
def fun_jit(x, A, y, the_backend):
_ = the_backend
return fun(x, A, y)
_ = fun_jit(x, A, y, 99)

def test_jit_backend_and_backend_obj_raises_error(backend):
"""
Tests that tn.jit raises ValueError when backend_argnum and backend
are both specified.
"""
x, y, A, fun = jittest_init(backend)
with pytest.raises(ValueError):
@functools.partial(tensornetwork.jit, backend_argnum=3, backend=backend)
def fun_jit(x, A, y, the_backend):
_ = the_backend
return fun(x, A, y)
_ = fun_jit(x, A, y, backend)

def test_eigsh_free_fermions(N, dtype, param_type):
"""
Find the lowest eigenvalues and eigenvectors
of a 1d free-fermion Hamiltonian on N sites.
The dimension of the hermitian matrix is
(2**N, 2**N).
"""
backend = jax_backend.JaxBackend(precision=jax.lax.Precision.HIGHEST)
np.random.seed(10)
pot, hop = get_ham_params(dtype, N, param_type)
P = jnp.diag(np.array([0, -1])).astype(dtype)
c = jnp.array([[0, 1], [0, 0]], dtype)
n = c.T @ c
eye = jnp.eye(2, dtype=dtype)
neye = jnp.kron(n, eye)
eyen = jnp.kron(eye, n)
ccT = jnp.kron(c @ P, c.T)
cTc = jnp.kron(c.T, c)
@jax.jit
def matvec(vec):
x = vec.reshape((4, 2**(N - 2)))
out = jnp.zeros(x.shape, x.dtype)
t1 = neye * pot[0] + eyen * pot[1] / 2
t2 = cTc * hop[0] - ccT * jnp.conj(hop[0])
out += jnp.einsum('ij,ki -> kj', x, t1 + t2)
x = x.reshape((2, 2**(N - 1))).transpose((1, 0)).reshape((4, 2**(N - 2)))
out = out.reshape((2, 2**(N - 1))).transpose((1, 0)).reshape(
(4, 2**(N - 2)))
for site in range(1, N - 2):
t1 = neye * pot[site] / 2 + eyen * pot[site + 1] / 2
t2 = cTc * hop[site] - ccT * jnp.conj(hop[site])
out += jnp.einsum('ij,ki -> kj', x, t1 + t2)
x = x.reshape((2, 2**(N - 1))).transpose((1, 0)).reshape((4, 2**(N - 2)))
out = out.reshape((2, 2**(N - 1))).transpose((1, 0)).reshape(
(4, 2**(N - 2)))
t1 = neye * pot[N - 2] / 2 + eyen * pot[N - 1]
t2 = cTc * hop[N - 2] - ccT * jnp.conj(hop[N - 2])
out += jnp.einsum('ij,ki -> kj', x, t1 + t2)
x = x.reshape((2, 2**(N - 1))).transpose((1, 0)).reshape((4, 2**(N - 2)))
out = out.reshape((2, 2**(N - 1))).transpose((1, 0)).reshape(
(4, 2**(N - 2)))
x = x.reshape((2, 2**(N - 1))).transpose((1, 0)).reshape(2**N)
out = out.reshape((2, 2**(N - 1))).transpose((1, 0)).reshape(2**N)
return out.ravel()
H = np.diag(pot) + np.diag(hop.conj(), 1) + np.diag(hop, -1)
single_particle_energies = np.linalg.eigh(H)[0]
many_body_energies = []
for n in range(2**N):
many_body_energies.append(
np.sum(single_particle_energies[np.nonzero(
np.array(list(bin(n)[2:]), dtype=int)[::-1])[0]]))
many_body_energies = np.sort(many_body_energies)
init = jnp.array(np.random.randn(2**N)).astype(dtype)
init /= jnp.linalg.norm(init)
ncv = 20
numeig = 3
which = 'SA'
tol = 1E-10
maxiter = 30
atol = 1E-8
eta, _ = backend.eigsh(
A=matvec,
args=[],
initial_state=init,
num_krylov_vecs=ncv,
numeig=numeig,
which=which,
tol=tol,
maxiter=maxiter)
np.testing.assert_allclose(
eta, many_body_energies[:numeig], atol=atol, rtol=atol)

def _generate_jitted_eigsh_lanczos(jax: types.ModuleType) -> Callable:
"""
Helper function to generate the jitted lanczos function used
in JaxBackend.eigsh_lanczos. The function `jax_lanczos`
returned by this higher-order function has the following
call signature:
```
eigenvalues, eigenvectors = jax_lanczos(matvec:Callable,
arguments: List[Tensor],
init: Tensor,
ncv: int,
neig: int,
landelta: float,
reortho: bool)
```
`matvec`: A callable implementing the matrix-vector product of a
linear operator. `arguments`: Arguments to `matvec` additional to
an input vector. `matvec` will be called as `matvec(init, *args)`.
`init`: An initial input vector to `matvec`.
`ncv`: Number of krylov iterations (i.e. dimension of the Krylov space).
`neig`: Number of eigenvalue-eigenvector pairs to be computed.
`landelta`: Convergence parameter: if the norm of the current Lanczos vector
`reortho`: If `True`, reorthogonalize all krylov vectors at each step.
This should be used if `neig>1`.
Args:
jax: The `jax` module.
Returns:
Callable: A jitted function that does a lanczos iteration.
"""
JaxPrecisionType = type(jax.lax.Precision.DEFAULT)
@functools.partial(jax.jit, static_argnums=(3, 4, 5, 6, 7))
def jax_lanczos(matvec: Callable, arguments: List, init: jax.ShapedArray,
ncv: int, neig: int, landelta: float, reortho: bool,
precision: JaxPrecisionType) -> Tuple[jax.ShapedArray, List]:
"""
Lanczos iteration for symmetric eigenvalue problems. If reortho = False,
the Krylov basis is constructed without explicit re-orthogonalization.
In infinite precision, all Krylov vectors would be orthogonal. Due to
finite precision arithmetic, orthogonality is usually quickly lost.
For reortho=True, the Krylov basis is explicitly reorthogonalized.
Args:
matvec: A callable implementing the matrix-vector product of a
linear operator.
arguments: Arguments to `matvec` additional to an input vector.
`matvec` will be called as `matvec(init, *args)`.
init: An initial input vector to `matvec`.
ncv: Number of krylov iterations (i.e. dimension of the Krylov space).
neig: Number of eigenvalue-eigenvector pairs to be computed.
landelta: Convergence parameter: if the norm of the current Lanczos vector
falls below `landelta`, iteration is stopped.
reortho: If `True`, reorthogonalize all krylov vectors at each step.
This should be used if `neig>1`.
precision: jax.lax.Precision type used in jax.numpy.vdot
Returns:
jax.ShapedArray: Eigenvalues
List: Eigenvectors
int: Number of iterations
"""
shape = init.shape
dtype = init.dtype
iterative_classical_gram_schmidt = _iterative_classical_gram_schmidt(jax)
mask_slice = (slice(ncv + 2), ) + (None,) * len(shape)
def scalar_product(a, b):
i1 = list(range(len(a.shape)))
i2 = list(range(len(b.shape)))
return jax.numpy.tensordot(a.conj(), b, (i1, i2), precision=precision)
def norm(a):
return jax.numpy.sqrt(scalar_product(a, a))
def body_lanczos(vals):
krylov_vectors, alphas, betas, i = vals
previous_vector = krylov_vectors[i]
def body_while(vals):
pv, kv, _ = vals
pv = iterative_classical_gram_schmidt(
pv, (i > jax.numpy.arange(ncv + 2))[mask_slice] * kv, precision)[0]
return [pv, kv, False]
def cond_while(vals):
return vals[2]
previous_vector, krylov_vectors, _ = jax.lax.while_loop(
cond_while, body_while,
[previous_vector, krylov_vectors, reortho])
beta = norm(previous_vector)
normalized_vector = previous_vector / beta
Av = matvec(normalized_vector, *arguments)
alpha = scalar_product(normalized_vector, Av)
alphas = alphas.at[i - 1].set(alpha)
betas = betas.at[i].set(beta)
def while_next(vals):
Av, _ = vals
res = Av - normalized_vector * alpha - krylov_vectors[i - 1] * beta
return [res, False]
def cond_next(vals):
return vals[1]
next_vector, _ = jax.lax.while_loop(
cond_next, while_next,
[Av, jax.numpy.logical_not(reortho)])
next_vector = jax.numpy.reshape(next_vector, shape)
krylov_vectors = krylov_vectors.at[i].set(normalized_vector)
krylov_vectors = krylov_vectors.at[i + 1].set(next_vector)
return [krylov_vectors, alphas, betas, i + 1]
def cond_fun(vals):
betas, i = vals[-2], vals[-1]
norm = betas[i - 1]
return jax.lax.cond(i <= ncv, lambda x: x[0] > x[1], lambda x: False,
[norm, landelta])
# note: ncv + 2 because the first vector is all zeros, and the
# last is the unnormalized residual.
krylov_vecs = jax.numpy.zeros((ncv + 2,) + shape, dtype=dtype)
# NOTE (mganahl): initial vector is normalized inside the loop
krylov_vecs = krylov_vecs.at[1].set(init)
# betas are the upper and lower diagonal elements
# of the projected linear operator
# the first two beta-values can be discarded
# set betas[0] to 1.0 for initialization of loop
# betas[2] is set to the norm of the initial vector.
betas = jax.numpy.zeros(ncv + 1, dtype=dtype)
betas = betas.at[0].set(1.0)
# diagonal elements of the projected linear operator
alphas = jax.numpy.zeros(ncv, dtype=dtype)
initvals = [krylov_vecs, alphas, betas, 1]
krylov_vecs, alphas, betas, numits = jax.lax.while_loop(
cond_fun, body_lanczos, initvals)
# FIXME (mganahl): if the while_loop stops early at iteration i, alphas
# and betas are 0.0 at positions n >= i - 1. eigh will then wrongly give
# degenerate eigenvalues 0.0. JAX does currently not support
# dynamic slicing with variable slice sizes, so these beta values
# can't be truncated. Thus, if numeig >= i - 1, jitted_lanczos returns
# a set of spurious eigen vectors and eigen values.
# If algebraically small EVs are desired, one can initialize `alphas` with
# large positive values, thus pushing the spurious eigenvalues further
# away from the desired ones (similar for algebraically large EVs)
#FIXME: replace with eigh_banded once JAX supports it
A_tridiag = jax.numpy.diag(alphas) + jax.numpy.diag(
betas[2:], 1) + jax.numpy.diag(jax.numpy.conj(betas[2:]), -1)
eigvals, U = jax.numpy.linalg.eigh(A_tridiag)
eigvals = eigvals.astype(dtype)
# expand eigenvectors in krylov basis
def body_vector(i, vals):
krv, unitary, vectors = vals
dim = unitary.shape[1]
n, m = jax.numpy.divmod(i, dim)
vectors = jax.ops.index_add(vectors, jax.ops.index[n, :],
krv[m + 1] * unitary[m, n])
return [krv, unitary, vectors]
_vectors = jax.numpy.zeros((neig,) + shape, dtype=dtype)
_, _, vectors = jax.lax.fori_loop(0, neig * (krylov_vecs.shape[0] - 1),
body_vector,
[krylov_vecs, U, _vectors])
return jax.numpy.array(eigvals[0:neig]), [
vectors[n] / norm(vectors[n]) for n in range(neig)
], numits
return jax_lanczos

def _generate_lanczos_factorization(jax: types.ModuleType) -> Callable:
"""
Helper function to generate a jitted function that
computes a lanczos factorization of a linear operator.
Returns:
Callable: A jitted function that does a lanczos factorization.
"""
JaxPrecisionType = type(jax.lax.Precision.DEFAULT)
@functools.partial(jax.jit, static_argnums=(6, 7, 8, 9))
def _lanczos_fact(
matvec: Callable, args: List, v0: jax.ShapedArray,
Vm: jax.ShapedArray, alphas: jax.ShapedArray, betas: jax.ShapedArray,
start: int, num_krylov_vecs: int, tol: float, precision: JaxPrecisionType
):
"""
Compute an m-step lanczos factorization of `matvec`, with
m <=`num_krylov_vecs`. The factorization will
do at most `num_krylov_vecs` steps, and terminate early
if an invariant subspace is encountered. The returned arrays
`alphas`, `betas` and `Vm` will satisfy the Lanczos recurrence relation
```
matrix @ Vm - Vm @ Hm - fm * em = 0
```
with `matrix` the matrix representation of `matvec`,
`Hm = jnp.diag(alphas) + jnp.diag(betas, -1) + jnp.diag(betas.conj(), 1)`
`fm=residual * norm`, and `em` a cartesian basis vector of shape
`(1, kv.shape[1])` with `em[0, -1] == 1` and 0 elsewhere.
Note that the caller is responsible for dtype consistency between
the inputs, i.e. dtypes between all input arrays have to match.
Args:
matvec: The matrix vector product.
args: List of arguments to `matvec`.
v0: Initial state to `matvec`.
Vm: An array for storing the krylov vectors. The individual
vectors are stored as columns.
The shape of `krylov_vecs` has to be
(num_krylov_vecs + 1, np.ravel(v0).shape[0]).
alphas: An array for storing the diagonal elements of the reduced
operator.
betas: An array for storing the lower diagonal elements of the
reduced operator.
start: Integer denoting the start position where the first
produced krylov_vector should be inserted into `Vm`
num_krylov_vecs: Number of krylov iterations, should be identical to
`Vm.shape[0] + 1`
tol: Convergence parameter. Iteration is terminated if the norm of a
krylov-vector falls below `tol`.
Returns:
jax.ShapedArray: An array of shape
`(num_krylov_vecs, np.prod(initial_state.shape))` of krylov vectors.
jax.ShapedArray: The diagonal elements of the tridiagonal reduced
operator ("alphas")
jax.ShapedArray: The lower-diagonal elements of the tridiagonal reduced
operator ("betas")
jax.ShapedArray: The unnormalized residual of the Lanczos process.
float: The norm of the residual.
int: The number of performed iterations.
bool: if `True`: iteration hit an invariant subspace.
if `False`: iteration terminated without encountering
an invariant subspace.
"""
shape = v0.shape
iterative_classical_gram_schmidt = _iterative_classical_gram_schmidt(jax)
Z = jax.numpy.linalg.norm(v0)
#only normalize if norm > tol, else return zero vector
v = jax.lax.cond(Z > tol, lambda x: v0 / Z, lambda x: v0 * 0.0, None)
Vm = Vm.at[start, :].set(jax.numpy.ravel(v))
betas = jax.lax.cond(
start > 0,
lambda x: betas.at[start - 1].set(Z),
lambda x: betas, start)
# body of the arnoldi iteration
def body(vals):
Vm, alphas, betas, previous_vector, _, i = vals
Av = matvec(previous_vector, *args)
Av, overlaps = iterative_classical_gram_schmidt(
Av.ravel(),
(i >= jax.numpy.arange(Vm.shape[0]))[:, None] * Vm, precision)
alphas = alphas.at[i].set(overlaps[i])
norm = jax.numpy.linalg.norm(Av)
Av = jax.numpy.reshape(Av, shape)
# only normalize if norm is larger than threshold,
# otherwise return zero vector
Av = jax.lax.cond(norm > tol, lambda x: Av/norm, lambda x: Av * 0.0, None)
Vm, betas = jax.lax.cond(
i < num_krylov_vecs - 1,
lambda x: (Vm.at[i + 1, :].set(Av.ravel()), betas.at[i].set(norm)),
lambda x: (Vm, betas),
None)
return [Vm, alphas, betas, Av, norm, i + 1]
def cond_fun(vals):
# Continue loop while iteration < num_krylov_vecs and norm > tol
norm, iteration = vals[4], vals[5]
counter_done = (iteration >= num_krylov_vecs)
norm_not_too_small = norm > tol
continue_iteration = jax.lax.cond(counter_done, lambda x: False,
lambda x: norm_not_too_small, None)
return continue_iteration
initial_values = [Vm, alphas, betas, v, Z, start]
final_values = jax.lax.while_loop(cond_fun, body, initial_values)
Vm, alphas, betas, residual, norm, it = final_values
return Vm, alphas, betas, residual, norm, it, norm < tol
return _lanczos_fact
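
# Dense numpy sketch (illustrative only, not part of the library source) of
# the Lanczos recurrence that the factorization above is built around:
#   A @ Vm.T == Vm.T @ Hm + outer(fm, em)
# with Hm tridiagonal (alphas on the diagonal, betas on the off-diagonals),
# fm the unnormalized residual and em the last cartesian basis vector.
def _example_dense_lanczos_recurrence(m=5, n=20, seed=0):
  import numpy as np
  rng = np.random.default_rng(seed)
  A = rng.standard_normal((n, n))
  A = (A + A.T) / 2  # symmetric linear operator
  v = rng.standard_normal(n)
  v /= np.linalg.norm(v)
  V = np.zeros((m, n))  # Krylov vectors stored as rows, as in `Vm` above
  alphas = np.zeros(m)
  betas = np.zeros(m - 1)
  V[0] = v
  for j in range(m):
    w = A @ V[j]
    alphas[j] = V[j] @ w
    w = w - alphas[j] * V[j]
    if j > 0:
      w = w - betas[j - 1] * V[j - 1]
    if j < m - 1:
      betas[j] = np.linalg.norm(w)
      V[j + 1] = w / betas[j]
  residual = w  # unnormalized residual after the last step
  Hm = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
  em = np.zeros(m)
  em[-1] = 1.0
  np.testing.assert_allclose(A @ V.T, V.T @ Hm + np.outer(residual, em),
                             atol=1e-10)
  return V, Hm, residual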

def _generate_arnoldi_factorization(jax: types.ModuleType) -> Callable:
"""
Helper function to create a jitted arnoldi factorization.
The function returns a function `_arnoldi_fact` which
performs an m-step arnoldi factorization.
`_arnoldi_fact` computes an m-step arnoldi factorization
of an input callable `matvec`, with m = min(`it`,`num_krylov_vecs`).
`_arnoldi_fact` will do at most `num_krylov_vecs` steps.
`_arnoldi_fact` returns arrays `kv` and `H` which satisfy
the Arnoldi recurrence relation
```
matrix @ Vm - Vm @ Hm - fm * em = 0
```
with `matrix` the matrix representation of `matvec` and
`Vm = jax.numpy.transpose(kv[:it, :])`,
`Hm = H[:it, :it]`, `fm = np.expand_dims(kv[it, :] * H[it, it - 1], 1)`
and `em` a cartesian basis vector of shape `(1, kv.shape[1])`
with `em[0, -1] == 1` and 0 elsewhere.
Note that the caller is responsible for dtype consistency between
the inputs, i.e. dtypes between all input arrays have to match.
Args:
matvec: The matrix vector product. This function has to be wrapped into
`jax.tree_util.Partial`. `matvec` will be called as `matvec(x, *args)`
for an input vector `x`.
args: List of arguments to `matvec`.
v0: Initial state to `matvec`.
Vm: An array for storing the krylov vectors. The individual
vectors are stored as columns. The shape of `krylov_vecs` has to be
(num_krylov_vecs + 1, np.ravel(v0).shape[0]).
H: Matrix of overlaps. The shape has to be
(num_krylov_vecs + 1,num_krylov_vecs + 1).
start: Integer denoting the start position where the first
produced krylov_vector should be inserted into `Vm`
num_krylov_vecs: Number of krylov iterations, should be identical to
`Vm.shape[0] + 1`
tol: Convergence parameter. Iteration is terminated if the norm of a
krylov-vector falls below `tol`.
Returns:
kv: An array of krylov vectors
H: A matrix of overlaps
it: The number of performed iterations.
converged: Whether convergence was achieved.
"""
JaxPrecisionType = type(jax.lax.Precision.DEFAULT)
iterative_classical_gram_schmidt = _iterative_classical_gram_schmidt(jax)
@functools.partial(jax.jit, static_argnums=(5, 6, 7, 8))
def _arnoldi_fact(
matvec: Callable, args: List, v0: jax.ShapedArray,
Vm: jax.ShapedArray, H: jax.ShapedArray, start: int,
num_krylov_vecs: int, tol: float, precision: JaxPrecisionType
) -> Tuple[jax.ShapedArray, jax.ShapedArray, jax.ShapedArray, float, int,
bool]:
"""
Compute an m-step arnoldi factorization of `matvec`, with
m = min(`it`,`num_krylov_vecs`). The factorization will
do at most `num_krylov_vecs` steps. The returned arrays
`kv` and `H` will satisfy the Arnoldi recurrence relation
```
matrix @ Vm - Vm @ Hm - fm * em = 0
```
with `matrix` the matrix representation of `matvec` and
`Vm = jax.numpy.transpose(kv[:it, :])`,
`Hm = H[:it, :it]`, `fm = np.expand_dims(kv[it, :] * H[it, it - 1], 1)`
and `em` a cartesian basis vector of shape `(1, kv.shape[1])`
with `em[0, -1] == 1` and 0 elsewhere.
Note that the caller is responsible for dtype consistency between
the inputs, i.e. dtypes between all input arrays have to match.
Args:
matvec: The matrix vector product.
args: List of arguments to `matvec`.
v0: Initial state to `matvec`.
Vm: An array for storing the krylov vectors. The individual
vectors are stored as columns.
The shape of `krylov_vecs` has to be
(num_krylov_vecs + 1, np.ravel(v0).shape[0]).
H: Matrix of overlaps. The shape has to be
(num_krylov_vecs + 1,num_krylov_vecs + 1).
start: Integer denoting the start position where the first
produced krylov_vector should be inserted into `Vm`
num_krylov_vecs: Number of krylov iterations, should be identical to
`Vm.shape[0] + 1`
tol: Convergence parameter. Iteration is terminated if the norm of a
krylov-vector falls below `tol`.
Returns:
jax.ShapedArray: An array of shape
`(num_krylov_vecs, np.prod(initial_state.shape))` of krylov vectors.
jax.ShapedArray: Upper Hessenberg matrix of shape
`(num_krylov_vecs, num_krylov_vecs)` of the Arnoldi process.
jax.ShapedArray: The unnormalized residual of the Arnoldi process.
float: The norm of the residual.
int: The number of performed iterations.
bool: if `True`: iteration hit an invariant subspace.
if `False`: iteration terminated without encountering
an invariant subspace.
"""
# Note (mganahl): currently unused, but is very convenient to have
# for further development and tests (it's usually more accurate than
# classical gs)
# Call signature:
#```python
# initial_vals = [Av.ravel(), Vm, i, H]
# Av, Vm, _, H = jax.lax.fori_loop(
# 0, i + 1, modified_gram_schmidt_step_arnoldi, initial_vals)
#```
def modified_gram_schmidt_step_arnoldi(j, vals): #pylint: disable=unused-variable
"""
Single step of a modified gram-schmidt orthogonalization.
Substantially more accurate than classical gram schmidt
Args:
j: Integer value denoting the vector to be orthogonalized.
vals: A list of variables:
`vector`: The current vector to be orthogonalized
to all previous ones
`Vm`: jax.array of collected krylov vectors
`n`: integer denoting the column-position of the overlap
<`krylov_vector`|`vector`> within `H`.
Returns:
updated vals.
"""
vector, krylov_vectors, n, H = vals
v = krylov_vectors[j, :]
h = jax.numpy.vdot(v, vector, precision=precision)
H = H.at[j, n].set(h)
vector = vector - h * v
return [vector, krylov_vectors, n, H]
shape = v0.shape
Z = jax.numpy.linalg.norm(v0)
#only normalize if norm > tol, else return zero vector
v = jax.lax.cond(Z > tol, lambda x: v0 / Z, lambda x: v0 * 0.0, None)
Vm = Vm.at[start, :].set(jax.numpy.ravel(v))
H = jax.lax.cond(
start > 0,
lambda x: H.at[x, x - 1].set(Z),
lambda x: H, start)
# body of the arnoldi iteration
def body(vals):
Vm, H, previous_vector, _, i = vals
Av = matvec(previous_vector, *args)
Av, overlaps = iterative_classical_gram_schmidt(
Av.ravel(),
(i >= jax.numpy.arange(Vm.shape[0]))[:, None] *
Vm, precision)
H = H.at[:, i].set(overlaps)
norm = jax.numpy.linalg.norm(Av)
Av = jax.numpy.reshape(Av, shape)
# only normalize if norm is larger than threshold,
# otherwise return zero vector
Av = jax.lax.cond(norm > tol, lambda x: Av/norm, lambda x: Av * 0.0, None)
Vm, H = jax.lax.cond(
i < num_krylov_vecs - 1,
lambda x: (Vm.at[i + 1, :].set(Av.ravel()), H.at[i + 1, i].set(norm)), #pylint: disable=line-too-long
lambda x: (x[0], x[1]),
(Vm, H, Av, i, norm))
return [Vm, H, Av, norm, i + 1]
def cond_fun(vals):
# Continue loop while iteration < num_krylov_vecs and norm > tol
norm, iteration = vals[3], vals[4]
counter_done = (iteration >= num_krylov_vecs)
norm_not_too_small = norm > tol
continue_iteration = jax.lax.cond(counter_done, lambda x: False,
lambda x: norm_not_too_small, None)
return continue_iteration
initial_values = [Vm, H, v, Z, start]
final_values = jax.lax.while_loop(cond_fun, body, initial_values)
Vm, H, residual, norm, it = final_values
return Vm, H, residual, norm, it, norm < tol
return _arnoldi_fact |
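# Hypothetical usage sketch (not part of the library): build a 5-step Arnoldi
# factorization of a small dense matrix with the factory defined above and
# check the recurrence matrix @ Vm = Vm @ Hm + fm * em quoted in its
# docstring. Matrix size, Krylov dimension and tolerances are assumptions
# made for illustration only.
def _example_arnoldi_recurrence():
  import numpy as np
  import jax
  jax.config.update("jax_enable_x64", True)
  arnoldi = _generate_arnoldi_factorization(jax)
  n, m = 8, 5
  np.random.seed(10)
  A = jax.numpy.array(np.random.randn(n, n))
  v0 = jax.numpy.array(np.random.randn(n))
  matvec = jax.tree_util.Partial(lambda x, mat: mat @ x)
  Vm = jax.numpy.zeros((m, n), dtype=A.dtype)
  H = jax.numpy.zeros((m, m), dtype=A.dtype)
  kv, H, res, norm, it, _ = arnoldi(matvec, [A], v0, Vm, H, 0, m, 1e-12,
                                    jax.lax.Precision.HIGHEST)
  it = int(it)
  V = jax.numpy.transpose(kv[:it, :])  # krylov vectors as columns
  # unnormalized residual times the last cartesian basis vector em
  f_em = jax.numpy.outer(res.ravel() * norm, jax.numpy.eye(it)[-1])
  np.testing.assert_allclose(A @ V, V @ H[:it, :it] + f_em, atol=1e-10)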
Helper function to generate a jitted function to do an
implicitly restarted arnoldi factorization of `matvec`. The
returned routine finds the lowest `numeig`
eigenvector-eigenvalue pairs of `matvec`
by alternating between compression and re-expansion of an initial
`num_krylov_vecs`-step Arnoldi factorization.
Note: The caller has to ensure that the dtype of the return value
of `matvec` matches the dtype of the initial state. Otherwise jax
will raise a TypeError.
The function signature of the returned function is
Args:
matvec: A callable representing the linear operator.
args: Arguments to `matvec`. `matvec` is called with
`matvec(x, *args)` with `x` the input array on which
`matvec` should act.
initial_state: A starting vector for the iteration.
num_krylov_vecs: Number of krylov vectors of the arnoldi factorization.
numeig: The number of desired eigenvector-eigenvalue pairs.
which: Which eigenvalues to target. Currently supported: `which = 'LR'` or `which = 'LM'`.
tol: Convergence flag. If the norm of a krylov vector drops below `tol`
the iteration is terminated.
maxiter: Maximum number of (outer) iteration steps.
Returns:
eta, U: Two lists containing eigenvalues and eigenvectors.
Args:
jax: The jax module.
Returns:
Callable: A function performing an implicitly restarted
Arnoldi factorization | def _implicitly_restarted_arnoldi(jax: types.ModuleType) -> Callable:
"""
Helper function to generate a jitted function to do an
implicitly restarted arnoldi factorization of `matvec`. The
returned routine finds the lowest `numeig`
eigenvector-eigenvalue pairs of `matvec`
by alternating between compression and re-expansion of an initial
`num_krylov_vecs`-step Arnoldi factorization.
Note: The caller has to ensure that the dtype of the return value
of `matvec` matches the dtype of the initial state. Otherwise jax
will raise a TypeError.
The function signature of the returned function is
Args:
matvec: A callable representing the linear operator.
args: Arguments to `matvec`. `matvec` is called with
`matvec(x, *args)` with `x` the input array on which
`matvec` should act.
initial_state: A starting vector for the iteration.
num_krylov_vecs: Number of krylov vectors of the arnoldi factorization.
numeig: The number of desired eigenvector-eigenvalue pairs.
which: Which eigenvalues to target. Currently supported: `which = 'LR'` or `which = 'LM'`.
tol: Convergence flag. If the norm of a krylov vector drops below `tol`
the iteration is terminated.
maxiter: Maximum number of (outer) iteration steps.
Returns:
eta, U: Two lists containing eigenvalues and eigenvectors.
Args:
jax: The jax module.
Returns:
Callable: A function performing an implicitly restarted
Arnoldi factorization
"""
JaxPrecisionType = type(jax.lax.Precision.DEFAULT)
arnoldi_fact = _generate_arnoldi_factorization(jax)
@functools.partial(jax.jit, static_argnums=(3, 4, 5, 6, 7, 8))
def implicitly_restarted_arnoldi_method(
matvec: Callable, args: List, initial_state: jax.ShapedArray,
num_krylov_vecs: int, numeig: int, which: Text, tol: float, maxiter: int,
precision: JaxPrecisionType
) -> Tuple[jax.ShapedArray, List[jax.ShapedArray], int]:
"""
Implicitly restarted arnoldi factorization of `matvec`. The routine
finds the lowest `numeig` eigenvector-eigenvalue pairs of `matvec`
by alternating between compression and re-expansion of an initial
`num_krylov_vecs`-step Arnoldi factorization.
Note: The caller has to ensure that the dtype of the return value
of `matvec` matches the dtype of the initial state. Otherwise jax
will raise a TypeError.
NOTE: Under certain circumstances, the routine can return spurious
eigenvalues 0.0: if the Arnoldi iteration terminated early
(after numits < num_krylov_vecs iterations)
and numeig > numits, then spurious 0.0 eigenvalues will be returned.
Args:
matvec: A callable representing the linear operator.
args: Arguments to `matvec`. `matvec` is called with
`matvec(x, *args)` with `x` the input array on which
`matvec` should act.
initial_state: A starting vector for the iteration.
num_krylov_vecs: Number of krylov vectors of the arnoldi factorization.
numeig: The number of desired eigenvector-eigenvalue pairs.
which: Which eigenvalues to target.
Currently supported: `which = 'LR'` (largest real part) and `which = 'LM'` (largest magnitude).
tol: Convergence flag. If the norm of a krylov vector drops below `tol`
the iteration is terminated.
maxiter: Maximum number of (outer) iteration steps.
precision: jax.lax.Precision used within lax operations.
Returns:
jax.ShapedArray: Eigenvalues
List: Eigenvectors
int: Number of inner krylov iterations of the last arnoldi
factorization.
"""
shape = initial_state.shape
dtype = initial_state.dtype
dim = np.prod(shape).astype(np.int32)
num_expand = num_krylov_vecs - numeig
if not numeig <= num_krylov_vecs <= dim:
raise ValueError(f"num_krylov_vecs must be between numeig <="
f" num_krylov_vecs <= dim, got "
f" numeig = {numeig}, num_krylov_vecs = "
f"{num_krylov_vecs}, dim = {dim}.")
if numeig > dim:
raise ValueError(f"number of requested eigenvalues numeig = {numeig} "
f"is larger than the dimension of the operator "
f"dim = {dim}")
# initialize arrays
Vm = jax.numpy.zeros(
(num_krylov_vecs, jax.numpy.ravel(initial_state).shape[0]), dtype=dtype)
Hm = jax.numpy.zeros((num_krylov_vecs, num_krylov_vecs), dtype=dtype)
# perform initial arnoldi factorization
Vm, Hm, residual, norm, numits, ar_converged = arnoldi_fact(
matvec, args, initial_state, Vm, Hm, 0, num_krylov_vecs, tol, precision)
fm = residual.ravel() * norm
# generate needed functions
shifted_QR = _shifted_QR(jax)
check_eigvals_convergence = _check_eigvals_convergence_eig(jax)
get_vectors = _get_vectors(jax)
# sort_fun returns `num_expand` least relevant eigenvalues
# (those to be projected out)
if which == 'LR':
sort_fun = jax.tree_util.Partial(_LR_sort(jax), num_expand)
elif which == 'LM':
sort_fun = jax.tree_util.Partial(_LM_sort(jax), num_expand)
else:
raise ValueError(f"which = {which} not implemented")
it = 1 # we already did one arnoldi factorization
if maxiter > 1:
# cast arrays to correct complex dtype
if Vm.dtype == np.float64:
dtype = np.complex128
elif Vm.dtype == np.float32:
dtype = np.complex64
elif Vm.dtype == np.complex128:
dtype = Vm.dtype
elif Vm.dtype == np.complex64:
dtype = Vm.dtype
else:
raise TypeError(f'dtype {Vm.dtype} not supported')
Vm = Vm.astype(dtype)
Hm = Hm.astype(dtype)
fm = fm.astype(dtype)
def outer_loop(carry):
Hm, Vm, fm, it, numits, ar_converged, _, _, = carry
evals, _ = jax.numpy.linalg.eig(Hm)
shifts, _ = sort_fun(evals)
# perform shifted QR iterations to compress arnoldi factorization
# Note that ||fk|| typically decreases as one iterates the outer loop
# indicating that iram converges.
# ||fk|| = \beta_m in reference above
Vk, Hk, fk = shifted_QR(Vm, Hm, fm, shifts, numeig)
# reset matrices
beta_k = jax.numpy.linalg.norm(fk)
converged = check_eigvals_convergence(beta_k, Hk, tol, numeig)
Vk = Vk.at[numeig:, :].set(0.0)
Hk = Hk.at[numeig:, :].set(0.0)
Hk = Hk.at[:, numeig:].set(0.0)
def do_arnoldi(vals):
Vk, Hk, fk, _, _, _, _ = vals
# restart
Vm, Hm, residual, norm, numits, ar_converged = arnoldi_fact(
matvec, args, jax.numpy.reshape(fk, shape), Vk, Hk, numeig,
num_krylov_vecs, tol, precision)
fm = residual.ravel() * norm
return [Vm, Hm, fm, norm, numits, ar_converged, False]
def cond_arnoldi(vals):
return vals[6]
res = jax.lax.while_loop(cond_arnoldi, do_arnoldi, [
Vk, Hk, fk,
jax.numpy.linalg.norm(fk), numeig, False,
jax.numpy.logical_not(converged)
])
Vm, Hm, fm, norm, numits, ar_converged = res[0:6]
out_vars = [
Hm, Vm, fm, it + 1, numits, ar_converged, converged, norm
]
return out_vars
def cond_fun(carry):
it, ar_converged, converged = carry[3], carry[5], carry[
6]
return jax.lax.cond(
it < maxiter, lambda x: x, lambda x: False,
jax.numpy.logical_not(jax.numpy.logical_or(converged, ar_converged)))
converged = False
carry = [Hm, Vm, fm, it, numits, ar_converged, converged, norm]
res = jax.lax.while_loop(cond_fun, outer_loop, carry)
Hm, Vm = res[0], res[1]
numits, converged = res[4], res[6]
# if `ar_converged` then `norm` is below convergence threshold
# set it to 0.0 in this case to prevent `jnp.linalg.eig` from finding a
# spurious eigenvalue of order `norm`.
Hm = Hm.at[numits, numits - 1].set(
jax.lax.cond(converged, lambda x: Hm.dtype.type(0.0), lambda x: x,
Hm[numits, numits - 1]))
# if the Arnoldi-factorization stopped early (after `numit` iterations)
# before exhausting the allowed size of the Krylov subspace,
# (i.e. `numit` < 'num_krylov_vecs'), set elements
# at positions m, n with m, n >= `numit` to 0.0.
# FIXME (mganahl): under certain circumstances, the routine can still
# return spurious 0 eigenvalues: if arnoldi terminated early
# (after numits < num_krylov_vecs iterations)
# and numeig > numits, then spurious 0.0 eigenvalues will be returned
Hm = (numits > jax.numpy.arange(num_krylov_vecs))[:, None] * Hm * (
numits > jax.numpy.arange(num_krylov_vecs))[None, :]
eigvals, U = jax.numpy.linalg.eig(Hm)
inds = sort_fun(eigvals)[1][:numeig]
vectors = get_vectors(Vm, U, inds, numeig)
return eigvals[inds], [
jax.numpy.reshape(vectors[n, :], shape)
for n in range(numeig)
], numits
return implicitly_restarted_arnoldi_method |
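# Hypothetical usage sketch (not part of the library): use the factory above
# to find the two largest-magnitude eigenvalues of a diagonal operator with
# spectrum 1, 2, ..., 20. The Krylov dimension, tolerance and number of
# restarts are assumptions made for illustration only.
def _example_implicitly_restarted_arnoldi():
  import numpy as np
  import jax
  jax.config.update("jax_enable_x64", True)
  iram = _implicitly_restarted_arnoldi(jax)
  np.random.seed(10)
  diagonal = jax.numpy.array(np.arange(1.0, 21.0))
  v0 = jax.numpy.array(np.random.randn(20))
  matvec = jax.tree_util.Partial(lambda x, d: d * x)  # acts like diag(d)
  eta, _, _ = iram(matvec, [diagonal], v0, 10, 2, 'LM', 1e-8, 30,
                   jax.lax.Precision.HIGHEST)
  np.testing.assert_allclose(np.sort(np.array(eta).real), [19.0, 20.0],
                             atol=1e-4)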
Helper function to generate a jitted function to do an
implicitly restarted lanczos factorization of `matvec`. The
returned routine finds the lowest `numeig`
eigenvector-eigenvalue pairs of `matvec`
by alternating between compression and re-expansion of an initial
`num_krylov_vecs`-step Lanczos factorization.
Note: The caller has to ensure that the dtype of the return value
of `matvec` matches the dtype of the initial state. Otherwise jax
will raise a TypeError.
The function signature of the returned function is
Args:
matvec: A callable representing the linear operator.
args: Arguments to `matvec`. `matvec` is called with
`matvec(x, *args)` with `x` the input array on which
`matvec` should act.
initial_state: A starting vector for the iteration.
num_krylov_vecs: Number of krylov vectors of the lanczos factorization.
numeig: The number of desired eigenvector-eigenvalue pairs.
which: Which eigenvalues to target. Currently supported: `which = 'LA'`,
`which = 'SA'` or `which = 'LM'`.
tol: Convergence flag. If the norm of a krylov vector drops below `tol`
the iteration is terminated.
maxiter: Maximum number of (outer) iteration steps.
Returns:
eta, U: Two lists containing eigenvalues and eigenvectors.
Args:
jax: The jax module.
Returns:
Callable: A function performing an implicitly restarted
Lanczos factorization | def _implicitly_restarted_lanczos(jax: types.ModuleType) -> Callable:
"""
Helper function to generate a jitted function to do an
implicitly restarted lanczos factorization of `matvec`. The
returned routine finds the lowest `numeig`
eigenvector-eigenvalue pairs of `matvec`
by alternating between compression and re-expansion of an initial
`num_krylov_vecs`-step Lanczos factorization.
Note: The caller has to ensure that the dtype of the return value
of `matvec` matches the dtype of the initial state. Otherwise jax
will raise a TypeError.
The function signature of the returned function is
Args:
matvec: A callable representing the linear operator.
args: Arguments to `matvec`. `matvec` is called with
`matvec(x, *args)` with `x` the input array on which
`matvec` should act.
initial_state: A starting vector for the iteration.
num_krylov_vecs: Number of krylov vectors of the lanczos factorization.
numeig: The number of desired eigenvector-eigenvalue pairs.
which: Which eigenvalues to target. Currently supported: `which = 'LA'`,
`which = 'SA'` or `which = 'LM'`.
tol: Convergence flag. If the norm of a krylov vector drops below `tol`
the iteration is terminated.
maxiter: Maximum number of (outer) iteration steps.
Returns:
eta, U: Two lists containing eigenvalues and eigenvectors.
Args:
jax: The jax module.
Returns:
Callable: A function performing an implicitly restarted
Lanczos factorization
"""
JaxPrecisionType = type(jax.lax.Precision.DEFAULT)
lanczos_fact = _generate_lanczos_factorization(jax)
@functools.partial(jax.jit, static_argnums=(3, 4, 5, 6, 7, 8))
def implicitly_restarted_lanczos_method(
matvec: Callable, args: List, initial_state: jax.ShapedArray,
num_krylov_vecs: int, numeig: int, which: Text, tol: float, maxiter: int,
precision: JaxPrecisionType
) -> Tuple[jax.ShapedArray, List[jax.ShapedArray], int]:
"""
Implicitly restarted lanczos factorization of `matvec`. The routine
finds the lowest `numeig` eigenvector-eigenvalue pairs of `matvec`
by alternating between compression and re-expansion of an initial
`num_krylov_vecs`-step Lanczos factorization.
Note: The caller has to ensure that the dtype of the return value
of `matvec` matches the dtype of the initial state. Otherwise jax
will raise a TypeError.
NOTE: Under certain circumstances, the routine can return spurious
eigenvalues 0.0: if the Lanczos iteration terminated early
(after numits < num_krylov_vecs iterations)
and numeig > numits, then spurious 0.0 eigenvalues will be returned.
References:
http://emis.impa.br/EMIS/journals/ETNA/vol.2.1994/pp1-21.dir/pp1-21.pdf
http://people.inf.ethz.ch/arbenz/ewp/Lnotes/chapter11.pdf
Args:
matvec: A callable representing the linear operator.
args: Arguments to `matvec`. `matvec` is called with
`matvec(x, *args)` with `x` the input array on which
`matvec` should act.
initial_state: A starting vector for the iteration.
num_krylov_vecs: Number of krylov vectors of the lanczos factorization.
numeig: The number of desired eigenvector-eigenvalue pairs.
which: Which eigenvalues to target.
Currently supported: `which = 'LA'` (largest algebraic), `which = 'SA'` (smallest algebraic) and `which = 'LM'` (largest magnitude).
tol: Convergence flag. If the norm of a krylov vector drops below `tol`
the iteration is terminated.
maxiter: Maximum number of (outer) iteration steps.
precision: jax.lax.Precision used within lax operations.
Returns:
jax.ShapedArray: Eigenvalues
List: Eigenvectors
int: Number of inner krylov iterations of the last lanczos
factorization.
"""
shape = initial_state.shape
dtype = initial_state.dtype
dim = np.prod(shape).astype(np.int32)
num_expand = num_krylov_vecs - numeig
#note: the second part of the cond is for testing purposes
if num_krylov_vecs <= numeig < dim:
raise ValueError(f"num_krylov_vecs must be between numeig <"
f" num_krylov_vecs <= dim = {dim},"
f" num_krylov_vecs = {num_krylov_vecs}")
if numeig > dim:
raise ValueError(f"number of requested eigenvalues numeig = {numeig} "
f"is larger than the dimension of the operator "
f"dim = {dim}")
# initialize arrays
Vm = jax.numpy.zeros(
(num_krylov_vecs, jax.numpy.ravel(initial_state).shape[0]), dtype=dtype)
alphas = jax.numpy.zeros(num_krylov_vecs, dtype=dtype)
betas = jax.numpy.zeros(num_krylov_vecs - 1, dtype=dtype)
# perform initial lanczos factorization
Vm, alphas, betas, residual, norm, numits, ar_converged = lanczos_fact(
matvec, args, initial_state, Vm, alphas, betas, 0, num_krylov_vecs, tol,
precision)
fm = residual.ravel() * norm
# generate needed functions
shifted_QR = _shifted_QR(jax)
check_eigvals_convergence = _check_eigvals_convergence_eigh(jax)
get_vectors = _get_vectors(jax)
# sort_fun returns `num_expand` least relevant eigenvalues
# (those to be projected out)
if which == 'LA':
sort_fun = jax.tree_util.Partial(_LA_sort(jax), num_expand)
elif which == 'SA':
sort_fun = jax.tree_util.Partial(_SA_sort(jax), num_expand)
elif which == 'LM':
sort_fun = jax.tree_util.Partial(_LM_sort(jax), num_expand)
else:
raise ValueError(f"which = {which} not implemented")
it = 1 # we already did one lanczos factorization
def outer_loop(carry):
alphas, betas, Vm, fm, it, numits, ar_converged, _, _, = carry
# pack into alphas and betas into tridiagonal matrix
Hm = jax.numpy.diag(alphas) + jax.numpy.diag(betas, -1) + jax.numpy.diag(
betas.conj(), 1)
evals, _ = jax.numpy.linalg.eigh(Hm)
shifts, _ = sort_fun(evals)
# perform shifted QR iterations to compress lanczos factorization
# Note that ||fk|| typically decreases as one iterates the outer loop
# indicating that iram converges.
# ||fk|| = \beta_m in reference above
Vk, Hk, fk = shifted_QR(Vm, Hm, fm, shifts, numeig)
# extract new alphas and betas
alphas = jax.numpy.diag(Hk)
betas = jax.numpy.diag(Hk, -1)
alphas = alphas.at[numeig:].set(0.0)
betas = betas.at[numeig-1:].set(0.0)
beta_k = jax.numpy.linalg.norm(fk)
Hktest = Hk[:numeig, :numeig]
matnorm = jax.numpy.linalg.norm(Hktest)
converged = check_eigvals_convergence(beta_k, Hktest, matnorm, tol)
def do_lanczos(vals):
Vk, alphas, betas, fk, _, _, _, _ = vals
# restart
Vm, alphas, betas, residual, norm, numits, ar_converged = lanczos_fact(
matvec, args, jax.numpy.reshape(fk, shape), Vk, alphas, betas,
numeig, num_krylov_vecs, tol, precision)
fm = residual.ravel() * norm
return [Vm, alphas, betas, fm, norm, numits, ar_converged, False]
def cond_lanczos(vals):
return vals[7]
res = jax.lax.while_loop(cond_lanczos, do_lanczos, [
Vk, alphas, betas, fk,
jax.numpy.linalg.norm(fk), numeig, False,
jax.numpy.logical_not(converged)
])
Vm, alphas, betas, fm, norm, numits, ar_converged = res[0:7]
out_vars = [
alphas, betas, Vm, fm, it + 1, numits, ar_converged, converged, norm
]
return out_vars
def cond_fun(carry):
it, ar_converged, converged = carry[4], carry[6], carry[7]
return jax.lax.cond(
it < maxiter, lambda x: x, lambda x: False,
jax.numpy.logical_not(jax.numpy.logical_or(converged, ar_converged)))
converged = False
carry = [alphas, betas, Vm, fm, it, numits, ar_converged, converged, norm]
res = jax.lax.while_loop(cond_fun, outer_loop, carry)
alphas, betas, Vm = res[0], res[1], res[2]
numits, ar_converged, converged = res[5], res[6], res[7]
Hm = jax.numpy.diag(alphas) + jax.numpy.diag(betas, -1) + jax.numpy.diag(
betas.conj(), 1)
# FIXME (mganahl): under certain circumstances, the routine can still
# return spurious 0 eigenvalues: if lanczos terminated early
# (after numits < num_krylov_vecs iterations)
# and numeig > numits, then spurious 0.0 eigenvalues will be returned
Hm = (numits > jax.numpy.arange(num_krylov_vecs))[:, None] * Hm * (
numits > jax.numpy.arange(num_krylov_vecs))[None, :]
eigvals, U = jax.numpy.linalg.eigh(Hm)
inds = sort_fun(eigvals)[1][:numeig]
vectors = get_vectors(Vm, U, inds, numeig)
return eigvals[inds], [
jax.numpy.reshape(vectors[n, :], shape) for n in range(numeig)
], numits
return implicitly_restarted_lanczos_method |
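# Hypothetical usage sketch (not part of the library): use the factory above
# to compute the two largest (algebraic) eigenvalues of a symmetric matrix
# with known spectrum 1, 2, ..., 20. Sizes, tolerances and the number of
# restarts are assumptions made for illustration only.
def _example_implicitly_restarted_lanczos():
  import numpy as np
  import jax
  jax.config.update("jax_enable_x64", True)
  irl = _implicitly_restarted_lanczos(jax)
  np.random.seed(10)
  Q, _ = np.linalg.qr(np.random.randn(20, 20))
  A = jax.numpy.array(Q @ np.diag(np.arange(1.0, 21.0)) @ Q.T)  # symmetric
  v0 = jax.numpy.array(np.random.randn(20))
  matvec = jax.tree_util.Partial(lambda x, mat: mat @ x)
  eta, _, _ = irl(matvec, [A], v0, 10, 2, 'LA', 1e-8, 50,
                  jax.lax.Precision.HIGHEST)
  np.testing.assert_allclose(np.sort(np.array(eta)), [19.0, 20.0], atol=1e-4)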
Allows Jax (the module) to be passed in as an argument rather than imported,
since doing the latter breaks the build. In addition, instantiates certain
of the enclosed functions as concrete objects within a Dict, allowing them to
be cached. This avoids spurious recompilations that would otherwise be
triggered by attempts to pass callables into Jitted functions.
The important function here is functions.gmres_m, which implements
GMRES. The other functions are exposed only for testing.
Args:
----
jax: The imported Jax module.
Returns:
-------
functions: A namedtuple of functions:
functions.gmres_m = gmres_m
functions.gmres_residual = gmres_residual
functions.gmres_krylov = gmres_krylov
functions.gs_step = _gs_step
functions.kth_arnoldi_step = kth_arnoldi_step
functions.givens_rotation = givens_rotation | def gmres_wrapper(jax: types.ModuleType):
"""
Allows Jax (the module) to be passed in as an argument rather than imported,
since doing the latter breaks the build. In addition, instantiates certain
of the enclosed functions as concrete objects within a Dict, allowing them to
be cached. This avoids spurious recompilations that would otherwise be
triggered by attempts to pass callables into Jitted functions.
The important function here is functions.gmres_m, which implements
GMRES. The other functions are exposed only for testing.
Args:
----
jax: The imported Jax module.
Returns:
-------
functions: A namedtuple of functions:
functions.gmres_m = gmres_m
functions.gmres_residual = gmres_residual
functions.gmres_krylov = gmres_krylov
functions.gs_step = _gs_step
functions.kth_arnoldi_step = kth_arnoldi_step
functions.givens_rotation = givens_rotation
"""
jnp = jax.numpy
JaxPrecisionType = type(jax.lax.Precision.DEFAULT)
def gmres_m(
A_mv: Callable, A_args: Sequence, b: jax.ShapedArray, x0: jax.ShapedArray,
tol: float, atol: float, num_krylov_vectors: int, maxiter: int,
precision: JaxPrecisionType) -> Tuple[jax.ShapedArray, float, int, bool]:
"""
Solve A x = b for x using the m-restarted GMRES method. This is
intended to be called via jax_backend.gmres.
Given a linear mapping with (n x n) matrix representation
A = A_mv(*A_args) gmres_m solves
Ax = b (1)
where x and b are length-n vectors, using the method of
Generalized Minimum RESiduals with M iterations per restart (GMRES_M).
Args:
A_mv: A function v0 = A_mv(v, *A_args) where v0 and v have the same shape.
A_args: A list of positional arguments to A_mv.
b: The b in A @ x = b.
x0: Initial guess solution.
tol, atol: Solution tolerance to achieve,
norm(residual) <= max(tol * norm(b), atol).
tol is also used to set the threshold at which the Arnoldi factorization
terminates.
num_krylov_vectors: Size of the Krylov space to build at each restart.
maxiter: The Krylov space will be repeatedly rebuilt up to this many
times.
Returns:
x: The approximate solution.
beta: Norm of the residual at termination.
n_iter: Number of iterations at termination.
converged: Whether the desired tolerance was achieved.
"""
num_krylov_vectors = min(num_krylov_vectors, b.size)
x = x0
b_norm = jnp.linalg.norm(b)
tol = max(tol * b_norm, atol)
for n_iter in range(maxiter):
done, beta, x = gmres(A_mv, A_args, b, x, num_krylov_vectors, x0, tol,
b_norm, precision)
if done:
break
return x, beta, n_iter, done
def gmres(A_mv: Callable, A_args: Sequence, b: jax.ShapedArray,
x: jax.ShapedArray, num_krylov_vectors: int, x0: jax.ShapedArray,
tol: float, b_norm: float,
precision: JaxPrecisionType) -> Tuple[bool, float, jax.ShapedArray]:
"""
A single restart of GMRES.
Args:
A_mv: A function `v0 = A_mv(v, *A_args)` where `v0` and
`v` have the same shape.
A_args: A list of positional arguments to A_mv.
b: The `b` in `A @ x = b`.
x: Initial guess solution.
tol: Solution tolerance to achieve.
num_krylov_vectors: Size of the Krylov space to build.
Returns:
done: Whether convergence was achieved.
beta: Magnitude of residual (i.e. the error estimate).
x: The approximate solution.
"""
r, beta = gmres_residual(A_mv, A_args, b, x)
k, V, R, beta_vec = gmres_krylov(A_mv, A_args, num_krylov_vectors,
x0, r, beta, tol, b_norm, precision)
x = gmres_update(k, V, R, beta_vec, x0)
done = k < num_krylov_vectors - 1
return done, beta, x
@jax.jit
def gmres_residual(A_mv: Callable, A_args: Sequence, b: jax.ShapedArray,
x: jax.ShapedArray) -> Tuple[jax.ShapedArray, float]:
"""
Computes the residual vector r and its norm, beta, which is minimized by
GMRES.
Args:
A_mv: A function v0 = A_mv(v, *A_args) where v0 and
v have the same shape.
A_args: A list of positional arguments to A_mv.
b: The b in A @ x = b.
x: Initial guess solution.
Returns:
r: The residual vector.
beta: Its magnitude.
"""
r = b - A_mv(x, *A_args)
beta = jnp.linalg.norm(r)
return r, beta
def gmres_update(k: int, V: jax.ShapedArray, R: jax.ShapedArray,
beta_vec: jax.ShapedArray,
x0: jax.ShapedArray) -> jax.ShapedArray:
"""
Updates the solution in response to the information computed by the
main GMRES loop.
Args:
k: The final iteration which was reached by GMRES before convergence.
V: The Arnoldi matrix of Krylov vectors.
R: The R factor in H = QR where H is the Arnoldi overlap matrix.
beta_vec: Stores the Givens factors used to map H into QR.
x0: The initial guess solution.
Returns:
x: The updated solution.
"""
q = min(k, R.shape[1])
y = jax.scipy.linalg.solve_triangular(R[:q, :q], beta_vec[:q])
x = x0 + V[:, :q] @ y
return x
@functools.partial(jax.jit, static_argnums=(2, 8))
def gmres_krylov(
A_mv: Callable, A_args: Sequence, n_kry: int, x0: jax.ShapedArray,
r: jax.ShapedArray, beta: float, tol: float, b_norm: float,
precision: JaxPrecisionType
) -> Tuple[int, jax.ShapedArray, jax.ShapedArray, jax.ShapedArray]:
"""
Builds the Arnoldi decomposition of (A, v), where v is the normalized
residual of the current solution estimate. The decomposition is
returned as V, R, where V is the usual matrix of Krylov vectors and
R is the upper triangular matrix in H = QR, with H the usual matrix
of overlaps.
Args:
A_mv: A function `v0 = A_mv(v, *A_args)` where `v0` and
`v` have the same shape.
A_args: A list of positional arguments to A_mv.
n_kry: Size of the Krylov space to build; this is called
num_krylov_vectors in higher level code.
x0: Guess solution.
r: Residual vector.
beta: Magnitude of r.
tol: Solution tolerance to achieve.
b_norm: Magnitude of b in Ax = b.
Returns:
k: Counts the number of iterations before convergence.
V: The Arnoldi matrix of Krylov vectors.
R: From H = QR where H is the Arnoldi matrix of overlaps.
beta_vec: Stores Q implicitly as Givens factors.
"""
n = r.size
err = beta
v = r / beta
# These will store the Givens rotations used to update the QR decompositions
# of the Arnoldi matrices.
# cos : givens[0, :]
# sine: givens[1, :]
givens = jnp.zeros((2, n_kry), dtype=x0.dtype)
beta_vec = jnp.zeros((n_kry + 1), dtype=x0.dtype)
beta_vec = jax.ops.index_update(beta_vec, jax.ops.index[0], beta)
V = jnp.zeros((n, n_kry + 1), dtype=x0.dtype)
V = jax.ops.index_update(V, jax.ops.index[:, 0], v)
R = jnp.zeros((n_kry + 1, n_kry), dtype=x0.dtype)
# The variable data for the carry call. Each iteration modifies these
# values and feeds the results to the next iteration.
k = 0
gmres_variables = (k, V, R, beta_vec, err, # < The actual output we need.
givens) # < Modified between iterations.
gmres_constants = (tol, A_mv, A_args, b_norm, n_kry)
gmres_carry = (gmres_variables, gmres_constants)
# The 'x' input for the carry call. Each iteration will receive an ascending
# loop index (from the jnp.arange) along with the constant data
# in gmres_constants.
def gmres_krylov_work(gmres_carry: GmresCarryType) -> GmresCarryType:
"""
Performs a single iteration of gmres_krylov. See that function for a more
detailed description.
Args:
gmres_carry: The gmres_carry from gmres_krylov.
Returns:
gmres_carry: The updated gmres_carry.
"""
gmres_variables, gmres_constants = gmres_carry
k, V, R, beta_vec, err, givens = gmres_variables
tol, A_mv, A_args, b_norm, _ = gmres_constants
V, H = kth_arnoldi_step(k, A_mv, A_args, V, R, tol, precision)
R_col, givens = apply_givens_rotation(H[:, k], givens, k)
R = jax.ops.index_update(R, jax.ops.index[:, k], R_col[:])
# Update the residual vector.
cs, sn = givens[:, k] * beta_vec[k]
beta_vec = jax.ops.index_update(beta_vec, jax.ops.index[k], cs)
beta_vec = jax.ops.index_update(beta_vec, jax.ops.index[k + 1], sn)
err = jnp.abs(sn) / b_norm
gmres_variables = (k + 1, V, R, beta_vec, err, givens)
return (gmres_variables, gmres_constants)
def gmres_krylov_loop_condition(gmres_carry: GmresCarryType) -> bool:
"""
This function dictates whether the main GMRES while loop will proceed.
It is equivalent to:
if k < n_kry and err > tol:
return True
else:
return False
where k, n_kry, err, and tol are unpacked from gmres_carry.
Args:
gmres_carry: The gmres_carry from gmres_krylov.
Returns:
(bool): Whether to continue iterating.
"""
gmres_constants, gmres_variables = gmres_carry
tol = gmres_constants[0]
k = gmres_variables[0]
err = gmres_variables[4]
n_kry = gmres_constants[4]
def is_iterating(k, n_kry):
return k < n_kry
def not_converged(args):
err, tol = args
return err >= tol
return jax.lax.cond(is_iterating(k, n_kry), # Predicate.
not_converged, # Called if True.
lambda x: False, # Called if False.
(err, tol)) # Arguments to calls.
gmres_carry = jax.lax.while_loop(gmres_krylov_loop_condition,
gmres_krylov_work,
gmres_carry)
gmres_variables, gmres_constants = gmres_carry
k, V, R, beta_vec, err, givens = gmres_variables
return (k, V, R, beta_vec)
VarType = Tuple[int, jax.ShapedArray, jax.ShapedArray, jax.ShapedArray,
float, jax.ShapedArray]
ConstType = Tuple[float, Callable, Sequence, jax.ShapedArray, int]
GmresCarryType = Tuple[VarType, ConstType]
@functools.partial(jax.jit, static_argnums=(6,))
def kth_arnoldi_step(
k: int, A_mv: Callable, A_args: Sequence, V: jax.ShapedArray,
H: jax.ShapedArray, tol: float,
precision: JaxPrecisionType) -> Tuple[jax.ShapedArray, jax.ShapedArray]:
"""
Performs the kth iteration of the Arnoldi reduction procedure.
Args:
k: The current iteration.
A_mv, A_args: A function A_mv(v, *A_args) performing a linear
transformation on v.
V: A matrix of size (n, K + 1), K > k such that each column in
V[:, :k+1] stores a Krylov vector and V[:, k+1] is all zeroes.
H: A matrix of size (K, K), K > k with H[:, k] all zeroes.
Returns:
V, H: With V's (k+1)'th column filled in by the new orthogonalized
Krylov vector, and H's k'th column filled in by the new overlaps.
"""
def _gs_step(
r: jax.ShapedArray,
v_i: jax.ShapedArray) -> Tuple[jax.ShapedArray, jax.ShapedArray]:
"""
Performs one iteration of the stabilized Gram-Schmidt procedure, with
r to be orthonormalized against {v} = {v_0, v_1, ...}.
Args:
r: The new vector which is not in the initially orthonormal set.
v_i: The i'th vector in that set.
Returns:
r_i: The updated r which is now orthonormal with v_i.
h_i: The overlap of r with v_i.
"""
h_i = jnp.vdot(v_i, r, precision=precision)
r_i = r - h_i * v_i
return r_i, h_i
v = A_mv(V[:, k], *A_args)
v_new, H_k = jax.lax.scan(_gs_step, init=v, xs=V.T)
v_norm = jnp.linalg.norm(v_new)
r_new = v_new / v_norm
# Normalize v unless it is the zero vector.
r_new = jax.lax.cond(v_norm > tol,
lambda x: x[0] / x[1],
lambda x: 0.*x[0],
(v_new, v_norm)
)
H = jax.ops.index_update(H, jax.ops.index[:, k], H_k)
H = jax.ops.index_update(H, jax.ops.index[k+1, k], v_norm)
V = jax.ops.index_update(V, jax.ops.index[:, k+1], r_new)
return V, H
####################################################################
# GIVENS ROTATIONS
####################################################################
@jax.jit
def apply_rotations(H_col: jax.ShapedArray, givens: jax.ShapedArray,
k: int) -> jax.ShapedArray:
"""
Successively applies each of the rotations stored in givens to H_col.
Args:
H_col : The vector to be rotated.
givens: 2 x K, K > k matrix of rotation factors.
k : Iteration number.
Returns:
H_col : The rotated vector.
"""
rotation_carry = (H_col, 0, k, givens)
def loop_condition(carry):
i = carry[1]
k = carry[2]
return jax.lax.cond(i < k, lambda x: True, lambda x: False, 0)
def apply_ith_rotation(carry):
H_col, i, k, givens = carry
cs = givens[0, i]
sn = givens[1, i]
H_i = cs * H_col[i] - sn * H_col[i + 1]
H_ip1 = sn * H_col[i] + cs * H_col[i + 1]
H_col = jax.ops.index_update(H_col, jax.ops.index[i], H_i)
H_col = jax.ops.index_update(H_col, jax.ops.index[i + 1], H_ip1)
return (H_col, i + 1, k, givens)
rotation_carry = jax.lax.while_loop(loop_condition,
apply_ith_rotation,
rotation_carry)
H_col = rotation_carry[0]
return H_col
@jax.jit
def apply_givens_rotation(H_col: jax.ShapedArray, givens: jax.ShapedArray,
k: int) -> Tuple[jax.ShapedArray, jax.ShapedArray]:
"""
Applies the Givens rotations stored in the vectors cs and sn to the vector
H_col. Then constructs a new Givens rotation that eliminates H_col's
k'th element, yielding the corresponding column of the R in H's QR
decomposition. Returns the new column of R along with the new Givens
factors.
Args:
H_col : The column of H to be rotated.
givens: A matrix representing the cosine and sine factors of the
previous GMRES Givens rotations, in that order
(i.e. givens[0, :] -> the cos factor).
k : Iteration number.
Returns:
R_col : The column of R obtained by transforming H_col.
givens_k: The new elements of givens that zeroed out the k+1'th element
of H_col.
"""
# This call successively applies each of the
# Givens rotations stored in givens[:, :k] to H_col.
H_col = apply_rotations(H_col, givens, k)
cs_k, sn_k = givens_rotation(H_col[k], H_col[k + 1])
givens = jax.ops.index_update(givens, jax.ops.index[0, k], cs_k)
givens = jax.ops.index_update(givens, jax.ops.index[1, k], sn_k)
r_k = cs_k * H_col[k] - sn_k * H_col[k + 1]
R_col = jax.ops.index_update(H_col, jax.ops.index[k], r_k)
R_col = jax.ops.index_update(R_col, jax.ops.index[k + 1], 0.)
return R_col, givens
@jax.jit
def givens_rotation(v1: float, v2: float) -> Tuple[float, float]:
"""
Given scalars v1 and v2, computes cs = cos(theta) and sn = sin(theta)
so that [cs -sn] @ [v1] = [r]
[sn cs] [v2] [0]
Args:
v1, v2: The scalars.
Returns:
cs, sn: The rotation factors.
"""
t = jnp.sqrt(v1**2 + v2**2)
cs = v1 / t
sn = -v2 / t
return cs, sn
fnames = [
"gmres_m", "gmres_residual", "gmres_krylov",
"kth_arnoldi_step", "givens_rotation"
]
functions = [
gmres_m, gmres_residual, gmres_krylov, kth_arnoldi_step,
givens_rotation
]
class Functions:
def __init__(self, fun_dict):
self.dict = fun_dict
def __getattr__(self, name):
return self.dict[name]
return Functions(dict(zip(fnames, functions))) |
GMRES produces the correct result on an analytically solved
linear system. | def test_gmres_on_small_known_problem(dtype):
"""
GMRES produces the correct result on an analytically solved
linear system.
"""
dummy = jax.numpy.zeros(1, dtype=dtype)
dtype = dummy.dtype
gmres = jitted_functions.gmres_wrapper(jax)
A = jax.numpy.array(([[1, 1], [3, -4]]), dtype=dtype)
b = jax.numpy.array([3, 2], dtype=dtype)
x0 = jax.numpy.ones(2, dtype=dtype)
n_kry = 2
maxiter = 1
@jax.tree_util.Partial
def A_mv(x):
return A @ x
tol = A.size*jax.numpy.finfo(dtype).eps
x, _, _, _ = gmres.gmres_m(A_mv, [], b, x0, tol, tol, n_kry, maxiter,
precision)
solution = jax.numpy.array([2., 1.], dtype=dtype)
np.testing.assert_allclose(x, solution, atol=tol) |
gmres_krylov correctly builds the QR-decomposed Arnoldi decomposition.
This function assumes that gmres["kth_arnoldi_step (which is
independently tested) is correct. | def test_gmres_krylov(dtype):
"""
gmres_krylov correctly builds the QR-decomposed Arnoldi decomposition.
This function assumes that gmres["kth_arnoldi_step (which is
independently tested) is correct.
"""
dummy = jax.numpy.zeros(1, dtype=dtype)
dtype = dummy.dtype
gmres = jitted_functions.gmres_wrapper(jax)
n = 2
n_kry = n
np.random.seed(10)
@jax.tree_util.Partial
def A_mv(x):
return A @ x
A = jax.numpy.array(np.random.rand(n, n).astype(dtype))
tol = A.size*jax.numpy.finfo(dtype).eps
x0 = jax.numpy.array(np.random.rand(n).astype(dtype))
b = jax.numpy.array(np.random.rand(n), dtype=dtype)
r, beta = gmres.gmres_residual(A_mv, [], b, x0)
_, V, R, _ = gmres.gmres_krylov(A_mv, [], n_kry, x0, r, beta,
tol, jax.numpy.linalg.norm(b),
precision)
phases = jax.numpy.sign(jax.numpy.diagonal(R[:-1, :]))
R = phases.conj()[:, None] * R[:-1, :]
Vtest = np.zeros((n, n_kry + 1), dtype=x0.dtype)
Vtest[:, 0] = r/beta
Vtest = jax.numpy.array(Vtest)
Htest = jax.numpy.zeros((n_kry + 1, n_kry), dtype=x0.dtype)
for k in range(n_kry):
Vtest, Htest = gmres.kth_arnoldi_step(k, A_mv, [], Vtest, Htest, tol,
precision)
_, Rtest = jax.numpy.linalg.qr(Htest)
phases = jax.numpy.sign(jax.numpy.diagonal(Rtest))
Rtest = phases.conj()[:, None] * Rtest
np.testing.assert_allclose(V, Vtest, atol=tol)
np.testing.assert_allclose(R, Rtest, atol=tol) |
The Arnoldi decomposition within GMRES is correct. | def test_gmres_arnoldi_step(dtype):
"""
The Arnoldi decomposition within GMRES is correct.
"""
gmres = jitted_functions.gmres_wrapper(jax)
dummy = jax.numpy.zeros(1, dtype=dtype)
dtype = dummy.dtype
n = 4
n_kry = n
np.random.seed(10)
A = jax.numpy.array(np.random.rand(n, n).astype(dtype))
x0 = jax.numpy.array(np.random.rand(n).astype(dtype))
Q = np.zeros((n, n_kry + 1), dtype=x0.dtype)
Q[:, 0] = x0/jax.numpy.linalg.norm(x0)
Q = jax.numpy.array(Q)
H = jax.numpy.zeros((n_kry + 1, n_kry), dtype=x0.dtype)
tol = A.size*jax.numpy.finfo(dtype).eps
@jax.tree_util.Partial
def A_mv(x):
return A @ x
for k in range(n_kry):
Q, H = gmres.kth_arnoldi_step(k, A_mv, [], Q, H, tol, precision)
QAQ = Q[:, :n_kry].conj().T @ A @ Q[:, :n_kry]
np.testing.assert_allclose(H[:n_kry, :], QAQ, atol=tol) |
gmres["givens_rotation produces the correct rotation factors. | def test_givens(dtype):
"""
gmres["givens_rotation produces the correct rotation factors.
"""
gmres = jitted_functions.gmres_wrapper(jax)
np.random.seed(10)
v = jax.numpy.array(np.random.rand(2).astype(dtype))
cs, sn = gmres.givens_rotation(*v)
rot = np.zeros((2, 2), dtype=dtype)
rot[0, 0] = cs
rot[1, 1] = cs
rot[0, 1] = -sn
rot[1, 0] = sn
rot = jax.numpy.array(rot)
result = rot @ v
tol = 4*jax.numpy.finfo(dtype).eps
np.testing.assert_allclose(result[-1], 0., atol=tol) |
Computes the singular value decomposition (SVD) of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details. | def svd(
np, # TODO: Typing
tensor: Tensor,
pivot_axis: int,
max_singular_values: Optional[int] = None,
max_truncation_error: Optional[float] = None,
relative: Optional[bool] = False) -> Tuple[Tensor, Tensor, Tensor, Tensor]:
"""Computes the singular value decomposition (SVD) of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details.
"""
left_dims = tensor.shape[:pivot_axis]
right_dims = tensor.shape[pivot_axis:]
tensor = np.reshape(tensor, [numpy.prod(left_dims), numpy.prod(right_dims)])
u, s, vh = np.linalg.svd(tensor, full_matrices=False)
if max_singular_values is None:
max_singular_values = np.size(s)
if max_truncation_error is not None:
# Cumulative norms of singular values in ascending order.
trunc_errs = np.sqrt(np.cumsum(np.square(s[::-1])))
# If relative is true, rescale max_truncation error with the largest
# singular value to yield the absolute maximal truncation error.
if relative:
abs_max_truncation_error = max_truncation_error * s[0]
else:
abs_max_truncation_error = max_truncation_error
# We must keep at least this many singular values to ensure the
# truncation error is <= abs_max_truncation_error.
num_sing_vals_err = np.count_nonzero(
(trunc_errs > abs_max_truncation_error).astype(np.int32))
else:
num_sing_vals_err = max_singular_values
num_sing_vals_keep = min(max_singular_values, num_sing_vals_err)
# np.linalg.svd returns the singular values as a real-valued vector, so s
# may not have the same dtype as the original (possibly complex) tensor,
# which can cause issues for later contractions. To fix it, we recast s to
# the original dtype.
s = s.astype(tensor.dtype)
s_rest = s[num_sing_vals_keep:]
s = s[:num_sing_vals_keep]
u = u[:, :num_sing_vals_keep]
vh = vh[:num_sing_vals_keep, :]
dim_s = s.shape[0]
u = np.reshape(u, list(left_dims) + [dim_s])
vh = np.reshape(vh, [dim_s] + list(right_dims))
return u, s, vh, s_rest |
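# Hypothetical usage sketch (not part of the library): the `svd` defined above
# applied to a small tensor, once without truncation (exact reconstruction)
# and once with `max_singular_values`. Shapes below are assumptions made for
# illustration only.
def _example_numpy_svd():
  import numpy as np
  np.random.seed(10)
  t = np.random.randn(2, 3, 4, 5)
  # no truncation: u, s, vh reproduce the tensor exactly
  u, s, vh, s_rest = svd(np, t, pivot_axis=2)
  np.testing.assert_allclose(
      np.einsum('abj,j,jcd->abcd', u, s, vh), t, atol=1e-12)
  assert s_rest.size == 0
  # keep at most 3 singular values: the remaining ones land in s_rest
  u, s, vh, s_rest = svd(np, t, pivot_axis=2, max_singular_values=3)
  assert u.shape == (2, 3, 3) and s.shape == (3,) and vh.shape == (3, 4, 5)
  assert s_rest.size == 3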
Computes the QR decomposition of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details. | def qr(
np, # TODO: Typing
tensor: Tensor,
pivot_axis: int,
non_negative_diagonal: bool
) -> Tuple[Tensor, Tensor]:
"""Computes the QR decomposition of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details.
"""
left_dims = tensor.shape[:pivot_axis]
right_dims = tensor.shape[pivot_axis:]
tensor = np.reshape(tensor, [numpy.prod(left_dims), numpy.prod(right_dims)])
q, r = np.linalg.qr(tensor)
if non_negative_diagonal:
phases = np.sign(np.diagonal(r))
q = q * phases
r = phases.conj()[:, None] * r
center_dim = q.shape[1]
q = np.reshape(q, list(left_dims) + [center_dim])
r = np.reshape(r, [center_dim] + list(right_dims))
return q, r |
Computes the RQ (reversed QR) decomposition of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details. | def rq(
np, # TODO: Typing
tensor: Tensor,
pivot_axis: int,
non_negative_diagonal: bool
) -> Tuple[Tensor, Tensor]:
"""Computes the RQ (reversed QR) decomposition of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details.
"""
left_dims = tensor.shape[:pivot_axis]
right_dims = tensor.shape[pivot_axis:]
tensor = np.reshape(tensor, [numpy.prod(left_dims), numpy.prod(right_dims)])
q, r = np.linalg.qr(np.conj(np.transpose(tensor)))
if non_negative_diagonal:
phases = np.sign(np.diagonal(r))
q = q * phases
r = phases.conj()[:, None] * r
r, q = np.conj(np.transpose(r)), np.conj(
np.transpose(q)) #M=r*q at this point
center_dim = r.shape[1]
r = np.reshape(r, list(left_dims) + [center_dim])
q = np.reshape(q, [center_dim] + list(right_dims))
return r, q |
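# Hypothetical usage sketch (not part of the library): the `rq` defined above
# splits a tensor into a lower-triangular `r` and an orthonormal `q`, and
# contracting them over the new bond reproduces the tensor. Shapes are
# assumptions made for illustration only.
def _example_numpy_rq():
  import numpy as np
  np.random.seed(10)
  t = np.random.randn(3, 4, 5)
  r, q = rq(np, t, pivot_axis=1, non_negative_diagonal=False)
  # r: (3, 3) lower triangular; q: (3, 4, 5) with orthonormal rows when
  # flattened to a (3, 20) matrix
  np.testing.assert_allclose(np.einsum('aj,jbc->abc', r, q), t, atol=1e-12)
  q_mat = np.reshape(q, (3, 20))
  np.testing.assert_allclose(q_mat @ q_mat.T.conj(), np.eye(3), atol=1e-12)
  np.testing.assert_allclose(r, np.tril(r), atol=1e-12)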
Computes the singular value decomposition (SVD) of a tensor.
The SVD is performed by treating the tensor as a matrix, with an effective
left (row) index resulting from combining the axes `tensor.shape[:pivot_axis]`
and an effective right (column) index resulting from combining the axes
`tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2, then
`u` would have shape (2, 3, 6), `s` would have shape (6), and `vh` would
have shape (6, 4, 5).
If `max_singular_values` is set to an integer, the SVD is truncated to keep
at most this many singular values.
If `max_truncation_error > 0`, as many singular values will be truncated as
possible, so that the truncation error (the norm of discarded singular
values) is at most `max_truncation_error`.
If `relative` is set `True` then `max_truncation_err` is understood
relative to the largest singular value.
If both `max_singular_values` and `max_truncation_error` are specified, the
number of retained singular values will be
`min(max_singular_values, nsv_auto_trunc)`, where `nsv_auto_trunc` is the
number of singular values that must be kept to maintain a truncation error
smaller than `max_truncation_error`.
The output consists of three tensors `u, s, vh` such that:
```python
u[i1,...,iN, j] * s[j] * vh[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
torch: The torch module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
max_singular_values: The number of singular values to keep, or `None` to
keep them all.
max_truncation_error: The maximum allowed truncation error or `None` to not
do any truncation.
relative: Multiply `max_truncation_error` with the largest singular value.
Returns:
u: Left tensor factor.
s: Vector of ordered singular values from largest to smallest.
vh: Right tensor factor.
s_rest: Vector of discarded singular values (length zero if no
truncation). | def svd(
torch: Any,
tensor: Tensor,
pivot_axis: int,
max_singular_values: Optional[int] = None,
max_truncation_error: Optional[float] = None,
relative: Optional[bool] = False) -> Tuple[Tensor, Tensor, Tensor, Tensor]:
"""Computes the singular value decomposition (SVD) of a tensor.
The SVD is performed by treating the tensor as a matrix, with an effective
left (row) index resulting from combining the axes `tensor.shape[:pivot_axis]`
and an effective right (column) index resulting from combining the axes
`tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2, then
`u` would have shape (2, 3, 6), `s` would have shape (6), and `vh` would
have shape (6, 4, 5).
If `max_singular_values` is set to an integer, the SVD is truncated to keep
at most this many singular values.
If `max_truncation_error > 0`, as many singular values will be truncated as
possible, so that the truncation error (the norm of discarded singular
values) is at most `max_truncation_error`.
If `relative` is set `True` then `max_truncation_err` is understood
relative to the largest singular value.
If both `max_singular_values` and `max_truncation_error` are specified, the
number of retained singular values will be
`min(max_singular_values, nsv_auto_trunc)`, where `nsv_auto_trunc` is the
number of singular values that must be kept to maintain a truncation error
smaller than `max_truncation_error`.
The output consists of three tensors `u, s, vh` such that:
```python
u[i1,...,iN, j] * s[j] * vh[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
torch: The torch module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
max_singular_values: The number of singular values to keep, or `None` to
keep them all.
max_truncation_error: The maximum allowed truncation error or `None` to not
do any truncation.
relative: Multiply `max_truncation_error` with the largest singular value.
Returns:
u: Left tensor factor.
s: Vector of ordered singular values from largest to smallest.
vh: Right tensor factor.
s_rest: Vector of discarded singular values (length zero if no
truncation).
"""
left_dims = list(tensor.shape)[:pivot_axis]
right_dims = list(tensor.shape)[pivot_axis:]
tensor = torch.reshape(tensor, (np.prod(left_dims), np.prod(right_dims)))
u, s, v = torch.svd(tensor)
if max_singular_values is None:
max_singular_values = s.nelement()
if max_truncation_error is not None:
# Cumulative norms of singular values in ascending order
s_sorted, _ = torch.sort(s**2)
trunc_errs = torch.sqrt(torch.cumsum(s_sorted, 0))
# If relative is true, rescale max_truncation error with the largest
# singular value to yield the absolute maximal truncation error.
if relative:
abs_max_truncation_error = max_truncation_error * s[0]
else:
abs_max_truncation_error = max_truncation_error
# We must keep at least this many singular values to ensure the
# truncation error is <= abs_max_truncation_error.
num_sing_vals_err = torch.nonzero(
trunc_errs > abs_max_truncation_error).nelement()
else:
num_sing_vals_err = max_singular_values
num_sing_vals_keep = min(max_singular_values, num_sing_vals_err)
# we recast to the original dtype.
s = s.type(tensor.type())
s_rest = s[num_sing_vals_keep:]
s = s[:num_sing_vals_keep]
u = u[:, :num_sing_vals_keep]
v = v[:, :num_sing_vals_keep]
vh = torch.transpose(v, 0, 1)
dim_s = s.shape[0]
u = torch.reshape(u, left_dims + [dim_s])
vh = torch.reshape(vh, [dim_s] + right_dims)
return u, s, vh, s_rest |
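# Hypothetical usage sketch (not part of the library): reconstruct a tensor
# from the factors returned by the `svd` defined above and illustrate the
# `relative` truncation flag. Shapes and the tolerance are assumptions made
# for illustration only.
def _example_torch_svd():
  import numpy as np
  import torch
  torch.manual_seed(10)
  t = torch.randn(2, 3, 4, 5, dtype=torch.float64)
  u, s, vh, s_rest = svd(torch, t, pivot_axis=2)
  recon = torch.einsum('abj,j,jcd->abcd', u, s, vh)
  np.testing.assert_allclose(recon.numpy(), t.numpy(), atol=1e-12)
  assert s_rest.nelement() == 0
  # discard singular values whose combined weight is below 10% of the largest
  _, s_trunc, _, s_disc = svd(torch, t, pivot_axis=2,
                              max_truncation_error=0.1, relative=True)
  assert s_trunc.nelement() + s_disc.nelement() == s.nelement()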
Computes the QR decomposition of a tensor.
The QR decomposition is performed by treating the tensor as a matrix,
with an effective left (row) index resulting from combining the axes
`tensor.shape[:pivot_axis]` and an effective right (column) index
resulting from combining the axes `tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2,
then `q` would have shape (2, 3, 6), and `r` would have shape (6, 4, 5).
The output consists of two tensors `Q, R` such that:
```python
Q[i1,...,iN, j] * R[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
`R` is an upper triangular matrix, `Q` is an orthonormal matrix
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
torch: The torch module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
Returns:
Q: Left tensor factor.
R: Right tensor factor. | def qr(
torch: Any,
tensor: Tensor,
pivot_axis: int,
non_negative_diagonal: bool = False
) -> Tuple[Tensor, Tensor]:
"""Computes the QR decomposition of a tensor.
The QR decomposition is performed by treating the tensor as a matrix,
with an effective left (row) index resulting from combining the axes
`tensor.shape[:pivot_axis]` and an effective right (column) index
resulting from combining the axes `tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2,
then `q` would have shape (2, 3, 6), and `r` would have shape (6, 4, 5).
The output consists of two tensors `Q, R` such that:
```python
Q[i1,...,iN, j] * R[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
`R` is an upper triangular matrix, `Q` is an orthonormal matrix
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
torch: The torch module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
Returns:
Q: Left tensor factor.
R: Right tensor factor.
"""
left_dims = list(tensor.shape)[:pivot_axis]
right_dims = list(tensor.shape)[pivot_axis:]
tensor = torch.reshape(tensor, (np.prod(left_dims), np.prod(right_dims)))
q, r = torch.qr(tensor)
if non_negative_diagonal:
phases = torch.sign(torch.diagonal(r))
q = q * phases
r = phases[:, None] * r
center_dim = q.shape[1]
q = torch.reshape(q, list(left_dims) + [center_dim])
r = torch.reshape(r, [center_dim] + list(right_dims))
return q, r |
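# Hypothetical usage sketch (not part of the library): the `qr` defined above
# with `non_negative_diagonal=True`; contracting Q with R over the new bond
# reproduces the tensor and the diagonal of R is non-negative. Shapes are
# assumptions made for illustration only.
def _example_torch_qr():
  import numpy as np
  import torch
  torch.manual_seed(10)
  t = torch.randn(3, 4, 5, dtype=torch.float64)
  q, r = qr(torch, t, pivot_axis=1, non_negative_diagonal=True)
  np.testing.assert_allclose(torch.einsum('aj,jbc->abc', q, r).numpy(),
                             t.numpy(), atol=1e-12)
  r_mat = torch.reshape(r, (r.shape[0], -1))
  assert bool((torch.diagonal(r_mat) >= 0).all())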
Computes the RQ decomposition of a tensor.
The RQ decomposition is performed by treating the tensor as a matrix,
with an effective left (row) index resulting from combining the axes
`tensor.shape[:pivot_axis]` and an effective right (column) index
resulting from combining the axes `tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2,
then `r` would have shape (2, 3, 6), and `q` would have shape (6, 4, 5).
The output consists of two tensors `R, Q` such that:
```python
R[i1,...,iN, j] * Q[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
`R` is a lower triangular matrix, `Q` is an orthonormal matrix
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
torch: The torch module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
Returns:
R: Left tensor factor.
Q: Right tensor factor. | def rq(
torch: Any,
tensor: Tensor,
pivot_axis: int,
non_negative_diagonal: bool = False
) -> Tuple[Tensor, Tensor]:
"""Computes the RQ decomposition of a tensor.
The RQ decomposition is performed by treating the tensor as a matrix,
with an effective left (row) index resulting from combining the axes
`tensor.shape[:pivot_axis]` and an effective right (column) index
resulting from combining the axes `tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2,
then `r` would have shape (2, 3, 6), and `q` would have shape (6, 4, 5).
The output consists of two tensors `R, Q` such that:
```python
R[i1,...,iN, j] * Q[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
`R` is a lower triangular matrix, `Q` is an orthonormal matrix
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
torch: The torch module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
Returns:
R: Left tensor factor.
Q: Right tensor factor.
"""
left_dims = tensor.shape[:pivot_axis]
right_dims = tensor.shape[pivot_axis:]
tensor = torch.reshape(tensor, [np.prod(left_dims), np.prod(right_dims)])
# torch currently has no support for complex dtypes
q, r = torch.qr(torch.transpose(tensor, 0, 1))
if non_negative_diagonal:
phases = torch.sign(torch.diagonal(r))
q = q * phases
r = phases[:, None] * r
r, q = torch.transpose(r, 0, 1), torch.transpose(q, 0,
1) #M=r*q at this point
center_dim = r.shape[1]
r = torch.reshape(r, list(left_dims) + [center_dim])
q = torch.reshape(q, [center_dim] + list(right_dims))
return r, q |
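# Hypothetical usage sketch (not part of the library): the `rq` defined above;
# the first factor is lower triangular in its matrix form and contracting the
# two factors reproduces the tensor. Shapes are assumptions made for
# illustration only.
def _example_torch_rq():
  import numpy as np
  import torch
  torch.manual_seed(10)
  t = torch.randn(3, 4, 5, dtype=torch.float64)
  r, q = rq(torch, t, pivot_axis=1)
  np.testing.assert_allclose(torch.einsum('aj,jbc->abc', r, q).numpy(),
                             t.numpy(), atol=1e-12)
  np.testing.assert_allclose(r.numpy(), np.tril(r.numpy()), atol=1e-12)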
Computes the singular value decomposition (SVD) of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details. | def svd(
bt,
tensor: BlockSparseTensor,
pivot_axis: int,
max_singular_values: Optional[int] = None,
max_truncation_error: Optional[float] = None,
relative: Optional[bool] = False) -> Tuple[Tensor, Tensor, Tensor, Tensor]:
"""
Computes the singular value decomposition (SVD) of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details.
"""
left_dims = tensor.shape[:pivot_axis]
right_dims = tensor.shape[pivot_axis:]
matrix = bt.reshape(tensor, [np.prod(left_dims), np.prod(right_dims)])
flat_charges = matrix._charges
flat_flows = matrix._flows
flat_order = matrix.flat_order
tr_partition = len(matrix._order[0])
blocks, charges, shapes = _find_transposed_diagonal_sparse_blocks(
flat_charges, flat_flows, tr_partition, flat_order)
u_blocks = []
singvals = []
v_blocks = []
for n, b in enumerate(blocks):
out = np.linalg.svd(
np.reshape(matrix.data[b], shapes[:, n]),
full_matrices=False,
compute_uv=True)
u_blocks.append(out[0])
singvals.append(out[1])
v_blocks.append(out[2])
orig_num_singvals = np.int64(np.sum([len(s) for s in singvals]))
orig_block_size = [len(s) for s in singvals]
discarded_singvals = np.zeros(0, dtype=get_real_dtype(tensor.dtype))
if (max_singular_values
is not None) and (max_singular_values >= orig_num_singvals):
max_singular_values = None
if (max_truncation_error is not None) or (max_singular_values is not None):
max_D = np.max([len(s) for s in singvals]) if len(singvals) > 0 else 0
#extend singvals of all blocks into a matrix by padding each block with 0
if len(singvals) > 0:
extended_singvals = np.stack([
np.append(s, np.zeros(max_D - len(s), dtype=s.dtype))
for s in singvals
],
axis=1)
else:
extended_singvals = np.empty((0, 0), dtype=get_real_dtype(tensor.dtype))
extended_flat_singvals = np.ravel(extended_singvals)
#sort singular values
inds = np.argsort(extended_flat_singvals, kind='stable')
discarded_inds = np.zeros(0, dtype=SIZE_T)
if max_truncation_error is not None:
if relative and (len(singvals) > 0):
max_truncation_error = max_truncation_error * np.max(
[s[0] for s in singvals])
kept_inds_mask = np.sqrt(
np.cumsum(np.square(
extended_flat_singvals[inds]))) > max_truncation_error
trunc_inds_mask = np.logical_not(kept_inds_mask)
discarded_inds = inds[trunc_inds_mask]
inds = inds[kept_inds_mask]
if max_singular_values is not None:
#if the original number of non-zero singular values
#is smaller than `max_singular_values` we need to reset
#`max_singular_values` (we were filling in 0.0 into singular
#value blocks to facilitate truncation steps, thus we could end up
#with more singular values than originally there).
if max_singular_values > orig_num_singvals:
max_singular_values = orig_num_singvals
if max_singular_values < len(inds):
discarded_inds = np.append(discarded_inds,
inds[:(-1) * max_singular_values])
inds = inds[(-1) * max_singular_values::]
if len(inds) == 0:
#special case of truncation to 0 dimension;
warnings.warn("svd_decomposition truncated to 0 dimensions.")
if extended_singvals.shape[1] > 0:
#pylint: disable=no-member
keep = np.divmod(inds, extended_singvals.shape[1])
disc = np.divmod(discarded_inds, extended_singvals.shape[1])
else:
keep = (np.zeros(1, dtype=SIZE_T), np.zeros(1, dtype=SIZE_T))
disc = (np.zeros(0, dtype=SIZE_T), np.zeros(0, dtype=SIZE_T))
newsingvals = [
extended_singvals[keep[0][keep[1] == n], keep[1][keep[1] == n]][::-1]
for n in range(extended_singvals.shape[1])
]
discsingvals = [
extended_singvals[disc[0][disc[1] == n], disc[1][disc[1] == n]][::-1]
for n in range(extended_singvals.shape[1])
]
new_block_size = [len(s) for s in newsingvals]
discsingvals = [
d[:(orig_block_size[n] - new_block_size[n])]
for n, d in enumerate(discsingvals)
]
singvals = newsingvals
discarded_singvals = discsingvals
if len(singvals) > 0:
left_singval_charge_labels = np.concatenate([
np.full(singvals[n].shape[0], fill_value=n, dtype=np.int16)
for n in range(len(singvals))
])
all_singvals = np.concatenate(singvals)
#define the new charges on the two central bonds
left_charge_labels = np.concatenate([
np.full(len(singvals[n]), fill_value=n, dtype=np.int16)
for n in range(len(u_blocks))
])
right_charge_labels = np.concatenate([
np.full(len(singvals[n]), fill_value=n, dtype=np.int16)
for n in range(len(v_blocks))
])
all_ublocks = np.concatenate([
np.ravel(np.transpose(u_blocks[n][:, 0:len(singvals[n])]))
for n in range(len(u_blocks))
])
all_vblocks = np.concatenate([
np.ravel(v_blocks[n][0:len(singvals[n]), :])
for n in range(len(v_blocks))
])
else:
left_singval_charge_labels = np.empty(0, dtype=np.int16)
all_singvals = np.empty(0, dtype=get_real_dtype(tensor.dtype))
left_charge_labels = np.empty(0, dtype=np.int16)
right_charge_labels = np.empty(0, dtype=np.int16)
all_ublocks = np.empty(0, dtype=get_real_dtype(tensor.dtype))
all_vblocks = np.empty(0, dtype=get_real_dtype(tensor.dtype))
if len(discarded_singvals) > 0:
tmp_labels = [
np.full(discarded_singvals[n].shape[0], fill_value=n, dtype=np.int16)
for n in range(len(discarded_singvals))
]
left_discarded_singval_charge_labels = np.concatenate(tmp_labels)
all_discarded_singvals = np.concatenate(discarded_singvals)
else:
left_discarded_singval_charge_labels = np.empty(0, dtype=np.int16)
all_discarded_singvals = np.empty(0, dtype=get_real_dtype(tensor.dtype))
left_singval_charge = charges[left_singval_charge_labels]
S = ChargeArray(all_singvals, [left_singval_charge], [False])
left_discarded_singval_charge = charges[left_discarded_singval_charge_labels]
Sdisc = ChargeArray(all_discarded_singvals, [left_discarded_singval_charge],
[False])
new_left_charge = charges[left_charge_labels]
new_right_charge = charges[right_charge_labels]
#get the indices of the new tensors U,S and V
charges_u = [new_left_charge] + [matrix._charges[o] for o in matrix._order[0]]
order_u = [[0]] + [list(np.arange(1, len(matrix._order[0]) + 1))]
flows_u = [True] + [matrix._flows[o] for o in matrix._order[0]]
charges_v = [new_right_charge
] + [matrix._charges[o] for o in matrix._order[1]]
flows_v = [False] + [matrix._flows[o] for o in matrix._order[1]]
order_v = [[0]] + [list(np.arange(1, len(matrix._order[1]) + 1))]
#We fill in data into the transposed U
U = BlockSparseTensor(
all_ublocks,
charges=charges_u,
flows=flows_u,
order=order_u,
check_consistency=False).transpose((1, 0))
V = BlockSparseTensor(
all_vblocks,
charges=charges_v,
flows=flows_v,
order=order_v,
check_consistency=False)
left_shape = left_dims + (S.shape[0],)
right_shape = (S.shape[0],) + right_dims
return U.reshape(left_shape), S, V.reshape(right_shape), Sdisc |
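# Simplified numpy sketch of the truncation strategy above (hypothetical toy
# example, not the BlockSparseTensor API): each charge block is SVD'd on its
# own, then the singular values of all blocks are sorted together and only
# the globally largest `max_singular_values` are kept, which fixes how many
# singular values survive in every block.
import numpy as np

blocks = [np.random.rand(4, 3), np.random.rand(2, 5)]     # two charge blocks
svds = [np.linalg.svd(b, full_matrices=False) for b in blocks]
singvals = [s for _, s, _ in svds]

max_singular_values = 4
all_s = np.concatenate(singvals)
labels = np.concatenate([np.full(len(s), n) for n, s in enumerate(singvals)])
keep = np.argsort(all_s)[::-1][:max_singular_values]      # globally largest values

kept_per_block = [int(np.sum(labels[keep] == n)) for n in range(len(blocks))]
print(kept_per_block)   # how many singular values each block retains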
Computes the QR decomposition of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details. | def qr(bt, tensor: BlockSparseTensor, pivot_axis: int) -> Tuple[Tensor, Tensor]:
"""Computes the QR decomposition of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details.
"""
left_dims = tensor.shape[:pivot_axis]
right_dims = tensor.shape[pivot_axis:]
tensor = bt.reshape(tensor, [np.prod(left_dims), np.prod(right_dims)])
q, r = bt.qr(tensor)
center_dim = q.shape[1]
q = bt.reshape(q, list(left_dims) + [center_dim])
r = bt.reshape(r, [center_dim] + list(right_dims))
return q, r |
Computes the RQ (reversed QR) decomposition of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details. | def rq(bt, tensor: BlockSparseTensor, pivot_axis: int) -> Tuple[Tensor, Tensor]:
"""Computes the RQ (reversed QR) decomposition of a tensor.
See tensornetwork.backends.tensorflow.decompositions for details.
"""
left_dims = tensor.shape[:pivot_axis]
right_dims = tensor.shape[pivot_axis:]
tensor = bt.reshape(tensor, [np.prod(left_dims), np.prod(right_dims)])
q, r = bt.qr(bt.conj(bt.transpose(tensor, (1, 0))))
r, q = bt.conj(bt.transpose(r, (1, 0))), bt.conj(bt.transpose(
q, (1, 0))) #M=r*q at this point
center_dim = r.shape[1]
r = bt.reshape(r, list(left_dims) + [center_dim])
q = bt.reshape(q, [center_dim] + list(right_dims))
return r, q |
Computes the singular value decomposition (SVD) of a tensor.
The SVD is performed by treating the tensor as a matrix, with an effective
left (row) index resulting from combining the axes `tensor.shape[:pivot_axis]`
and an effective right (column) index resulting from combining the axes
`tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2, then
`u` would have shape (2, 3, 6), `s` would have shape (6), and `vh` would
have shape (6, 4, 5).
If `max_singular_values` is set to an integer, the SVD is truncated to keep
at most this many singular values.
If `max_truncation_error > 0`, as many singular values will be truncated as
possible, so that the truncation error (the norm of discarded singular
values) is at most `max_truncation_error`.
If `relative` is set `True` then `max_truncation_error` is understood
relative to the largest singular value.
If both `max_singular_values` and `max_truncation_error` are specified, the
number of retained singular values will be
`min(max_singular_values, nsv_auto_trunc)`, where `nsv_auto_trunc` is the
number of singular values that must be kept to maintain a truncation error
smaller than `max_truncation_error`.
The output consists of three tensors `u, s, vh` such that:
```python
u[i1,...,iN, j] * s[j] * vh[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
tf: The tensorflow module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
max_singular_values: The number of singular values to keep, or `None` to
keep them all.
max_truncation_error: The maximum allowed truncation error or `None` to not
do any truncation.
relative: Multiply `max_truncation_error` with the largest singular value.
Returns:
u: Left tensor factor.
s: Vector of ordered singular values from largest to smallest.
vh: Right tensor factor.
s_rest: Vector of discarded singular values (length zero if no
truncation). | def svd(
tf: Any,
tensor: Tensor,
pivot_axis: int,
max_singular_values: Optional[int] = None,
max_truncation_error: Optional[float] = None,
relative: Optional[bool] = False) -> Tuple[Tensor, Tensor, Tensor, Tensor]:
"""Computes the singular value decomposition (SVD) of a tensor.
The SVD is performed by treating the tensor as a matrix, with an effective
left (row) index resulting from combining the axes `tensor.shape[:pivot_axis]`
and an effective right (column) index resulting from combining the axes
`tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2, then
`u` would have shape (2, 3, 6), `s` would have shape (6), and `vh` would
have shape (6, 4, 5).
If `max_singular_values` is set to an integer, the SVD is truncated to keep
at most this many singular values.
If `max_truncation_error > 0`, as many singular values will be truncated as
possible, so that the truncation error (the norm of discarded singular
values) is at most `max_truncation_error`.
If `relative` is set `True` then `max_truncation_error` is understood
relative to the largest singular value.
If both `max_singular_values` and `max_truncation_error` are specified, the
number of retained singular values will be
`min(max_singular_values, nsv_auto_trunc)`, where `nsv_auto_trunc` is the
number of singular values that must be kept to maintain a truncation error
smaller than `max_truncation_error`.
The output consists of three tensors `u, s, vh` such that:
```python
u[i1,...,iN, j] * s[j] * vh[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
tf: The tensorflow module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
max_singular_values: The number of singular values to keep, or `None` to
keep them all.
max_truncation_error: The maximum allowed truncation error or `None` to not
do any truncation.
relative: Multiply `max_truncation_error` with the largest singular value.
Returns:
u: Left tensor factor.
s: Vector of ordered singular values from largest to smallest.
vh: Right tensor factor.
s_rest: Vector of discarded singular values (length zero if no
truncation).
"""
left_dims = tf.shape(tensor)[:pivot_axis]
right_dims = tf.shape(tensor)[pivot_axis:]
tensor = tf.reshape(tensor,
[tf.reduce_prod(left_dims),
tf.reduce_prod(right_dims)])
s, u, v = tf.linalg.svd(tensor)
if max_singular_values is None:
max_singular_values = tf.size(s, out_type=tf.int64)
else:
max_singular_values = tf.constant(max_singular_values, dtype=tf.int64)
if max_truncation_error is not None:
# Cumulative norms of singular values in ascending order.
trunc_errs = tf.sqrt(tf.cumsum(tf.square(s), reverse=True))
# If relative is true, rescale max_truncation error with the largest
# singular value to yield the absolute maximal truncation error.
if relative:
abs_max_truncation_error = max_truncation_error * s[0]
else:
abs_max_truncation_error = max_truncation_error
# We must keep at least this many singular values to ensure the
# truncation error is <= abs_max_truncation_error.
num_sing_vals_err = tf.math.count_nonzero(
tf.cast(trunc_errs > abs_max_truncation_error, dtype=tf.int32))
else:
num_sing_vals_err = max_singular_values
num_sing_vals_keep = tf.minimum(max_singular_values, num_sing_vals_err)
# tf.svd() always returns the singular values as a vector of float{32,64},
# since tf.math_ops.real is automatically applied to s. This causes
# s to possibly not be the same dtype as the original tensor, which can cause
# issues for later contractions. To fix it, we recast to the original dtype.
s = tf.cast(s, tensor.dtype)
s_rest = s[num_sing_vals_keep:]
s = s[:num_sing_vals_keep]
u = u[:, :num_sing_vals_keep]
v = v[:, :num_sing_vals_keep]
vh = tf.linalg.adjoint(v)
dim_s = tf.shape(s)[0] # must use tf.shape (not s.shape) to compile
u = tf.reshape(u, tf.concat([left_dims, [dim_s]], axis=-1))
vh = tf.reshape(vh, tf.concat([[dim_s], right_dims], axis=-1))
return u, s, vh, s_rest |
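# Hedged numpy sketch of the `max_truncation_error` logic used above:
# trunc_errs[k] is the norm of the tail s[k:], i.e. the error incurred if
# only s[:k] is kept, and we keep the smallest number of singular values for
# which that error stays below the threshold.
import numpy as np

s = np.array([3.0, 1.0, 0.5, 0.1, 0.01])             # descending singular values
max_truncation_error = 0.2

trunc_errs = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]   # error if s[k:] is dropped
num_keep = int(np.count_nonzero(trunc_errs > max_truncation_error))
print(num_keep, s[:num_keep])                          # -> 3 [3.  1.  0.5]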
Computes the QR decomposition of a tensor.
The QR decomposition is performed by treating the tensor as a matrix,
with an effective left (row) index resulting from combining the
axes `tensor.shape[:pivot_axis]` and an effective right (column)
index resulting from combining the axes `tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2,
then `q` would have shape (2, 3, 6), and `r` would
have shape (6, 4, 5).
The output consists of two tensors `Q, R` such that:
```python
Q[i1,...,iN, j] * R[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
tf: The tensorflow module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
Returns:
Q: Left tensor factor.
R: Right tensor factor. | def qr(
tf: Any,
tensor: Tensor,
pivot_axis: int,
non_negative_diagonal: bool
) -> Tuple[Tensor, Tensor]:
"""Computes the QR decomposition of a tensor.
The QR decomposition is performed by treating the tensor as a matrix,
with an effective left (row) index resulting from combining the
axes `tensor.shape[:pivot_axis]` and an effective right (column)
index resulting from combining the axes `tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2,
then `q` would have shape (2, 3, 6), and `r` would
have shape (6, 4, 5).
The output consists of two tensors `Q, R` such that:
```python
Q[i1,...,iN, j] * R[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
tf: The tensorflow module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
Returns:
Q: Left tensor factor.
R: Right tensor factor.
"""
left_dims = tf.shape(tensor)[:pivot_axis]
right_dims = tf.shape(tensor)[pivot_axis:]
tensor = tf.reshape(tensor,
[tf.reduce_prod(left_dims),
tf.reduce_prod(right_dims)])
q, r = tf.linalg.qr(tensor)
if non_negative_diagonal:
phases = tf.math.sign(tf.linalg.diag_part(r))
q = q * phases
r = phases[:, None] * r
center_dim = tf.shape(q)[1]
q = tf.reshape(q, tf.concat([left_dims, [center_dim]], axis=-1))
r = tf.reshape(r, tf.concat([[center_dim], right_dims], axis=-1))
return q, r |
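# Hedged numpy sketch of the `non_negative_diagonal` gauge fix above: QR is
# only unique up to per-column signs (phases), so rescaling the columns of q
# and the rows of r by sign(diag(r)) leaves q @ r unchanged while making the
# diagonal of r non-negative. Assumes a full-rank matrix so no diagonal
# entry of r is exactly zero.
import numpy as np

m = np.random.randn(5, 3)
q, r = np.linalg.qr(m)
phases = np.sign(np.diag(r))
q = q * phases                 # rescale columns of q
r = phases[:, None] * r        # rescale rows of r

assert np.allclose(q @ r, m)
assert np.all(np.diag(r) >= 0)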
Computes the RQ decomposition of a tensor.
The RQ decomposition is performed by treating the tensor as a matrix,
with an effective left (row) index resulting from combining the axes
`tensor.shape[:pivot_axis]` and an effective right (column) index
resulting from combining the axes `tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2,
then `r` would have shape (2, 3, 6), and `q` would
have shape (6, 4, 5).
The output consists of two tensors `R, Q` such that:
```python
R[i1,...,iN, j] * Q[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
tf: The tensorflow module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
Returns:
R: Left tensor factor.
Q: Right tensor factor.
tf: Any,
tensor: Tensor,
pivot_axis: int,
non_negative_diagonal: bool
) -> Tuple[Tensor, Tensor]:
"""Computes the RQ decomposition of a tensor.
The RQ decomposition is performed by treating the tensor as a matrix,
with an effective left (row) index resulting from combining the axes
`tensor.shape[:pivot_axis]` and an effective right (column) index
resulting from combining the axes `tensor.shape[pivot_axis:]`.
For example, if `tensor` had a shape (2, 3, 4, 5) and `pivot_axis` was 2,
then `r` would have shape (2, 3, 6), and `q` would
have shape (6, 4, 5).
The output consists of two tensors `R, Q` such that:
```python
R[i1,...,iN, j] * Q[j, k1,...,kM] == tensor[i1,...,iN, k1,...,kM]
```
Note that the output ordering matches numpy.linalg.svd rather than tf.svd.
Args:
tf: The tensorflow module.
tensor: A tensor to be decomposed.
pivot_axis: Where to split the tensor's axes before flattening into a
matrix.
Returns:
R: Left tensor factor.
Q: Right tensor factor.
"""
left_dims = tf.shape(tensor)[:pivot_axis]
right_dims = tf.shape(tensor)[pivot_axis:]
tensor = tf.reshape(tensor,
[tf.reduce_prod(left_dims),
tf.reduce_prod(right_dims)])
q, r = tf.linalg.qr(tf.math.conj(tf.transpose(tensor)))
if non_negative_diagonal:
phases = tf.math.sign(tf.linalg.diag_part(r))
q = q * phases
r = phases[:, None] * r
r, q = tf.math.conj(tf.transpose(r)), tf.math.conj(
tf.transpose(q)) #M=r*q at this point
center_dim = tf.shape(r)[1]
r = tf.reshape(r, tf.concat([left_dims, [center_dim]], axis=-1))
q = tf.reshape(q, tf.concat([[center_dim], right_dims], axis=-1))
return r, q |
Tensor contraction of a and b along specified axes.
Tensordot (also known as tensor contraction) sums the product of elements
from `a` and `b` over the indices specified by `a_axes` and `b_axes`.
The lists `a_axes` and `b_axes` specify those pairs of axes along which to
contract the tensors. The axis `a_axes[i]` of `a` must have the same dimension
as axis `b_axes[i]` of `b` for all `i` in `range(0, len(a_axes))`. The lists
`a_axes` and `b_axes` must have identical length and consist of unique
integers that specify valid axes for each of the tensors.
This operation corresponds to `numpy.tensordot(a, b, axes)`.
Example 1: When `a` and `b` are matrices (order 2), the case `axes = 1`
is equivalent to matrix multiplication.
Example 2: When `a` and `b` are matrices (order 2), the case
`axes = [[1], [0]]` is equivalent to matrix multiplication.
Example 3: Suppose that \\(a_{ijk}\\) and \\(b_{lmn}\\) represent two
tensors of order 3. Then, `contract(a, b, [[0], [2]])` is the order 4 tensor
\\(c_{jklm}\\) whose entry
corresponding to the indices \\((j,k,l,m)\\) is given by:
\\( c_{jklm} = \sum_i a_{ijk} b_{lmi} \\).
In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`.
Args:
tf: The TensorFlow module. This must be passed in instead of imported
since we don't assume users have TensorFlow installed.
a: `Tensor` of type `float32` or `float64`.
b: `Tensor` with the same type as `a`.
axes: Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k].
If axes is a scalar, sum over the last N axes of a and the first N axes of
b in order. If axes is a list or `Tensor` the first and second row contain
the set of unique integers specifying axes along which the contraction is
computed, for `a` and `b`, respectively. The number of axes for `a` and
`b` must be equal.
name: A name for the operation (optional).
Returns:
A `Tensor` with the same type as `a`.
Raises:
ValueError: If the shapes of `a`, `b`, and `axes` are incompatible.
IndexError: If the values in axes exceed the rank of the corresponding
tensor. | def tensordot(tf, a, b, axes, name: Optional[Text] = None) -> Tensor:
r"""Tensor contraction of a and b along specified axes.
Tensordot (also known as tensor contraction) sums the product of elements
from `a` and `b` over the indices specified by `a_axes` and `b_axes`.
The lists `a_axes` and `b_axes` specify those pairs of axes along which to
contract the tensors. The axis `a_axes[i]` of `a` must have the same dimension
as axis `b_axes[i]` of `b` for all `i` in `range(0, len(a_axes))`. The lists
`a_axes` and `b_axes` must have identical length and consist of unique
integers that specify valid axes for each of the tensors.
This operation corresponds to `numpy.tensordot(a, b, axes)`.
Example 1: When `a` and `b` are matrices (order 2), the case `axes = 1`
is equivalent to matrix multiplication.
Example 2: When `a` and `b` are matrices (order 2), the case
`axes = [[1], [0]]` is equivalent to matrix multiplication.
Example 3: Suppose that \\(a_{ijk}\\) and \\(b_{lmn}\\) represent two
tensors of order 3. Then, `contract(a, b, [[0], [2]])` is the order 4 tensor
\\(c_{jklm}\\) whose entry
corresponding to the indices \\((j,k,l,m)\\) is given by:
\\( c_{jklm} = \sum_i a_{ijk} b_{lmi} \\).
In general, `order(c) = order(a) + order(b) - 2*len(axes[0])`.
Args:
tf: The TensorFlow module. This must be passed in instead of imported
since we don't assume users have TensorFlow installed.
a: `Tensor` of type `float32` or `float64`.
b: `Tensor` with the same type as `a`.
axes: Either a scalar `N`, or a list or an `int32` `Tensor` of shape [2, k].
If axes is a scalar, sum over the last N axes of a and the first N axes of
b in order. If axes is a list or `Tensor` the first and second row contain
the set of unique integers specifying axes along which the contraction is
computed, for `a` and `b`, respectively. The number of axes for `a` and
`b` must be equal.
name: A name for the operation (optional).
Returns:
A `Tensor` with the same type as `a`.
Raises:
ValueError: If the shapes of `a`, `b`, and `axes` are incompatible.
IndexError: If the values in axes exceed the rank of the corresponding
tensor.
"""
def _tensordot_should_flip(contraction_axes: List[int],
free_axes: List[int]) -> bool:
"""Helper method to determine axis ordering.
We minimize the average distance the indices would have to move under the
transposition.
Args:
contraction_axes: The axes to be contracted.
free_axes: The free axes.
Returns:
should_flip: `True` if `contraction_axes` should be moved to the left,
`False` if they should be moved to the right.
"""
# NOTE: This will fail if the arguments contain any Tensors.
if contraction_axes and free_axes:
return bool(np.mean(contraction_axes) < np.mean(free_axes))
return False
def _transpose_if_necessary(tensor: Tensor, perm: List[int]) -> Tensor:
"""Like transpose(), but avoids creating a new tensor if possible.
Although the graph optimizer should kill trivial transposes, it is
best not to add them in the first place!
"""
if perm == list(range(len(perm))):
return tensor
return tf.transpose(tensor, perm)
def _reshape_if_necessary(tensor: Tensor, new_shape: List[int]) -> Tensor:
"""Like reshape(), but avoids creating a new tensor if possible.
Assumes shapes are both fully specified.
"""
cur_shape = tensor.get_shape().as_list()
if (len(new_shape) == len(cur_shape) and
all(d0 == d1 for d0, d1 in zip(cur_shape, new_shape))):
return tensor
return tf.reshape(tensor, new_shape)
def _tensordot_reshape(
a: Tensor,
axes: Union[Sequence[int], Tensor],
is_right_term=False
) -> Tuple[Tensor, Union[List[int], Tensor], Optional[List[int]], bool]:
"""Helper method to perform transpose and reshape for contraction op.
This method is helpful in reducing `math_ops.tensordot` to `math_ops.matmul`
using `array_ops.transpose` and `array_ops.reshape`. The method takes a
tensor and performs the correct transpose and reshape operation for a given
set of indices. It returns the reshaped tensor as well as a list of indices
necessary to reshape the tensor again after matrix multiplication.
Args:
a: `Tensor`.
axes: List or `int32` `Tensor` of unique indices specifying valid axes of
`a`.
is_right_term: Whether `a` is the right (second) argument to `matmul`.
Returns:
A tuple `(reshaped_a, free_dims, free_dims_static, transpose_needed)`
where `reshaped_a` is the tensor `a` reshaped to allow contraction via
`matmul`, `free_dims` is either a list of integers or an `int32`
`Tensor`, depending on whether the shape of a is fully specified, and
free_dims_static is either a list of integers and None values, or None,
representing the inferred static shape of the free dimensions.
`transpose_needed` indicates whether `reshaped_a` must be transposed,
or not, when calling `matmul`.
"""
if a.get_shape().is_fully_defined() and isinstance(axes, (list, tuple)):
shape_a = a.get_shape().as_list()
# NOTE: This will fail if axes contains any tensors
axes = [i if i >= 0 else i + len(shape_a) for i in axes]
free = [i for i in range(len(shape_a)) if i not in axes]
flipped = _tensordot_should_flip(axes, free)
free_dims = [shape_a[i] for i in free]
prod_free = int(np.prod([shape_a[i] for i in free]))
prod_axes = int(np.prod([shape_a[i] for i in axes]))
perm = axes + free if flipped else free + axes
new_shape = [prod_axes, prod_free] if flipped else [prod_free, prod_axes]
transposed_a = _transpose_if_necessary(a, perm)
reshaped_a = _reshape_if_necessary(transposed_a, new_shape)
transpose_needed = (not flipped) if is_right_term else flipped
return reshaped_a, free_dims, free_dims, transpose_needed
if a.get_shape().ndims is not None and isinstance(axes, (list, tuple)):
shape_a = a.get_shape().as_list()
axes = [i if i >= 0 else i + len(shape_a) for i in axes]
free = [i for i in range(len(shape_a)) if i not in axes]
flipped = _tensordot_should_flip(axes, free)
perm = axes + free if flipped else free + axes
axes_dims = [shape_a[i] for i in axes]
free_dims = [shape_a[i] for i in free]
free_dims_static = free_dims
axes = tf.convert_to_tensor(axes, dtype=tf.dtypes.int32, name="axes")
free = tf.convert_to_tensor(free, dtype=tf.dtypes.int32, name="free")
shape_a = tf.shape(a)
transposed_a = _transpose_if_necessary(a, perm)
else:
free_dims_static = None
shape_a = tf.shape(a)
rank_a = tf.rank(a)
axes = tf.convert_to_tensor(axes, dtype=tf.dtypes.int32, name="axes")
axes = tf.where(axes >= 0, axes, axes + rank_a)
free, _ = tf.compat.v1.setdiff1d(tf.range(rank_a), axes)
# Matmul does not accept tensors for its transpose arguments, so fall
# back to the previous, fixed behavior.
# NOTE(amilsted): With a suitable wrapper for `matmul` using e.g. `case`
# to match transpose arguments to tensor values, we could also avoid
# unneeded transposes in this case at the expense of a somewhat more
# complicated graph. Unclear whether this would be beneficial overall.
flipped = is_right_term
perm = (
tf.concat([axes, free], 0) if flipped else tf.concat([free, axes], 0))
transposed_a = tf.transpose(a, perm)
free_dims = tf.gather(shape_a, free)
axes_dims = tf.gather(shape_a, axes)
prod_free_dims = tf.reduce_prod(free_dims)
prod_axes_dims = tf.reduce_prod(axes_dims)
if flipped:
new_shape = tf.stack([prod_axes_dims, prod_free_dims])
else:
new_shape = tf.stack([prod_free_dims, prod_axes_dims])
reshaped_a = tf.reshape(transposed_a, new_shape)
transpose_needed = (not flipped) if is_right_term else flipped
return reshaped_a, free_dims, free_dims_static, transpose_needed
def _tensordot_axes(a: Tensor, axes) -> Tuple[Any, Any]:
"""Generates two sets of contraction axes for the two tensor arguments."""
a_shape = a.get_shape()
if isinstance(axes, tf.compat.integral_types):
if axes < 0:
raise ValueError("'axes' must be at least 0.")
if a_shape.ndims is not None:
if axes > a_shape.ndims:
raise ValueError("'axes' must not be larger than the number of "
"dimensions of tensor %s." % a)
return (list(range(a_shape.ndims - axes,
a_shape.ndims)), list(range(axes)))
rank = tf.rank(a)
return (tf.range(rank - axes, rank,
dtype=tf.int32), tf.range(axes, dtype=tf.int32))
if isinstance(axes, (list, tuple)):
if len(axes) != 2:
raise ValueError("'axes' must be an integer or have length 2.")
a_axes = axes[0]
b_axes = axes[1]
if isinstance(a_axes, tf.compat.integral_types) and \
isinstance(b_axes, tf.compat.integral_types):
a_axes = [a_axes]
b_axes = [b_axes]
# NOTE: This fails if either a_axes and b_axes are Tensors.
if len(a_axes) != len(b_axes):
raise ValueError(
"Different number of contraction axes 'a' and 'b', %s != %s." %
(len(a_axes), len(b_axes)))
# The contraction indices do not need to be permuted.
# Sort axes to avoid unnecessary permutations of a.
# NOTE: This fails if either a_axes and b_axes contain Tensors.
# pylint: disable=len-as-condition
if len(a_axes) > 0:
a_axes, b_axes = list(zip(*sorted(zip(a_axes, b_axes))))
return a_axes, b_axes
axes = tf.convert_to_tensor(axes, name="axes", dtype=tf.int32)
return axes[0], axes[1]
with tf.compat.v1.name_scope(name, "Tensordot", [a, b, axes]) as _name:
a = tf.convert_to_tensor(a, name="a")
b = tf.convert_to_tensor(b, name="b")
a_axes, b_axes = _tensordot_axes(a, axes)
a_reshape, a_free_dims, a_free_dims_static, a_transp = _tensordot_reshape(
a, a_axes)
b_reshape, b_free_dims, b_free_dims_static, b_transp = _tensordot_reshape(
b, b_axes, is_right_term=True)
ab_matmul = tf.matmul(
a_reshape, b_reshape, transpose_a=a_transp, transpose_b=b_transp)
if isinstance(a_free_dims, list) and isinstance(b_free_dims, list):
return tf.reshape(ab_matmul, a_free_dims + b_free_dims, name=_name)
a_free_dims = tf.convert_to_tensor(a_free_dims, dtype=tf.dtypes.int32)
b_free_dims = tf.convert_to_tensor(b_free_dims, dtype=tf.dtypes.int32)
product = tf.reshape(
ab_matmul, tf.concat([a_free_dims, b_free_dims], 0), name=_name)
if a_free_dims_static is not None and b_free_dims_static is not None:
product.set_shape(a_free_dims_static + b_free_dims_static)
return product |
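# Minimal numpy sketch (hypothetical toy shapes) of the reduction performed
# above: once the free and contracted axes of each operand are transposed
# together and merged into single effective indices, tensordot is an
# ordinary matrix product followed by a reshape.
import numpy as np

a = np.random.randn(2, 3, 4)
b = np.random.randn(4, 3, 5)
a_axes, b_axes = [1, 2], [1, 0]          # contract a's axes (1, 2) with b's (1, 0)

a_mat = a.reshape(2, 12)                                   # free axis already leads
b_mat = np.transpose(b, [1, 0, 2]).reshape(12, 5)          # contracted axes first
result = (a_mat @ b_mat).reshape(2, 5)

assert np.allclose(result, np.tensordot(a, b, axes=(a_axes, b_axes)))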
Compare the shapes of `tensor1` and `tensor2`. Return `True` if the shapes
are identical.
Args:
tensor1, tensor2: Two tensors.
Returns:
bool: The result of comparing the shapes. | def compare_shapes(tensor1: ChargeArray, tensor2: ChargeArray) -> bool:
"""
Compare the shapes of `tensor1` and `tensor2`. Return `True` if the shapes
are identical.
Args:
tensor1, tensor2: Two tensors.
Returns:
bool: The result of comparing the shapes.
"""
if tensor1.shape != tensor2.shape:
return False
if len(tensor1._charges) != len(tensor2._charges):
return False
if not all(
charge_equal(c1, c2)
for c1, c2 in zip(tensor1._charges, tensor2._charges)):
return False
if not all(f1 == f2 for f1, f2 in zip(tensor1._flows, tensor2._flows)):
return False
return True |
Compute the outer product of two `BlockSparseTensor`.
The first `tensor1.ndim` indices of the resulting tensor are the
indices of `tensor1`, the last `tensor2.ndim` indices are those
of `tensor2`.
Args:
tensor1: A tensor.
tensor2: A tensor.
Returns:
BlockSparseTensor: The result of taking the outer product. | def outerproduct(tensor1: BlockSparseTensor,
tensor2: BlockSparseTensor) -> BlockSparseTensor:
"""
Compute the outer product of two `BlockSparseTensor`.
The first `tensor1.ndim` indices of the resulting tensor are the
indices of `tensor1`, the last `tensor2.ndim` indices are those
of `tensor2`.
Args:
tensor1: A tensor.
tensor2: A tensor.
Returns:
BlockSparseTensor: The result of taking the outer product.
"""
final_charges = tensor1._charges + tensor2._charges
final_flows = list(tensor1._flows) + list(tensor2._flows)
order2 = [list(np.asarray(s) + len(tensor1._charges)) for s in tensor2._order]
data = np.zeros(
compute_num_nonzero(final_charges, final_flows), dtype=tensor1.dtype)
if ((len(tensor1.data) > 0) and (len(tensor2.data) > 0)) and (len(data) > 0):
# find the location of the zero block in the output
final_block_maps, final_block_charges, _ = _find_diagonal_sparse_blocks(
final_charges, final_flows, len(tensor1._charges))
index = np.nonzero(
final_block_charges == final_block_charges.identity_charges(
dim=1))[0][0]
data[final_block_maps[index].ravel()] = np.outer(tensor1.data,
tensor2.data).ravel()
return BlockSparseTensor(
data,
charges=final_charges,
flows=final_flows,
order=tensor1._order + order2,
check_consistency=False) |
Contract two `BlockSparseTensor`s along `axes`.
Args:
tensor1: First tensor.
tensor2: Second tensor.
axes: The axes to contract.
Returns:
BlockSparseTensor: The result of the tensor contraction. | def tensordot(
tensor1: BlockSparseTensor,
tensor2: BlockSparseTensor,
axes: Optional[Union[Sequence[Sequence[int]], Sequence[int], int]] = 2
) -> BlockSparseTensor:
"""
Contract two `BlockSparseTensor`s along `axes`.
Args:
tensor1: First tensor.
tensor2: Second tensor.
axes: The axes to contract.
Returns:
BlockSparseTensor: The result of the tensor contraction.
"""
#process scalar input for `axes`
if isinstance(axes, (np.integer, int)):
axes = [
np.arange(tensor1.ndim - axes, tensor1.ndim, dtype=np.int16),
np.arange(0, axes, dtype=np.int16)
]
elif isinstance(axes[0], (np.integer, int)):
if len(axes) > 1:
raise ValueError("invalid input `axes = {}` to tensordot".format(axes))
axes = [np.array(axes, dtype=np.int16), np.array(axes, dtype=np.int16)]
axes1 = axes[0]
axes2 = axes[1]
if len(axes1) != len(axes2):
raise ValueError(
"`axes1 = {}` and `axes2 = {}` have to be of same length. ".format(
axes1, axes2))
if len(axes1) > len(tensor1.shape):
raise ValueError(
"`axes1 = {}` is incompatible with `tensor1.shape = {}. ".format(
axes1, tensor1.shape))
if len(axes2) > len(tensor2.shape):
raise ValueError(
"`axes2 = {}` is incompatible with `tensor2.shape = {}. ".format(
axes2, tensor2.shape))
#special case outer product
if len(axes1) == 0:
return outerproduct(tensor1, tensor2)
#more checks
if max(axes1) >= len(tensor1.shape):
raise ValueError(
"rank of `tensor1` is smaller than `max(axes1) = {}.`".format(
max(axes1)))
if max(axes2) >= len(tensor2.shape):
raise ValueError(
"rank of `tensor2` is smaller than `max(axes2) = {}`".format(
max(axes2)))
contr_flows_1 = []
contr_flows_2 = []
contr_charges_1 = []
contr_charges_2 = []
for a in axes1:
contr_flows_1.extend(tensor1._flows[tensor1._order[a]])
contr_charges_1.extend([tensor1._charges[n] for n in tensor1._order[a]])
for a in axes2:
contr_flows_2.extend(tensor2._flows[tensor2._order[a]])
contr_charges_2.extend([tensor2._charges[n] for n in tensor2._order[a]])
if len(contr_charges_2) != len(contr_charges_1):
raise ValueError(
"`axes1 = {}` and `axes2 = {}` have incompatible elementary"
" shapes {} and {}".format(axes1, axes2,
[e.dim for e in contr_charges_1],
[e.dim for e in contr_charges_2]))
if not np.all(
np.asarray(contr_flows_1) == np.logical_not(np.asarray(contr_flows_2))):
raise ValueError(
"`axes1 = {}` and `axes2 = {}` have incompatible elementary"
" flows {} and {}".format(axes1, axes2, contr_flows_1, contr_flows_2))
charge_check = [
charge_equal(c1, c2) for c1, c2 in zip(contr_charges_1, contr_charges_2)
]
if not np.all(charge_check):
inds = np.nonzero(np.logical_not(charge_check))[0]
raise ValueError(
"`axes = {}` of tensor1 and `axes = {}` of tensor2 have "
"incompatible charges {} and {}".format(
np.array(axes1)[inds],
np.array(axes2)[inds], [contr_charges_1[i] for i in inds],
[contr_charges_2[i] for i in inds]))
#checks finished
#special case inner product (returns an ndim=0 tensor)
if (len(axes1) == tensor1.ndim) and (len(axes2) == tensor2.ndim):
t1 = tensor1.transpose(axes1).contiguous()
t2 = tensor2.transpose(axes2).contiguous()
return BlockSparseTensor(
data=np.dot(t1.data, t2.data),
charges=[],
flows=[],
order=[],
check_consistency=False)
#in all other cases we perform a regular tensordot
free_axes1 = sorted(set(np.arange(tensor1.ndim)) - set(axes1))
free_axes2 = sorted(set(np.arange(tensor2.ndim)) - set(axes2))
new_order1 = [tensor1._order[n] for n in free_axes1
] + [tensor1._order[n] for n in axes1]
new_order2 = [tensor2._order[n] for n in axes2
] + [tensor2._order[n] for n in free_axes2]
flat_order_1 = flatten(new_order1)
flat_order_2 = flatten(new_order2)
flat_charges_1, flat_flows_1 = tensor1._charges, tensor1._flows
flat_charges_2, flat_flows_2 = tensor2._charges, tensor2._flows
left_charges = []
right_charges = []
left_flows = []
right_flows = []
left_order = []
right_order = []
s = 0
for n in free_axes1:
left_charges.extend([tensor1._charges[o] for o in tensor1._order[n]])
left_order.append(list(np.arange(s, s + len(tensor1._order[n]))))
s += len(tensor1._order[n])
left_flows.extend([tensor1._flows[o] for o in tensor1._order[n]])
s = 0
for n in free_axes2:
right_charges.extend([tensor2._charges[o] for o in tensor2._order[n]])
right_order.append(
list(len(left_charges) + np.arange(s, s + len(tensor2._order[n]))))
s += len(tensor2._order[n])
right_flows.extend([tensor2._flows[o] for o in tensor2._order[n]])
tr_sparse_blocks_1, charges1, shapes_1 = _find_transposed_diagonal_sparse_blocks(#pylint: disable=line-too-long
flat_charges_1, flat_flows_1, len(left_charges), flat_order_1)
tr_sparse_blocks_2, charges2, shapes_2 = _find_transposed_diagonal_sparse_blocks(#pylint: disable=line-too-long
flat_charges_2, flat_flows_2, len(contr_charges_2), flat_order_2)
common_charges, label_to_common_1, label_to_common_2 = intersect(
charges1.unique_charges,
charges2.unique_charges,
axis=0,
return_indices=True)
#Note: `cs` may contain charges that are not present in `common_charges`
charges = left_charges + right_charges
flows = left_flows + right_flows
sparse_blocks, cs, _ = _find_transposed_diagonal_sparse_blocks(
charges, flows, len(left_charges), list(range(len(charges))))
num_nonzero_elements = np.int64(np.sum([len(v) for v in sparse_blocks]))
#Note that empty is not a viable choice here.
data = np.zeros(
num_nonzero_elements, dtype=np.result_type(tensor1.dtype, tensor2.dtype))
label_to_common_final = intersect(
cs.unique_charges, common_charges, axis=0, return_indices=True)[1]
for n in range(common_charges.shape[0]):
n1 = label_to_common_1[n]
n2 = label_to_common_2[n]
nf = label_to_common_final[n]
data[sparse_blocks[nf].ravel()] = np.ravel(
np.matmul(tensor1.data[tr_sparse_blocks_1[n1].reshape(shapes_1[:, n1])],
tensor2.data[tr_sparse_blocks_2[n2].reshape(shapes_2[:,
n2])]))
res = BlockSparseTensor(
data=data,
charges=charges,
flows=flows,
order=left_order + right_order,
check_consistency=False)
return res |
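# Toy illustration (plain python dicts + numpy, not the BlockSparseTensor
# API) of the contraction strategy above: a charge-conserving matrix product
# decomposes into independent matmuls between the blocks of the two operands
# that carry the same charge.
import numpy as np

A_blocks = {0: np.random.rand(2, 3), 1: np.random.rand(4, 2)}   # charge -> block
B_blocks = {0: np.random.rand(3, 5), 1: np.random.rand(2, 1)}

common_charges = set(A_blocks) & set(B_blocks)
C_blocks = {q: A_blocks[q] @ B_blocks[q] for q in common_charges}
print({q: C_blocks[q].shape for q in C_blocks})    # {0: (2, 5), 1: (4, 1)}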
Initialize a 1d np.ndarray using `numpy_initializer` function.
Args:
numpy_initializer: Callable, should return a 1d np.ndarray.
Function call signature: `numpy_initializer(*args, **kwargs)`.
comp_num_elements: Callable, computes the number of elements of
the returned 1d np.ndarray, using `numel = comp_num_elements(charges, flows)`.
indices: List of `Index` objects.
*args, **kwargs: Arguments to `numpy_initializer`.
Returns:
np.ndarray: An initialized numpy array.
List[BaseCharge]: A list containing the flattened charges in `indices`
List[bool]: The flattened flows of `indices`.
List[List]: A list of list of int, the order information needed to
initialize a BlockSparseTensor. | def _data_initializer(
numpy_initializer: Callable, comp_num_elements: Callable,
indices: Sequence[Index], *args, **kwargs
) -> Tuple[np.ndarray, List[BaseCharge], List[bool], List[List[int]]]:
"""
Initialize a 1d np.ndarray using `numpy_initializer` function.
Args:
numpy_initializer: Callable, should return a 1d np.ndarray.
Function call signature: `numpy_initializer(*args, **kwargs)`.
comp_num_elements: Callable, computes the number of elements of
the returned 1d np.ndarray, using `numel = comp_num_elements(charges, flows)`.
indices: List of `Index` objects.
*args, **kwargs: Arguments to `numpy_initializer`.
Returns:
np.ndarray: An initialized numpy array.
List[BaseCharge]: A list containing the flattened charges in `indices`
List[bool]: The flattened flows of `indices`.
List[List]: A list of list of int, the order information needed to
initialize a BlockSparseTensor.
"""
charges, flows = get_flat_meta_data(indices)
num_elements = comp_num_elements(charges, flows)
tmp = np.append(0, np.cumsum([len(i.flat_charges) for i in indices]))
order = [list(np.arange(tmp[n], tmp[n + 1])) for n in range(len(tmp) - 1)]
data = numpy_initializer(num_elements, *args, **kwargs)
return data, charges, flows, order |
Return charges and flows of flattened `indices`.
Args:
indices: A list of `Index` objects.
Returns:
List[BaseCharge]: The flattened charges.
List[bool]: The flattened flows. | def get_flat_meta_data(indices: Sequence[Index]) -> Tuple[List, List]:
"""
Return charges and flows of flattened `indices`.
Args:
indices: A list of `Index` objects.
Returns:
List[BaseCharge]: The flattened charges.
List[bool]: The flattened flows.
"""
charges = []
flows = []
for i in indices:
flows.extend(i.flat_flows)
charges.extend(i.flat_charges)
return charges, flows |
Compute lookup table for how dense index positions map
to sparse index positions, treating only those elements as non-zero
whose charges fuse to `target_charges`.
Args:
charges: List of `BaseCharge` objects.
flows: A list of `bool`; the flow directions.
target_charges: A `BaseCharge`; the target charges for which
the fusion of `charges` is non-zero.
Returns:
lookup: An np.ndarray of positive numbers between `0` and
`len(unique_charges)`. The position of values `n` in `lookup`
are positions with charge values `unique_charges[n]`.
unique_charges: The unique charges of fusion of `charges`
label_to_unique: The integer labels of the unique charges. | def compute_sparse_lookup(
charges: List[BaseCharge], flows: Union[np.ndarray, List[bool]],
target_charges: BaseCharge) -> Tuple[np.ndarray, BaseCharge, np.ndarray]:
"""
Compute lookup table for how dense index positions map
to sparse index positions, treating only those elements as non-zero
whose charges fuse to `target_charges`.
Args:
charges: List of `BaseCharge` objects.
flows: A list of `bool`; the flow directions.
target_charges: A `BaseCharge`; the target charges for which
the fusion of `charges` is non-zero.
Returns:
lookup: An np.ndarray of positive numbers between `0` and
`len(unique_charges)`. The position of values `n` in `lookup`
are positions with charge values `unique_charges[n]`.
unique_charges: The unique charges of fusion of `charges`
label_to_unique: The integer labels of the unique charges.
"""
fused_charges = fuse_charges(charges, flows)
unique_charges, inverse = unique(fused_charges.charges, return_inverse=True)
_, label_to_unique, _ = intersect(
unique_charges, target_charges.charges, return_indices=True)
# _, label_to_unique, _ = unique_charges.intersect(
# target_charges, return_indices=True)
tmp = np.full(
unique_charges.shape[0], fill_value=-1, dtype=charges[0].label_dtype)
obj = charges[0].__new__(type(charges[0]))
obj.__init__(
charges=unique_charges,
charge_labels=None,
charge_types=charges[0].charge_types)
tmp[label_to_unique] = label_to_unique
lookup = tmp[inverse]
lookup = lookup[lookup >= 0]
return lookup, obj, np.sort(label_to_unique) |
For a list of charges, computes all possible fused charges resulting
from fusing `charges` and their respective degeneracies
Args:
charges: List of `BaseCharge`, one for each leg of a
tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
Returns:
BaseCharge: The unique fused charges.
np.ndarray: The degeneracies of each unique fused charge.
charges: List[BaseCharge],
flows: Union[np.ndarray, List[bool]]) -> Tuple[BaseCharge, np.ndarray]:
"""
For a list of charges, computes all possible fused charges resulting
from fusing `charges` and their respective degeneracies
Args:
charges: List of `BaseCharge`, one for each leg of a
tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
Returns:
BaseCharge: The unique fused charges.
np.ndarray: The degeneracies of each unique fused charge.
"""
if len(charges) == 1:
return (charges[0] * flows[0]).unique(return_counts=True)
dims = [c.dim for c in charges]
# for small dims is faster to fuse all and use unique
# directly
if reduce(mul, dims, 1) < 20000:
fused = fuse_charges(charges, flows)
return fused.unique(return_counts=True)
partition = _find_best_partition(dims)
fused_left = fuse_charges(charges[:partition], flows[:partition])
fused_right = fuse_charges(charges[partition:], flows[partition:])
left_unique, left_degens = fused_left.unique(return_counts=True)
right_unique, right_degens = fused_right.unique(return_counts=True)
fused = left_unique + right_unique
unique_charges, charge_labels = fused.unique(return_inverse=True)
fused_degeneracies = fuse_degeneracies(left_degens, right_degens)
new_ord = np.argsort(charge_labels)
all_degens = np.cumsum(fused_degeneracies[new_ord])
cum_degens = all_degens[np.flatnonzero(np.diff(charge_labels[new_ord]))]
final_degeneracies = np.append(cum_degens, all_degens[-1]) - np.append(
0, cum_degens)
return unique_charges, final_degeneracies |
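# Hedged numpy sketch for U(1) charges (toy values, trivial flows): fusing
# two legs adds their charges over all index pairs; the unique values of
# that sum and their multiplicities are the fused charges and degeneracies.
import numpy as np

leg1 = np.array([0, 0, 1, 1, 2])      # U(1) charges of leg 1
leg2 = np.array([0, -1, -1, 1])       # U(1) charges of leg 2

fused = (leg1[:, None] + leg2[None, :]).ravel()
unique_charges, degeneracies = np.unique(fused, return_counts=True)
print(unique_charges)                 # [-1  0  1  2  3]
print(degeneracies)                   # [4 6 6 3 1]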
For a list of charges, compute all possible fused charges resulting
from fusing `charges`.
Args:
charges: List of `BaseCharge`, one for each leg of a
tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
Returns:
BaseCharge: The unique fused charges. | def compute_unique_fused_charges(
charges: List[BaseCharge], flows: Union[np.ndarray,
List[bool]]) -> BaseCharge:
"""
For a list of charges, compute all possible fused charges resulting
from fusing `charges`.
Args:
charges: List of `BaseCharge`, one for each leg of a
tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
Returns:
BaseCharge: The unique fused charges.
"""
if len(charges) == 1:
return (charges[0] * flows[0]).unique()
accumulated_charges = (charges[0] * flows[0]).unique()
for n in range(1, len(charges)):
leg_charges = charges[n].unique()
fused_charges = accumulated_charges + leg_charges * flows[n]
accumulated_charges = fused_charges.unique()
return accumulated_charges |
Compute the number of non-zero elements, given the meta-data of
a symmetric tensor.
Args:
charges: List of `BaseCharge`, one for each leg of a
tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
Returns:
int: The number of non-zero elements. | def compute_num_nonzero(charges: List[BaseCharge],
flows: Union[np.ndarray, List[bool]]) -> int:
"""
Compute the number of non-zero elements, given the meta-data of
a symmetric tensor.
Args:
charges: List of `BaseCharge`, one for each leg of a
tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
Returns:
int: The number of non-zero elements.
"""
if np.any([len(c) == 0 for c in charges]):
return 0
#pylint: disable=line-too-long
accumulated_charges, accumulated_degeneracies = compute_fused_charge_degeneracies(
charges, flows)
res = accumulated_charges == accumulated_charges.identity_charges(dim=1)
nz_inds = np.nonzero(res)[0]
if len(nz_inds) > 0:
return np.squeeze(accumulated_degeneracies[nz_inds][0])
return 0 |
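# Continuing the toy U(1) picture (trivial flows): the number of stored
# elements of a symmetric tensor is the degeneracy of total charge 0 in the
# fusion of all its legs.
import numpy as np

legs = [np.array([0, 1]), np.array([0, -1, 1]), np.array([0, -1])]
fused = np.zeros(1, dtype=int)
for leg in legs:
    fused = (fused[:, None] + leg[None, :]).ravel()
num_nonzero = int(np.sum(fused == 0))
print(num_nonzero)                    # -> 4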
Add quantum numbers arising from combining two or more charges into a
single index, keeping only the quantum numbers that appear in
`target_charges`. Equivalent to using "combine_charges" followed
by "reduce", but is generally much more efficient.
Args:
charges: List of `BaseCharge`, one for each leg of a
tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
target_charges: n-by-D array of charges which should be kept,
with `n` the number of symmetries.
return_locations: If `True` return the location of the kept
values of the fused charges
strides: Index strides with which to compute the
returned locations of the kept elements. Defaults to trivial strides
(based on row major order).
Returns:
BaseCharge: the fused index after reduction.
np.ndarray: Locations of the fused BaseCharge charges that were kept. | def reduce_charges(charges: List[BaseCharge],
flows: Union[np.ndarray, List[bool]],
target_charges: np.ndarray,
return_locations: Optional[bool] = False,
strides: Optional[np.ndarray] = None) -> Any:
"""
Add quantum numbers arising from combining two or more charges into a
single index, keeping only the quantum numbers that appear in
`target_charges`. Equivalent to using "combine_charges" followed
by "reduce", but is generally much more efficient.
Args:
charges: List of `BaseCharge`, one for each leg of a
tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
target_charges: n-by-D array of charges which should be kept,
with `n` the number of symmetries.
return_locations: If `True` return the location of the kept
values of the fused charges
strides: Index strides with which to compute the
returned locations of the kept elements. Defaults to trivial strides
(based on row major order).
Returns:
BaseCharge: the fused index after reduction.
np.ndarray: Locations of the fused BaseCharge charges that were kept.
"""
tensor_dims = [len(c) for c in charges]
if len(charges) == 1:
# reduce single index
if strides is None:
strides = np.array([1], dtype=SIZE_T)
return charges[0].dual(flows[0]).reduce(
target_charges, return_locations=return_locations, strides=strides[0])
# find size-balanced partition of charges
partition = _find_best_partition(tensor_dims)
# compute quantum numbers for each partition
left_ind = fuse_charges(charges[:partition], flows[:partition])
right_ind = fuse_charges(charges[partition:], flows[partition:])
# compute combined qnums
comb_qnums = fuse_ndarray_charges(left_ind.unique_charges,
right_ind.unique_charges,
charges[0].charge_types)
#special case of empty charges
#pylint: disable=unsubscriptable-object
if (comb_qnums.shape[0] == 0) or (len(left_ind.charge_labels) == 0) or (len(
right_ind.charge_labels) == 0):
obj = charges[0].__new__(type(charges[0]))
obj.__init__(
np.empty((0, charges[0].num_symmetries), dtype=charges[0].dtype),
np.empty(0, dtype=charges[0].label_dtype), charges[0].charge_types)
if return_locations:
return obj, np.empty(0, dtype=SIZE_T)
return obj
unique_comb_qnums, comb_labels = unique(comb_qnums, return_inverse=True)
num_unique = unique_comb_qnums.shape[0]
# intersect combined qnums and target_charges
reduced_qnums, label_to_unique, _ = intersect(
unique_comb_qnums, target_charges, axis=0, return_indices=True)
map_to_kept = -np.ones(num_unique, dtype=charges[0].label_dtype)
map_to_kept[label_to_unique] = np.arange(len(label_to_unique))
# new_comb_labels is a matrix of shape
# (left_ind.num_unique, right_ind.num_unique)
# each row new_comb_labels[n,:] contains integer values.
# Positions where values >= 0
# denote labels of right-charges that are kept.
new_comb_labels = map_to_kept[comb_labels].reshape(
[left_ind.num_unique, right_ind.num_unique])
reduced_rows = [0] * left_ind.num_unique
for n in range(left_ind.num_unique):
temp_label = new_comb_labels[n, right_ind.charge_labels]
reduced_rows[n] = temp_label[temp_label >= 0]
reduced_labels = np.concatenate(
[reduced_rows[n] for n in left_ind.charge_labels])
obj = charges[0].__new__(type(charges[0]))
obj.__init__(reduced_qnums, reduced_labels, charges[0].charge_types)
if return_locations:
row_locs = [0] * left_ind.num_unique
if strides is not None:
# computed locations based on non-trivial strides
row_pos = fuse_stride_arrays(tensor_dims[:partition], strides[:partition])
col_pos = fuse_stride_arrays(tensor_dims[partition:], strides[partition:])
for n in range(left_ind.num_unique):
temp_label = new_comb_labels[n, right_ind.charge_labels]
temp_keep = temp_label >= 0
if strides is not None:
row_locs[n] = col_pos[temp_keep]
else:
row_locs[n] = np.where(temp_keep)[0]
if strides is not None:
reduced_locs = np.concatenate([
row_pos[n] + row_locs[left_ind.charge_labels[n]]
for n in range(left_ind.dim)
])
else:
reduced_locs = np.concatenate([
n * right_ind.dim + row_locs[left_ind.charge_labels[n]]
for n in range(left_ind.dim)
])
return obj, reduced_locs
return obj |
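# Hedged numpy sketch of the reduction above for two U(1) legs with trivial
# flows: fuse the charges in row-major order and keep only the positions
# whose fused charge appears in `target_charges`.
import numpy as np

leg1 = np.array([0, 1])
leg2 = np.array([0, -1, 1])
target_charges = np.array([0])

fused = (leg1[:, None] + leg2[None, :]).ravel()        # row-major fusion
locations = np.nonzero(np.isin(fused, target_charges))[0]
print(fused[locations], locations)                      # [0 0] [0 4]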
Find the location of all non-trivial symmetry blocks from the data vector
of a BlockSparseTensor (when viewed as a matrix across some prescribed index
bi-partition).
Args:
charges: List of `BaseCharge`, one for each leg of a tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
partition: location of tensor partition (i.e. such that the
tensor is viewed as a matrix between `charges[:partition]` and
the remaining charges).
Returns:
block_maps (List[np.ndarray]): list of integer arrays, each containing
the locations of a symmetry block in the data vector.
block_qnums (BaseCharge): The charges of the corresponding blocks.
block_dims (np.ndarray): 2-by-m array of matrix dimensions of each block. | def _find_diagonal_sparse_blocks(
charges: List[BaseCharge], flows: Union[np.ndarray, List[bool]],
partition: int) -> Tuple[List, BaseCharge, np.ndarray]:
"""
Find the location of all non-trivial symmetry blocks from the data vector
of a BlockSparseTensor (when viewed as a matrix across some prescribed index
bi-partition).
Args:
charges: List of `BaseCharge`, one for each leg of a tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
partition: location of tensor partition (i.e. such that the
tensor is viewed as a matrix between `charges[:partition]` and
the remaining charges).
Returns:
block_maps (List[np.ndarray]): list of integer arrays, each containing
the locations of a symmetry block in the data vector.
block_qnums (BaseCharge): The charges of the corresponding blocks.
block_dims (np.ndarray): 2-by-m array of matrix dimensions of each block.
"""
cacher = get_cacher()
if cacher.do_caching:
hash_val = _to_string(charges, flows, partition, list(range(len(charges))))
if hash_val in cacher.cache:
return cacher.cache[hash_val]
num_inds = len(charges)
if partition in (0, num_inds):
# special cases (matrix of trivial height or width)
num_nonzero = compute_num_nonzero(charges, flows)
block_maps = [np.arange(0, num_nonzero, dtype=SIZE_T).ravel()]
block_qnums = charges[0].identity_charges(dim=1).charges
block_dims = np.array([[1], [num_nonzero]])
if partition == len(flows):
block_dims = np.flipud(block_dims)
obj = charges[0].__new__(type(charges[0]))
obj.__init__(block_qnums, np.arange(1, dtype=charges[0].label_dtype),
charges[0].charge_types)
return block_maps, obj, block_dims
unique_row_qnums, row_degen = compute_fused_charge_degeneracies(
charges[:partition], flows[:partition])
unique_col_qnums, col_degen = compute_fused_charge_degeneracies(
charges[partition:], np.logical_not(flows[partition:]))
block_qnums, row_to_block, col_to_block = intersect(
unique_row_qnums.unique_charges,
unique_col_qnums.unique_charges,
axis=0,
return_indices=True)
num_blocks = block_qnums.shape[0]
if num_blocks == 0:
obj = charges[0].__new__(type(charges[0]))
obj.__init__(
np.zeros((0, charges[0].num_symmetries), dtype=charges[0].dtype),
np.arange(0, dtype=charges[0].label_dtype), charges[0].charge_types)
return [], obj, np.empty((2, 0), dtype=SIZE_T)
# calculate number of non-zero elements in each row of the matrix
row_ind = reduce_charges(charges[:partition], flows[:partition], block_qnums)
row_num_nz = col_degen[col_to_block[row_ind.charge_labels]]
cumulate_num_nz = np.insert(np.cumsum(row_num_nz[0:-1]), 0, 0).astype(SIZE_T)
# calculate mappings for the position in datavector of each block
if num_blocks < 15:
# faster method for small number of blocks
row_locs = np.concatenate([
(row_ind.charge_labels == n) for n in range(num_blocks)
]).reshape(num_blocks, row_ind.dim)
else:
# faster method for large number of blocks
row_locs = np.zeros([num_blocks, row_ind.dim], dtype=bool)
row_locs[row_ind.charge_labels,
np.arange(row_ind.dim)] = np.ones(
row_ind.dim, dtype=bool)
block_dims = np.array(
[[row_degen[row_to_block[n]], col_degen[col_to_block[n]]]
for n in range(num_blocks)],
dtype=SIZE_T).T
#pylint: disable=unsubscriptable-object
block_maps = [
np.ravel(cumulate_num_nz[row_locs[n, :]][:, None] +
np.arange(block_dims[1, n])[None, :]) for n in range(num_blocks)
]
obj = charges[0].__new__(type(charges[0]))
obj.__init__(block_qnums,
np.arange(block_qnums.shape[0], dtype=charges[0].label_dtype),
charges[0].charge_types)
if cacher.do_caching:
cacher.cache[hash_val] = (block_maps, obj, block_dims)
return cacher.cache[hash_val]
return block_maps, obj, block_dims |
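# Toy U(1) illustration (trivial flows, hypothetical charges) of the data
# layout assumed above: the flat data vector stores, matrix row by matrix
# row, only the entries whose column charge matches the row charge, and
# `block_maps` lists for every common charge the flat positions of its block.
import numpy as np

row_q = np.array([0, 1, 0, 1, 1])      # charge of each matrix row
col_q = np.array([0, 0, 1, 1, 1])      # charge of each matrix column

row_nnz = np.array([np.sum(col_q == q) for q in row_q])
row_offsets = np.concatenate([[0], np.cumsum(row_nnz[:-1])])

block_maps = {}
for q in np.intersect1d(row_q, col_q):
    rows = np.nonzero(row_q == q)[0]
    width = int(np.sum(col_q == q))
    block_maps[q] = (row_offsets[rows][:, None] + np.arange(width)[None, :]).ravel()
print(block_maps)   # e.g. {0: [0 1 5 6], 1: [ 2  3  4  7  8  9 10 11 12]}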
Find the diagonal blocks of a transposed tensor with
meta-data `charges` and `flows`. `charges` and `flows`
are the charges and flows of the untransposed tensor,
`order` is the final transposition, and `tr_partition`
is the partition of the transposed tensor according to
which the diagonal blocks should be found.
Args:
charges: List of `BaseCharge`, one for each leg of a tensor.
flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
tr_partition: Location of the transposed tensor partition
(i.e. such that the tensor is viewed as a matrix between
`charges[order[:partition]]` and `charges[order[partition:]]`).
order: Order with which to permute the tensor axes.
Returns:
  block_maps (List[np.ndarray]): list of integer arrays, each
containing the location of a symmetry block in the data vector.
block_qnums (BaseCharge): The charges of the corresponding blocks.
block_dims (np.ndarray): 2-by-m array of matrix dimensions of each block. | def _find_transposed_diagonal_sparse_blocks(
charges: List[BaseCharge],
flows: Union[np.ndarray, List[bool]],
tr_partition: int,
order: Optional[Union[List, np.ndarray]] = None
) -> Tuple[List, BaseCharge, np.ndarray]:
"""
Find the diagonal blocks of a transposed tensor with
meta-data `charges` and `flows`. `charges` and `flows`
are the charges and flows of the untransposed tensor,
`order` is the final transposition, and `tr_partition`
is the partition of the transposed tensor according to
which the diagonal blocks should be found.
Args:
charges: List of `BaseCharge`, one for each leg of a tensor.
    flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
tr_partition: Location of the transposed tensor partition
(i.e. such that the tensor is viewed as a matrix between
`charges[order[:partition]]` and `charges[order[partition:]]`).
order: Order with which to permute the tensor axes.
Returns:
    block_maps (List[np.ndarray]): list of integer arrays, each
containing the location of a symmetry block in the data vector.
block_qnums (BaseCharge): The charges of the corresponding blocks.
block_dims (np.ndarray): 2-by-m array of matrix dimensions of each block.
"""
flows = np.asarray(flows)
cacher = get_cacher()
if cacher.do_caching:
hash_val = _to_string(charges, flows, tr_partition, order)
if hash_val in cacher.cache:
return cacher.cache[hash_val]
if np.array_equal(order, None) or (np.array_equal(
np.array(order), np.arange(len(charges)))):
# no transpose order
return _find_diagonal_sparse_blocks(charges, flows, tr_partition)
# general case: non-trivial transposition is required
num_inds = len(charges)
tensor_dims = np.array([charges[n].dim for n in range(num_inds)], dtype=int)
strides = np.append(np.flip(np.cumprod(np.flip(tensor_dims[1:]))), 1)
# compute qnums of row/cols in original tensor
orig_partition = _find_best_partition(tensor_dims)
orig_width = np.prod(tensor_dims[orig_partition:])
orig_unique_row_qnums = compute_unique_fused_charges(charges[:orig_partition],
flows[:orig_partition])
orig_unique_col_qnums, orig_col_degen = compute_fused_charge_degeneracies(
charges[orig_partition:], np.logical_not(flows[orig_partition:]))
orig_block_qnums, row_map, col_map = intersect(
orig_unique_row_qnums.unique_charges,
orig_unique_col_qnums.unique_charges,
axis=0,
return_indices=True)
orig_num_blocks = orig_block_qnums.shape[0]
if orig_num_blocks == 0:
# special case: trivial number of non-zero elements
obj = charges[0].__new__(type(charges[0]))
obj.__init__(
np.empty((0, charges[0].num_symmetries), dtype=charges[0].dtype),
np.arange(0, dtype=charges[0].label_dtype), charges[0].charge_types)
return [], obj, np.empty((2, 0), dtype=SIZE_T)
orig_row_ind = fuse_charges(charges[:orig_partition], flows[:orig_partition])
orig_col_ind = fuse_charges(charges[orig_partition:],
np.logical_not(flows[orig_partition:]))
inv_row_map = -np.ones(
orig_unique_row_qnums.unique_charges.shape[0],
dtype=charges[0].label_dtype)
inv_row_map[row_map] = np.arange(len(row_map), dtype=charges[0].label_dtype)
all_degens = np.append(orig_col_degen[col_map],
0)[inv_row_map[orig_row_ind.charge_labels]]
all_cumul_degens = np.cumsum(np.insert(all_degens[:-1], 0, 0)).astype(SIZE_T)
dense_to_sparse = np.empty(orig_width, dtype=SIZE_T)
for n in range(orig_num_blocks):
dense_to_sparse[orig_col_ind.charge_labels == col_map[n]] = np.arange(
orig_col_degen[col_map[n]], dtype=SIZE_T)
# define properties of new tensor resulting from transposition
new_strides = strides[order]
new_row_charges = [charges[n] for n in order[:tr_partition]]
new_col_charges = [charges[n] for n in order[tr_partition:]]
new_row_flows = flows[order[:tr_partition]]
new_col_flows = flows[order[tr_partition:]]
if tr_partition == 0:
# special case: reshape into row vector
# compute qnums of row/cols in transposed tensor
unique_col_qnums, new_col_degen = compute_fused_charge_degeneracies(
new_col_charges, np.logical_not(new_col_flows))
identity_charges = charges[0].identity_charges(dim=1)
block_qnums, new_row_map, new_col_map = intersect(
identity_charges.unique_charges,
unique_col_qnums.unique_charges,
axis=0,
return_indices=True)
block_dims = np.array([[1], new_col_degen[new_col_map]], dtype=SIZE_T)
num_blocks = 1
col_ind, col_locs = reduce_charges(
new_col_charges,
np.logical_not(new_col_flows),
block_qnums,
return_locations=True,
strides=new_strides[tr_partition:])
# find location of blocks in transposed tensor (w.r.t positions in original)
#pylint: disable=no-member
orig_row_posR, orig_col_posR = np.divmod(
col_locs[col_ind.charge_labels == 0], orig_width)
block_maps = [(all_cumul_degens[orig_row_posR] +
dense_to_sparse[orig_col_posR]).ravel()]
obj = charges[0].__new__(type(charges[0]))
obj.__init__(block_qnums,
np.arange(block_qnums.shape[0], dtype=charges[0].label_dtype),
charges[0].charge_types)
elif tr_partition == len(charges):
# special case: reshape into col vector
# compute qnums of row/cols in transposed tensor
unique_row_qnums, new_row_degen = compute_fused_charge_degeneracies(
new_row_charges, new_row_flows)
identity_charges = charges[0].identity_charges(dim=1)
block_qnums, new_row_map, new_col_map = intersect(
unique_row_qnums.unique_charges,
identity_charges.unique_charges,
axis=0,
return_indices=True)
block_dims = np.array([new_row_degen[new_row_map], [1]], dtype=SIZE_T)
num_blocks = 1
row_ind, row_locs = reduce_charges(
new_row_charges,
new_row_flows,
block_qnums,
return_locations=True,
strides=new_strides[:tr_partition])
# find location of blocks in transposed tensor (w.r.t positions in original)
#pylint: disable=no-member
orig_row_posL, orig_col_posL = np.divmod(
row_locs[row_ind.charge_labels == 0], orig_width)
block_maps = [(all_cumul_degens[orig_row_posL] +
dense_to_sparse[orig_col_posL]).ravel()]
obj = charges[0].__new__(type(charges[0]))
obj.__init__(block_qnums,
np.arange(block_qnums.shape[0], dtype=charges[0].label_dtype),
charges[0].charge_types)
else:
unique_row_qnums, new_row_degen = compute_fused_charge_degeneracies(
new_row_charges, new_row_flows)
unique_col_qnums, new_col_degen = compute_fused_charge_degeneracies(
new_col_charges, np.logical_not(new_col_flows))
block_qnums, new_row_map, new_col_map = intersect(
unique_row_qnums.unique_charges,
unique_col_qnums.unique_charges,
axis=0,
return_indices=True)
block_dims = np.array(
[new_row_degen[new_row_map], new_col_degen[new_col_map]], dtype=SIZE_T)
num_blocks = len(new_row_map)
row_ind, row_locs = reduce_charges(
new_row_charges,
new_row_flows,
block_qnums,
return_locations=True,
strides=new_strides[:tr_partition])
col_ind, col_locs = reduce_charges(
new_col_charges,
np.logical_not(new_col_flows),
block_qnums,
return_locations=True,
strides=new_strides[tr_partition:])
block_maps = [0] * num_blocks
for n in range(num_blocks):
#pylint: disable=no-member
orig_row_posL, orig_col_posL = np.divmod(
row_locs[row_ind.charge_labels == n], orig_width)
#pylint: disable=no-member
orig_row_posR, orig_col_posR = np.divmod(
col_locs[col_ind.charge_labels == n], orig_width)
block_maps[n] = (
all_cumul_degens[np.add.outer(orig_row_posL, orig_row_posR)] +
dense_to_sparse[np.add.outer(orig_col_posL, orig_col_posR)]).ravel()
obj = charges[0].__new__(type(charges[0]))
obj.__init__(block_qnums,
np.arange(block_qnums.shape[0], dtype=charges[0].label_dtype),
charges[0].charge_types)
if cacher.do_caching:
cacher.cache[hash_val] = (block_maps, obj, block_dims)
return cacher.cache[hash_val]
return block_maps, obj, block_dims |
Map the input arguments of _find_transposed_diagonal_sparse_blocks
to a string.
Args:
charges: List of `BaseCharge`, one for each leg of a tensor.
  flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
tr_partition: Location of the transposed tensor partition
(i.e. such that the tensor is viewed as a matrix between
`charges[order[:partition]]` and `charges[order[partition:]]`).
order: Order with which to permute the tensor axes.
Returns:
str: The string representation of the input | def _to_string(charges: List[BaseCharge], flows: Union[np.ndarray, List],
tr_partition: int, order: List[int]) -> str:
"""
  Map the input arguments of _find_transposed_diagonal_sparse_blocks
to a string.
Args:
charges: List of `BaseCharge`, one for each leg of a tensor.
    flows: A list of bool, one for each leg of a tensor,
with values `False` or `True` denoting inflowing and
outflowing charge direction, respectively.
tr_partition: Location of the transposed tensor partition
(i.e. such that the tensor is viewed as a matrix between
`charges[order[:partition]]` and `charges[order[partition:]]`).
order: Order with which to permute the tensor axes.
Returns:
str: The string representation of the input
"""
  return ''.join([str(c.charges.tobytes()) for c in charges] + [
      str(np.array(flows).tobytes()),
      str(tr_partition),
      str(np.array(order, dtype=np.int16).tobytes())
]) |
Return a `Cacher` object which can be used to perform
caching of block-data for block-sparse tensor contractions. | def get_cacher() -> Cacher:
"""
Return a `Cacher` object which can be used to perform
caching of block-data for block-sparse tensor contractions.
"""
if len(_INSTANTIATED_CACHERS) == 0:
_INSTANTIATED_CACHERS.append(Cacher())
return _INSTANTIATED_CACHERS[0] |
Enable caching of block-data for block-sparse contraction.
If enabled, all data that is needed to perform binary tensor contractions
will be cached in a dictionary for later reuse.
Enabling caching can significantly speed up tensor contractions,
but can lead to substantially larger memory footprints.
In particular, if the code uses tensor decompositions like QR, SVD,
eig, eigh or any similar method, enabling caching can cause
catastrophic memory clutter, so use caching with great care.
The user can at any point clear the cache by calling
`tn.block_sparse.clear_cache()`. | def enable_caching() -> None:
"""
Enable caching of block-data for block-sparse contraction.
If enabled, all data that is needed to perform binary tensor contractions
will be cached in a dictionary for later reuse.
  Enabling caching can significantly speed up tensor contractions,
  but can lead to substantially larger memory footprints.
  In particular, if the code uses tensor decompositions like QR, SVD,
  eig, eigh or any similar method, enabling caching can cause
catastrophic memory clutter, so use caching with great care.
The user can at any point clear the cache by calling
`tn.block_sparse.clear_cache()`.
"""
get_cacher().set_status(True) |
Disable caching of block-data for block-sparse tensor contractions.
Note that the cache WILL NOT BE CLEARED.
Clearing the cache can be achieved by calling
`tn.block_sparse.clear_cache()`. | def disable_caching() -> None:
"""
Disable caching of block-data for block-sparse tensor contractions.
Note that the cache WILL NOT BE CLEARED.
Clearing the cache can be achieved by calling
`tn.block_sparse.clear_cache()`.
"""
get_cacher().set_status(False) |
Clear the cache that stores block-data for block-sparse tensor contractions. | def clear_cache() -> None:
"""
Clear the cache that stores block-data for block-sparse tensor contractions.
"""
get_cacher().clear_cache() |
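Putting the three caching controls above together, here is a minimal usage sketch (added for illustration, not part of the library source; it assumes `U1Charge`, `Index`, `randn` and `tensordot` from this module are in scope):
```
import numpy as np
# build two small U1-symmetric matrices that will be contracted repeatedly
i = Index(U1Charge(np.random.randint(0, 2, 10)), flow=False)
a = randn([i, i.copy().flip_flow()])
b = randn([i, i.copy().flip_flow()])
enable_caching()
for _ in range(10):
  # the block metadata needed for this contraction is computed once and reused
  c = tensordot(a, b, ([1], [0]))
clear_cache()      # release the cached block data
disable_caching()  # caching stays off until explicitly re-enabled
```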
Constructor for charge classes of the ZN symmetry groups.
Args:
  n: The modulus of the symmetry group.
Returns:
  A charge class for the given ZN symmetry group. | def ZNCharge(n: int) -> Callable:
"""Contstructor for charge classes of the ZN symmetry groups.
Args:
n: The module of the symmetry group.
Returns:
A charge class of your given ZN symmetry group.
"""
if n < 2:
raise ValueError(f"n must be >= 2, found {n}")
class ModularCharge(BaseCharge):
def __init__(self,
charges: Union[List, np.ndarray],
charge_labels: Optional[np.ndarray] = None,
charge_types: Optional[List[Type["BaseCharge"]]] = None,
charge_dtype: Optional[Type[np.number]] = np.int16) -> None:
unique_charges = unique(np.ravel(charges))
if not np.all(np.isin(unique_charges, list(range(n)))):
raise ValueError(f"Z{n} charges must be in range({n}), found: {unique}")
super().__init__(
charges,
charge_labels,
charge_types=[type(self)],
charge_dtype=charge_dtype)
@staticmethod
def fuse(charge1: np.ndarray, charge2: np.ndarray) -> np.ndarray:
return np.add.outer(charge1, charge2).ravel() % n
@staticmethod
def dual_charges(charges: np.ndarray) -> np.ndarray:
return (n - charges) % n
@staticmethod
def identity_charge() -> np.ndarray:
return np.int16(0)
@classmethod
def random(cls,
dimension: int,
minval: int = 0,
maxval: int = n - 1) -> BaseCharge:
if maxval >= n:
raise ValueError(f"maxval must be less than n={n}, got {maxval}")
if minval < 0:
raise ValueError(f"minval must be greater than 0, found {minval}")
# No need for the mod due to the checks above.
charges = np.random.randint(minval, maxval + 1, dimension, dtype=np.int16)
return cls(charges=charges)
return ModularCharge |
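As a brief illustration (a sketch, not part of the library source; it assumes `Index` and the `random` initializer defined further below are in scope), a Z3-symmetric matrix could be set up as:
```
import numpy as np
Z3 = ZNCharge(3)                    # charge class for the Z3 group
q = Z3(np.array([0, 1, 2, 2, 0]))   # all charge values must lie in range(3)
i = Index(q, flow=False)
j = Index(q.copy(), flow=True)
m = random([i, j])                  # block-sparse matrix respecting the Z3 symmetry
```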
Fuse the quantum numbers of two indices under their kronecker addition.
Args:
  charges_A (np.ndarray): n-by-D1 dimensional array of integers encoding charges,
    with n the number of symmetries and D1 the index dimension.
  charges_B (np.ndarray): n-by-D2 dimensional array of charges.
charge_types: A list of types of the charges.
Returns:
np.ndarray: n-by-(D1 * D2) dimensional array of the fused charges. | def fuse_ndarray_charges(charges_A: np.ndarray, charges_B: np.ndarray,
charge_types: List[Type[BaseCharge]]) -> np.ndarray:
"""
Fuse the quantum numbers of two indices under their kronecker addition.
Args:
    charges_A (np.ndarray): n-by-D1 dimensional array of integers encoding charges,
      with n the number of symmetries and D1 the index dimension.
    charges_B (np.ndarray): n-by-D2 dimensional array of charges.
charge_types: A list of types of the charges.
Returns:
np.ndarray: n-by-(D1 * D2) dimensional array of the fused charges.
"""
comb_charges = [0] * len(charge_types)
for n, ct in enumerate(charge_types):
comb_charges[n] = ct.fuse(charges_A[:, n], charges_B[:, n])[:, None]
return np.concatenate(comb_charges, axis=1) |
Fuse all `charges` into a new charge.
Charges are fused from "right to left",
in accordance with row-major order.
Args:
charges: A list of charges to be fused.
flows: A list of flows, one for each element in `charges`.
Returns:
BaseCharge: The result of fusing `charges`. | def fuse_charges(charges: List[BaseCharge], flows: List[bool]) -> BaseCharge:
"""
Fuse all `charges` into a new charge.
Charges are fused from "right to left",
in accordance with row-major order.
Args:
charges: A list of charges to be fused.
flows: A list of flows, one for each element in `charges`.
Returns:
BaseCharge: The result of fusing `charges`.
"""
if len(charges) != len(flows):
raise ValueError(
"`charges` and `flows` are of unequal lengths {} != {}".format(
len(charges), len(flows)))
fused_charges = charges[0] * flows[0]
for n in range(1, len(charges)):
fused_charges = fused_charges + charges[n] * flows[n]
return fused_charges |
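A small sketch of the fusion rule (illustrative only; assumes `U1Charge` is in scope). For U1 charges a pair (a, b) with flows `(False, True)` fuses to the value a - b, since a `True` flow dualizes the charge:
```
import numpy as np
qa = U1Charge(np.array([0, 1, 2]))
qb = U1Charge(np.array([0, 1, 2]))
fused = fuse_charges([qa, qb], [False, True])
print(fused.dim)  # 9: one fused charge for every pair (a, b)
```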
Compare two BaseCharges `c1` and `c2`.
Return `True` if they are equal, else `False`. | def charge_equal(c1: BaseCharge, c2: BaseCharge) -> bool:
"""
Compare two BaseCharges `c1` and `c2`.
Return `True` if they are equal, else `False`.
"""
  if c1.dim != c2.dim:
    return False
  res = True
if c1._unique_charges is not None and c2._unique_charges is not None:
if c1._unique_charges.shape != c2._unique_charges.shape:
res = False
elif not np.all(c1._unique_charges == c2._unique_charges):
res = False
elif not np.all(c1.charge_labels == c2.charge_labels):
res = False
return res
if c1._charges is not None and c2._charges is not None:
if c1._charges.shape != c2._charges.shape:
res = False
elif not np.all(c1._charges == c2._charges):
res = False
return res
if c1.charges.shape != c2.charges.shape:
res = False
elif not np.all(c1.charges == c2.charges):
res = False
return res |
Fuse two consecutive indices (legs) of a symmetric tensor.
Args:
left_index: A tensor Index.
right_index: A tensor Index.
Returns:
Index: The result of fusing `index1` and `index2`. | def fuse_index_pair(left_index: Index, right_index: Index) -> Index:
"""
Fuse two consecutive indices (legs) of a symmetric tensor.
Args:
left_index: A tensor Index.
right_index: A tensor Index.
Returns:
Index: The result of fusing `index1` and `index2`.
"""
return Index(
charges=left_index.flat_charges + right_index.flat_charges,
flow=left_index.flat_flows + right_index.flat_flows) |
Fuse a list of indices (legs) of a symmetric tensor.
Args:
indices: A list of tensor Index objects
Returns:
Index: The result of fusing `indices`. | def fuse_indices(indices: List[Index]) -> Index:
"""
Fuse a list of indices (legs) of a symmetric tensor.
Args:
indices: A list of tensor Index objects
Returns:
Index: The result of fusing `indices`.
"""
index = indices[0]
for n in range(1, len(indices)):
index = fuse_index_pair(index, indices[n])
return index |
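A short sketch (illustration only; assumes `U1Charge` and `Index` are in scope): fusing two legs keeps both elementary charges and flows inside the resulting `Index`:
```
import numpy as np
i1 = Index(U1Charge(np.random.randint(-2, 3, 4)), flow=False)
i2 = Index(U1Charge(np.random.randint(-2, 3, 5)), flow=True)
i12 = fuse_indices([i1, i2])
print(len(i12.flat_charges))  # 2: both elementary charges are retained
print(i12.flat_flows)         # [False, True]
```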
Initialize a symmetric tensor with ones.
Args:
  indices: List of `Index` objects, one for each leg.
dtype: An optional numpy dtype. The dtype of the tensor
Returns:
BlockSparseTensor | def ones(indices: Sequence[Index],
dtype: Optional[Type[np.number]] = None) -> BlockSparseTensor:
"""
Initialize a symmetric tensor with ones.
Args:
    indices: List of `Index` objects, one for each leg.
dtype: An optional numpy dtype. The dtype of the tensor
Returns:
BlockSparseTensor
"""
return BlockSparseTensor.ones(indices, dtype) |
Initialize a symmetric tensor with zeros.
Args:
  indices: List of `Index` objects, one for each leg.
dtype: An optional numpy dtype. The dtype of the tensor
Returns:
BlockSparseTensor | def zeros(indices: Sequence[Index],
dtype: Optional[Type[np.number]] = None) -> BlockSparseTensor:
"""
Initialize a symmetric tensor with zeros.
Args:
    indices: List of `Index` objects, one for each leg.
dtype: An optional numpy dtype. The dtype of the tensor
Returns:
BlockSparseTensor
"""
return BlockSparseTensor.zeros(indices, dtype) |
Initialize a random symmetric tensor from random normal distribution.
Args:
  indices: List of `Index` objects, one for each leg.
dtype: An optional numpy dtype. The dtype of the tensor
Returns:
BlockSparseTensor | def randn(indices: Sequence[Index],
dtype: Optional[Type[np.number]] = None) -> BlockSparseTensor:
"""
Initialize a random symmetric tensor from random normal distribution.
Args:
    indices: List of `Index` objects, one for each leg.
dtype: An optional numpy dtype. The dtype of the tensor
Returns:
BlockSparseTensor
"""
return BlockSparseTensor.randn(indices, dtype) |
Initialize a random symmetric tensor from random uniform distribution.
Args:
  indices: List of `Index` objects, one for each leg.
boundaries: Tuple of interval boundaries for the random uniform
distribution.
dtype: An optional numpy dtype. The dtype of the tensor
Returns:
BlockSparseTensor | def random(indices: Sequence[Index],
boundaries: Optional[Tuple[float, float]] = (0.0, 1.0),
dtype: Optional[Type[np.number]] = None) -> BlockSparseTensor:
"""
Initialize a random symmetric tensor from random uniform distribution.
Args:
    indices: List of `Index` objects, one for each leg.
boundaries: Tuple of interval boundaries for the random uniform
distribution.
dtype: An optional numpy dtype. The dtype of the tensor
Returns:
BlockSparseTensor
"""
return BlockSparseTensor.random(indices, boundaries, dtype) |
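The four initializers above share one calling convention; a quick sketch (illustrative, assuming `U1Charge` and `Index` are in scope):
```
import numpy as np
q = U1Charge(np.random.randint(-1, 2, 6))
inds = [Index(q, flow=False), Index(q.copy(), flow=True)]
a = ones(inds, dtype=np.float64)
b = zeros(inds, dtype=np.float64)
c = randn(inds, dtype=np.complex128)
d = random(inds, boundaries=(-1.0, 1.0), dtype=np.float64)
print(a.shape)  # (6, 6); only the symmetry-allowed blocks are stored in a.data
```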
Initialize a symmetric tensor with an uninitialized np.ndarray.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor | def empty_like(tensor: BlockSparseTensor) -> BlockSparseTensor:
"""
Initialize a symmetric tensor with an uninitialized np.ndarray.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor
"""
return BlockSparseTensor(
np.empty(tensor.data.size, dtype=tensor.dtype),
charges=tensor._charges,
flows=tensor._flows,
order=tensor._order,
check_consistency=False) |
Initialize a symmetric tensor with ones.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor | def ones_like(tensor: BlockSparseTensor) -> BlockSparseTensor:
"""
Initialize a symmetric tensor with ones.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor
"""
return BlockSparseTensor(
np.ones(tensor.data.size, dtype=tensor.dtype),
charges=tensor._charges,
flows=tensor._flows,
order=tensor._order,
check_consistency=False) |
Initialize a symmetric tensor with zeros.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor | def zeros_like(tensor: BlockSparseTensor) -> BlockSparseTensor:
"""
Initialize a symmetric tensor with zeros.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor
"""
return BlockSparseTensor(
np.zeros(tensor.data.size, dtype=tensor.dtype),
charges=tensor._charges,
flows=tensor._flows,
order=tensor._order,
check_consistency=False) |
Initialize a symmetric tensor with random gaussian numbers.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor | def randn_like(tensor: BlockSparseTensor) -> BlockSparseTensor:
"""
Initialize a symmetric tensor with random gaussian numbers.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor
"""
return BlockSparseTensor(
_randn(tensor.data.size, dtype=tensor.dtype),
charges=tensor._charges,
flows=tensor._flows,
order=tensor._order,
check_consistency=False) |
Initialize a symmetric tensor with random uniform numbers.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor | def random_like(
tensor: BlockSparseTensor, boundaries: Tuple = (0, 1)) -> BlockSparseTensor:
"""
Initialize a symmetric tensor with random uniform numbers.
The resulting tensor has the same shape and dtype as `tensor`.
Args:
tensor: A BlockSparseTensor.
Returns:
BlockSparseTensor
"""
return BlockSparseTensor(
_random(tensor.data.size, dtype=tensor.dtype, boundaries=boundaries),
charges=tensor._charges,
flows=tensor._flows,
order=tensor._order,
check_consistency=False) |
The norm of the tensor. | def norm(tensor: BlockSparseTensor) -> float:
"""
The norm of the tensor.
"""
return np.linalg.norm(tensor.data) |
Return a diagonal `BlockSparseTensor` from a `ChargeArray`, or
return the diagonal of a `BlockSparseTensor` as a `ChargeArray`.
For input of type `BlockSparseTensor`:
The full diagonal is obtained from finding the diagonal blocks of the
`BlockSparseTensor`, taking the diagonal elements of those and packing
the result into a ChargeArray. Note that the computed diagonal elements
are usually different from the diagonal elements obtained from
converting the `BlockSparseTensor` to dense storage and taking the diagonal.
Note that the flow of the resulting 1d `ChargeArray` object is `False`.
Args:
tensor: A `ChargeArray`.
Returns:
  ChargeArray: A 1d `ChargeArray` containing the diagonal of `tensor`,
or a diagonal matrix of type `BlockSparseTensor` containing `tensor`
on its diagonal. | def diag(tensor: ChargeArray) -> Any:
"""
Return a diagonal `BlockSparseTensor` from a `ChargeArray`, or
return the diagonal of a `BlockSparseTensor` as a `ChargeArray`.
For input of type `BlockSparseTensor`:
The full diagonal is obtained from finding the diagonal blocks of the
`BlockSparseTensor`, taking the diagonal elements of those and packing
the result into a ChargeArray. Note that the computed diagonal elements
are usually different from the diagonal elements obtained from
converting the `BlockSparseTensor` to dense storage and taking the diagonal.
Note that the flow of the resulting 1d `ChargeArray` object is `False`.
Args:
tensor: A `ChargeArray`.
Returns:
    ChargeArray: A 1d `ChargeArray` containing the diagonal of `tensor`,
or a diagonal matrix of type `BlockSparseTensor` containing `tensor`
on its diagonal.
"""
if tensor.ndim > 2:
raise ValueError("`diag` currently only implemented for matrices, "
"found `ndim={}".format(tensor.ndim))
if not isinstance(tensor, BlockSparseTensor):
if tensor.ndim > 1:
raise ValueError(
"`diag` currently only implemented for `ChargeArray` with ndim=1, "
"found `ndim={}`".format(tensor.ndim))
flat_charges = tensor._charges + tensor._charges
flat_flows = list(tensor._flows) + list(np.logical_not(tensor._flows))
flat_order = list(tensor.flat_order) + list(
np.asarray(tensor.flat_order) + len(tensor._charges))
tr_partition = len(tensor._order[0])
blocks, charges, shapes = _find_transposed_diagonal_sparse_blocks(
flat_charges, flat_flows, tr_partition, flat_order)
data = np.zeros(
np.int64(np.sum(np.prod(shapes, axis=0))), dtype=tensor.dtype)
lookup, unique, labels = compute_sparse_lookup(tensor._charges,
tensor._flows, charges)
for n, block in enumerate(blocks):
label = labels[np.nonzero(unique == charges[n])[0][0]]
data[block] = np.ravel(
np.diag(tensor.data[np.nonzero(lookup == label)[0]]))
order = [
tensor._order[0],
list(np.asarray(tensor._order[0]) + len(tensor._charges))
]
new_charges = [tensor._charges[0].copy(), tensor._charges[0].copy()]
return BlockSparseTensor(
data,
charges=new_charges,
flows=list(tensor._flows) + list(np.logical_not(tensor._flows)),
order=order,
check_consistency=False)
flat_charges = tensor._charges
flat_flows = tensor._flows
flat_order = tensor.flat_order
tr_partition = len(tensor._order[0])
sparse_blocks, charges, block_shapes = _find_transposed_diagonal_sparse_blocks( #pylint: disable=line-too-long
flat_charges, flat_flows, tr_partition, flat_order)
shapes = np.min(block_shapes, axis=0)
if len(sparse_blocks) > 0:
data = np.concatenate([
np.diag(np.reshape(tensor.data[sparse_blocks[n]], block_shapes[:, n]))
for n in range(len(sparse_blocks))
])
charge_labels = np.concatenate([
np.full(shapes[n], fill_value=n, dtype=np.int16)
for n in range(len(sparse_blocks))
])
else:
data = np.empty(0, dtype=tensor.dtype)
charge_labels = np.empty(0, dtype=np.int16)
newcharges = [charges[charge_labels]]
flows = [False]
return ChargeArray(data, newcharges, flows) |
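A round-trip sketch (illustration only; assumes `U1Charge`, `Index` and `ChargeArray` are in scope): a 1d `ChargeArray` becomes a diagonal matrix, and taking `diag` of that matrix returns a 1d array again:
```
import numpy as np
q = U1Charge(np.random.randint(0, 2, 4))
v = ChargeArray.randn([Index(q, flow=False)])  # 1d array of prospective diagonal entries
D = diag(v)                                    # diagonal BlockSparseTensor built from v
d = diag(D)                                    # 1d ChargeArray: the block-wise diagonal of D
print(D.ndim, d.ndim)                          # 2 1
```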
Reshape `tensor` into `shape`.
`ChargeArray.reshape` works the same as the dense
version, with the notable exception that the tensor can only be
reshaped into a form compatible with its elementary shape.
The elementary shape is the shape determined by ChargeArray._charges.
For example, while the following reshaping is possible for a regular
dense numpy tensor,
```
A = np.random.rand(6,6,6)
np.reshape(A, (2,3,6,6))
```
the same code for ChargeArray
```
q1 = U1Charge(np.random.randint(0,10,6))
q2 = U1Charge(np.random.randint(0,10,6))
q3 = U1Charge(np.random.randint(0,10,6))
i1 = Index(charges=q1,flow=False)
i2 = Index(charges=q2,flow=True)
i3 = Index(charges=q3,flow=False)
A = ChargeArray.randn(indices=[i1,i2,i3])
print(A.shape) #prints (6,6,6)
A.reshape((2,3,6,6)) #raises ValueError
```
raises a `ValueError` since (2,3,6,6)
is incompatible with the elementary shape (6,6,6) of the tensor.
Args:
tensor: A symmetric tensor.
shape: The new shape. Can either be a list of `Index`
or a list of `int`.
Returns:
ChargeArray: A new tensor reshaped into `shape` | def reshape(tensor: ChargeArray, shape: Sequence[Union[Index,
int]]) -> ChargeArray:
"""
  Reshape `tensor` into `shape`.
`ChargeArray.reshape` works the same as the dense
version, with the notable exception that the tensor can only be
reshaped into a form compatible with its elementary shape.
The elementary shape is the shape determined by ChargeArray._charges.
  For example, while the following reshaping is possible for a regular
dense numpy tensor,
```
A = np.random.rand(6,6,6)
np.reshape(A, (2,3,6,6))
```
the same code for ChargeArray
```
q1 = U1Charge(np.random.randint(0,10,6))
q2 = U1Charge(np.random.randint(0,10,6))
q3 = U1Charge(np.random.randint(0,10,6))
i1 = Index(charges=q1,flow=False)
i2 = Index(charges=q2,flow=True)
i3 = Index(charges=q3,flow=False)
A = ChargeArray.randn(indices=[i1,i2,i3])
print(A.shape) #prints (6,6,6)
A.reshape((2,3,6,6)) #raises ValueError
```
raises a `ValueError` since (2,3,6,6)
is incompatible with the elementary shape (6,6,6) of the tensor.
Args:
tensor: A symmetric tensor.
shape: The new shape. Can either be a list of `Index`
or a list of `int`.
Returns:
ChargeArray: A new tensor reshaped into `shape`
"""
return tensor.reshape(shape) |
Return the complex conjugate of `tensor` in a new
`ChargeArray`.
Args:
tensor: A `ChargeArray` object.
Returns:
ChargeArray | def conj(tensor: ChargeArray) -> ChargeArray:
"""
Return the complex conjugate of `tensor` in a new
`ChargeArray`.
Args:
tensor: A `ChargeArray` object.
Returns:
ChargeArray
"""
return tensor.conj() |
Transpose the tensor into the new order `order`. If `shuffle=False`
no data-reshuffling is done.
Args:
order: The new order of indices.
shuffle: If `True`, reshuffle data.
Returns:
ChargeArray: The transposed tensor. | def transpose(tensor: ChargeArray,
order: Sequence[int] = np.asarray([1, 0]),
shuffle: Optional[bool] = False) -> ChargeArray:
"""
Transpose the tensor into the new order `order`. If `shuffle=False`
no data-reshuffling is done.
Args:
order: The new order of indices.
shuffle: If `True`, reshuffle data.
Returns:
ChargeArray: The transposed tensor.
"""
return tensor.transpose(order, shuffle) |
Compute the singular value decomposition of `matrix`.
The matrix is factorized into `u * s * vh`, with
`u` and `vh` the left and right singular vectors of `matrix`,
and `s` its singular values.
Args:
matrix: A matrix (i.e. an order-2 tensor) of type `BlockSparseTensor`
  full_matrices: If `True`, expand `u` and `v` to square matrices.
    If `False`, return the "economic" svd, i.e. `u.shape[1]=s.shape[0]`
    and `v.shape[0]=s.shape[1]`.
compute_uv: If `True`, return `u` and `v`.
hermitian: If `True`, assume hermiticity of `matrix`.
Returns:
  If `compute_uv` is `True`: `U` and `V` as BlockSparseTensors and a
  ChargeArray `S` containing the singular values.
  If `compute_uv` is `False`: A ChargeArray `S` containing the
  singular values.
full_matrices: Optional[bool] = True,
compute_uv: Optional[bool] = True,
hermitian: Optional[bool] = False) -> Any:
"""
Compute the singular value decomposition of `matrix`.
  The matrix is factorized into `u * s * vh`, with
`u` and `vh` the left and right singular vectors of `matrix`,
and `s` its singular values.
Args:
matrix: A matrix (i.e. an order-2 tensor) of type `BlockSparseTensor`
    full_matrices: If `True`, expand `u` and `v` to square matrices.
      If `False`, return the "economic" svd, i.e. `u.shape[1]=s.shape[0]`
      and `v.shape[0]=s.shape[1]`.
compute_uv: If `True`, return `u` and `v`.
hermitian: If `True`, assume hermiticity of `matrix`.
Returns:
    If `compute_uv` is `True`: `U` and `V` as BlockSparseTensors and a
    ChargeArray `S` containing the singular values.
    If `compute_uv` is `False`: A ChargeArray `S` containing the
    singular values.
"""
if matrix.ndim != 2:
raise NotImplementedError("svd currently supports only tensors of order 2.")
flat_charges = matrix._charges
flat_flows = matrix._flows
flat_order = matrix.flat_order
tr_partition = len(matrix._order[0])
blocks, charges, shapes = _find_transposed_diagonal_sparse_blocks(
flat_charges, flat_flows, tr_partition, flat_order)
u_blocks = []
singvals = []
v_blocks = []
for n, block in enumerate(blocks):
out = np.linalg.svd(
np.reshape(matrix.data[block], shapes[:, n]), full_matrices, compute_uv,
hermitian)
if compute_uv:
u_blocks.append(out[0])
singvals.append(out[1])
v_blocks.append(out[2])
else:
singvals.append(out)
tmp_labels = [
np.full(len(singvals[n]), fill_value=n, dtype=np.int16)
for n in range(len(singvals))
]
if len(tmp_labels) > 0:
left_singval_charge_labels = np.concatenate(tmp_labels)
else:
left_singval_charge_labels = np.empty(0, dtype=np.int16)
left_singval_charge = charges[left_singval_charge_labels]
if len(singvals) > 0:
all_singvals = np.concatenate(singvals)
else:
all_singvals = np.empty(0, dtype=get_real_dtype(matrix.dtype))
S = ChargeArray(all_singvals, [left_singval_charge], [False])
if compute_uv:
#define the new charges on the two central bonds
tmp_left_labels = [
np.full(u_blocks[n].shape[1], fill_value=n, dtype=np.int16)
for n in range(len(u_blocks))
]
if len(tmp_left_labels) > 0:
left_charge_labels = np.concatenate(tmp_left_labels)
else:
left_charge_labels = np.empty(0, dtype=np.int16)
tmp_right_labels = [
np.full(v_blocks[n].shape[0], fill_value=n, dtype=np.int16)
for n in range(len(v_blocks))
]
if len(tmp_right_labels) > 0:
right_charge_labels = np.concatenate(tmp_right_labels)
else:
right_charge_labels = np.empty(0, dtype=np.int16)
new_left_charge = charges[left_charge_labels]
new_right_charge = charges[right_charge_labels]
charges_u = [new_left_charge
] + [matrix._charges[o] for o in matrix._order[0]]
order_u = [[0]] + [list(np.arange(1, len(matrix._order[0]) + 1))]
flows_u = [True] + [matrix._flows[o] for o in matrix._order[0]]
charges_v = [new_right_charge
] + [matrix._charges[o] for o in matrix._order[1]]
flows_v = [False] + [matrix._flows[o] for o in matrix._order[1]]
order_v = [[0]] + [list(np.arange(1, len(matrix._order[1]) + 1))]
# We fill in data into the transposed U
# note that transposing is essentially free
if len(u_blocks) > 0:
all_u_blocks = np.concatenate([np.ravel(u.T) for u in u_blocks])
all_v_blocks = np.concatenate([np.ravel(v) for v in v_blocks])
else:
all_u_blocks = np.empty(0, dtype=matrix.dtype)
all_v_blocks = np.empty(0, dtype=matrix.dtype)
return BlockSparseTensor(
all_u_blocks,
charges=charges_u,
flows=flows_u,
order=order_u,
check_consistency=False).transpose((1, 0)), S, BlockSparseTensor(
all_v_blocks,
charges=charges_v,
flows=flows_v,
order=order_v,
check_consistency=False)
return S |
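A minimal sketch (illustrative; assumes `U1Charge`, `Index` and the `randn` initializer above are in scope). Since the symmetry blocks are disjoint, the singular values carry the full Frobenius norm of the matrix:
```
import numpy as np
q = U1Charge(np.random.randint(-1, 2, 10))
mat = randn([Index(q, flow=False), Index(q.copy(), flow=True)], dtype=np.float64)
U, S, V = svd(mat, full_matrices=False)
print(abs(norm(mat) - np.linalg.norm(S.data)))  # ~0: singular values preserve the norm
S_only = svd(mat, compute_uv=False)             # skip computing U and V
```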
Compute the qr decomposition of an `M` by `N` matrix `matrix`.
The matrix is factorized into `q*r`, with
`q` an orthogonal matrix and `r` an upper triangular matrix.
Args:
matrix: A matrix (i.e. a rank-2 tensor) of type `BlockSparseTensor`
mode : Can take values {'reduced', 'complete', 'r', 'raw'}.
If K = min(M, N), then
* 'reduced' : returns q, r with dimensions (M, K), (K, N) (default)
* 'complete' : returns q, r with dimensions (M, M), (M, N)
* 'r' : returns r only with dimensions (K, N)
Returns:
(BlockSparseTensor,BlockSparseTensor): If mode = `reduced` or `complete`
BlockSparseTensor: If mode = `r`. | def qr(matrix: BlockSparseTensor, mode: Text = 'reduced') -> Any:
"""
Compute the qr decomposition of an `M` by `N` matrix `matrix`.
The matrix is factorized into `q*r`, with
`q` an orthogonal matrix and `r` an upper triangular matrix.
Args:
matrix: A matrix (i.e. a rank-2 tensor) of type `BlockSparseTensor`
mode : Can take values {'reduced', 'complete', 'r', 'raw'}.
If K = min(M, N), then
* 'reduced' : returns q, r with dimensions (M, K), (K, N) (default)
* 'complete' : returns q, r with dimensions (M, M), (M, N)
* 'r' : returns r only with dimensions (K, N)
Returns:
(BlockSparseTensor,BlockSparseTensor): If mode = `reduced` or `complete`
BlockSparseTensor: If mode = `r`.
"""
if matrix.ndim != 2:
raise NotImplementedError("qr currently supports only rank-2 tensors.")
if mode not in ('reduced', 'complete', 'raw', 'r'):
raise ValueError('unknown value {} for input `mode`'.format(mode))
if mode == 'raw':
    raise NotImplementedError('mode `raw` currently not supported')
flat_charges = matrix._charges
flat_flows = matrix._flows
flat_order = matrix.flat_order
tr_partition = len(matrix._order[0])
blocks, charges, shapes = _find_transposed_diagonal_sparse_blocks(
flat_charges, flat_flows, tr_partition, flat_order)
q_blocks = []
r_blocks = []
for n, block in enumerate(blocks):
out = np.linalg.qr(np.reshape(matrix.data[block], shapes[:, n]), mode)
if mode in ('reduced', 'complete'):
q_blocks.append(out[0])
r_blocks.append(out[1])
else:
r_blocks.append(out)
tmp_r_charge_labels = [
np.full(r_blocks[n].shape[0], fill_value=n, dtype=np.int16)
for n in range(len(r_blocks))
]
if len(tmp_r_charge_labels) > 0:
left_r_charge_labels = np.concatenate(tmp_r_charge_labels)
else:
left_r_charge_labels = np.empty(0, dtype=np.int16)
left_r_charge = charges[left_r_charge_labels]
charges_r = [left_r_charge] + [matrix._charges[o] for o in matrix._order[1]]
flows_r = [False] + [matrix._flows[o] for o in matrix._order[1]]
order_r = [[0]] + [list(np.arange(1, len(matrix._order[1]) + 1))]
if len(r_blocks) > 0:
all_r_blocks = np.concatenate([np.ravel(r) for r in r_blocks])
else:
all_r_blocks = np.empty(0, dtype=matrix.dtype)
R = BlockSparseTensor(
all_r_blocks,
charges=charges_r,
flows=flows_r,
order=order_r,
check_consistency=False)
if mode in ('reduced', 'complete'):
tmp_right_q_charge_labels = [
np.full(q_blocks[n].shape[1], fill_value=n, dtype=np.int16)
for n in range(len(q_blocks))
]
if len(tmp_right_q_charge_labels) > 0:
right_q_charge_labels = np.concatenate(tmp_right_q_charge_labels)
else:
right_q_charge_labels = np.empty(0, dtype=np.int16)
right_q_charge = charges[right_q_charge_labels]
charges_q = [
right_q_charge,
] + [matrix._charges[o] for o in matrix._order[0]]
order_q = [[0]] + [list(np.arange(1, len(matrix._order[0]) + 1))]
flows_q = [True] + [matrix._flows[o] for o in matrix._order[0]]
if len(q_blocks) > 0:
all_q_blocks = np.concatenate([np.ravel(q.T) for q in q_blocks])
else:
all_q_blocks = np.empty(0, dtype=matrix.dtype)
return BlockSparseTensor(
all_q_blocks,
charges=charges_q,
flows=flows_q,
order=order_q,
check_consistency=False).transpose((1, 0)), R
return R |
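A brief sketch of the supported modes (illustration only; assumes `U1Charge`, `Index` and `randn` are in scope):
```
import numpy as np
q = U1Charge(np.random.randint(0, 3, 8))
mat = randn([Index(q, flow=False), Index(q.copy(), flow=True)])
Q, R = qr(mat, mode='reduced')  # Q: (M, K), R: (K, N) with K = min(M, N)
R_only = qr(mat, mode='r')      # only the R factor
print(Q.shape, R.shape)
```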
Compute the eigen decomposition of a hermitian `M` by `M` matrix `matrix`.
Args:
matrix: A matrix (i.e. a rank-2 tensor) of type `BlockSparseTensor`
Returns:
(ChargeArray,BlockSparseTensor): The eigenvalues and eigenvectors | def eigh(matrix: BlockSparseTensor,
UPLO: Optional[Text] = 'L') -> Tuple[ChargeArray, BlockSparseTensor]:
"""
Compute the eigen decomposition of a hermitian `M` by `M` matrix `matrix`.
Args:
matrix: A matrix (i.e. a rank-2 tensor) of type `BlockSparseTensor`
Returns:
(ChargeArray,BlockSparseTensor): The eigenvalues and eigenvectors
"""
if matrix.ndim != 2:
raise NotImplementedError("eigh currently supports only rank-2 tensors.")
flat_charges = matrix._charges
flat_flows = matrix._flows
flat_order = matrix.flat_order
tr_partition = len(matrix._order[0])
blocks, charges, shapes = _find_transposed_diagonal_sparse_blocks(
flat_charges, flat_flows, tr_partition, flat_order)
eigvals = []
v_blocks = []
for n, block in enumerate(blocks):
e, v = np.linalg.eigh(np.reshape(matrix.data[block], shapes[:, n]), UPLO)
eigvals.append(e)
v_blocks.append(v)
tmp_labels = [
np.full(len(eigvals[n]), fill_value=n, dtype=np.int16)
for n in range(len(eigvals))
]
if len(tmp_labels) > 0:
eigvalscharge_labels = np.concatenate(tmp_labels)
else:
eigvalscharge_labels = np.empty(0, dtype=np.int16)
eigvalscharge = charges[eigvalscharge_labels]
if len(eigvals) > 0:
all_eigvals = np.concatenate(eigvals)
else:
all_eigvals = np.empty(0, dtype=get_real_dtype(matrix.dtype))
E = ChargeArray(all_eigvals, [eigvalscharge], [False])
charges_v = [eigvalscharge] + [matrix._charges[o] for o in matrix._order[0]]
order_v = [[0]] + [list(np.arange(1, len(matrix._order[0]) + 1))]
flows_v = [True] + [matrix._flows[o] for o in matrix._order[0]]
if len(v_blocks) > 0:
all_v_blocks = np.concatenate([np.ravel(v.T) for v in v_blocks])
else:
all_v_blocks = np.empty(0, dtype=matrix.dtype)
V = BlockSparseTensor(
all_v_blocks,
charges=charges_v,
flows=flows_v,
order=order_v,
check_consistency=False).transpose()
return E, V |
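A minimal sketch (illustrative only; it uses the block-sparse identity from `eye`, defined further below, as a trivially hermitian input; in practice any hermitian `BlockSparseTensor` matrix would do, and `U1Charge` and `Index` are assumed to be in scope):
```
import numpy as np
q = U1Charge(np.array([0, 0, 1, 1, 1]))
ident = eye(Index(q, flow=False))  # hermitian by construction
E, V = eigh(ident)
print(E.data)                      # [1. 1. 1. 1. 1.], grouped by charge block
```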
Compute the eigen decomposition of an `M` by `M` matrix `matrix`.
Args:
matrix: A matrix (i.e. a rank-2 tensor) of type `BlockSparseTensor`
Returns:
(ChargeArray,BlockSparseTensor): The eigenvalues and eigenvectors | def eig(matrix: BlockSparseTensor) -> Tuple[ChargeArray, BlockSparseTensor]:
"""
Compute the eigen decomposition of an `M` by `M` matrix `matrix`.
Args:
matrix: A matrix (i.e. a rank-2 tensor) of type `BlockSparseTensor`
Returns:
(ChargeArray,BlockSparseTensor): The eigenvalues and eigenvectors
"""
if matrix.ndim != 2:
raise NotImplementedError("eig currently supports only rank-2 tensors.")
flat_charges = matrix._charges
flat_flows = matrix._flows
flat_order = matrix.flat_order
tr_partition = len(matrix._order[0])
blocks, charges, shapes = _find_transposed_diagonal_sparse_blocks(
flat_charges, flat_flows, tr_partition, flat_order)
eigvals = []
v_blocks = []
for n, block in enumerate(blocks):
e, v = np.linalg.eig(np.reshape(matrix.data[block], shapes[:, n]))
eigvals.append(e)
v_blocks.append(v)
tmp_labels = [
np.full(len(eigvals[n]), fill_value=n, dtype=np.int16)
for n in range(len(eigvals))
]
if len(tmp_labels) > 0:
eigvalscharge_labels = np.concatenate(tmp_labels)
else:
eigvalscharge_labels = np.empty(0, dtype=np.int16)
eigvalscharge = charges[eigvalscharge_labels]
if len(eigvals) > 0:
all_eigvals = np.concatenate(eigvals)
else:
all_eigvals = np.empty(0, dtype=get_real_dtype(matrix.dtype))
E = ChargeArray(all_eigvals, [eigvalscharge], [False])
charges_v = [eigvalscharge] + [matrix._charges[o] for o in matrix._order[0]]
order_v = [[0]] + [list(np.arange(1, len(matrix._order[0]) + 1))]
flows_v = [True] + [matrix._flows[o] for o in matrix._order[0]]
if len(v_blocks) > 0:
all_v_blocks = np.concatenate([np.ravel(v.T) for v in v_blocks])
else:
all_v_blocks = np.empty(0, dtype=matrix.dtype)
V = BlockSparseTensor(
all_v_blocks,
charges=charges_v,
flows=flows_v,
order=order_v,
check_consistency=False).transpose()
return E, V |
Compute the matrix inverse of `matrix`.
Returns:
BlockSparseTensor: The inverse of `matrix`. | def inv(matrix: BlockSparseTensor) -> BlockSparseTensor:
"""
Compute the matrix inverse of `matrix`.
Returns:
BlockSparseTensor: The inverse of `matrix`.
"""
if matrix.ndim != 2:
raise ValueError("`inv` can only be taken for matrices, "
"found tensor.ndim={}".format(matrix.ndim))
flat_charges = matrix._charges
flat_flows = matrix._flows
flat_order = matrix.flat_order
tr_partition = len(matrix._order[0])
blocks, _, shapes = _find_transposed_diagonal_sparse_blocks(
flat_charges, flat_flows, tr_partition, flat_order)
data = np.empty(np.sum(np.prod(shapes, axis=0)), dtype=matrix.dtype)
for n, block in enumerate(blocks):
data[block] = np.ravel(
np.linalg.inv(np.reshape(matrix.data[block], shapes[:, n])).T)
#pylint: disable=line-too-long
return BlockSparseTensor(
data=data,
charges=matrix._charges,
flows=np.logical_not(matrix._flows),
order=matrix._order,
check_consistency=False).transpose((1, 0)) |
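A quick consistency sketch (illustration only; assumes `U1Charge`, `Index`, `randn`, `tensordot` and `norm` are in scope): contracting a matrix with its blockwise inverse yields the identity.
```
import numpy as np
q = U1Charge(np.random.randint(0, 2, 5))
mat = randn([Index(q, flow=False), Index(q.copy(), flow=True)])
ident = tensordot(mat, inv(mat), ([1], [0]))
print(norm(ident))  # ~ sqrt(5), the Frobenius norm of a 5x5 identity
```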
Return an identity matrix.
Args:
column_index: The column index of the matrix.
row_index: The row index of the matrix.
dtype: The dtype of the matrix.
Returns:
BlockSparseTensor | def eye(column_index: Index,
row_index: Optional[Index] = None,
dtype: Optional[Type[np.number]] = None) -> BlockSparseTensor:
"""
Return an identity matrix.
Args:
column_index: The column index of the matrix.
row_index: The row index of the matrix.
dtype: The dtype of the matrix.
Returns:
BlockSparseTensor
"""
if row_index is None:
row_index = column_index.copy().flip_flow()
if dtype is None:
dtype = np.float64
blocks, _, shapes = _find_diagonal_sparse_blocks(
column_index.flat_charges + row_index.flat_charges,
column_index.flat_flows + row_index.flat_flows,
len(column_index.flat_charges))
data = np.empty(np.int64(np.sum(np.prod(shapes, axis=0))), dtype=dtype)
for n, block in enumerate(blocks):
data[block] = np.ravel(np.eye(shapes[0, n], shapes[1, n], dtype=dtype))
order = [list(np.arange(0, len(column_index.flat_charges)))] + [
list(
np.arange(
len(column_index.flat_charges),
len(column_index.flat_charges) + len(row_index.flat_charges)))
]
return BlockSparseTensor(
data=data,
charges=column_index.flat_charges + row_index.flat_charges,
flows=column_index.flat_flows + row_index.flat_flows,
order=order,
check_consistency=False) |
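A small sketch (illustration only; assumes `U1Charge` and `Index` are in scope): the identity only stores the diagonal blocks allowed by the symmetry.
```
import numpy as np
q = U1Charge(np.array([0, 0, 1, 1, 1]))
ident = eye(Index(q, flow=False))  # row index defaults to the flipped column index
print(ident.shape)                 # (5, 5)
print(np.sum(ident.data))          # 5.0: one stored 1.0 per diagonal element
```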
Compute the trace of a matrix or tensor. If input has `ndim>2`, take
the trace over the last two dimensions.
Args:
tensor: A `BlockSparseTensor`.
axes: The axes over which the trace should be computed.
Defaults to the last two indices of the tensor.
Returns:
BlockSparseTensor: The result of taking the trace. | def trace(tensor: BlockSparseTensor,
axes: Optional[Sequence[int]] = None) -> BlockSparseTensor:
"""
Compute the trace of a matrix or tensor. If input has `ndim>2`, take
the trace over the last two dimensions.
Args:
tensor: A `BlockSparseTensor`.
axes: The axes over which the trace should be computed.
Defaults to the last two indices of the tensor.
Returns:
BlockSparseTensor: The result of taking the trace.
"""
if tensor.ndim > 1:
if axes is None:
axes = (tensor.ndim - 2, tensor.ndim - 1)
if len(axes) != 2:
raise ValueError(f"`len(axes)` has to be 2, found `axes = {axes}`")
if not np.array_equal(tensor.flows[axes[0]],
np.logical_not(tensor.flows[axes[1]])):
raise ValueError(
f"trace indices for axes {axes} have non-matching flows.")
sparse_shape = tensor.sparse_shape
if sparse_shape[axes[0]].copy().flip_flow() != sparse_shape[axes[1]]:
raise ValueError(f"trace indices for axes {axes} are not matching")
#flatten the shape of `tensor`
out = tensor.reshape(
flatten([[tensor._charges[n].dim for n in o] for o in tensor._order]))
_, _, labels0 = np.intersect1d(
tensor._order[axes[0]], flatten(out._order), return_indices=True)
_, _, labels1 = np.intersect1d(
tensor._order[axes[1]], flatten(out._order), return_indices=True)
a0 = list(labels0[np.argsort(tensor._order[axes[0]])])
a1 = list(labels1[np.argsort(tensor._order[axes[1]])])
while len(a0) > 0:
i = a0.pop(0)
j = a1.pop(0)
identity = eye(
Index([out._charges[out._order[i][0]]],
[not out._flows[out._order[i][0]]]))
#pylint: disable=line-too-long
out = tensordot(out, identity, ([i, j], [0, 1])) # pytype: disable=wrong-arg-types
a0ar = np.asarray(a0)
mask_min = a0ar > np.min([i, j])
mask_max = a0ar > np.max([i, j])
a0ar[np.logical_and(mask_min, mask_max)] -= 2
a0ar[np.logical_xor(mask_min, mask_max)] -= 1
a1ar = np.asarray(a1)
mask_min = a1ar > np.min([i, j])
mask_max = a1ar > np.max([i, j])
a1ar[np.logical_and(mask_min, mask_max)] -= 2
a1ar[np.logical_xor(mask_min, mask_max)] -= 1
a0 = list(a0ar)
a1 = list(a1ar)
if out.ndim == 0:
return out.item()
return out # pytype: disable=bad-return-type
raise ValueError("trace can only be taken for tensors with ndim > 1") |
Compute the Moore-Penrose pseudo inverse of `matrix`.
Args:
rcond: Pseudo inverse cutoff.
Returns:
BlockSparseTensor: The pseudo inverse of `matrix`. | def pinv(matrix: BlockSparseTensor,
rcond: Optional[float] = 1E-15,
hermitian: Optional[bool] = False) -> BlockSparseTensor:
"""
Compute the Moore-Penrose pseudo inverse of `matrix`.
Args:
rcond: Pseudo inverse cutoff.
Returns:
BlockSparseTensor: The pseudo inverse of `matrix`.
"""
if matrix.ndim != 2:
raise ValueError("`pinv` can only be taken for matrices, "
"found tensor.ndim={}".format(matrix.ndim))
flat_charges = matrix._charges
flat_flows = matrix._flows
flat_order = matrix.flat_order
tr_partition = len(matrix._order[0])
blocks, _, shapes = _find_transposed_diagonal_sparse_blocks(
flat_charges, flat_flows, tr_partition, flat_order)
data = np.empty(np.sum(np.prod(shapes, axis=0)), dtype=matrix.dtype)
for n, block in enumerate(blocks):
data[block] = np.ravel(
np.linalg.pinv(
np.reshape(matrix.data[block], shapes[:, n]),
rcond=rcond,
hermitian=hermitian).T)
#pylint: disable=line-too-long
return BlockSparseTensor(
data=data,
charges=matrix._charges,
flows=np.logical_not(matrix._flows),
order=matrix._order,
check_consistency=False).transpose((1, 0)) |
Initialize a 1d np.ndarray of length `size` of dtype `dtype`
with random gaussian values.
Args:
size: The length of the array.
dtype: The desired dtype.
Returns:
np.ndarray: The data array. | def _randn(size: int, dtype: Type[np.number] = np.float64) -> np.ndarray:
"""
Initialize a 1d np.ndarray of length `size` of dtype `dtype`
with random gaussian values.
Args:
size: The length of the array.
dtype: The desired dtype.
Returns:
np.ndarray: The data array.
"""
data = np.random.randn(size).astype(dtype)
if ((np.dtype(dtype) is np.dtype(np.complex128)) or
(np.dtype(dtype) is np.dtype(np.complex64))):
data += 1j * np.random.randn(size).astype(dtype)
return data |
Initialize a 1d np.ndarray of length `size` of dtype `dtype`
with random uniform values.
Args:
size: The length of the array.
dtype: The desired dtype.
boundaries: The boundaries of the interval where numbers are
drawn from.
Returns:
np.ndarray: The data array. | def _random(size: int,
dtype: Type[np.number] = np.float64,
boundaries: Tuple = (0, 1)) -> np.ndarray:
"""
Initialize a 1d np.ndarray of length `size` of dtype `dtype`
with random uniform values.
Args:
size: The length of the array.
dtype: The desired dtype.
boundaries: The boundaries of the interval where numbers are
drawn from.
Returns:
np.ndarray: The data array.
"""
data = np.random.uniform(boundaries[0], boundaries[1], size).astype(dtype)
if ((np.dtype(dtype) is np.dtype(np.complex128)) or
(np.dtype(dtype) is np.dtype(np.complex64))):
data += 1j * np.random.uniform(boundaries[0], boundaries[1],
size).astype(dtype)
return data |
Flatten a list of lists into a single list.
Args:
  list_of_list: A list of lists.
Returns:
  np.ndarray: The flattened input. | def flatten(list_of_list: List[List]) -> np.ndarray:
"""
Flatten a list of lists into a single list.
Args:
    list_of_list: A list of lists.
  Returns:
    np.ndarray: The flattened input.
"""
return np.array([l for sublist in list_of_list for l in sublist]) |
Compute linear positions of tensor elements
of a tensor with dimensions `dims` according to `strides`.
Args:
dims: An np.ndarray of (original) tensor dimensions.
strides: An np.ndarray of (possibly permuted) strides.
Returns:
np.ndarray: Linear positions of tensor elements according to `strides`. | def fuse_stride_arrays(dims: Union[List[int], np.ndarray],
strides: Union[List[int], np.ndarray]) -> np.ndarray:
"""
Compute linear positions of tensor elements
of a tensor with dimensions `dims` according to `strides`.
Args:
dims: An np.ndarray of (original) tensor dimensions.
strides: An np.ndarray of (possibly permuted) strides.
Returns:
np.ndarray: Linear positions of tensor elements according to `strides`.
"""
return fuse_ndarrays([
np.arange(0, strides[n] * dims[n], strides[n], dtype=SIZE_T)
for n in range(len(dims))
]) |
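A tiny worked sketch (illustration only): for a (2, 3) array with row-major strides (3, 1), the fused linear positions are simply 0 through 5.
```
import numpy as np
print(fuse_stride_arrays(np.array([2, 3]), np.array([3, 1])))
# -> [0 1 2 3 4 5]
```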