file_name
large_stringlengths 4
69
| prefix
large_stringlengths 0
26.7k
| suffix
large_stringlengths 0
24.8k
| middle
large_stringlengths 0
2.12k
| fim_type
large_stringclasses 4
values |
---|---|---|---|---|
lib.rs | // Copyright (c) The Diem Core Contributors
// SPDX-License-Identifier: Apache-2.0
#![forbid(unsafe_code)]
//! This module provides algorithms for accessing and updating a Merkle Accumulator structure
//! persisted in a key-value store. Note that this doesn't write to the storage directly, rather,
//! it reads from it via the `HashReader` trait and yields writes via an in memory `HashMap`.
//!
//! # Merkle Accumulator
//! Given an ever growing (append only) series of "leaf" hashes, we construct an evolving Merkle
//! Tree for which proofs of inclusion/exclusion of a leaf hash at a leaf index in a snapshot
//! of the tree (represented by root hash) can be given.
//!
//! # Leaf Nodes
//! Leaf nodes carry hash values to be stored and proved. They are only appended to the tree but
//! never deleted or updated.
//!
//! # Internal Nodes
//! A non-leaf node carries the hash value derived from both its left and right children.
//!
//! # Placeholder Nodes
//! To make sure each Leaf node has a Merkle Proof towards the root, placeholder nodes are added so
//! that along the route from a leaf to the root, each node has a sibling. Placeholder nodes have
//! the hash value `ACCUMULATOR_PLACEHOLDER_HASH`
//!
//! A placeholder node can appear as either a Leaf node or a non-Leaf node, but there is at most one
//! placeholder leaf at any time.
//!
//! # Frozen Nodes & Non-frozen Nodes
//! As leaves are added to the tree, placeholder nodes get replaced by non-placeholder nodes, and
//! when a node has all its descendants being non-placeholder, it becomes "Frozen" -- its hash value
//! won't change again in the event of new leaves being added. All leaves appended (not counting the
//! one possible placeholder leaf) are by definition Frozen.
//!
//! Other nodes, which have one or more placeholder descendants are Non-Frozen. As new elements are
//! appended to the accumulator the hash value of these nodes will change.
//!
//! # Leaf Count
//! Given a count of the number of leaves in a Merkle Accumulator it is possible to determine the
//! shape of the accumulator -- which nodes are filled and which nodes are placeholder nodes.
//!
//! Example:
//! Logical view of a Merkle Accumulator with 5 leaves:
//! ```text
//! Non-fzn
//! / \
//! / \
//! / \
//! Fzn2 Non-fzn
//! / \ / \
//! / \ / \
//! Fzn1 Fzn3 Non-fzn [Placeholder]
//! / \ / \ / \
//! L0 L1 L2 L3 L4 [Placeholder]
//! ```
//!
//! # Position and Physical Representation
//! As a Merkle Accumulator tree expands to the right and upwards, we number newly frozen nodes
//! monotonically. One way to do it is simply to use in-order index of nodes, and this is what
//! we do for the in-memory representation. We call the stated numbers identifying nodes below
//! simply "Position", and unless otherwise stated, this is the in-order position.
//!
//! For writing to disk however, we write all the children of a node before the parent.
//! Thus for disk write order, it is more convenient to use the post-order position as an index.
//! And with that we can map a Merkle Accumulator into a key-value storage: key is the post-order
//! position of a node, and the value is hash value it carries.
//!
//! We store only Frozen nodes, and generate non-Frozen nodes on the fly when accessing the tree.
//! This way, the physical representation of the tree is append-only, i.e. once written to physical
//! storage, nodes won't be either modified or deleted.
//!
//! Here is what we persist for the logical tree in the above example:
//!
//! ```text
//! Fzn2(6)
//! / \
//! / \
//! Fzn1(2) Fzn3(5)
//! / \ / \
//! L0(0) L1(1) L2(3) L3(4) L4(7)
//! ```
//!
//! When the next leaf node is persisted, the physical representation will be:
//!
//! ```text
//! Fzn2(6)
//! / \
//! / \
//! Fzn1(2) Fzn3(5) Fzn4(9)
//! / \ / \ / \
//! L0(0) L1(1) L2(3) L3(4) L4(7) L5(8)
//! ```
//!
//! The numbering corresponds to the post-order traversal of the tree.
//!
//! To think in key-value pairs:
//! ```text
//! |<-key->|<--value-->|
//! | 0 | hash_L0 |
//! | 1 | hash_L1 |
//! | 2 | hash_Fzn1 |
//! | ... | ... |
//! ```
#[cfg(any(test, feature = "fuzzing"))]
pub mod test_helpers;
use anyhow::{ensure, format_err, Result};
use diem_crypto::hash::{CryptoHash, CryptoHasher, HashValue, ACCUMULATOR_PLACEHOLDER_HASH};
use diem_types::proof::{
definition::{LeafCount, MAX_ACCUMULATOR_PROOF_DEPTH},
position::{FrozenSubTreeIterator, FrozenSubtreeSiblingIterator, Position},
AccumulatorConsistencyProof, AccumulatorProof, AccumulatorRangeProof, MerkleTreeInternalNode,
};
use mirai_annotations::*;
use std::marker::PhantomData;
/// Defines the interface between `MerkleAccumulator` and underlying storage.
pub trait HashReader {
/// Return `HashValue` carried by the node at `Position`.
fn get(&self, position: Position) -> Result<HashValue>;
}
/// A `Node` in a `MerkleAccumulator` tree is a `HashValue` at a `Position`
type Node = (Position, HashValue);
/// In this live Merkle Accumulator algorithms.
pub struct MerkleAccumulator<R, H> {
reader: PhantomData<R>,
hasher: PhantomData<H>,
}
impl<R, H> MerkleAccumulator<R, H>
where
R: HashReader,
H: CryptoHasher,
{
/// Given an existing Merkle Accumulator (represented by `num_existing_leaves` and a `reader`
/// that is able to fetch all existing frozen nodes), and a list of leaves to be appended,
/// returns the result root hash and new nodes to be frozen.
pub fn append(
reader: &R,
num_existing_leaves: LeafCount,
new_leaves: &[HashValue],
) -> Result<(HashValue, Vec<Node>)> {
MerkleAccumulatorView::<R, H>::new(reader, num_existing_leaves).append(new_leaves)
}
/// Get proof of inclusion of the leaf at `leaf_index` in this Merkle Accumulator of
/// `num_leaves` leaves in total. Siblings are read via `reader` (or generated dynamically
/// if they are non-frozen).
///
/// See [`diem_types::proof::AccumulatorProof`] for proof format.
pub fn get_proof(
reader: &R,
num_leaves: LeafCount,
leaf_index: u64,
) -> Result<AccumulatorProof<H>> {
MerkleAccumulatorView::<R, H>::new(reader, num_leaves).get_proof(leaf_index)
}
/// Gets a proof that shows the full accumulator is consistent with a smaller accumulator.
///
/// See [`diem_types::proof::AccumulatorConsistencyProof`] for proof format.
pub fn get_consistency_proof(
reader: &R,
full_acc_leaves: LeafCount,
sub_acc_leaves: LeafCount,
) -> Result<AccumulatorConsistencyProof> {
MerkleAccumulatorView::<R, H>::new(reader, full_acc_leaves)
.get_consistency_proof(sub_acc_leaves)
}
/// Gets a proof that shows a range of leaves are part of the accumulator.
///
/// See [`diem_types::proof::AccumulatorRangeProof`] for proof format.
pub fn get_range_proof(
reader: &R,
full_acc_leaves: LeafCount,
first_leaf_index: Option<u64>,
num_leaves: LeafCount,
) -> Result<AccumulatorRangeProof<H>> {
MerkleAccumulatorView::<R, H>::new(reader, full_acc_leaves)
.get_range_proof(first_leaf_index, num_leaves)
}
/// See `get_range_proof`. This is the version of it that returns `Position`s only.
pub fn get_range_proof_positions(
reader: &R,
full_acc_leaves: LeafCount,
first_leaf_index: Option<u64>,
num_leaves: LeafCount,
) -> Result<(Vec<Position>, Vec<Position>)> {
MerkleAccumulatorView::<R, H>::new(reader, full_acc_leaves)
.get_range_proof_positions(first_leaf_index, num_leaves)
}
/// From left to right, gets frozen subtree root hashes of the accumulator. For example, if the
/// accumulator has 5 leaves, `x` and `e` are returned.
/// ```text
/// root
/// / \
/// / \
/// / \
/// x o
/// / \ / \
/// / \ / \
/// o o o placeholder
/// / \ / \ / \
/// a b c d e placeholder
/// ```
pub fn get_frozen_subtree_hashes(reader: &R, num_leaves: LeafCount) -> Result<Vec<HashValue>> |
/// Get root hash at a specific version (hence num_leaves).
pub fn get_root_hash(reader: &R, num_leaves: LeafCount) -> Result<HashValue> {
MerkleAccumulatorView::<R, H>::new(reader, num_leaves).get_root_hash()
}
}
/// Actual implementation of Merkle Accumulator algorithms, which carries the `reader` and
/// `num_leaves` on an instance for convenience
struct MerkleAccumulatorView<'a, R, H> {
reader: &'a R,
num_leaves: LeafCount,
hasher: PhantomData<H>,
}
impl<'a, R, H> MerkleAccumulatorView<'a, R, H>
where
R: HashReader,
H: CryptoHasher,
{
fn new(reader: &'a R, num_leaves: LeafCount) -> Self {
Self {
reader,
num_leaves,
hasher: PhantomData,
}
}
/// implementation for pub interface `MerkleAccumulator::append`
fn append(&self, new_leaves: &[HashValue]) -> Result<(HashValue, Vec<Node>)> {
// Deal with the case where new_leaves is empty
if new_leaves.is_empty() {
if self.num_leaves == 0 {
return Ok((*ACCUMULATOR_PLACEHOLDER_HASH, Vec::new()));
} else {
let root_hash = self.get_hash(Position::root_from_leaf_count(self.num_leaves))?;
return Ok((root_hash, Vec::new()));
}
}
let num_new_leaves = new_leaves.len();
let last_new_leaf_count = self.num_leaves + num_new_leaves as LeafCount;
let root_level = Position::root_level_from_leaf_count(last_new_leaf_count);
let mut to_freeze = Vec::with_capacity(Self::max_to_freeze(num_new_leaves, root_level));
// Iterate over the new leaves, adding them to to_freeze and then adding any frozen parents
// when right children are encountered. This has the effect of creating frozen nodes in
// perfect post-order, which can be used as a strictly increasing append only index for
// the underlying storage.
//
// We will track newly created left siblings while iterating so we can pair them with their
// right sibling, if and when it becomes frozen. If the frozen left sibling is not created
// in this iteration, it must already exist in storage.
let mut left_siblings: Vec<(_, _)> = Vec::new();
for (leaf_offset, leaf) in new_leaves.iter().enumerate() {
let leaf_pos = Position::from_leaf_index(self.num_leaves + leaf_offset as LeafCount);
let mut hash = *leaf;
to_freeze.push((leaf_pos, hash));
let mut pos = leaf_pos;
while pos.is_right_child() {
let sibling = pos.sibling();
hash = match left_siblings.pop() {
Some((x, left_hash)) => {
assert_eq!(x, sibling);
Self::hash_internal_node(left_hash, hash)
}
None => Self::hash_internal_node(self.reader.get(sibling)?, hash),
};
pos = pos.parent();
to_freeze.push((pos, hash));
}
// The node remaining must be a left child, possibly the root of a complete binary tree.
left_siblings.push((pos, hash));
}
// Now reconstruct the final root hash by walking up to root level and adding
// placeholder hash nodes as needed on the right, and left siblings that have either
// been newly created or read from storage.
let (mut pos, mut hash) = left_siblings.pop().expect("Must have at least one node");
for _ in pos.level()..root_level as u32 {
hash = if pos.is_left_child() {
Self::hash_internal_node(hash, *ACCUMULATOR_PLACEHOLDER_HASH)
} else {
let sibling = pos.sibling();
match left_siblings.pop() {
Some((x, left_hash)) => {
assert_eq!(x, sibling);
Self::hash_internal_node(left_hash, hash)
}
None => Self::hash_internal_node(self.reader.get(sibling)?, hash),
}
};
pos = pos.parent();
}
assert!(left_siblings.is_empty());
Ok((hash, to_freeze))
}
/// upper bound of num of frozen nodes:
/// new leaves and resulting frozen internal nodes forming a complete binary subtree
/// num_new_leaves * 2 - 1 < num_new_leaves * 2
/// and the full route from root of that subtree to the accumulator root turns frozen
/// height - (log2(num_new_leaves) + 1) < height - 1 = root_level
fn max_to_freeze(num_new_leaves: usize, root_level: u32) -> usize {
precondition!(root_level as usize <= MAX_ACCUMULATOR_PROOF_DEPTH);
precondition!(num_new_leaves < (usize::max_value() / 2));
precondition!(num_new_leaves * 2 <= usize::max_value() - root_level as usize);
num_new_leaves * 2 + root_level as usize
}
fn hash_internal_node(left: HashValue, right: HashValue) -> HashValue {
MerkleTreeInternalNode::<H>::new(left, right).hash()
}
fn rightmost_leaf_index(&self) -> u64 {
(self.num_leaves - 1) as u64
}
fn get_hash(&self, position: Position) -> Result<HashValue> {
let idx = self.rightmost_leaf_index();
if position.is_placeholder(idx) {
Ok(*ACCUMULATOR_PLACEHOLDER_HASH)
} else if position.is_freezable(idx) {
self.reader.get(position)
} else {
// non-frozen non-placeholder node
Ok(Self::hash_internal_node(
self.get_hash(position.left_child())?,
self.get_hash(position.right_child())?,
))
}
}
fn get_hashes(&self, positions: &[Position]) -> Result<Vec<HashValue>> {
positions.iter().map(|p| self.get_hash(*p)).collect()
}
fn get_root_hash(&self) -> Result<HashValue> {
self.get_hash(Position::root_from_leaf_count(self.num_leaves))
}
/// implementation for pub interface `MerkleAccumulator::get_proof`
fn get_proof(&self, leaf_index: u64) -> Result<AccumulatorProof<H>> {
ensure!(
leaf_index < self.num_leaves as u64,
"invalid leaf_index {}, num_leaves {}",
leaf_index,
self.num_leaves
);
let siblings = self.get_siblings(leaf_index, |_p| true)?;
Ok(AccumulatorProof::new(siblings))
}
/// Implementation for public interface `MerkleAccumulator::get_consistency_proof`.
fn get_consistency_proof(
&self,
sub_acc_leaves: LeafCount,
) -> Result<AccumulatorConsistencyProof> {
ensure!(
sub_acc_leaves <= self.num_leaves,
"Can't get accumulator consistency proof for a version newer than the local version. \
Local next version: {}, asked next version: {}",
self.num_leaves,
sub_acc_leaves,
);
let subtrees = FrozenSubtreeSiblingIterator::new(sub_acc_leaves, self.num_leaves)
.map(|p| self.reader.get(p))
.collect::<Result<Vec<_>>>()?;
Ok(AccumulatorConsistencyProof::new(subtrees))
}
/// Implementation for public interface `MerkleAccumulator::get_range_proof`.
fn get_range_proof(
&self,
first_leaf_index: Option<u64>,
num_leaves: LeafCount,
) -> Result<AccumulatorRangeProof<H>> {
let (left_siblings, right_siblings) =
self.get_range_proof_positions(first_leaf_index, num_leaves)?;
Ok(AccumulatorRangeProof::new(
self.get_hashes(&left_siblings)?,
self.get_hashes(&right_siblings)?,
))
}
fn get_range_proof_positions(
&self,
first_leaf_index: Option<u64>,
num_leaves: LeafCount,
) -> Result<(Vec<Position>, Vec<Position>)> {
if first_leaf_index.is_none() {
ensure!(
num_leaves == 0,
"num_leaves is not zero while first_leaf_index is None.",
);
return Ok((Vec::new(), Vec::new()));
}
let first_leaf_index = first_leaf_index.expect("first_leaf_index should not be None.");
ensure!(
num_leaves > 0,
"num_leaves is zero while first_leaf_index is not None.",
);
let last_leaf_index = first_leaf_index
.checked_add(num_leaves - 1)
.ok_or_else(|| format_err!("Requesting too many leaves."))?;
ensure!(
last_leaf_index < self.num_leaves as u64,
"Invalid last_leaf_index: {}, num_leaves: {}",
last_leaf_index,
self.num_leaves,
);
let left_siblings = self.get_sibling_positions(first_leaf_index, |p| p.is_left_child());
let right_siblings = self.get_sibling_positions(last_leaf_index, |p| p.is_right_child());
Ok((left_siblings, right_siblings))
}
fn get_siblings(
&self,
leaf_index: u64,
filter: impl Fn(Position) -> bool,
) -> Result<Vec<HashValue>> {
self.get_hashes(&self.get_sibling_positions(leaf_index, filter))
}
/// Helper function to get siblings on the path from the given leaf to the root. An additional
/// filter function can be applied to filter out certain siblings.
fn get_sibling_positions(
&self,
leaf_index: u64,
filter: impl Fn(Position) -> bool,
) -> Vec<Position> {
let root_pos = Position::root_from_leaf_count(self.num_leaves);
Position::from_leaf_index(leaf_index)
.iter_ancestor_sibling()
.take(root_pos.level() as usize)
.filter(|p| filter(*p))
.collect()
}
/// Implementation for public interface `MerkleAccumulator::get_frozen_subtree_hashes`.
fn get_frozen_subtree_hashes(&self) -> Result<Vec<HashValue>> {
FrozenSubTreeIterator::new(self.num_leaves)
.map(|p| self.reader.get(p))
.collect::<Result<Vec<_>>>()
}
}
#[cfg(test)]
mod tests;
| {
MerkleAccumulatorView::<R, H>::new(reader, num_leaves).get_frozen_subtree_hashes()
} | identifier_body |
lib.rs | // Copyright (c) The Diem Core Contributors
// SPDX-License-Identifier: Apache-2.0
#![forbid(unsafe_code)]
//! This module provides algorithms for accessing and updating a Merkle Accumulator structure
//! persisted in a key-value store. Note that this doesn't write to the storage directly, rather,
//! it reads from it via the `HashReader` trait and yields writes via an in memory `HashMap`.
//!
//! # Merkle Accumulator
//! Given an ever growing (append only) series of "leaf" hashes, we construct an evolving Merkle
//! Tree for which proofs of inclusion/exclusion of a leaf hash at a leaf index in a snapshot
//! of the tree (represented by root hash) can be given.
//!
//! # Leaf Nodes
//! Leaf nodes carry hash values to be stored and proved. They are only appended to the tree but
//! never deleted or updated.
//!
//! # Internal Nodes
//! A non-leaf node carries the hash value derived from both its left and right children.
//!
//! # Placeholder Nodes
//! To make sure each Leaf node has a Merkle Proof towards the root, placeholder nodes are added so
//! that along the route from a leaf to the root, each node has a sibling. Placeholder nodes have
//! the hash value `ACCUMULATOR_PLACEHOLDER_HASH`
//!
//! A placeholder node can appear as either a Leaf node or a non-Leaf node, but there is at most one
//! placeholder leaf at any time.
//!
//! # Frozen Nodes & Non-frozen Nodes
//! As leaves are added to the tree, placeholder nodes get replaced by non-placeholder nodes, and
//! when a node has all its descendants being non-placeholder, it becomes "Frozen" -- its hash value
//! won't change again in the event of new leaves being added. All leaves appended (not counting the
//! one possible placeholder leaf) are by definition Frozen.
//!
//! Other nodes, which have one or more placeholder descendants are Non-Frozen. As new elements are
//! appended to the accumulator the hash value of these nodes will change.
//!
//! # Leaf Count
//! Given a count of the number of leaves in a Merkle Accumulator it is possible to determine the
//! shape of the accumulator -- which nodes are filled and which nodes are placeholder nodes.
//!
//! Example:
//! Logical view of a Merkle Accumulator with 5 leaves:
//! ```text
//! Non-fzn
//! / \
//! / \
//! / \
//! Fzn2 Non-fzn
//! / \ / \
//! / \ / \
//! Fzn1 Fzn3 Non-fzn [Placeholder]
//! / \ / \ / \
//! L0 L1 L2 L3 L4 [Placeholder]
//! ```
//!
//! # Position and Physical Representation
//! As a Merkle Accumulator tree expands to the right and upwards, we number newly frozen nodes
//! monotonically. One way to do it is simply to use in-order index of nodes, and this is what
//! we do for the in-memory representation. We call the stated numbers identifying nodes below
//! simply "Position", and unless otherwise stated, this is the in-order position.
//!
//! For writing to disk however, we write all the children of a node before the parent.
//! Thus for disk write order, it is more convenient to use the post-order position as an index.
//! And with that we can map a Merkle Accumulator into a key-value storage: key is the post-order
//! position of a node, and the value is hash value it carries.
//!
//! We store only Frozen nodes, and generate non-Frozen nodes on the fly when accessing the tree.
//! This way, the physical representation of the tree is append-only, i.e. once written to physical
//! storage, nodes won't be either modified or deleted.
//!
//! Here is what we persist for the logical tree in the above example:
//!
//! ```text
//! Fzn2(6)
//! / \
//! / \
//! Fzn1(2) Fzn3(5)
//! / \ / \
//! L0(0) L1(1) L2(3) L3(4) L4(7)
//! ```
//!
//! When the next leaf node is persisted, the physical representation will be:
//!
//! ```text
//! Fzn2(6)
//! / \
//! / \
//! Fzn1(2) Fzn3(5) Fzn4(9)
//! / \ / \ / \
//! L0(0) L1(1) L2(3) L3(4) L4(7) L5(8)
//! ```
//!
//! The numbering corresponds to the post-order traversal of the tree.
//!
//! To think in key-value pairs:
//! ```text
//! |<-key->|<--value-->|
//! | 0 | hash_L0 |
//! | 1 | hash_L1 |
//! | 2 | hash_Fzn1 |
//! | ... | ... |
//! ```
#[cfg(any(test, feature = "fuzzing"))]
pub mod test_helpers;
use anyhow::{ensure, format_err, Result};
use diem_crypto::hash::{CryptoHash, CryptoHasher, HashValue, ACCUMULATOR_PLACEHOLDER_HASH};
use diem_types::proof::{
definition::{LeafCount, MAX_ACCUMULATOR_PROOF_DEPTH},
position::{FrozenSubTreeIterator, FrozenSubtreeSiblingIterator, Position},
AccumulatorConsistencyProof, AccumulatorProof, AccumulatorRangeProof, MerkleTreeInternalNode,
};
use mirai_annotations::*;
use std::marker::PhantomData;
/// Defines the interface between `MerkleAccumulator` and underlying storage.
pub trait HashReader {
/// Return `HashValue` carried by the node at `Position`.
fn get(&self, position: Position) -> Result<HashValue>;
}
/// A `Node` in a `MerkleAccumulator` tree is a `HashValue` at a `Position`
type Node = (Position, HashValue);
/// In this live Merkle Accumulator algorithms.
pub struct MerkleAccumulator<R, H> {
reader: PhantomData<R>,
hasher: PhantomData<H>,
}
impl<R, H> MerkleAccumulator<R, H>
where
R: HashReader,
H: CryptoHasher,
{
/// Given an existing Merkle Accumulator (represented by `num_existing_leaves` and a `reader`
/// that is able to fetch all existing frozen nodes), and a list of leaves to be appended,
/// returns the result root hash and new nodes to be frozen.
pub fn append(
reader: &R,
num_existing_leaves: LeafCount,
new_leaves: &[HashValue],
) -> Result<(HashValue, Vec<Node>)> {
MerkleAccumulatorView::<R, H>::new(reader, num_existing_leaves).append(new_leaves)
}
/// Get proof of inclusion of the leaf at `leaf_index` in this Merkle Accumulator of
/// `num_leaves` leaves in total. Siblings are read via `reader` (or generated dynamically
/// if they are non-frozen).
///
/// See [`diem_types::proof::AccumulatorProof`] for proof format.
pub fn get_proof(
reader: &R,
num_leaves: LeafCount,
leaf_index: u64,
) -> Result<AccumulatorProof<H>> {
MerkleAccumulatorView::<R, H>::new(reader, num_leaves).get_proof(leaf_index)
}
/// Gets a proof that shows the full accumulator is consistent with a smaller accumulator.
///
/// See [`diem_types::proof::AccumulatorConsistencyProof`] for proof format.
pub fn get_consistency_proof(
reader: &R,
full_acc_leaves: LeafCount,
sub_acc_leaves: LeafCount,
) -> Result<AccumulatorConsistencyProof> {
MerkleAccumulatorView::<R, H>::new(reader, full_acc_leaves)
.get_consistency_proof(sub_acc_leaves)
}
/// Gets a proof that shows a range of leaves are part of the accumulator.
///
/// See [`diem_types::proof::AccumulatorRangeProof`] for proof format.
pub fn get_range_proof(
reader: &R,
full_acc_leaves: LeafCount,
first_leaf_index: Option<u64>,
num_leaves: LeafCount,
) -> Result<AccumulatorRangeProof<H>> {
MerkleAccumulatorView::<R, H>::new(reader, full_acc_leaves)
.get_range_proof(first_leaf_index, num_leaves)
}
/// See `get_range_proof`. This is the version of it that returns `Position`s only.
pub fn get_range_proof_positions(
reader: &R,
full_acc_leaves: LeafCount,
first_leaf_index: Option<u64>,
num_leaves: LeafCount,
) -> Result<(Vec<Position>, Vec<Position>)> {
MerkleAccumulatorView::<R, H>::new(reader, full_acc_leaves)
.get_range_proof_positions(first_leaf_index, num_leaves)
}
/// From left to right, gets frozen subtree root hashes of the accumulator. For example, if the
/// accumulator has 5 leaves, `x` and `e` are returned.
/// ```text
/// root
/// / \
/// / \
/// / \
/// x o
/// / \ / \
/// / \ / \
/// o o o placeholder
/// / \ / \ / \
/// a b c d e placeholder
/// ```
pub fn get_frozen_subtree_hashes(reader: &R, num_leaves: LeafCount) -> Result<Vec<HashValue>> {
MerkleAccumulatorView::<R, H>::new(reader, num_leaves).get_frozen_subtree_hashes()
}
/// Get root hash at a specific version (hence num_leaves).
pub fn get_root_hash(reader: &R, num_leaves: LeafCount) -> Result<HashValue> {
MerkleAccumulatorView::<R, H>::new(reader, num_leaves).get_root_hash()
}
}
/// Actual implementation of Merkle Accumulator algorithms, which carries the `reader` and
/// `num_leaves` on an instance for convenience
struct MerkleAccumulatorView<'a, R, H> {
reader: &'a R,
num_leaves: LeafCount,
hasher: PhantomData<H>,
}
impl<'a, R, H> MerkleAccumulatorView<'a, R, H>
where
R: HashReader,
H: CryptoHasher,
{
fn new(reader: &'a R, num_leaves: LeafCount) -> Self {
Self {
reader,
num_leaves,
hasher: PhantomData,
}
}
/// implementation for pub interface `MerkleAccumulator::append`
fn append(&self, new_leaves: &[HashValue]) -> Result<(HashValue, Vec<Node>)> {
// Deal with the case where new_leaves is empty
if new_leaves.is_empty() {
if self.num_leaves == 0 {
return Ok((*ACCUMULATOR_PLACEHOLDER_HASH, Vec::new()));
} else {
let root_hash = self.get_hash(Position::root_from_leaf_count(self.num_leaves))?;
return Ok((root_hash, Vec::new()));
}
}
let num_new_leaves = new_leaves.len();
let last_new_leaf_count = self.num_leaves + num_new_leaves as LeafCount;
let root_level = Position::root_level_from_leaf_count(last_new_leaf_count);
let mut to_freeze = Vec::with_capacity(Self::max_to_freeze(num_new_leaves, root_level));
// Iterate over the new leaves, adding them to to_freeze and then adding any frozen parents
// when right children are encountered. This has the effect of creating frozen nodes in
// perfect post-order, which can be used as a strictly increasing append only index for
// the underlying storage.
//
// We will track newly created left siblings while iterating so we can pair them with their
// right sibling, if and when it becomes frozen. If the frozen left sibling is not created
// in this iteration, it must already exist in storage.
let mut left_siblings: Vec<(_, _)> = Vec::new();
for (leaf_offset, leaf) in new_leaves.iter().enumerate() {
let leaf_pos = Position::from_leaf_index(self.num_leaves + leaf_offset as LeafCount);
let mut hash = *leaf;
to_freeze.push((leaf_pos, hash));
let mut pos = leaf_pos;
while pos.is_right_child() {
let sibling = pos.sibling();
hash = match left_siblings.pop() {
Some((x, left_hash)) => {
assert_eq!(x, sibling);
Self::hash_internal_node(left_hash, hash)
}
None => Self::hash_internal_node(self.reader.get(sibling)?, hash),
};
pos = pos.parent();
to_freeze.push((pos, hash));
}
// The node remaining must be a left child, possibly the root of a complete binary tree.
left_siblings.push((pos, hash));
}
// Now reconstruct the final root hash by walking up to root level and adding
// placeholder hash nodes as needed on the right, and left siblings that have either
// been newly created or read from storage.
let (mut pos, mut hash) = left_siblings.pop().expect("Must have at least one node");
for _ in pos.level()..root_level as u32 {
hash = if pos.is_left_child() {
Self::hash_internal_node(hash, *ACCUMULATOR_PLACEHOLDER_HASH)
} else {
let sibling = pos.sibling();
match left_siblings.pop() {
Some((x, left_hash)) => {
assert_eq!(x, sibling);
Self::hash_internal_node(left_hash, hash)
}
None => Self::hash_internal_node(self.reader.get(sibling)?, hash),
}
};
pos = pos.parent();
}
assert!(left_siblings.is_empty());
Ok((hash, to_freeze))
}
/// upper bound of num of frozen nodes:
/// new leaves and resulting frozen internal nodes forming a complete binary subtree
/// num_new_leaves * 2 - 1 < num_new_leaves * 2
/// and the full route from root of that subtree to the accumulator root turns frozen
/// height - (log2(num_new_leaves) + 1) < height - 1 = root_level
fn max_to_freeze(num_new_leaves: usize, root_level: u32) -> usize {
precondition!(root_level as usize <= MAX_ACCUMULATOR_PROOF_DEPTH);
precondition!(num_new_leaves < (usize::max_value() / 2));
precondition!(num_new_leaves * 2 <= usize::max_value() - root_level as usize);
num_new_leaves * 2 + root_level as usize
}
fn hash_internal_node(left: HashValue, right: HashValue) -> HashValue {
MerkleTreeInternalNode::<H>::new(left, right).hash()
}
fn rightmost_leaf_index(&self) -> u64 {
(self.num_leaves - 1) as u64
}
fn get_hash(&self, position: Position) -> Result<HashValue> {
let idx = self.rightmost_leaf_index();
if position.is_placeholder(idx) {
Ok(*ACCUMULATOR_PLACEHOLDER_HASH)
} else if position.is_freezable(idx) {
self.reader.get(position)
} else {
// non-frozen non-placeholder node
Ok(Self::hash_internal_node(
self.get_hash(position.left_child())?,
self.get_hash(position.right_child())?,
))
}
}
fn get_hashes(&self, positions: &[Position]) -> Result<Vec<HashValue>> {
positions.iter().map(|p| self.get_hash(*p)).collect()
}
fn get_root_hash(&self) -> Result<HashValue> {
self.get_hash(Position::root_from_leaf_count(self.num_leaves))
}
/// implementation for pub interface `MerkleAccumulator::get_proof`
fn get_proof(&self, leaf_index: u64) -> Result<AccumulatorProof<H>> {
ensure!(
leaf_index < self.num_leaves as u64,
"invalid leaf_index {}, num_leaves {}",
leaf_index,
self.num_leaves
);
let siblings = self.get_siblings(leaf_index, |_p| true)?;
Ok(AccumulatorProof::new(siblings))
}
/// Implementation for public interface `MerkleAccumulator::get_consistency_proof`.
fn get_consistency_proof(
&self,
sub_acc_leaves: LeafCount,
) -> Result<AccumulatorConsistencyProof> {
ensure!(
sub_acc_leaves <= self.num_leaves,
"Can't get accumulator consistency proof for a version newer than the local version. \
Local next version: {}, asked next version: {}",
self.num_leaves,
sub_acc_leaves,
);
let subtrees = FrozenSubtreeSiblingIterator::new(sub_acc_leaves, self.num_leaves)
.map(|p| self.reader.get(p))
.collect::<Result<Vec<_>>>()?;
Ok(AccumulatorConsistencyProof::new(subtrees))
}
/// Implementation for public interface `MerkleAccumulator::get_range_proof`.
fn get_range_proof(
&self,
first_leaf_index: Option<u64>,
num_leaves: LeafCount,
) -> Result<AccumulatorRangeProof<H>> {
let (left_siblings, right_siblings) =
self.get_range_proof_positions(first_leaf_index, num_leaves)?;
Ok(AccumulatorRangeProof::new(
self.get_hashes(&left_siblings)?,
self.get_hashes(&right_siblings)?,
))
}
fn get_range_proof_positions(
&self,
first_leaf_index: Option<u64>,
num_leaves: LeafCount,
) -> Result<(Vec<Position>, Vec<Position>)> {
if first_leaf_index.is_none() {
ensure!(
num_leaves == 0,
"num_leaves is not zero while first_leaf_index is None.",
);
return Ok((Vec::new(), Vec::new()));
}
let first_leaf_index = first_leaf_index.expect("first_leaf_index should not be None.");
ensure!(
num_leaves > 0,
"num_leaves is zero while first_leaf_index is not None.",
);
let last_leaf_index = first_leaf_index
.checked_add(num_leaves - 1)
.ok_or_else(|| format_err!("Requesting too many leaves."))?;
ensure!(
last_leaf_index < self.num_leaves as u64,
"Invalid last_leaf_index: {}, num_leaves: {}",
last_leaf_index,
self.num_leaves,
);
let left_siblings = self.get_sibling_positions(first_leaf_index, |p| p.is_left_child());
let right_siblings = self.get_sibling_positions(last_leaf_index, |p| p.is_right_child());
Ok((left_siblings, right_siblings))
}
fn | (
&self,
leaf_index: u64,
filter: impl Fn(Position) -> bool,
) -> Result<Vec<HashValue>> {
self.get_hashes(&self.get_sibling_positions(leaf_index, filter))
}
/// Helper function to get siblings on the path from the given leaf to the root. An additional
/// filter function can be applied to filter out certain siblings.
fn get_sibling_positions(
&self,
leaf_index: u64,
filter: impl Fn(Position) -> bool,
) -> Vec<Position> {
let root_pos = Position::root_from_leaf_count(self.num_leaves);
Position::from_leaf_index(leaf_index)
.iter_ancestor_sibling()
.take(root_pos.level() as usize)
.filter(|p| filter(*p))
.collect()
}
/// Implementation for public interface `MerkleAccumulator::get_frozen_subtree_hashes`.
fn get_frozen_subtree_hashes(&self) -> Result<Vec<HashValue>> {
FrozenSubTreeIterator::new(self.num_leaves)
.map(|p| self.reader.get(p))
.collect::<Result<Vec<_>>>()
}
}
#[cfg(test)]
mod tests;
| get_siblings | identifier_name |
instruction.rs | use core::convert::{TryFrom, TryInto};
use core::fmt::{self, Debug, Display, Error, Formatter};
use core::marker::PhantomData;
use core::ops::Deref;
use core::slice;
use core::str;
use capstone_sys::*;
use crate::arch::ArchDetail;
use crate::constants::Arch;
use crate::ffi::str_from_cstr_ptr;
/// Represents a slice of [`Insn`] returned by [`Capstone`](crate::Capstone) `disasm*()` methods.
///
/// To access inner [`&[Insn]`](Insn), use [`.as_ref()`](AsRef::as_ref).
/// ```
/// # use capstone::Instructions;
/// # use capstone::prelude::*;
/// # let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build().unwrap();
/// let insns: Instructions = cs.disasm_all(b"\x55\x48\x8b\x05", 0x1000).unwrap();
/// for insn in insns.as_ref() {
/// println!("{}", insn);
/// }
/// ```
#[derive(Debug)]
pub struct Instructions<'a>(&'a mut [cs_insn]);
/// Integer type used in `InsnId`
pub type InsnIdInt = u32;
/// Represents an instruction id, which may be architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::insn_name()`](crate::Capstone::insn_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
pub struct InsnId(pub InsnIdInt);
/// Integer type used in `InsnGroupId`
pub type InsnGroupIdInt = u8;
/// Represents the group an instruction belongs to, which may be architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::group_name()`](crate::Capstone::group_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct InsnGroupId(pub InsnGroupIdInt);
pub use capstone_sys::cs_group_type as InsnGroupType;
/// Integer type used in `RegId`
pub type RegIdInt = u16;
/// Represents an register id, which is architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::reg_name()`](crate::Capstone::reg_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct RegId(pub RegIdInt);
impl RegId {
/// Invalid Register
pub const INVALID_REG: Self = Self(0);
}
impl core::convert::From<u32> for RegId {
fn from(v: u32) -> RegId {
RegId(v.try_into().ok().unwrap_or(Self::INVALID_REG.0))
}
}
/// Represents how the register is accessed.
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub enum RegAccessType {
/// Operand read from memory or register.
ReadOnly,
/// Operand write from memory or register.
WriteOnly,
/// Operand read and write from memory or register.
ReadWrite,
}
impl RegAccessType {
/// Returns whether the instruction reads from the operand.
///
/// Note that an instruction may read and write to the register
/// simultaneously. In this case, the operand is also considered as
/// readable.
pub fn is_readable(self) -> bool {
self == RegAccessType::ReadOnly || self == RegAccessType::ReadWrite
}
/// Returns whether the instruction writes from the operand.
///
/// Note that an instruction may read and write to the register
/// simultaneously. In this case, the operand is also considered as
/// writable.
pub fn is_writable(self) -> bool {
self == RegAccessType::WriteOnly || self == RegAccessType::ReadWrite
}
}
impl TryFrom<cs_ac_type> for RegAccessType {
type Error = ();
fn try_from(access: cs_ac_type) -> Result<Self, Self::Error> {
// Check for flags other than CS_AC_READ or CS_AC_WRITE.
let unknown_flag_mask =!(CS_AC_READ | CS_AC_WRITE).0;
if (access.0 & unknown_flag_mask)!= 0 {
return Err(());
}
let is_readable = (access & CS_AC_READ).0!= 0;
let is_writable = (access & CS_AC_WRITE).0!= 0;
match (is_readable, is_writable) {
(true, false) => Ok(RegAccessType::ReadOnly),
(false, true) => Ok(RegAccessType::WriteOnly),
(true, true) => Ok(RegAccessType::ReadWrite),
_ => Err(()),
}
}
}
impl<'a> Instructions<'a> {
pub(crate) unsafe fn from_raw_parts(ptr: *mut cs_insn, len: usize) -> Instructions<'a> {
Instructions(slice::from_raw_parts_mut(ptr, len))
}
pub(crate) fn new_empty() -> Instructions<'a> {
Instructions(&mut [])
}
}
impl<'a> core::ops::Deref for Instructions<'a> {
type Target = [Insn<'a>];
#[inline]
fn deref(&self) -> &[Insn<'a>] {
// SAFETY: `cs_insn` has the same memory layout as `Insn`
unsafe { &*(self.0 as *const [cs_insn] as *const [Insn]) }
}
}
impl<'a> AsRef<[Insn<'a>]> for Instructions<'a> {
#[inline]
fn as_ref(&self) -> &[Insn<'a>] {
self.deref()
}
}
impl<'a> Drop for Instructions<'a> {
fn drop(&mut self) {
if!self.is_empty() {
unsafe {
cs_free(self.0.as_mut_ptr(), self.len());
}
}
}
}
/// A single disassembled CPU instruction.
///
/// # Detail
///
/// To learn how to get more instruction details, see [`InsnDetail`].
#[derive(Clone)]
#[repr(transparent)]
pub struct Insn<'a> {
/// Inner `cs_insn`
pub(crate) insn: cs_insn,
/// Adds lifetime
pub(crate) _marker: PhantomData<&'a InsnDetail<'a>>,
}
/// Contains architecture-independent details about an [`Insn`].
///
/// To get more detail about the instruction, enable extra details for the
/// [`Capstone`](crate::Capstone) instance with
/// [`Capstone::set_detail(True)`](crate::Capstone::set_detail) and use
/// [`Capstone::insn_detail()`](crate::Capstone::insn_detail).
///
/// ```
/// # use capstone::Instructions;
/// # use capstone::prelude::*;
/// let cs = Capstone::new()
/// .x86()
/// .mode(arch::x86::ArchMode::Mode32)
/// .detail(true) // needed to enable detail
/// .build()
/// .unwrap();
/// let insns = cs.disasm_all(b"\x90", 0x1000).unwrap();
/// for insn in insns.as_ref() {
/// println!("{}", insn);
/// let insn_detail: InsnDetail = cs.insn_detail(insn).unwrap();
/// println!(" {:?}", insn_detail.groups());
/// }
/// ```
///
/// # Arch-specific detail
///
/// To get additional architecture-specific information, use the
/// [`.arch_detail()`](Self::arch_detail) method to get an `ArchDetail` enum.
///
pub struct InsnDetail<'a>(pub(crate) &'a cs_detail, pub(crate) Arch);
impl<'a> Insn<'a> {
/// Create an `Insn` from a raw pointer to a [`capstone_sys::cs_insn`].
///
/// This function serves to allow integration with libraries which generate `capstone_sys::cs_insn`'s internally.
///
/// # Safety
///
/// Note that this function is unsafe, and assumes that you know what you are doing. In
/// particular, it generates a lifetime for the `Insn` from nothing, and that lifetime is in
/// no-way actually tied to the cs_insn itself. It is the responsibility of the caller to
/// ensure that the resulting `Insn` lives only as long as the `cs_insn`. This function
/// assumes that the pointer passed is non-null and a valid `cs_insn` pointer.
pub unsafe fn from_raw(insn: *const cs_insn) -> Self {
Self {
insn: *insn,
_marker: PhantomData,
}
}
/// The mnemonic for the instruction
pub fn mnemonic(&self) -> Option<&str> {
unsafe { str_from_cstr_ptr(self.insn.mnemonic.as_ptr()) }
}
/// The operand string associated with the instruction
pub fn op_str(&self) -> Option<&str> {
unsafe { str_from_cstr_ptr(self.insn.op_str.as_ptr()) }
}
/// Access instruction id
pub fn id(&self) -> InsnId {
InsnId(self.insn.id)
}
/// Size of instruction (in bytes)
fn len(&self) -> usize {
self.insn.size as usize
}
/// Instruction address
pub fn address(&self) -> u64 {
self.insn.address as u64
}
/// Byte-level representation of the instruction
pub fn bytes(&self) -> &[u8] {
&self.insn.bytes[..self.len()]
}
/// Returns the `Detail` object, if there is one. It is up to the caller to determine
/// the pre-conditions are satisfied.
///
/// Be careful this is still in early stages and largely untested with various `cs_option` and
/// architecture matrices
pub(crate) unsafe fn detail(&self, arch: Arch) -> InsnDetail {
InsnDetail(&*self.insn.detail, arch)
}
}
impl<'a> Debug for Insn<'a> {
fn fmt(&self, fmt: &mut Formatter) -> Result<(), Error> {
fmt.debug_struct("Insn")
.field("address", &self.address())
.field("len", &self.len())
.field("bytes", &self.bytes())
.field("mnemonic", &self.mnemonic())
.field("op_str", &self.op_str())
.finish()
}
}
impl<'a> Display for Insn<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
write!(fmt, "{:#x}: ", self.address())?;
if let Some(mnemonic) = self.mnemonic() |
Ok(())
}
}
/// Iterator over instruction group ids
#[derive(Debug, Clone)]
pub struct InsnGroupIter<'a>(slice::Iter<'a, InsnGroupIdInt>);
impl<'a> InsnDetail<'a> {
/// Returns the implicit read registers
pub fn regs_read(&self) -> &[RegId] {
unsafe {
&*(&self.0.regs_read[..self.0.regs_read_count as usize] as *const [RegIdInt]
as *const [RegId])
}
}
/// Returns the implicit write registers
pub fn regs_write(&self) -> &[RegId] {
unsafe {
&*(&self.0.regs_write[..self.0.regs_write_count as usize] as *const [RegIdInt]
as *const [RegId])
}
}
/// Returns the groups to which this instruction belongs
pub fn groups(&self) -> &[InsnGroupId] {
unsafe {
&*(&self.0.groups[..self.0.groups_count as usize] as *const [InsnGroupIdInt]
as *const [InsnGroupId])
}
}
/// Architecture-specific detail
pub fn arch_detail(&self) -> ArchDetail {
macro_rules! def_arch_detail_match {
(
$( [ $ARCH:ident, $detail:ident, $insn_detail:ident, $arch:ident ] )*
) => {
use self::ArchDetail::*;
use crate::Arch::*;
$( use crate::arch::$arch::$insn_detail; )*
return match self.1 {
$(
$ARCH => {
$detail($insn_detail(unsafe { &self.0.__bindgen_anon_1.$arch }))
}
)*
_ => panic!("Unsupported detail arch"),
}
}
}
def_arch_detail_match!(
[ARM, ArmDetail, ArmInsnDetail, arm]
[ARM64, Arm64Detail, Arm64InsnDetail, arm64]
[EVM, EvmDetail, EvmInsnDetail, evm]
[M680X, M680xDetail, M680xInsnDetail, m680x]
[M68K, M68kDetail, M68kInsnDetail, m68k]
[MIPS, MipsDetail, MipsInsnDetail, mips]
[PPC, PpcDetail, PpcInsnDetail, ppc]
[RISCV, RiscVDetail, RiscVInsnDetail, riscv]
[SPARC, SparcDetail, SparcInsnDetail, sparc]
[TMS320C64X, Tms320c64xDetail, Tms320c64xInsnDetail, tms320c64x]
[X86, X86Detail, X86InsnDetail, x86]
[XCORE, XcoreDetail, XcoreInsnDetail, xcore]
);
}
}
impl<'a> Debug for InsnDetail<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
fmt.debug_struct("Detail")
.field("regs_read", &self.regs_read())
.field("regs_write", &self.regs_write())
.field("groups", &self.groups())
.finish()
}
}
impl<'a> Display for Instructions<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
for instruction in self.iter() {
write!(fmt, "{:x}:\t", instruction.address())?;
for byte in instruction.bytes() {
write!(fmt, " {:02x}", byte)?;
}
let remainder = 16 * 3 - instruction.bytes().len() * 3;
for _ in 0..remainder {
write!(fmt, " ")?;
}
if let Some(mnemonic) = instruction.mnemonic() {
write!(fmt, " {}", mnemonic)?;
if let Some(op_str) = instruction.op_str() {
write!(fmt, " {}", op_str)?;
}
}
writeln!(fmt)?;
}
Ok(())
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_invalid_reg_access() {
assert_eq!(RegAccessType::try_from(cs_ac_type(1337)), Err(()));
}
}
| {
write!(fmt, "{} ", mnemonic)?;
if let Some(op_str) = self.op_str() {
write!(fmt, "{}", op_str)?;
}
} | conditional_block |
instruction.rs | use core::convert::{TryFrom, TryInto};
use core::fmt::{self, Debug, Display, Error, Formatter};
use core::marker::PhantomData;
use core::ops::Deref;
use core::slice;
use core::str;
use capstone_sys::*;
use crate::arch::ArchDetail;
use crate::constants::Arch;
use crate::ffi::str_from_cstr_ptr;
/// Represents a slice of [`Insn`] returned by [`Capstone`](crate::Capstone) `disasm*()` methods.
///
/// To access inner [`&[Insn]`](Insn), use [`.as_ref()`](AsRef::as_ref).
/// ```
/// # use capstone::Instructions;
/// # use capstone::prelude::*;
/// # let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build().unwrap();
/// let insns: Instructions = cs.disasm_all(b"\x55\x48\x8b\x05", 0x1000).unwrap();
/// for insn in insns.as_ref() {
/// println!("{}", insn);
/// }
/// ```
#[derive(Debug)]
pub struct Instructions<'a>(&'a mut [cs_insn]);
/// Integer type used in `InsnId`
pub type InsnIdInt = u32;
/// Represents an instruction id, which may be architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::insn_name()`](crate::Capstone::insn_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
pub struct InsnId(pub InsnIdInt);
/// Integer type used in `InsnGroupId`
pub type InsnGroupIdInt = u8;
/// Represents the group an instruction belongs to, which may be architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::group_name()`](crate::Capstone::group_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct InsnGroupId(pub InsnGroupIdInt);
pub use capstone_sys::cs_group_type as InsnGroupType;
/// Integer type used in `RegId`
pub type RegIdInt = u16;
/// Represents an register id, which is architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::reg_name()`](crate::Capstone::reg_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct RegId(pub RegIdInt);
impl RegId {
/// Invalid Register
pub const INVALID_REG: Self = Self(0);
}
impl core::convert::From<u32> for RegId {
fn from(v: u32) -> RegId {
RegId(v.try_into().ok().unwrap_or(Self::INVALID_REG.0))
}
}
/// Represents how the register is accessed.
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub enum RegAccessType {
/// Operand read from memory or register.
ReadOnly,
/// Operand write from memory or register.
WriteOnly,
/// Operand read and write from memory or register.
ReadWrite,
}
impl RegAccessType {
/// Returns whether the instruction reads from the operand.
///
/// Note that an instruction may read and write to the register
/// simultaneously. In this case, the operand is also considered as
/// readable.
pub fn is_readable(self) -> bool {
self == RegAccessType::ReadOnly || self == RegAccessType::ReadWrite
}
/// Returns whether the instruction writes from the operand.
///
/// Note that an instruction may read and write to the register
/// simultaneously. In this case, the operand is also considered as
/// writable.
pub fn is_writable(self) -> bool {
self == RegAccessType::WriteOnly || self == RegAccessType::ReadWrite
}
}
impl TryFrom<cs_ac_type> for RegAccessType {
type Error = ();
fn try_from(access: cs_ac_type) -> Result<Self, Self::Error> {
// Check for flags other than CS_AC_READ or CS_AC_WRITE.
let unknown_flag_mask =!(CS_AC_READ | CS_AC_WRITE).0;
if (access.0 & unknown_flag_mask)!= 0 {
return Err(());
}
let is_readable = (access & CS_AC_READ).0!= 0;
let is_writable = (access & CS_AC_WRITE).0!= 0;
match (is_readable, is_writable) {
(true, false) => Ok(RegAccessType::ReadOnly),
(false, true) => Ok(RegAccessType::WriteOnly),
(true, true) => Ok(RegAccessType::ReadWrite),
_ => Err(()),
}
}
}
impl<'a> Instructions<'a> {
pub(crate) unsafe fn from_raw_parts(ptr: *mut cs_insn, len: usize) -> Instructions<'a> {
Instructions(slice::from_raw_parts_mut(ptr, len))
}
pub(crate) fn new_empty() -> Instructions<'a> {
Instructions(&mut [])
}
}
impl<'a> core::ops::Deref for Instructions<'a> {
type Target = [Insn<'a>];
#[inline]
fn deref(&self) -> &[Insn<'a>] {
// SAFETY: `cs_insn` has the same memory layout as `Insn`
unsafe { &*(self.0 as *const [cs_insn] as *const [Insn]) }
}
}
impl<'a> AsRef<[Insn<'a>]> for Instructions<'a> {
#[inline]
fn as_ref(&self) -> &[Insn<'a>] {
self.deref()
}
}
impl<'a> Drop for Instructions<'a> {
fn drop(&mut self) {
if!self.is_empty() {
unsafe {
cs_free(self.0.as_mut_ptr(), self.len());
}
}
}
}
/// A single disassembled CPU instruction.
///
/// # Detail
///
/// To learn how to get more instruction details, see [`InsnDetail`].
#[derive(Clone)]
#[repr(transparent)]
pub struct Insn<'a> {
/// Inner `cs_insn`
pub(crate) insn: cs_insn,
/// Adds lifetime
pub(crate) _marker: PhantomData<&'a InsnDetail<'a>>,
}
/// Contains architecture-independent details about an [`Insn`].
///
/// To get more detail about the instruction, enable extra details for the
/// [`Capstone`](crate::Capstone) instance with
/// [`Capstone::set_detail(True)`](crate::Capstone::set_detail) and use
/// [`Capstone::insn_detail()`](crate::Capstone::insn_detail).
///
/// ```
/// # use capstone::Instructions;
/// # use capstone::prelude::*;
/// let cs = Capstone::new()
/// .x86()
/// .mode(arch::x86::ArchMode::Mode32)
/// .detail(true) // needed to enable detail
/// .build()
/// .unwrap();
/// let insns = cs.disasm_all(b"\x90", 0x1000).unwrap();
/// for insn in insns.as_ref() {
/// println!("{}", insn);
/// let insn_detail: InsnDetail = cs.insn_detail(insn).unwrap();
/// println!(" {:?}", insn_detail.groups());
/// }
/// ```
///
/// # Arch-specific detail
///
/// To get additional architecture-specific information, use the
/// [`.arch_detail()`](Self::arch_detail) method to get an `ArchDetail` enum.
///
pub struct InsnDetail<'a>(pub(crate) &'a cs_detail, pub(crate) Arch);
impl<'a> Insn<'a> {
/// Create an `Insn` from a raw pointer to a [`capstone_sys::cs_insn`].
///
/// This function serves to allow integration with libraries which generate `capstone_sys::cs_insn`'s internally.
///
/// # Safety
///
/// Note that this function is unsafe, and assumes that you know what you are doing. In
/// particular, it generates a lifetime for the `Insn` from nothing, and that lifetime is in
/// no-way actually tied to the cs_insn itself. It is the responsibility of the caller to
/// ensure that the resulting `Insn` lives only as long as the `cs_insn`. This function
/// assumes that the pointer passed is non-null and a valid `cs_insn` pointer.
pub unsafe fn from_raw(insn: *const cs_insn) -> Self {
Self {
insn: *insn,
_marker: PhantomData,
}
}
/// The mnemonic for the instruction
pub fn mnemonic(&self) -> Option<&str> {
unsafe { str_from_cstr_ptr(self.insn.mnemonic.as_ptr()) }
}
/// The operand string associated with the instruction
pub fn op_str(&self) -> Option<&str> {
unsafe { str_from_cstr_ptr(self.insn.op_str.as_ptr()) }
}
/// Access instruction id
pub fn id(&self) -> InsnId {
InsnId(self.insn.id)
}
/// Size of instruction (in bytes)
fn len(&self) -> usize {
self.insn.size as usize
}
/// Instruction address
pub fn address(&self) -> u64 {
self.insn.address as u64
}
/// Byte-level representation of the instruction
pub fn bytes(&self) -> &[u8] {
&self.insn.bytes[..self.len()]
}
/// Returns the `Detail` object, if there is one. It is up to the caller to determine
/// the pre-conditions are satisfied.
///
/// Be careful this is still in early stages and largely untested with various `cs_option` and
/// architecture matrices
pub(crate) unsafe fn detail(&self, arch: Arch) -> InsnDetail {
InsnDetail(&*self.insn.detail, arch)
}
}
impl<'a> Debug for Insn<'a> {
fn fmt(&self, fmt: &mut Formatter) -> Result<(), Error> {
fmt.debug_struct("Insn")
.field("address", &self.address())
.field("len", &self.len())
.field("bytes", &self.bytes())
.field("mnemonic", &self.mnemonic())
.field("op_str", &self.op_str())
.finish()
}
}
impl<'a> Display for Insn<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
write!(fmt, "{:#x}: ", self.address())?;
if let Some(mnemonic) = self.mnemonic() {
write!(fmt, "{} ", mnemonic)?;
if let Some(op_str) = self.op_str() {
write!(fmt, "{}", op_str)?;
}
}
Ok(())
}
}
/// Iterator over instruction group ids
#[derive(Debug, Clone)]
pub struct InsnGroupIter<'a>(slice::Iter<'a, InsnGroupIdInt>);
impl<'a> InsnDetail<'a> {
/// Returns the implicit read registers
pub fn regs_read(&self) -> &[RegId] {
unsafe {
&*(&self.0.regs_read[..self.0.regs_read_count as usize] as *const [RegIdInt]
as *const [RegId])
}
}
/// Returns the implicit write registers
pub fn regs_write(&self) -> &[RegId] {
unsafe {
&*(&self.0.regs_write[..self.0.regs_write_count as usize] as *const [RegIdInt]
as *const [RegId])
}
}
/// Returns the groups to which this instruction belongs
pub fn groups(&self) -> &[InsnGroupId] {
unsafe {
&*(&self.0.groups[..self.0.groups_count as usize] as *const [InsnGroupIdInt]
as *const [InsnGroupId])
}
}
/// Architecture-specific detail
pub fn arch_detail(&self) -> ArchDetail | [ARM, ArmDetail, ArmInsnDetail, arm]
[ARM64, Arm64Detail, Arm64InsnDetail, arm64]
[EVM, EvmDetail, EvmInsnDetail, evm]
[M680X, M680xDetail, M680xInsnDetail, m680x]
[M68K, M68kDetail, M68kInsnDetail, m68k]
[MIPS, MipsDetail, MipsInsnDetail, mips]
[PPC, PpcDetail, PpcInsnDetail, ppc]
[RISCV, RiscVDetail, RiscVInsnDetail, riscv]
[SPARC, SparcDetail, SparcInsnDetail, sparc]
[TMS320C64X, Tms320c64xDetail, Tms320c64xInsnDetail, tms320c64x]
[X86, X86Detail, X86InsnDetail, x86]
[XCORE, XcoreDetail, XcoreInsnDetail, xcore]
);
}
}
impl<'a> Debug for InsnDetail<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
fmt.debug_struct("Detail")
.field("regs_read", &self.regs_read())
.field("regs_write", &self.regs_write())
.field("groups", &self.groups())
.finish()
}
}
impl<'a> Display for Instructions<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
for instruction in self.iter() {
write!(fmt, "{:x}:\t", instruction.address())?;
for byte in instruction.bytes() {
write!(fmt, " {:02x}", byte)?;
}
let remainder = 16 * 3 - instruction.bytes().len() * 3;
for _ in 0..remainder {
write!(fmt, " ")?;
}
if let Some(mnemonic) = instruction.mnemonic() {
write!(fmt, " {}", mnemonic)?;
if let Some(op_str) = instruction.op_str() {
write!(fmt, " {}", op_str)?;
}
}
writeln!(fmt)?;
}
Ok(())
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_invalid_reg_access() {
assert_eq!(RegAccessType::try_from(cs_ac_type(1337)), Err(()));
}
}
| {
macro_rules! def_arch_detail_match {
(
$( [ $ARCH:ident, $detail:ident, $insn_detail:ident, $arch:ident ] )*
) => {
use self::ArchDetail::*;
use crate::Arch::*;
$( use crate::arch::$arch::$insn_detail; )*
return match self.1 {
$(
$ARCH => {
$detail($insn_detail(unsafe { &self.0.__bindgen_anon_1.$arch }))
}
)*
_ => panic!("Unsupported detail arch"),
}
}
}
def_arch_detail_match!( | identifier_body |
instruction.rs | use core::convert::{TryFrom, TryInto};
use core::fmt::{self, Debug, Display, Error, Formatter};
use core::marker::PhantomData;
use core::ops::Deref;
use core::slice;
use core::str;
use capstone_sys::*;
use crate::arch::ArchDetail;
use crate::constants::Arch;
use crate::ffi::str_from_cstr_ptr;
/// Represents a slice of [`Insn`] returned by [`Capstone`](crate::Capstone) `disasm*()` methods.
///
/// To access inner [`&[Insn]`](Insn), use [`.as_ref()`](AsRef::as_ref).
/// ```
/// # use capstone::Instructions;
/// # use capstone::prelude::*;
/// # let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build().unwrap();
/// let insns: Instructions = cs.disasm_all(b"\x55\x48\x8b\x05", 0x1000).unwrap();
/// for insn in insns.as_ref() {
/// println!("{}", insn);
/// }
/// ```
#[derive(Debug)]
pub struct Instructions<'a>(&'a mut [cs_insn]);
/// Integer type used in `InsnId`
pub type InsnIdInt = u32;
/// Represents an instruction id, which may be architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::insn_name()`](crate::Capstone::insn_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
pub struct InsnId(pub InsnIdInt);
/// Integer type used in `InsnGroupId`
pub type InsnGroupIdInt = u8;
/// Represents the group an instruction belongs to, which may be architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::group_name()`](crate::Capstone::group_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct InsnGroupId(pub InsnGroupIdInt);
pub use capstone_sys::cs_group_type as InsnGroupType;
/// Integer type used in `RegId`
pub type RegIdInt = u16;
/// Represents an register id, which is architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::reg_name()`](crate::Capstone::reg_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct RegId(pub RegIdInt);
impl RegId {
/// Invalid Register
pub const INVALID_REG: Self = Self(0);
}
impl core::convert::From<u32> for RegId {
fn from(v: u32) -> RegId {
RegId(v.try_into().ok().unwrap_or(Self::INVALID_REG.0))
}
}
/// Represents how the register is accessed.
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub enum RegAccessType {
/// Operand read from memory or register.
ReadOnly,
/// Operand write from memory or register.
WriteOnly,
/// Operand read and write from memory or register.
ReadWrite,
}
impl RegAccessType {
/// Returns whether the instruction reads from the operand.
///
/// Note that an instruction may read and write to the register
/// simultaneously. In this case, the operand is also considered as
/// readable.
pub fn is_readable(self) -> bool {
self == RegAccessType::ReadOnly || self == RegAccessType::ReadWrite
}
/// Returns whether the instruction writes from the operand.
///
/// Note that an instruction may read and write to the register
/// simultaneously. In this case, the operand is also considered as
/// writable.
pub fn is_writable(self) -> bool {
self == RegAccessType::WriteOnly || self == RegAccessType::ReadWrite
}
}
impl TryFrom<cs_ac_type> for RegAccessType {
type Error = ();
fn try_from(access: cs_ac_type) -> Result<Self, Self::Error> {
// Check for flags other than CS_AC_READ or CS_AC_WRITE.
let unknown_flag_mask =!(CS_AC_READ | CS_AC_WRITE).0;
if (access.0 & unknown_flag_mask)!= 0 {
return Err(());
}
let is_readable = (access & CS_AC_READ).0!= 0;
let is_writable = (access & CS_AC_WRITE).0!= 0;
match (is_readable, is_writable) {
(true, false) => Ok(RegAccessType::ReadOnly),
(false, true) => Ok(RegAccessType::WriteOnly),
(true, true) => Ok(RegAccessType::ReadWrite),
_ => Err(()),
}
}
}
impl<'a> Instructions<'a> {
pub(crate) unsafe fn from_raw_parts(ptr: *mut cs_insn, len: usize) -> Instructions<'a> {
Instructions(slice::from_raw_parts_mut(ptr, len))
}
pub(crate) fn new_empty() -> Instructions<'a> {
Instructions(&mut [])
}
}
impl<'a> core::ops::Deref for Instructions<'a> {
type Target = [Insn<'a>];
#[inline]
fn deref(&self) -> &[Insn<'a>] {
// SAFETY: `cs_insn` has the same memory layout as `Insn`
unsafe { &*(self.0 as *const [cs_insn] as *const [Insn]) }
}
}
impl<'a> AsRef<[Insn<'a>]> for Instructions<'a> {
#[inline]
fn as_ref(&self) -> &[Insn<'a>] {
self.deref()
}
}
impl<'a> Drop for Instructions<'a> {
fn drop(&mut self) {
if!self.is_empty() {
unsafe {
cs_free(self.0.as_mut_ptr(), self.len());
}
}
}
}
/// A single disassembled CPU instruction.
///
/// # Detail
///
/// To learn how to get more instruction details, see [`InsnDetail`].
#[derive(Clone)]
#[repr(transparent)]
pub struct Insn<'a> {
/// Inner `cs_insn`
pub(crate) insn: cs_insn,
/// Adds lifetime
pub(crate) _marker: PhantomData<&'a InsnDetail<'a>>,
}
/// Contains architecture-independent details about an [`Insn`].
///
/// To get more detail about the instruction, enable extra details for the
/// [`Capstone`](crate::Capstone) instance with
/// [`Capstone::set_detail(True)`](crate::Capstone::set_detail) and use
/// [`Capstone::insn_detail()`](crate::Capstone::insn_detail).
///
/// ```
/// # use capstone::Instructions;
/// # use capstone::prelude::*;
/// let cs = Capstone::new()
/// .x86()
/// .mode(arch::x86::ArchMode::Mode32)
/// .detail(true) // needed to enable detail
/// .build()
/// .unwrap();
/// let insns = cs.disasm_all(b"\x90", 0x1000).unwrap();
/// for insn in insns.as_ref() {
/// println!("{}", insn);
/// let insn_detail: InsnDetail = cs.insn_detail(insn).unwrap();
/// println!(" {:?}", insn_detail.groups());
/// }
/// ```
///
/// # Arch-specific detail
///
/// To get additional architecture-specific information, use the
/// [`.arch_detail()`](Self::arch_detail) method to get an `ArchDetail` enum.
///
pub struct | <'a>(pub(crate) &'a cs_detail, pub(crate) Arch);
impl<'a> Insn<'a> {
/// Create an `Insn` from a raw pointer to a [`capstone_sys::cs_insn`].
///
/// This function serves to allow integration with libraries which generate `capstone_sys::cs_insn`'s internally.
///
/// # Safety
///
/// Note that this function is unsafe, and assumes that you know what you are doing. In
/// particular, it generates a lifetime for the `Insn` from nothing, and that lifetime is in
/// no-way actually tied to the cs_insn itself. It is the responsibility of the caller to
/// ensure that the resulting `Insn` lives only as long as the `cs_insn`. This function
/// assumes that the pointer passed is non-null and a valid `cs_insn` pointer.
pub unsafe fn from_raw(insn: *const cs_insn) -> Self {
Self {
insn: *insn,
_marker: PhantomData,
}
}
/// The mnemonic for the instruction
pub fn mnemonic(&self) -> Option<&str> {
unsafe { str_from_cstr_ptr(self.insn.mnemonic.as_ptr()) }
}
/// The operand string associated with the instruction
pub fn op_str(&self) -> Option<&str> {
unsafe { str_from_cstr_ptr(self.insn.op_str.as_ptr()) }
}
/// Access instruction id
pub fn id(&self) -> InsnId {
InsnId(self.insn.id)
}
/// Size of instruction (in bytes)
fn len(&self) -> usize {
self.insn.size as usize
}
/// Instruction address
pub fn address(&self) -> u64 {
self.insn.address as u64
}
/// Byte-level representation of the instruction
pub fn bytes(&self) -> &[u8] {
&self.insn.bytes[..self.len()]
}
/// Returns the `Detail` object, if there is one. It is up to the caller to determine
/// the pre-conditions are satisfied.
///
/// Be careful this is still in early stages and largely untested with various `cs_option` and
/// architecture matrices
pub(crate) unsafe fn detail(&self, arch: Arch) -> InsnDetail {
InsnDetail(&*self.insn.detail, arch)
}
}
impl<'a> Debug for Insn<'a> {
fn fmt(&self, fmt: &mut Formatter) -> Result<(), Error> {
fmt.debug_struct("Insn")
.field("address", &self.address())
.field("len", &self.len())
.field("bytes", &self.bytes())
.field("mnemonic", &self.mnemonic())
.field("op_str", &self.op_str())
.finish()
}
}
impl<'a> Display for Insn<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
write!(fmt, "{:#x}: ", self.address())?;
if let Some(mnemonic) = self.mnemonic() {
write!(fmt, "{} ", mnemonic)?;
if let Some(op_str) = self.op_str() {
write!(fmt, "{}", op_str)?;
}
}
Ok(())
}
}
/// Iterator over instruction group ids
#[derive(Debug, Clone)]
pub struct InsnGroupIter<'a>(slice::Iter<'a, InsnGroupIdInt>);
impl<'a> InsnDetail<'a> {
/// Returns the implicit read registers
pub fn regs_read(&self) -> &[RegId] {
unsafe {
&*(&self.0.regs_read[..self.0.regs_read_count as usize] as *const [RegIdInt]
as *const [RegId])
}
}
/// Returns the implicit write registers
pub fn regs_write(&self) -> &[RegId] {
unsafe {
&*(&self.0.regs_write[..self.0.regs_write_count as usize] as *const [RegIdInt]
as *const [RegId])
}
}
/// Returns the groups to which this instruction belongs
pub fn groups(&self) -> &[InsnGroupId] {
unsafe {
&*(&self.0.groups[..self.0.groups_count as usize] as *const [InsnGroupIdInt]
as *const [InsnGroupId])
}
}
/// Architecture-specific detail
pub fn arch_detail(&self) -> ArchDetail {
macro_rules! def_arch_detail_match {
(
$( [ $ARCH:ident, $detail:ident, $insn_detail:ident, $arch:ident ] )*
) => {
use self::ArchDetail::*;
use crate::Arch::*;
$( use crate::arch::$arch::$insn_detail; )*
return match self.1 {
$(
$ARCH => {
$detail($insn_detail(unsafe { &self.0.__bindgen_anon_1.$arch }))
}
)*
_ => panic!("Unsupported detail arch"),
}
}
}
def_arch_detail_match!(
[ARM, ArmDetail, ArmInsnDetail, arm]
[ARM64, Arm64Detail, Arm64InsnDetail, arm64]
[EVM, EvmDetail, EvmInsnDetail, evm]
[M680X, M680xDetail, M680xInsnDetail, m680x]
[M68K, M68kDetail, M68kInsnDetail, m68k]
[MIPS, MipsDetail, MipsInsnDetail, mips]
[PPC, PpcDetail, PpcInsnDetail, ppc]
[RISCV, RiscVDetail, RiscVInsnDetail, riscv]
[SPARC, SparcDetail, SparcInsnDetail, sparc]
[TMS320C64X, Tms320c64xDetail, Tms320c64xInsnDetail, tms320c64x]
[X86, X86Detail, X86InsnDetail, x86]
[XCORE, XcoreDetail, XcoreInsnDetail, xcore]
);
}
}
impl<'a> Debug for InsnDetail<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
fmt.debug_struct("Detail")
.field("regs_read", &self.regs_read())
.field("regs_write", &self.regs_write())
.field("groups", &self.groups())
.finish()
}
}
impl<'a> Display for Instructions<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
for instruction in self.iter() {
write!(fmt, "{:x}:\t", instruction.address())?;
for byte in instruction.bytes() {
write!(fmt, " {:02x}", byte)?;
}
let remainder = 16 * 3 - instruction.bytes().len() * 3;
for _ in 0..remainder {
write!(fmt, " ")?;
}
if let Some(mnemonic) = instruction.mnemonic() {
write!(fmt, " {}", mnemonic)?;
if let Some(op_str) = instruction.op_str() {
write!(fmt, " {}", op_str)?;
}
}
writeln!(fmt)?;
}
Ok(())
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_invalid_reg_access() {
assert_eq!(RegAccessType::try_from(cs_ac_type(1337)), Err(()));
}
}
| InsnDetail | identifier_name |
instruction.rs | use core::convert::{TryFrom, TryInto};
use core::fmt::{self, Debug, Display, Error, Formatter};
use core::marker::PhantomData;
use core::ops::Deref;
use core::slice;
use core::str;
use capstone_sys::*;
use crate::arch::ArchDetail;
use crate::constants::Arch;
use crate::ffi::str_from_cstr_ptr;
/// Represents a slice of [`Insn`] returned by [`Capstone`](crate::Capstone) `disasm*()` methods.
///
/// To access inner [`&[Insn]`](Insn), use [`.as_ref()`](AsRef::as_ref).
/// ```
/// # use capstone::Instructions;
/// # use capstone::prelude::*;
/// # let cs = Capstone::new().x86().mode(arch::x86::ArchMode::Mode32).build().unwrap();
/// let insns: Instructions = cs.disasm_all(b"\x55\x48\x8b\x05", 0x1000).unwrap();
/// for insn in insns.as_ref() {
/// println!("{}", insn);
/// }
/// ```
#[derive(Debug)]
pub struct Instructions<'a>(&'a mut [cs_insn]);
/// Integer type used in `InsnId`
pub type InsnIdInt = u32;
/// Represents an instruction id, which may be architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::insn_name()`](crate::Capstone::insn_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
pub struct InsnId(pub InsnIdInt);
/// Integer type used in `InsnGroupId`
pub type InsnGroupIdInt = u8;
/// Represents the group an instruction belongs to, which may be architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::group_name()`](crate::Capstone::group_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct InsnGroupId(pub InsnGroupIdInt);
pub use capstone_sys::cs_group_type as InsnGroupType;
/// Integer type used in `RegId`
pub type RegIdInt = u16;
/// Represents an register id, which is architecture-specific.
///
/// To translate to a human-readable name, see [`Capstone::reg_name()`](crate::Capstone::reg_name).
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq, PartialOrd, Ord)]
#[repr(transparent)]
pub struct RegId(pub RegIdInt);
impl RegId {
/// Invalid Register
pub const INVALID_REG: Self = Self(0);
}
impl core::convert::From<u32> for RegId {
fn from(v: u32) -> RegId {
RegId(v.try_into().ok().unwrap_or(Self::INVALID_REG.0))
}
}
/// Represents how the register is accessed.
#[derive(Copy, Clone, Debug, Hash, PartialEq, Eq)]
pub enum RegAccessType {
/// Operand read from memory or register.
ReadOnly,
/// Operand write from memory or register.
WriteOnly,
/// Operand read and write from memory or register.
ReadWrite,
}
impl RegAccessType {
/// Returns whether the instruction reads from the operand.
///
/// Note that an instruction may read and write to the register
/// simultaneously. In this case, the operand is also considered as
/// readable.
pub fn is_readable(self) -> bool {
self == RegAccessType::ReadOnly || self == RegAccessType::ReadWrite
}
/// Returns whether the instruction writes from the operand. | ///
/// Note that an instruction may read and write to the register
/// simultaneously. In this case, the operand is also considered as
/// writable.
pub fn is_writable(self) -> bool {
self == RegAccessType::WriteOnly || self == RegAccessType::ReadWrite
}
}
impl TryFrom<cs_ac_type> for RegAccessType {
type Error = ();
fn try_from(access: cs_ac_type) -> Result<Self, Self::Error> {
// Check for flags other than CS_AC_READ or CS_AC_WRITE.
let unknown_flag_mask =!(CS_AC_READ | CS_AC_WRITE).0;
if (access.0 & unknown_flag_mask)!= 0 {
return Err(());
}
let is_readable = (access & CS_AC_READ).0!= 0;
let is_writable = (access & CS_AC_WRITE).0!= 0;
match (is_readable, is_writable) {
(true, false) => Ok(RegAccessType::ReadOnly),
(false, true) => Ok(RegAccessType::WriteOnly),
(true, true) => Ok(RegAccessType::ReadWrite),
_ => Err(()),
}
}
}
impl<'a> Instructions<'a> {
pub(crate) unsafe fn from_raw_parts(ptr: *mut cs_insn, len: usize) -> Instructions<'a> {
Instructions(slice::from_raw_parts_mut(ptr, len))
}
pub(crate) fn new_empty() -> Instructions<'a> {
Instructions(&mut [])
}
}
impl<'a> core::ops::Deref for Instructions<'a> {
type Target = [Insn<'a>];
#[inline]
fn deref(&self) -> &[Insn<'a>] {
// SAFETY: `cs_insn` has the same memory layout as `Insn`
unsafe { &*(self.0 as *const [cs_insn] as *const [Insn]) }
}
}
impl<'a> AsRef<[Insn<'a>]> for Instructions<'a> {
#[inline]
fn as_ref(&self) -> &[Insn<'a>] {
self.deref()
}
}
impl<'a> Drop for Instructions<'a> {
fn drop(&mut self) {
if!self.is_empty() {
unsafe {
cs_free(self.0.as_mut_ptr(), self.len());
}
}
}
}
/// A single disassembled CPU instruction.
///
/// # Detail
///
/// To learn how to get more instruction details, see [`InsnDetail`].
#[derive(Clone)]
#[repr(transparent)]
pub struct Insn<'a> {
/// Inner `cs_insn`
pub(crate) insn: cs_insn,
/// Adds lifetime
pub(crate) _marker: PhantomData<&'a InsnDetail<'a>>,
}
/// Contains architecture-independent details about an [`Insn`].
///
/// To get more detail about the instruction, enable extra details for the
/// [`Capstone`](crate::Capstone) instance with
/// [`Capstone::set_detail(True)`](crate::Capstone::set_detail) and use
/// [`Capstone::insn_detail()`](crate::Capstone::insn_detail).
///
/// ```
/// # use capstone::Instructions;
/// # use capstone::prelude::*;
/// let cs = Capstone::new()
/// .x86()
/// .mode(arch::x86::ArchMode::Mode32)
/// .detail(true) // needed to enable detail
/// .build()
/// .unwrap();
/// let insns = cs.disasm_all(b"\x90", 0x1000).unwrap();
/// for insn in insns.as_ref() {
/// println!("{}", insn);
/// let insn_detail: InsnDetail = cs.insn_detail(insn).unwrap();
/// println!(" {:?}", insn_detail.groups());
/// }
/// ```
///
/// # Arch-specific detail
///
/// To get additional architecture-specific information, use the
/// [`.arch_detail()`](Self::arch_detail) method to get an `ArchDetail` enum.
///
pub struct InsnDetail<'a>(pub(crate) &'a cs_detail, pub(crate) Arch);
impl<'a> Insn<'a> {
/// Create an `Insn` from a raw pointer to a [`capstone_sys::cs_insn`].
///
/// This function serves to allow integration with libraries which generate `capstone_sys::cs_insn`'s internally.
///
/// # Safety
///
/// Note that this function is unsafe, and assumes that you know what you are doing. In
/// particular, it generates a lifetime for the `Insn` from nothing, and that lifetime is in
/// no-way actually tied to the cs_insn itself. It is the responsibility of the caller to
/// ensure that the resulting `Insn` lives only as long as the `cs_insn`. This function
/// assumes that the pointer passed is non-null and a valid `cs_insn` pointer.
pub unsafe fn from_raw(insn: *const cs_insn) -> Self {
Self {
insn: *insn,
_marker: PhantomData,
}
}
/// The mnemonic for the instruction
pub fn mnemonic(&self) -> Option<&str> {
unsafe { str_from_cstr_ptr(self.insn.mnemonic.as_ptr()) }
}
/// The operand string associated with the instruction
pub fn op_str(&self) -> Option<&str> {
unsafe { str_from_cstr_ptr(self.insn.op_str.as_ptr()) }
}
/// Access instruction id
pub fn id(&self) -> InsnId {
InsnId(self.insn.id)
}
/// Size of instruction (in bytes)
fn len(&self) -> usize {
self.insn.size as usize
}
/// Instruction address
pub fn address(&self) -> u64 {
self.insn.address as u64
}
/// Byte-level representation of the instruction
pub fn bytes(&self) -> &[u8] {
&self.insn.bytes[..self.len()]
}
/// Returns the `Detail` object, if there is one. It is up to the caller to determine
/// the pre-conditions are satisfied.
///
/// Be careful this is still in early stages and largely untested with various `cs_option` and
/// architecture matrices
pub(crate) unsafe fn detail(&self, arch: Arch) -> InsnDetail {
InsnDetail(&*self.insn.detail, arch)
}
}
impl<'a> Debug for Insn<'a> {
fn fmt(&self, fmt: &mut Formatter) -> Result<(), Error> {
fmt.debug_struct("Insn")
.field("address", &self.address())
.field("len", &self.len())
.field("bytes", &self.bytes())
.field("mnemonic", &self.mnemonic())
.field("op_str", &self.op_str())
.finish()
}
}
impl<'a> Display for Insn<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
write!(fmt, "{:#x}: ", self.address())?;
if let Some(mnemonic) = self.mnemonic() {
write!(fmt, "{} ", mnemonic)?;
if let Some(op_str) = self.op_str() {
write!(fmt, "{}", op_str)?;
}
}
Ok(())
}
}
/// Iterator over instruction group ids
#[derive(Debug, Clone)]
pub struct InsnGroupIter<'a>(slice::Iter<'a, InsnGroupIdInt>);
impl<'a> InsnDetail<'a> {
/// Returns the implicit read registers
pub fn regs_read(&self) -> &[RegId] {
unsafe {
&*(&self.0.regs_read[..self.0.regs_read_count as usize] as *const [RegIdInt]
as *const [RegId])
}
}
/// Returns the implicit write registers
pub fn regs_write(&self) -> &[RegId] {
unsafe {
&*(&self.0.regs_write[..self.0.regs_write_count as usize] as *const [RegIdInt]
as *const [RegId])
}
}
/// Returns the groups to which this instruction belongs
pub fn groups(&self) -> &[InsnGroupId] {
unsafe {
&*(&self.0.groups[..self.0.groups_count as usize] as *const [InsnGroupIdInt]
as *const [InsnGroupId])
}
}
/// Architecture-specific detail
pub fn arch_detail(&self) -> ArchDetail {
macro_rules! def_arch_detail_match {
(
$( [ $ARCH:ident, $detail:ident, $insn_detail:ident, $arch:ident ] )*
) => {
use self::ArchDetail::*;
use crate::Arch::*;
$( use crate::arch::$arch::$insn_detail; )*
return match self.1 {
$(
$ARCH => {
$detail($insn_detail(unsafe { &self.0.__bindgen_anon_1.$arch }))
}
)*
_ => panic!("Unsupported detail arch"),
}
}
}
def_arch_detail_match!(
[ARM, ArmDetail, ArmInsnDetail, arm]
[ARM64, Arm64Detail, Arm64InsnDetail, arm64]
[EVM, EvmDetail, EvmInsnDetail, evm]
[M680X, M680xDetail, M680xInsnDetail, m680x]
[M68K, M68kDetail, M68kInsnDetail, m68k]
[MIPS, MipsDetail, MipsInsnDetail, mips]
[PPC, PpcDetail, PpcInsnDetail, ppc]
[RISCV, RiscVDetail, RiscVInsnDetail, riscv]
[SPARC, SparcDetail, SparcInsnDetail, sparc]
[TMS320C64X, Tms320c64xDetail, Tms320c64xInsnDetail, tms320c64x]
[X86, X86Detail, X86InsnDetail, x86]
[XCORE, XcoreDetail, XcoreInsnDetail, xcore]
);
}
}
impl<'a> Debug for InsnDetail<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
fmt.debug_struct("Detail")
.field("regs_read", &self.regs_read())
.field("regs_write", &self.regs_write())
.field("groups", &self.groups())
.finish()
}
}
impl<'a> Display for Instructions<'a> {
fn fmt(&self, fmt: &mut Formatter) -> fmt::Result {
for instruction in self.iter() {
write!(fmt, "{:x}:\t", instruction.address())?;
for byte in instruction.bytes() {
write!(fmt, " {:02x}", byte)?;
}
let remainder = 16 * 3 - instruction.bytes().len() * 3;
for _ in 0..remainder {
write!(fmt, " ")?;
}
if let Some(mnemonic) = instruction.mnemonic() {
write!(fmt, " {}", mnemonic)?;
if let Some(op_str) = instruction.op_str() {
write!(fmt, " {}", op_str)?;
}
}
writeln!(fmt)?;
}
Ok(())
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_invalid_reg_access() {
assert_eq!(RegAccessType::try_from(cs_ac_type(1337)), Err(()));
}
} | random_line_split |
|
leb128.rs | // Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#[inline]
pub fn write_to_vec(vec: &mut Vec<u8>, position: &mut usize, byte: u8) {
if *position == vec.len() {
vec.push(byte);
} else {
vec[*position] = byte;
}
*position += 1;
}
pub fn write_unsigned_leb128(out: &mut Vec<u8>, start_position: usize, mut value: u64) -> usize {
let mut position = start_position;
loop {
let mut byte = (value & 0x7F) as u8;
value >>= 7;
if value!= 0 {
byte |= 0x80;
}
write_to_vec(out, &mut position, byte);
if value == 0 {
break;
}
}
return position - start_position;
}
#[inline]
pub fn read_unsigned_leb128(data: &[u8], start_position: usize) -> (u64, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
loop {
let byte = data[position];
position += 1;
result |= ((byte & 0x7F) as u64) << shift;
if (byte & 0x80) == 0 {
break;
}
shift += 7;
}
(result, position - start_position)
}
pub fn write_signed_leb128(out: &mut Vec<u8>, start_position: usize, mut value: i64) -> usize {
let mut position = start_position;
loop {
let mut byte = (value as u8) & 0x7f;
value >>= 7;
let more =!((((value == 0) && ((byte & 0x40) == 0)) ||
((value == -1) && ((byte & 0x40)!= 0))));
if more {
byte |= 0x80; // Mark this byte to show that more bytes will follow.
}
write_to_vec(out, &mut position, byte);
if!more {
break;
}
}
return position - start_position;
}
#[inline]
pub fn read_signed_leb128(data: &[u8], start_position: usize) -> (i64, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
let mut byte;
loop {
byte = data[position];
position += 1;
result |= ((byte & 0x7F) as i64) << shift;
shift += 7;
if (byte & 0x80) == 0 {
break;
}
}
if (shift < 64) && ((byte & 0x40)!= 0) {
// sign extend
result |= -(1i64 << shift);
}
(result, position - start_position)
}
#[test]
fn test_unsigned_leb128() {
let mut stream = Vec::with_capacity(10000);
for x in 0..62 {
let pos = stream.len();
let bytes_written = write_unsigned_leb128(&mut stream, pos, 3 << x);
assert_eq!(stream.len(), pos + bytes_written);
}
let mut position = 0;
for x in 0..62 {
let expected = 3 << x; | }
#[test]
fn test_signed_leb128() {
let mut values = Vec::new();
let mut i = -500;
while i < 500 {
values.push(i * 123457i64);
i += 1;
}
let mut stream = Vec::new();
for &x in &values {
let pos = stream.len();
let bytes_written = write_signed_leb128(&mut stream, pos, x);
assert_eq!(stream.len(), pos + bytes_written);
}
let mut pos = 0;
for &x in &values {
let (value, bytes_read) = read_signed_leb128(&mut stream, pos);
pos += bytes_read;
assert_eq!(x, value);
}
assert_eq!(pos, stream.len());
} | let (actual, bytes_read) = read_unsigned_leb128(&stream, position);
assert_eq!(expected, actual);
position += bytes_read;
}
assert_eq!(stream.len(), position); | random_line_split |
leb128.rs | // Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#[inline]
pub fn write_to_vec(vec: &mut Vec<u8>, position: &mut usize, byte: u8) {
if *position == vec.len() {
vec.push(byte);
} else {
vec[*position] = byte;
}
*position += 1;
}
pub fn write_unsigned_leb128(out: &mut Vec<u8>, start_position: usize, mut value: u64) -> usize {
let mut position = start_position;
loop {
let mut byte = (value & 0x7F) as u8;
value >>= 7;
if value!= 0 {
byte |= 0x80;
}
write_to_vec(out, &mut position, byte);
if value == 0 {
break;
}
}
return position - start_position;
}
#[inline]
pub fn read_unsigned_leb128(data: &[u8], start_position: usize) -> (u64, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
loop {
let byte = data[position];
position += 1;
result |= ((byte & 0x7F) as u64) << shift;
if (byte & 0x80) == 0 {
break;
}
shift += 7;
}
(result, position - start_position)
}
pub fn | (out: &mut Vec<u8>, start_position: usize, mut value: i64) -> usize {
let mut position = start_position;
loop {
let mut byte = (value as u8) & 0x7f;
value >>= 7;
let more =!((((value == 0) && ((byte & 0x40) == 0)) ||
((value == -1) && ((byte & 0x40)!= 0))));
if more {
byte |= 0x80; // Mark this byte to show that more bytes will follow.
}
write_to_vec(out, &mut position, byte);
if!more {
break;
}
}
return position - start_position;
}
#[inline]
pub fn read_signed_leb128(data: &[u8], start_position: usize) -> (i64, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
let mut byte;
loop {
byte = data[position];
position += 1;
result |= ((byte & 0x7F) as i64) << shift;
shift += 7;
if (byte & 0x80) == 0 {
break;
}
}
if (shift < 64) && ((byte & 0x40)!= 0) {
// sign extend
result |= -(1i64 << shift);
}
(result, position - start_position)
}
#[test]
fn test_unsigned_leb128() {
let mut stream = Vec::with_capacity(10000);
for x in 0..62 {
let pos = stream.len();
let bytes_written = write_unsigned_leb128(&mut stream, pos, 3 << x);
assert_eq!(stream.len(), pos + bytes_written);
}
let mut position = 0;
for x in 0..62 {
let expected = 3 << x;
let (actual, bytes_read) = read_unsigned_leb128(&stream, position);
assert_eq!(expected, actual);
position += bytes_read;
}
assert_eq!(stream.len(), position);
}
#[test]
fn test_signed_leb128() {
let mut values = Vec::new();
let mut i = -500;
while i < 500 {
values.push(i * 123457i64);
i += 1;
}
let mut stream = Vec::new();
for &x in &values {
let pos = stream.len();
let bytes_written = write_signed_leb128(&mut stream, pos, x);
assert_eq!(stream.len(), pos + bytes_written);
}
let mut pos = 0;
for &x in &values {
let (value, bytes_read) = read_signed_leb128(&mut stream, pos);
pos += bytes_read;
assert_eq!(x, value);
}
assert_eq!(pos, stream.len());
}
| write_signed_leb128 | identifier_name |
leb128.rs | // Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#[inline]
pub fn write_to_vec(vec: &mut Vec<u8>, position: &mut usize, byte: u8) {
if *position == vec.len() {
vec.push(byte);
} else {
vec[*position] = byte;
}
*position += 1;
}
pub fn write_unsigned_leb128(out: &mut Vec<u8>, start_position: usize, mut value: u64) -> usize |
#[inline]
pub fn read_unsigned_leb128(data: &[u8], start_position: usize) -> (u64, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
loop {
let byte = data[position];
position += 1;
result |= ((byte & 0x7F) as u64) << shift;
if (byte & 0x80) == 0 {
break;
}
shift += 7;
}
(result, position - start_position)
}
pub fn write_signed_leb128(out: &mut Vec<u8>, start_position: usize, mut value: i64) -> usize {
let mut position = start_position;
loop {
let mut byte = (value as u8) & 0x7f;
value >>= 7;
let more =!((((value == 0) && ((byte & 0x40) == 0)) ||
((value == -1) && ((byte & 0x40)!= 0))));
if more {
byte |= 0x80; // Mark this byte to show that more bytes will follow.
}
write_to_vec(out, &mut position, byte);
if!more {
break;
}
}
return position - start_position;
}
#[inline]
pub fn read_signed_leb128(data: &[u8], start_position: usize) -> (i64, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
let mut byte;
loop {
byte = data[position];
position += 1;
result |= ((byte & 0x7F) as i64) << shift;
shift += 7;
if (byte & 0x80) == 0 {
break;
}
}
if (shift < 64) && ((byte & 0x40)!= 0) {
// sign extend
result |= -(1i64 << shift);
}
(result, position - start_position)
}
#[test]
fn test_unsigned_leb128() {
let mut stream = Vec::with_capacity(10000);
for x in 0..62 {
let pos = stream.len();
let bytes_written = write_unsigned_leb128(&mut stream, pos, 3 << x);
assert_eq!(stream.len(), pos + bytes_written);
}
let mut position = 0;
for x in 0..62 {
let expected = 3 << x;
let (actual, bytes_read) = read_unsigned_leb128(&stream, position);
assert_eq!(expected, actual);
position += bytes_read;
}
assert_eq!(stream.len(), position);
}
#[test]
fn test_signed_leb128() {
let mut values = Vec::new();
let mut i = -500;
while i < 500 {
values.push(i * 123457i64);
i += 1;
}
let mut stream = Vec::new();
for &x in &values {
let pos = stream.len();
let bytes_written = write_signed_leb128(&mut stream, pos, x);
assert_eq!(stream.len(), pos + bytes_written);
}
let mut pos = 0;
for &x in &values {
let (value, bytes_read) = read_signed_leb128(&mut stream, pos);
pos += bytes_read;
assert_eq!(x, value);
}
assert_eq!(pos, stream.len());
}
| {
let mut position = start_position;
loop {
let mut byte = (value & 0x7F) as u8;
value >>= 7;
if value != 0 {
byte |= 0x80;
}
write_to_vec(out, &mut position, byte);
if value == 0 {
break;
}
}
return position - start_position;
} | identifier_body |
leb128.rs | // Copyright 2012-2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
#[inline]
pub fn write_to_vec(vec: &mut Vec<u8>, position: &mut usize, byte: u8) {
if *position == vec.len() {
vec.push(byte);
} else {
vec[*position] = byte;
}
*position += 1;
}
pub fn write_unsigned_leb128(out: &mut Vec<u8>, start_position: usize, mut value: u64) -> usize {
let mut position = start_position;
loop {
let mut byte = (value & 0x7F) as u8;
value >>= 7;
if value!= 0 {
byte |= 0x80;
}
write_to_vec(out, &mut position, byte);
if value == 0 {
break;
}
}
return position - start_position;
}
#[inline]
pub fn read_unsigned_leb128(data: &[u8], start_position: usize) -> (u64, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
loop {
let byte = data[position];
position += 1;
result |= ((byte & 0x7F) as u64) << shift;
if (byte & 0x80) == 0 {
break;
}
shift += 7;
}
(result, position - start_position)
}
pub fn write_signed_leb128(out: &mut Vec<u8>, start_position: usize, mut value: i64) -> usize {
let mut position = start_position;
loop {
let mut byte = (value as u8) & 0x7f;
value >>= 7;
let more =!((((value == 0) && ((byte & 0x40) == 0)) ||
((value == -1) && ((byte & 0x40)!= 0))));
if more {
byte |= 0x80; // Mark this byte to show that more bytes will follow.
}
write_to_vec(out, &mut position, byte);
if!more |
}
return position - start_position;
}
#[inline]
pub fn read_signed_leb128(data: &[u8], start_position: usize) -> (i64, usize) {
let mut result = 0;
let mut shift = 0;
let mut position = start_position;
let mut byte;
loop {
byte = data[position];
position += 1;
result |= ((byte & 0x7F) as i64) << shift;
shift += 7;
if (byte & 0x80) == 0 {
break;
}
}
if (shift < 64) && ((byte & 0x40)!= 0) {
// sign extend
result |= -(1i64 << shift);
}
(result, position - start_position)
}
#[test]
fn test_unsigned_leb128() {
let mut stream = Vec::with_capacity(10000);
for x in 0..62 {
let pos = stream.len();
let bytes_written = write_unsigned_leb128(&mut stream, pos, 3 << x);
assert_eq!(stream.len(), pos + bytes_written);
}
let mut position = 0;
for x in 0..62 {
let expected = 3 << x;
let (actual, bytes_read) = read_unsigned_leb128(&stream, position);
assert_eq!(expected, actual);
position += bytes_read;
}
assert_eq!(stream.len(), position);
}
#[test]
fn test_signed_leb128() {
let mut values = Vec::new();
let mut i = -500;
while i < 500 {
values.push(i * 123457i64);
i += 1;
}
let mut stream = Vec::new();
for &x in &values {
let pos = stream.len();
let bytes_written = write_signed_leb128(&mut stream, pos, x);
assert_eq!(stream.len(), pos + bytes_written);
}
let mut pos = 0;
for &x in &values {
let (value, bytes_read) = read_signed_leb128(&mut stream, pos);
pos += bytes_read;
assert_eq!(x, value);
}
assert_eq!(pos, stream.len());
}
| {
break;
} | conditional_block |
backup.rs | // Copyright 2016 Alex Regueiro
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
use {DB, Error};
use ffi;
use libc::{c_int, uint32_t};
use std::ffi::CString;
use std::path::Path;
pub struct BackupEngine {
inner: *mut ffi::rocksdb_backup_engine_t,
}
pub struct BackupEngineOptions {
inner: *mut ffi::rocksdb_options_t,
}
pub struct RestoreOptions {
inner: *mut ffi::rocksdb_restore_options_t,
}
impl BackupEngine {
/// Open a backup engine with the specified options.
pub fn | <P: AsRef<Path>>(opts: &BackupEngineOptions,
path: P)
-> Result<BackupEngine, Error> {
let path = path.as_ref();
let cpath = match CString::new(path.to_string_lossy().as_bytes()) {
Ok(c) => c,
Err(_) => {
return Err(Error::new("Failed to convert path to CString \
when opening backup engine"
.to_owned()))
}
};
let be: *mut ffi::rocksdb_backup_engine_t;
unsafe { be = ffi_try!(ffi::rocksdb_backup_engine_open(opts.inner, cpath.as_ptr())) }
if be.is_null() {
return Err(Error::new("Could not initialize backup engine.".to_owned()));
}
Ok(BackupEngine { inner: be })
}
pub fn create_new_backup(&mut self, db: &DB) -> Result<(), Error> {
unsafe {
ffi_try!(ffi::rocksdb_backup_engine_create_new_backup(self.inner, db.inner));
Ok(())
}
}
pub fn purge_old_backups(&mut self, num_backups_to_keep: usize) -> Result<(), Error> {
unsafe {
ffi_try!(ffi::rocksdb_backup_engine_purge_old_backups(self.inner,
num_backups_to_keep as uint32_t));
Ok(())
}
}
}
impl BackupEngineOptions {
//
}
impl RestoreOptions {
pub fn set_keep_log_files(&mut self, keep_log_files: bool) {
unsafe {
ffi::rocksdb_restore_options_set_keep_log_files(self.inner, keep_log_files as c_int);
}
}
}
impl Default for BackupEngineOptions {
fn default() -> BackupEngineOptions {
unsafe {
let opts = ffi::rocksdb_options_create();
if opts.is_null() {
panic!("Could not create RocksDB backup options".to_owned());
}
BackupEngineOptions { inner: opts }
}
}
}
impl Default for RestoreOptions {
fn default() -> RestoreOptions {
unsafe {
let opts = ffi::rocksdb_restore_options_create();
if opts.is_null() {
panic!("Could not create RocksDB restore options".to_owned());
}
RestoreOptions { inner: opts }
}
}
}
impl Drop for BackupEngine {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_backup_engine_close(self.inner);
}
}
}
impl Drop for BackupEngineOptions {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_options_destroy(self.inner);
}
}
}
impl Drop for RestoreOptions {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_restore_options_destroy(self.inner);
}
}
}
| open | identifier_name |
backup.rs | // Copyright 2016 Alex Regueiro
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
use {DB, Error};
use ffi;
use libc::{c_int, uint32_t};
use std::ffi::CString;
use std::path::Path;
pub struct BackupEngine {
inner: *mut ffi::rocksdb_backup_engine_t,
}
pub struct BackupEngineOptions {
inner: *mut ffi::rocksdb_options_t,
}
pub struct RestoreOptions {
inner: *mut ffi::rocksdb_restore_options_t,
}
impl BackupEngine {
/// Open a backup engine with the specified options.
pub fn open<P: AsRef<Path>>(opts: &BackupEngineOptions,
path: P)
-> Result<BackupEngine, Error> {
let path = path.as_ref();
let cpath = match CString::new(path.to_string_lossy().as_bytes()) {
Ok(c) => c,
Err(_) => {
return Err(Error::new("Failed to convert path to CString \
when opening backup engine"
.to_owned()))
}
};
let be: *mut ffi::rocksdb_backup_engine_t;
unsafe { be = ffi_try!(ffi::rocksdb_backup_engine_open(opts.inner, cpath.as_ptr())) }
if be.is_null() {
return Err(Error::new("Could not initialize backup engine.".to_owned()));
}
Ok(BackupEngine { inner: be })
}
pub fn create_new_backup(&mut self, db: &DB) -> Result<(), Error> {
unsafe {
ffi_try!(ffi::rocksdb_backup_engine_create_new_backup(self.inner, db.inner));
Ok(())
}
}
pub fn purge_old_backups(&mut self, num_backups_to_keep: usize) -> Result<(), Error> {
unsafe {
ffi_try!(ffi::rocksdb_backup_engine_purge_old_backups(self.inner,
num_backups_to_keep as uint32_t));
Ok(())
}
}
}
impl BackupEngineOptions {
//
}
impl RestoreOptions {
pub fn set_keep_log_files(&mut self, keep_log_files: bool) {
unsafe {
ffi::rocksdb_restore_options_set_keep_log_files(self.inner, keep_log_files as c_int);
}
}
}
impl Default for BackupEngineOptions {
fn default() -> BackupEngineOptions {
unsafe {
let opts = ffi::rocksdb_options_create();
if opts.is_null() {
panic!("Could not create RocksDB backup options".to_owned());
}
BackupEngineOptions { inner: opts }
}
}
}
impl Default for RestoreOptions {
fn default() -> RestoreOptions {
unsafe {
let opts = ffi::rocksdb_restore_options_create();
if opts.is_null() {
panic!("Could not create RocksDB restore options".to_owned());
}
RestoreOptions { inner: opts }
}
}
}
impl Drop for BackupEngine {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_backup_engine_close(self.inner);
}
}
}
impl Drop for BackupEngineOptions {
fn drop(&mut self) |
}
impl Drop for RestoreOptions {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_restore_options_destroy(self.inner);
}
}
}
| {
unsafe {
ffi::rocksdb_options_destroy(self.inner);
}
} | identifier_body |
backup.rs | // Copyright 2016 Alex Regueiro
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
use {DB, Error};
use ffi;
use libc::{c_int, uint32_t};
use std::ffi::CString;
use std::path::Path;
pub struct BackupEngine {
inner: *mut ffi::rocksdb_backup_engine_t,
}
pub struct BackupEngineOptions {
inner: *mut ffi::rocksdb_options_t,
}
pub struct RestoreOptions {
inner: *mut ffi::rocksdb_restore_options_t,
}
impl BackupEngine {
/// Open a backup engine with the specified options.
pub fn open<P: AsRef<Path>>(opts: &BackupEngineOptions,
path: P)
-> Result<BackupEngine, Error> {
let path = path.as_ref();
let cpath = match CString::new(path.to_string_lossy().as_bytes()) {
Ok(c) => c,
Err(_) => {
return Err(Error::new("Failed to convert path to CString \
when opening backup engine"
.to_owned()))
}
};
let be: *mut ffi::rocksdb_backup_engine_t;
unsafe { be = ffi_try!(ffi::rocksdb_backup_engine_open(opts.inner, cpath.as_ptr())) }
if be.is_null() {
return Err(Error::new("Could not initialize backup engine.".to_owned()));
}
Ok(BackupEngine { inner: be })
}
pub fn create_new_backup(&mut self, db: &DB) -> Result<(), Error> {
unsafe {
ffi_try!(ffi::rocksdb_backup_engine_create_new_backup(self.inner, db.inner));
Ok(())
}
}
pub fn purge_old_backups(&mut self, num_backups_to_keep: usize) -> Result<(), Error> {
unsafe {
ffi_try!(ffi::rocksdb_backup_engine_purge_old_backups(self.inner,
num_backups_to_keep as uint32_t));
Ok(())
}
}
}
impl BackupEngineOptions {
//
}
impl RestoreOptions {
pub fn set_keep_log_files(&mut self, keep_log_files: bool) {
unsafe { | impl Default for BackupEngineOptions {
fn default() -> BackupEngineOptions {
unsafe {
let opts = ffi::rocksdb_options_create();
if opts.is_null() {
panic!("Could not create RocksDB backup options".to_owned());
}
BackupEngineOptions { inner: opts }
}
}
}
impl Default for RestoreOptions {
fn default() -> RestoreOptions {
unsafe {
let opts = ffi::rocksdb_restore_options_create();
if opts.is_null() {
panic!("Could not create RocksDB restore options".to_owned());
}
RestoreOptions { inner: opts }
}
}
}
impl Drop for BackupEngine {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_backup_engine_close(self.inner);
}
}
}
impl Drop for BackupEngineOptions {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_options_destroy(self.inner);
}
}
}
impl Drop for RestoreOptions {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_restore_options_destroy(self.inner);
}
}
} | ffi::rocksdb_restore_options_set_keep_log_files(self.inner, keep_log_files as c_int);
}
}
}
| random_line_split |
backup.rs | // Copyright 2016 Alex Regueiro
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
use {DB, Error};
use ffi;
use libc::{c_int, uint32_t};
use std::ffi::CString;
use std::path::Path;
pub struct BackupEngine {
inner: *mut ffi::rocksdb_backup_engine_t,
}
pub struct BackupEngineOptions {
inner: *mut ffi::rocksdb_options_t,
}
pub struct RestoreOptions {
inner: *mut ffi::rocksdb_restore_options_t,
}
impl BackupEngine {
/// Open a backup engine with the specified options.
pub fn open<P: AsRef<Path>>(opts: &BackupEngineOptions,
path: P)
-> Result<BackupEngine, Error> {
let path = path.as_ref();
let cpath = match CString::new(path.to_string_lossy().as_bytes()) {
Ok(c) => c,
Err(_) => {
return Err(Error::new("Failed to convert path to CString \
when opening backup engine"
.to_owned()))
}
};
let be: *mut ffi::rocksdb_backup_engine_t;
unsafe { be = ffi_try!(ffi::rocksdb_backup_engine_open(opts.inner, cpath.as_ptr())) }
if be.is_null() |
Ok(BackupEngine { inner: be })
}
pub fn create_new_backup(&mut self, db: &DB) -> Result<(), Error> {
unsafe {
ffi_try!(ffi::rocksdb_backup_engine_create_new_backup(self.inner, db.inner));
Ok(())
}
}
pub fn purge_old_backups(&mut self, num_backups_to_keep: usize) -> Result<(), Error> {
unsafe {
ffi_try!(ffi::rocksdb_backup_engine_purge_old_backups(self.inner,
num_backups_to_keep as uint32_t));
Ok(())
}
}
}
impl BackupEngineOptions {
//
}
impl RestoreOptions {
pub fn set_keep_log_files(&mut self, keep_log_files: bool) {
unsafe {
ffi::rocksdb_restore_options_set_keep_log_files(self.inner, keep_log_files as c_int);
}
}
}
impl Default for BackupEngineOptions {
fn default() -> BackupEngineOptions {
unsafe {
let opts = ffi::rocksdb_options_create();
if opts.is_null() {
panic!("Could not create RocksDB backup options".to_owned());
}
BackupEngineOptions { inner: opts }
}
}
}
impl Default for RestoreOptions {
fn default() -> RestoreOptions {
unsafe {
let opts = ffi::rocksdb_restore_options_create();
if opts.is_null() {
panic!("Could not create RocksDB restore options".to_owned());
}
RestoreOptions { inner: opts }
}
}
}
impl Drop for BackupEngine {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_backup_engine_close(self.inner);
}
}
}
impl Drop for BackupEngineOptions {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_options_destroy(self.inner);
}
}
}
impl Drop for RestoreOptions {
fn drop(&mut self) {
unsafe {
ffi::rocksdb_restore_options_destroy(self.inner);
}
}
}
| {
return Err(Error::new("Could not initialize backup engine.".to_owned()));
} | conditional_block |
console.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::ConsoleBinding;
use dom::bindings::js::JS;
use dom::bindings::utils::{Reflectable, Reflector, reflect_dom_object};
use dom::window::Window;
use servo_util::str::DOMString;
#[deriving(Encodable)]
pub struct Console {
reflector_: Reflector
}
impl Console {
pub fn | () -> Console {
Console {
reflector_: Reflector::new()
}
}
pub fn new(window: &JS<Window>) -> JS<Console> {
reflect_dom_object(~Console::new_inherited(), window, ConsoleBinding::Wrap)
}
pub fn Log(&self, message: DOMString) {
println!("{:s}", message);
}
pub fn Debug(&self, message: DOMString) {
println!("{:s}", message);
}
pub fn Info(&self, message: DOMString) {
println!("{:s}", message);
}
pub fn Warn(&self, message: DOMString) {
println!("{:s}", message);
}
pub fn Error(&self, message: DOMString) {
println!("{:s}", message);
}
}
impl Reflectable for Console {
fn reflector<'a>(&'a self) -> &'a Reflector {
&self.reflector_
}
fn mut_reflector<'a>(&'a mut self) -> &'a mut Reflector {
&mut self.reflector_
}
}
| new_inherited | identifier_name |
console.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
use dom::bindings::codegen::ConsoleBinding;
use dom::bindings::js::JS;
use dom::bindings::utils::{Reflectable, Reflector, reflect_dom_object};
use dom::window::Window;
use servo_util::str::DOMString;
#[deriving(Encodable)]
pub struct Console {
reflector_: Reflector
}
impl Console {
pub fn new_inherited() -> Console {
Console {
reflector_: Reflector::new()
} |
pub fn new(window: &JS<Window>) -> JS<Console> {
reflect_dom_object(~Console::new_inherited(), window, ConsoleBinding::Wrap)
}
pub fn Log(&self, message: DOMString) {
println!("{:s}", message);
}
pub fn Debug(&self, message: DOMString) {
println!("{:s}", message);
}
pub fn Info(&self, message: DOMString) {
println!("{:s}", message);
}
pub fn Warn(&self, message: DOMString) {
println!("{:s}", message);
}
pub fn Error(&self, message: DOMString) {
println!("{:s}", message);
}
}
impl Reflectable for Console {
fn reflector<'a>(&'a self) -> &'a Reflector {
&self.reflector_
}
fn mut_reflector<'a>(&'a mut self) -> &'a mut Reflector {
&mut self.reflector_
}
} | } | random_line_split |
lump.rs | /*
Copyright 2013 Jesse 'Jeaye' Wilkerson
See licensing in LICENSE file, or at:
http://www.opensource.org/licenses/BSD-3-Clause
File: shared/obj/bsp/lump.rs
Author: Jesse 'Jeaye' Wilkerson
Description:
Lump definitions for Q3 BSP maps.
*/
use math;
pub enum Lump_Type
{
Entity_Type = 0x00,
Texture_Type = 0x01,
Plane_Type = 0x02,
Node_Type = 0x03,
Leaf_Type = 0x04,
Leaf_Face_Type = 0x05,
Leaf_Brush_Type = 0x06,
Model_Type = 0x07,
Brush_Type = 0x08,
Brush_Side_Type = 0x09,
Vertex_Type = 0x0A,
Mesh_Vert_Type = 0x0B,
Effect_Type = 0x0C,
Face_Type = 0x0D,
Light_Map_Type = 0x0E,
Light_Vol_Type = 0x0F,
Vis_Data_Type = 0x10
}
static VERSION: i32 = 0x2E;
#[packed]
pub struct Lump
{
/* Offset from the BOF to the lump. */
offset: i32,
/* Always a multiple of 4. */
length: i32
}
impl Lump
{
pub fn new() -> Lump
{ Lump{ offset: 0, length: 0 } }
}
#[packed]
pub struct Header
{
/* Always "IBSP" */
magic: [i8,..4],
/* Should be 0x2e for Q3 files. */
version: i32,
/* Fixed number of lumps. */
lumps: [Lump,..17]
}
impl Header
{
pub fn | () -> Header
{ Header{ magic: [0,..4], version: 0, lumps: [Lump::new(),..17] } }
}
#[packed]
pub struct Entity
{
/* Size of the buffer. */
size: i32,
buffer: ~[i8] /* TODO: Read binary into this? */
}
impl Entity
{
pub fn new() -> Entity
{ Entity{ size: 0, buffer: ~[] } }
}
#[packed]
pub struct Texture
{
name: [i8,..64],
surface_flags: i32,
content_flags: i32
}
#[packed]
pub struct Plane
{
normal: math::Vec3f,
/* Distance the plane is from the origin, along the normal. */
distance: f32
}
#[packed]
pub struct Node
{
/* Index of the corresponding plane. */
plane: i32,
/* Child indices; negative means lead: -(leaf + 1) */
children: math::Vec2i,
/* Bounding box. */
mins: math::Vec3i,
maxs: math::Vec3i,
}
#[packed]
pub struct Leaf
{
/* Visdata cluster index. */
cluster: i32,
/* Areaportal area. */
area: i32,
/* Bounding box. */
mins: math::Vec2i,
maxs: math::Vec2i,
/* First leaf face. */
face: i32,
num_faces: i32,
/* First leaf brush. */
brush: i32,
num_brushes: i32
}
#[packed]
pub struct Leaf_Face
{
/* Face index. */
face: i32
}
#[packed]
pub struct Leaf_Brush
{
/* Brush index. */
brush: i32
}
#[packed]
pub struct Model
{
/* Bounding box. */
mins: math::Vec2f,
maxs: math::Vec2f,
/* First face. */
face: i32,
num_faces: i32,
/* First brush. */
brush: i32,
num_brushes: i32
}
#[packed]
pub struct Brush
{
/* First brush side. */
side: i32,
num_sides: i32,
/* Texture index. */
texture: i32
}
#[packed]
pub struct Brush_Side
{
/* Plane index. */
plane: i32,
/* Texture index. */
texture: i32
}
#[packed]
pub struct Vertex
{
position: math::Vec3f,
tex_coords: [math::Vec2f,..2], /* 0 = Surface; 1 = Lightmap */
normal: math::Vec3f,
color: math::Vec4u8
}
impl Vertex
{
pub fn new() -> Vertex
{ Vertex { position: math::Vec3f::zero(),
tex_coords: [math::Vec2f::zero(),..2],
normal: math::Vec3f::zero(),
color: math::Vec4u8::new(1, 1, 1, 1) } }
}
#[packed]
pub struct Mesh_Vert
{
/* Vertex index offset, relative to the first vertex of face. */
offset: i32
}
impl Mesh_Vert
{
pub fn new() -> Mesh_Vert
{ Mesh_Vert { offset: 0 } }
}
#[packed]
pub struct Effect
{
name: [i8,..64],
/* Brush that generated this effect. */
brush: i32,
/* Always seems to be 5. */
unknown: i32
}
#[packed]
pub struct Face
{
/* Texture index. */
texture: i32,
/* Effect index. */
effect: i32,
kind: i32, /* 1 = Polygon; 2 = Patch; 3 = Mesh; 4 = Billboard */
/* Index of first vertex. */
start_vertex: i32,
num_vertices: i32,
/* Index of first mesh vert. */
start_mesh_vertex: i32,
num_mesh_vertices: i32,
/* Light map index. */
lightmap: i32,
lightmap_corner: math::Vec2i,
lightmap_size: math::Vec2i,
lightmap_origin: math::Vec3f,
/* World-space s and t unit vectors. */
lightmap_vecs: [math::Vec3f,..2],
normal: math::Vec3f,
/* Patch dimensions. */
patch_size: math::Vec2i,
}
impl Face
{
pub fn new() -> Face
{
Face { texture: 0,
effect: 0,
kind: 0,
start_vertex: 0,
num_vertices: 0,
start_mesh_vertex: 0,
num_mesh_vertices: 0,
lightmap: 0,
lightmap_corner: math::Vec2i::zero(),
lightmap_size: math::Vec2i::zero(),
lightmap_origin: math::Vec3f::zero(),
lightmap_vecs: [math::Vec3f::zero(),..2],
normal: math::Vec3f::zero(),
patch_size: math::Vec2i::zero() }
}
}
#[packed]
pub struct Light_Map
{
data: [[[u8,..128],..128],..3]
}
#[packed]
pub struct Light_Vol
{
/* Ambient color compontn RGB. */
ambient: math::Vec3u8,
/* Directional color component RGB. */
directional: math::Vec3u8,
/* Direction to the light. */
direction: math::Vec2u8, /* 0 = phi; 1 = theta */
}
#[packed]
pub struct Vis_Data
{
num_clusters: i32,
bytes_per_cluster: i32,
buffer: ~[u8]
}
| new | identifier_name |
lump.rs | /*
Copyright 2013 Jesse 'Jeaye' Wilkerson
See licensing in LICENSE file, or at:
http://www.opensource.org/licenses/BSD-3-Clause
File: shared/obj/bsp/lump.rs
Author: Jesse 'Jeaye' Wilkerson
Description:
Lump definitions for Q3 BSP maps.
*/
use math;
pub enum Lump_Type
{
Entity_Type = 0x00,
Texture_Type = 0x01,
Plane_Type = 0x02,
Node_Type = 0x03,
Leaf_Type = 0x04,
Leaf_Face_Type = 0x05,
Leaf_Brush_Type = 0x06,
Model_Type = 0x07,
Brush_Type = 0x08,
Brush_Side_Type = 0x09,
Vertex_Type = 0x0A,
Mesh_Vert_Type = 0x0B,
Effect_Type = 0x0C,
Face_Type = 0x0D,
Light_Map_Type = 0x0E,
Light_Vol_Type = 0x0F,
Vis_Data_Type = 0x10
}
static VERSION: i32 = 0x2E;
#[packed]
pub struct Lump
{
/* Offset from the BOF to the lump. */
offset: i32,
/* Always a multiple of 4. */
length: i32
}
impl Lump
{
pub fn new() -> Lump
{ Lump{ offset: 0, length: 0 } }
}
#[packed]
pub struct Header
{
/* Always "IBSP" */
magic: [i8,..4],
/* Should be 0x2e for Q3 files. */
version: i32,
/* Fixed number of lumps. */
lumps: [Lump,..17]
}
impl Header
{
pub fn new() -> Header
{ Header{ magic: [0,..4], version: 0, lumps: [Lump::new(),..17] } }
}
#[packed]
pub struct Entity
{
/* Size of the buffer. */
size: i32,
buffer: ~[i8] /* TODO: Read binary into this? */
}
impl Entity
{
pub fn new() -> Entity
{ Entity{ size: 0, buffer: ~[] } }
}
#[packed]
pub struct Texture
{
name: [i8,..64],
surface_flags: i32,
content_flags: i32
}
#[packed]
pub struct Plane
{
normal: math::Vec3f,
/* Distance the plane is from the origin, along the normal. */
distance: f32
}
#[packed]
pub struct Node
{
/* Index of the corresponding plane. */
plane: i32,
/* Child indices; negative means lead: -(leaf + 1) */
children: math::Vec2i,
/* Bounding box. */
mins: math::Vec3i,
maxs: math::Vec3i,
}
#[packed]
pub struct Leaf
{
/* Visdata cluster index. */
cluster: i32,
/* Areaportal area. */
area: i32,
/* Bounding box. */
mins: math::Vec2i,
maxs: math::Vec2i,
/* First leaf face. */
face: i32,
num_faces: i32,
/* First leaf brush. */
brush: i32,
num_brushes: i32
}
#[packed]
pub struct Leaf_Face
{
/* Face index. */
face: i32
}
#[packed]
pub struct Leaf_Brush
{
/* Brush index. */
brush: i32
}
#[packed]
pub struct Model
{
/* Bounding box. */
mins: math::Vec2f,
maxs: math::Vec2f,
/* First face. */
face: i32,
num_faces: i32,
/* First brush. */
brush: i32,
num_brushes: i32
}
#[packed]
pub struct Brush
{
/* First brush side. */
side: i32,
num_sides: i32,
/* Texture index. */
texture: i32
}
#[packed]
pub struct Brush_Side
{
/* Plane index. */
plane: i32,
/* Texture index. */
texture: i32
}
#[packed]
pub struct Vertex
{
position: math::Vec3f,
tex_coords: [math::Vec2f,..2], /* 0 = Surface; 1 = Lightmap */
normal: math::Vec3f,
color: math::Vec4u8
}
impl Vertex
{
pub fn new() -> Vertex
{ Vertex { position: math::Vec3f::zero(),
tex_coords: [math::Vec2f::zero(),..2],
normal: math::Vec3f::zero(),
color: math::Vec4u8::new(1, 1, 1, 1) } }
}
#[packed]
pub struct Mesh_Vert
{
/* Vertex index offset, relative to the first vertex of face. */
offset: i32
}
impl Mesh_Vert
{
pub fn new() -> Mesh_Vert
{ Mesh_Vert { offset: 0 } }
}
#[packed]
pub struct Effect
{
name: [i8,..64],
/* Brush that generated this effect. */
brush: i32,
/* Always seems to be 5. */
unknown: i32
}
#[packed]
pub struct Face
{
/* Texture index. */
texture: i32,
/* Effect index. */
effect: i32,
kind: i32, /* 1 = Polygon; 2 = Patch; 3 = Mesh; 4 = Billboard */
/* Index of first vertex. */
start_vertex: i32,
num_vertices: i32,
/* Index of first mesh vert. */
start_mesh_vertex: i32,
num_mesh_vertices: i32,
/* Light map index. */
lightmap: i32,
lightmap_corner: math::Vec2i,
lightmap_size: math::Vec2i,
lightmap_origin: math::Vec3f,
/* World-space s and t unit vectors. */
lightmap_vecs: [math::Vec3f,..2],
normal: math::Vec3f,
/* Patch dimensions. */
patch_size: math::Vec2i,
}
impl Face
{
pub fn new() -> Face
{
Face { texture: 0,
effect: 0,
kind: 0,
start_vertex: 0,
num_vertices: 0,
start_mesh_vertex: 0,
num_mesh_vertices: 0,
lightmap: 0,
lightmap_corner: math::Vec2i::zero(), | normal: math::Vec3f::zero(),
patch_size: math::Vec2i::zero() }
}
}
#[packed]
pub struct Light_Map
{
data: [[[u8,..128],..128],..3]
}
#[packed]
pub struct Light_Vol
{
/* Ambient color compontn RGB. */
ambient: math::Vec3u8,
/* Directional color component RGB. */
directional: math::Vec3u8,
/* Direction to the light. */
direction: math::Vec2u8, /* 0 = phi; 1 = theta */
}
#[packed]
pub struct Vis_Data
{
num_clusters: i32,
bytes_per_cluster: i32,
buffer: ~[u8]
} | lightmap_size: math::Vec2i::zero(),
lightmap_origin: math::Vec3f::zero(),
lightmap_vecs: [math::Vec3f::zero(), ..2], | random_line_split |
TestLog1p.rs | /*
* Copyright (C) 2014 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software | * distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma version(1)
#pragma rs java_package_name(android.renderscript.cts)
// Don't edit this file! It is auto-generated by frameworks/rs/api/gen_runtime.
float __attribute__((kernel)) testLog1pFloatFloat(float in) {
return log1p(in);
}
float2 __attribute__((kernel)) testLog1pFloat2Float2(float2 in) {
return log1p(in);
}
float3 __attribute__((kernel)) testLog1pFloat3Float3(float3 in) {
return log1p(in);
}
float4 __attribute__((kernel)) testLog1pFloat4Float4(float4 in) {
return log1p(in);
} | random_line_split |
|
request_context.rs | /*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This software may be used and distributed according to the terms of the
* GNU General Public License version 2.
*/
use gotham::state::{FromState, State};
use gotham_derive::StateData;
use hyper::{Body, Response};
use rate_limiting::RateLimitEnvironment;
use slog::{o, Logger};
use std::sync::Arc;
use context::{CoreContext, SessionContainer};
use fbinit::FacebookInit;
use gotham_ext::{
middleware::{ClientIdentity, Middleware},
state_ext::StateExt,
};
use metadata::Metadata;
use scuba_ext::MononokeScubaSampleBuilder;
#[derive(StateData, Clone)]
pub struct RequestContext {
pub ctx: CoreContext,
pub logger: Logger,
}
impl RequestContext {
async fn new(ctx: CoreContext, logger: Logger) -> Self {
Self { ctx, logger }
}
}
#[derive(Clone)]
pub struct RequestContextMiddleware {
fb: FacebookInit,
logger: Logger,
scuba: Arc<MononokeScubaSampleBuilder>,
rate_limiter: Option<RateLimitEnvironment>,
}
impl RequestContextMiddleware {
pub fn | (
fb: FacebookInit,
logger: Logger,
scuba: MononokeScubaSampleBuilder,
rate_limiter: Option<RateLimitEnvironment>,
) -> Self {
Self {
fb,
logger,
scuba: Arc::new(scuba),
rate_limiter,
}
}
}
#[async_trait::async_trait]
impl Middleware for RequestContextMiddleware {
async fn inbound(&self, state: &mut State) -> Option<Response<Body>> {
let identities = ClientIdentity::borrow_from(state)
.identities()
.clone()
.unwrap_or_default();
let metadata = Metadata::default().set_identities(identities);
let metadata = Arc::new(metadata);
let session = SessionContainer::builder(self.fb)
.metadata(metadata)
.rate_limiter(self.rate_limiter.as_ref().map(|r| r.get_rate_limiter()))
.build();
let request_id = state.short_request_id();
let logger = self.logger.new(o!("request_id" => request_id.to_string()));
let ctx = session.new_context(logger.clone(), (*self.scuba).clone());
state.put(RequestContext::new(ctx, logger).await);
None
}
}
| new | identifier_name |
request_context.rs | /*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This software may be used and distributed according to the terms of the |
use gotham::state::{FromState, State};
use gotham_derive::StateData;
use hyper::{Body, Response};
use rate_limiting::RateLimitEnvironment;
use slog::{o, Logger};
use std::sync::Arc;
use context::{CoreContext, SessionContainer};
use fbinit::FacebookInit;
use gotham_ext::{
middleware::{ClientIdentity, Middleware},
state_ext::StateExt,
};
use metadata::Metadata;
use scuba_ext::MononokeScubaSampleBuilder;
#[derive(StateData, Clone)]
pub struct RequestContext {
pub ctx: CoreContext,
pub logger: Logger,
}
impl RequestContext {
async fn new(ctx: CoreContext, logger: Logger) -> Self {
Self { ctx, logger }
}
}
#[derive(Clone)]
pub struct RequestContextMiddleware {
fb: FacebookInit,
logger: Logger,
scuba: Arc<MononokeScubaSampleBuilder>,
rate_limiter: Option<RateLimitEnvironment>,
}
impl RequestContextMiddleware {
pub fn new(
fb: FacebookInit,
logger: Logger,
scuba: MononokeScubaSampleBuilder,
rate_limiter: Option<RateLimitEnvironment>,
) -> Self {
Self {
fb,
logger,
scuba: Arc::new(scuba),
rate_limiter,
}
}
}
#[async_trait::async_trait]
impl Middleware for RequestContextMiddleware {
async fn inbound(&self, state: &mut State) -> Option<Response<Body>> {
let identities = ClientIdentity::borrow_from(state)
.identities()
.clone()
.unwrap_or_default();
let metadata = Metadata::default().set_identities(identities);
let metadata = Arc::new(metadata);
let session = SessionContainer::builder(self.fb)
.metadata(metadata)
.rate_limiter(self.rate_limiter.as_ref().map(|r| r.get_rate_limiter()))
.build();
let request_id = state.short_request_id();
let logger = self.logger.new(o!("request_id" => request_id.to_string()));
let ctx = session.new_context(logger.clone(), (*self.scuba).clone());
state.put(RequestContext::new(ctx, logger).await);
None
}
} | * GNU General Public License version 2.
*/ | random_line_split |
request_context.rs | /*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This software may be used and distributed according to the terms of the
* GNU General Public License version 2.
*/
use gotham::state::{FromState, State};
use gotham_derive::StateData;
use hyper::{Body, Response};
use rate_limiting::RateLimitEnvironment;
use slog::{o, Logger};
use std::sync::Arc;
use context::{CoreContext, SessionContainer};
use fbinit::FacebookInit;
use gotham_ext::{
middleware::{ClientIdentity, Middleware},
state_ext::StateExt,
};
use metadata::Metadata;
use scuba_ext::MononokeScubaSampleBuilder;
#[derive(StateData, Clone)]
pub struct RequestContext {
pub ctx: CoreContext,
pub logger: Logger,
}
impl RequestContext {
async fn new(ctx: CoreContext, logger: Logger) -> Self {
Self { ctx, logger }
}
}
#[derive(Clone)]
pub struct RequestContextMiddleware {
fb: FacebookInit,
logger: Logger,
scuba: Arc<MononokeScubaSampleBuilder>,
rate_limiter: Option<RateLimitEnvironment>,
}
impl RequestContextMiddleware {
pub fn new(
fb: FacebookInit,
logger: Logger,
scuba: MononokeScubaSampleBuilder,
rate_limiter: Option<RateLimitEnvironment>,
) -> Self {
Self {
fb,
logger,
scuba: Arc::new(scuba),
rate_limiter,
}
}
}
#[async_trait::async_trait]
impl Middleware for RequestContextMiddleware {
async fn inbound(&self, state: &mut State) -> Option<Response<Body>> | }
}
| {
let identities = ClientIdentity::borrow_from(state)
.identities()
.clone()
.unwrap_or_default();
let metadata = Metadata::default().set_identities(identities);
let metadata = Arc::new(metadata);
let session = SessionContainer::builder(self.fb)
.metadata(metadata)
.rate_limiter(self.rate_limiter.as_ref().map(|r| r.get_rate_limiter()))
.build();
let request_id = state.short_request_id();
let logger = self.logger.new(o!("request_id" => request_id.to_string()));
let ctx = session.new_context(logger.clone(), (*self.scuba).clone());
state.put(RequestContext::new(ctx, logger).await);
None | identifier_body |
schema.rs | table! {
attachments (id) {
id -> Integer,
message_id -> Integer,
file_name -> Text,
mime_type -> Text,
character_set -> Text,
content_id -> Text,
content_location -> Text,
part_id -> Text,
encoding -> Integer,
data -> Binary,
is_inline -> Integer,
}
}
table! {
folders (id) {
id -> Integer,
folder_name -> Text,
folder_path -> Text,
identity_id -> Integer,
uid_validity -> Nullable<BigInt>,
flags -> Integer,
}
}
table! {
identities (id) {
id -> Integer,
email_address -> Text,
gmail_refresh_token -> Text,
identity_type -> Text,
expires_at -> Timestamp,
full_name -> Text,
account_name -> Text,
}
}
table! {
messages (id) {
id -> Integer,
message_id -> Text,
subject -> Text,
folder_id -> Integer,
time_received -> Timestamp,
from -> Text, | in_reply_to -> Text,
uid -> BigInt,
modification_sequence -> BigInt,
seen -> Bool,
flagged -> Bool,
draft -> Bool,
deleted -> Bool,
}
}
allow_tables_to_appear_in_same_query!(attachments, folders, identities, messages,); | to -> Text,
cc -> Text,
bcc -> Text,
content -> Nullable<Text>,
references -> Text, | random_line_split |
body_sync.rs | // Copyright 2021 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License. | // http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use chrono::prelude::{DateTime, Utc};
use chrono::Duration;
use p2p::Capabilities;
use rand::prelude::*;
use std::cmp;
use std::sync::Arc;
use crate::chain::{self, SyncState, SyncStatus, Tip};
use crate::core::core::hash::{Hash, Hashed};
use crate::core::core::BlockHeader;
use crate::p2p;
pub struct BodySync {
chain: Arc<chain::Chain>,
peers: Arc<p2p::Peers>,
sync_state: Arc<SyncState>,
blocks_requested: u64,
receive_timeout: DateTime<Utc>,
prev_blocks_received: u64,
}
impl BodySync {
pub fn new(
sync_state: Arc<SyncState>,
peers: Arc<p2p::Peers>,
chain: Arc<chain::Chain>,
) -> BodySync {
BodySync {
sync_state,
peers,
chain,
blocks_requested: 0,
receive_timeout: Utc::now(),
prev_blocks_received: 0,
}
}
/// Check whether a body sync is needed and run it if so.
/// Return true if txhashset download is needed (when requested block is under the horizon).
pub fn check_run(
&mut self,
head: &chain::Tip,
highest_height: u64,
) -> Result<bool, chain::Error> {
// run the body_sync every 5s
if self.body_sync_due()? {
if self.body_sync()? {
return Ok(true);
}
self.sync_state.update(SyncStatus::BodySync {
current_height: head.height,
highest_height: highest_height,
});
}
Ok(false)
}
/// Is our local node running in archive_mode?
fn archive_mode(&self) -> bool {
self.chain.archive_mode()
}
/// Return true if txhashset download is needed (when requested block is under the horizon).
/// Otherwise go request some missing blocks and return false.
fn body_sync(&mut self) -> Result<bool, chain::Error> {
let head = self.chain.head()?;
let header_head = self.chain.header_head()?;
let fork_point = self.chain.fork_point()?;
if self.chain.check_txhashset_needed(&fork_point)? {
debug!(
"body_sync: cannot sync full blocks earlier than horizon. will request txhashset",
);
return Ok(true);
}
let peers = {
// Find connected peers with strictly greater difficulty than us.
let peers_iter = || {
// If we are running with archive mode enabled we only want to sync
// from other archive nodes.
let cap = if self.archive_mode() {
Capabilities::BLOCK_HIST
} else {
Capabilities::UNKNOWN
};
self.peers
.iter()
.with_capabilities(cap)
.with_difficulty(|x| x > head.total_difficulty)
.connected()
};
// We prefer outbound peers with greater difficulty.
let mut peers: Vec<_> = peers_iter().outbound().into_iter().collect();
if peers.is_empty() {
debug!("no outbound peers with more work, considering inbound");
peers = peers_iter().inbound().into_iter().collect();
}
// If we have no peers (outbound or inbound) then we are done for now.
if peers.is_empty() {
debug!("no peers (inbound or outbound) with more work");
return Ok(false);
}
peers
};
// if we have 5 peers to sync from then ask for 50 blocks total (peer_count *
// 10) max will be 80 if all 8 peers are advertising more work
// also if the chain is already saturated with orphans, throttle
let block_count = cmp::min(
cmp::min(100, peers.len() * 10),
chain::MAX_ORPHAN_SIZE.saturating_sub(self.chain.orphans_len()) + 1,
);
let hashes = self.block_hashes_to_sync(&fork_point, &header_head, block_count as u64)?;
if!hashes.is_empty() {
debug!(
"block_sync: {}/{} requesting blocks {:?} from {} peers",
head.height,
header_head.height,
hashes,
peers.len(),
);
// reinitialize download tracking state
self.blocks_requested = 0;
self.receive_timeout = Utc::now() + Duration::seconds(6);
let mut rng = rand::thread_rng();
for hash in hashes {
if let Some(peer) = peers.choose(&mut rng) {
if let Err(e) = peer.send_block_request(hash, chain::Options::SYNC) {
debug!("Skipped request to {}: {:?}", peer.info.addr, e);
peer.stop();
} else {
self.blocks_requested += 1;
}
}
}
}
return Ok(false);
}
fn block_hashes_to_sync(
&self,
fork_point: &BlockHeader,
header_head: &Tip,
count: u64,
) -> Result<Vec<Hash>, chain::Error> {
let mut hashes = vec![];
let max_height = cmp::min(fork_point.height + count, header_head.height);
let mut current = self.chain.get_header_by_height(max_height)?;
while current.height > fork_point.height {
if!self.chain.is_orphan(¤t.hash()) {
hashes.push(current.hash());
}
current = self.chain.get_previous_header(¤t)?;
}
hashes.reverse();
Ok(hashes)
}
// Should we run block body sync and ask for more full blocks?
fn body_sync_due(&mut self) -> Result<bool, chain::Error> {
let blocks_received = self.blocks_received()?;
// some blocks have been requested
if self.blocks_requested > 0 {
// but none received since timeout, ask again
let timeout = Utc::now() > self.receive_timeout;
if timeout && blocks_received <= self.prev_blocks_received {
debug!(
"body_sync: expecting {} more blocks and none received for a while",
self.blocks_requested,
);
return Ok(true);
}
}
if blocks_received > self.prev_blocks_received {
// some received, update for next check
self.receive_timeout = Utc::now() + Duration::seconds(1);
self.blocks_requested = self
.blocks_requested
.saturating_sub(blocks_received - self.prev_blocks_received);
self.prev_blocks_received = blocks_received;
}
// off by one to account for broadcast adding a couple orphans
if self.blocks_requested < 2 {
// no pending block requests, ask more
debug!("body_sync: no pending block request, asking more");
return Ok(true);
}
Ok(false)
}
// Total numbers received on this chain, including the head and orphans
fn blocks_received(&self) -> Result<u64, chain::Error> {
Ok((self.chain.head()?).height
+ self.chain.orphans_len() as u64
+ self.chain.orphans_evicted_len() as u64)
}
} | // You may obtain a copy of the License at
// | random_line_split |
body_sync.rs | // Copyright 2021 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use chrono::prelude::{DateTime, Utc};
use chrono::Duration;
use p2p::Capabilities;
use rand::prelude::*;
use std::cmp;
use std::sync::Arc;
use crate::chain::{self, SyncState, SyncStatus, Tip};
use crate::core::core::hash::{Hash, Hashed};
use crate::core::core::BlockHeader;
use crate::p2p;
pub struct BodySync {
chain: Arc<chain::Chain>,
peers: Arc<p2p::Peers>,
sync_state: Arc<SyncState>,
blocks_requested: u64,
receive_timeout: DateTime<Utc>,
prev_blocks_received: u64,
}
impl BodySync {
pub fn new(
sync_state: Arc<SyncState>,
peers: Arc<p2p::Peers>,
chain: Arc<chain::Chain>,
) -> BodySync {
BodySync {
sync_state,
peers,
chain,
blocks_requested: 0,
receive_timeout: Utc::now(),
prev_blocks_received: 0,
}
}
/// Check whether a body sync is needed and run it if so.
/// Return true if txhashset download is needed (when requested block is under the horizon).
pub fn check_run(
&mut self,
head: &chain::Tip,
highest_height: u64,
) -> Result<bool, chain::Error> {
// run the body_sync every 5s
if self.body_sync_due()? {
if self.body_sync()? {
return Ok(true);
}
self.sync_state.update(SyncStatus::BodySync {
current_height: head.height,
highest_height: highest_height,
});
}
Ok(false)
}
/// Is our local node running in archive_mode?
fn archive_mode(&self) -> bool |
/// Return true if txhashset download is needed (when requested block is under the horizon).
/// Otherwise go request some missing blocks and return false.
fn body_sync(&mut self) -> Result<bool, chain::Error> {
let head = self.chain.head()?;
let header_head = self.chain.header_head()?;
let fork_point = self.chain.fork_point()?;
if self.chain.check_txhashset_needed(&fork_point)? {
debug!(
"body_sync: cannot sync full blocks earlier than horizon. will request txhashset",
);
return Ok(true);
}
let peers = {
// Find connected peers with strictly greater difficulty than us.
let peers_iter = || {
// If we are running with archive mode enabled we only want to sync
// from other archive nodes.
let cap = if self.archive_mode() {
Capabilities::BLOCK_HIST
} else {
Capabilities::UNKNOWN
};
self.peers
.iter()
.with_capabilities(cap)
.with_difficulty(|x| x > head.total_difficulty)
.connected()
};
// We prefer outbound peers with greater difficulty.
let mut peers: Vec<_> = peers_iter().outbound().into_iter().collect();
if peers.is_empty() {
debug!("no outbound peers with more work, considering inbound");
peers = peers_iter().inbound().into_iter().collect();
}
// If we have no peers (outbound or inbound) then we are done for now.
if peers.is_empty() {
debug!("no peers (inbound or outbound) with more work");
return Ok(false);
}
peers
};
// if we have 5 peers to sync from then ask for 50 blocks total (peer_count *
// 10) max will be 80 if all 8 peers are advertising more work
// also if the chain is already saturated with orphans, throttle
let block_count = cmp::min(
cmp::min(100, peers.len() * 10),
chain::MAX_ORPHAN_SIZE.saturating_sub(self.chain.orphans_len()) + 1,
);
let hashes = self.block_hashes_to_sync(&fork_point, &header_head, block_count as u64)?;
if!hashes.is_empty() {
debug!(
"block_sync: {}/{} requesting blocks {:?} from {} peers",
head.height,
header_head.height,
hashes,
peers.len(),
);
// reinitialize download tracking state
self.blocks_requested = 0;
self.receive_timeout = Utc::now() + Duration::seconds(6);
let mut rng = rand::thread_rng();
for hash in hashes {
if let Some(peer) = peers.choose(&mut rng) {
if let Err(e) = peer.send_block_request(hash, chain::Options::SYNC) {
debug!("Skipped request to {}: {:?}", peer.info.addr, e);
peer.stop();
} else {
self.blocks_requested += 1;
}
}
}
}
return Ok(false);
}
fn block_hashes_to_sync(
&self,
fork_point: &BlockHeader,
header_head: &Tip,
count: u64,
) -> Result<Vec<Hash>, chain::Error> {
let mut hashes = vec![];
let max_height = cmp::min(fork_point.height + count, header_head.height);
let mut current = self.chain.get_header_by_height(max_height)?;
while current.height > fork_point.height {
if!self.chain.is_orphan(¤t.hash()) {
hashes.push(current.hash());
}
current = self.chain.get_previous_header(¤t)?;
}
hashes.reverse();
Ok(hashes)
}
// Should we run block body sync and ask for more full blocks?
fn body_sync_due(&mut self) -> Result<bool, chain::Error> {
let blocks_received = self.blocks_received()?;
// some blocks have been requested
if self.blocks_requested > 0 {
// but none received since timeout, ask again
let timeout = Utc::now() > self.receive_timeout;
if timeout && blocks_received <= self.prev_blocks_received {
debug!(
"body_sync: expecting {} more blocks and none received for a while",
self.blocks_requested,
);
return Ok(true);
}
}
if blocks_received > self.prev_blocks_received {
// some received, update for next check
self.receive_timeout = Utc::now() + Duration::seconds(1);
self.blocks_requested = self
.blocks_requested
.saturating_sub(blocks_received - self.prev_blocks_received);
self.prev_blocks_received = blocks_received;
}
// off by one to account for broadcast adding a couple orphans
if self.blocks_requested < 2 {
// no pending block requests, ask more
debug!("body_sync: no pending block request, asking more");
return Ok(true);
}
Ok(false)
}
// Total numbers received on this chain, including the head and orphans
fn blocks_received(&self) -> Result<u64, chain::Error> {
Ok((self.chain.head()?).height
+ self.chain.orphans_len() as u64
+ self.chain.orphans_evicted_len() as u64)
}
}
| {
self.chain.archive_mode()
} | identifier_body |
body_sync.rs | // Copyright 2021 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use chrono::prelude::{DateTime, Utc};
use chrono::Duration;
use p2p::Capabilities;
use rand::prelude::*;
use std::cmp;
use std::sync::Arc;
use crate::chain::{self, SyncState, SyncStatus, Tip};
use crate::core::core::hash::{Hash, Hashed};
use crate::core::core::BlockHeader;
use crate::p2p;
pub struct BodySync {
chain: Arc<chain::Chain>,
peers: Arc<p2p::Peers>,
sync_state: Arc<SyncState>,
blocks_requested: u64,
receive_timeout: DateTime<Utc>,
prev_blocks_received: u64,
}
impl BodySync {
pub fn new(
sync_state: Arc<SyncState>,
peers: Arc<p2p::Peers>,
chain: Arc<chain::Chain>,
) -> BodySync {
BodySync {
sync_state,
peers,
chain,
blocks_requested: 0,
receive_timeout: Utc::now(),
prev_blocks_received: 0,
}
}
/// Check whether a body sync is needed and run it if so.
/// Return true if txhashset download is needed (when requested block is under the horizon).
pub fn check_run(
&mut self,
head: &chain::Tip,
highest_height: u64,
) -> Result<bool, chain::Error> {
// run the body_sync every 5s
if self.body_sync_due()? {
if self.body_sync()? {
return Ok(true);
}
self.sync_state.update(SyncStatus::BodySync {
current_height: head.height,
highest_height: highest_height,
});
}
Ok(false)
}
/// Is our local node running in archive_mode?
fn archive_mode(&self) -> bool {
self.chain.archive_mode()
}
/// Return true if txhashset download is needed (when requested block is under the horizon).
/// Otherwise go request some missing blocks and return false.
fn body_sync(&mut self) -> Result<bool, chain::Error> {
let head = self.chain.head()?;
let header_head = self.chain.header_head()?;
let fork_point = self.chain.fork_point()?;
if self.chain.check_txhashset_needed(&fork_point)? {
debug!(
"body_sync: cannot sync full blocks earlier than horizon. will request txhashset",
);
return Ok(true);
}
let peers = {
// Find connected peers with strictly greater difficulty than us.
let peers_iter = || {
// If we are running with archive mode enabled we only want to sync
// from other archive nodes.
let cap = if self.archive_mode() {
Capabilities::BLOCK_HIST
} else {
Capabilities::UNKNOWN
};
self.peers
.iter()
.with_capabilities(cap)
.with_difficulty(|x| x > head.total_difficulty)
.connected()
};
// We prefer outbound peers with greater difficulty.
let mut peers: Vec<_> = peers_iter().outbound().into_iter().collect();
if peers.is_empty() {
debug!("no outbound peers with more work, considering inbound");
peers = peers_iter().inbound().into_iter().collect();
}
// If we have no peers (outbound or inbound) then we are done for now.
if peers.is_empty() {
debug!("no peers (inbound or outbound) with more work");
return Ok(false);
}
peers
};
// if we have 5 peers to sync from then ask for 50 blocks total (peer_count *
// 10) max will be 80 if all 8 peers are advertising more work
// also if the chain is already saturated with orphans, throttle
let block_count = cmp::min(
cmp::min(100, peers.len() * 10),
chain::MAX_ORPHAN_SIZE.saturating_sub(self.chain.orphans_len()) + 1,
);
let hashes = self.block_hashes_to_sync(&fork_point, &header_head, block_count as u64)?;
if!hashes.is_empty() {
debug!(
"block_sync: {}/{} requesting blocks {:?} from {} peers",
head.height,
header_head.height,
hashes,
peers.len(),
);
// reinitialize download tracking state
self.blocks_requested = 0;
self.receive_timeout = Utc::now() + Duration::seconds(6);
let mut rng = rand::thread_rng();
for hash in hashes {
if let Some(peer) = peers.choose(&mut rng) {
if let Err(e) = peer.send_block_request(hash, chain::Options::SYNC) | else {
self.blocks_requested += 1;
}
}
}
}
return Ok(false);
}
fn block_hashes_to_sync(
&self,
fork_point: &BlockHeader,
header_head: &Tip,
count: u64,
) -> Result<Vec<Hash>, chain::Error> {
let mut hashes = vec![];
let max_height = cmp::min(fork_point.height + count, header_head.height);
let mut current = self.chain.get_header_by_height(max_height)?;
while current.height > fork_point.height {
if!self.chain.is_orphan(¤t.hash()) {
hashes.push(current.hash());
}
current = self.chain.get_previous_header(¤t)?;
}
hashes.reverse();
Ok(hashes)
}
// Should we run block body sync and ask for more full blocks?
fn body_sync_due(&mut self) -> Result<bool, chain::Error> {
let blocks_received = self.blocks_received()?;
// some blocks have been requested
if self.blocks_requested > 0 {
// but none received since timeout, ask again
let timeout = Utc::now() > self.receive_timeout;
if timeout && blocks_received <= self.prev_blocks_received {
debug!(
"body_sync: expecting {} more blocks and none received for a while",
self.blocks_requested,
);
return Ok(true);
}
}
if blocks_received > self.prev_blocks_received {
// some received, update for next check
self.receive_timeout = Utc::now() + Duration::seconds(1);
self.blocks_requested = self
.blocks_requested
.saturating_sub(blocks_received - self.prev_blocks_received);
self.prev_blocks_received = blocks_received;
}
// off by one to account for broadcast adding a couple orphans
if self.blocks_requested < 2 {
// no pending block requests, ask more
debug!("body_sync: no pending block request, asking more");
return Ok(true);
}
Ok(false)
}
// Total numbers received on this chain, including the head and orphans
fn blocks_received(&self) -> Result<u64, chain::Error> {
Ok((self.chain.head()?).height
+ self.chain.orphans_len() as u64
+ self.chain.orphans_evicted_len() as u64)
}
}
| {
debug!("Skipped request to {}: {:?}", peer.info.addr, e);
peer.stop();
} | conditional_block |
body_sync.rs | // Copyright 2021 The Grin Developers
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use chrono::prelude::{DateTime, Utc};
use chrono::Duration;
use p2p::Capabilities;
use rand::prelude::*;
use std::cmp;
use std::sync::Arc;
use crate::chain::{self, SyncState, SyncStatus, Tip};
use crate::core::core::hash::{Hash, Hashed};
use crate::core::core::BlockHeader;
use crate::p2p;
pub struct BodySync {
chain: Arc<chain::Chain>,
peers: Arc<p2p::Peers>,
sync_state: Arc<SyncState>,
blocks_requested: u64,
receive_timeout: DateTime<Utc>,
prev_blocks_received: u64,
}
impl BodySync {
pub fn new(
sync_state: Arc<SyncState>,
peers: Arc<p2p::Peers>,
chain: Arc<chain::Chain>,
) -> BodySync {
BodySync {
sync_state,
peers,
chain,
blocks_requested: 0,
receive_timeout: Utc::now(),
prev_blocks_received: 0,
}
}
/// Check whether a body sync is needed and run it if so.
/// Return true if txhashset download is needed (when requested block is under the horizon).
pub fn check_run(
&mut self,
head: &chain::Tip,
highest_height: u64,
) -> Result<bool, chain::Error> {
// run the body_sync every 5s
if self.body_sync_due()? {
if self.body_sync()? {
return Ok(true);
}
self.sync_state.update(SyncStatus::BodySync {
current_height: head.height,
highest_height: highest_height,
});
}
Ok(false)
}
/// Is our local node running in archive_mode?
fn archive_mode(&self) -> bool {
self.chain.archive_mode()
}
/// Return true if txhashset download is needed (when requested block is under the horizon).
/// Otherwise go request some missing blocks and return false.
fn body_sync(&mut self) -> Result<bool, chain::Error> {
let head = self.chain.head()?;
let header_head = self.chain.header_head()?;
let fork_point = self.chain.fork_point()?;
if self.chain.check_txhashset_needed(&fork_point)? {
debug!(
"body_sync: cannot sync full blocks earlier than horizon. will request txhashset",
);
return Ok(true);
}
let peers = {
// Find connected peers with strictly greater difficulty than us.
let peers_iter = || {
// If we are running with archive mode enabled we only want to sync
// from other archive nodes.
let cap = if self.archive_mode() {
Capabilities::BLOCK_HIST
} else {
Capabilities::UNKNOWN
};
self.peers
.iter()
.with_capabilities(cap)
.with_difficulty(|x| x > head.total_difficulty)
.connected()
};
// We prefer outbound peers with greater difficulty.
let mut peers: Vec<_> = peers_iter().outbound().into_iter().collect();
if peers.is_empty() {
debug!("no outbound peers with more work, considering inbound");
peers = peers_iter().inbound().into_iter().collect();
}
// If we have no peers (outbound or inbound) then we are done for now.
if peers.is_empty() {
debug!("no peers (inbound or outbound) with more work");
return Ok(false);
}
peers
};
// if we have 5 peers to sync from then ask for 50 blocks total (peer_count *
// 10) max will be 80 if all 8 peers are advertising more work
// also if the chain is already saturated with orphans, throttle
let block_count = cmp::min(
cmp::min(100, peers.len() * 10),
chain::MAX_ORPHAN_SIZE.saturating_sub(self.chain.orphans_len()) + 1,
);
let hashes = self.block_hashes_to_sync(&fork_point, &header_head, block_count as u64)?;
if!hashes.is_empty() {
debug!(
"block_sync: {}/{} requesting blocks {:?} from {} peers",
head.height,
header_head.height,
hashes,
peers.len(),
);
// reinitialize download tracking state
self.blocks_requested = 0;
self.receive_timeout = Utc::now() + Duration::seconds(6);
let mut rng = rand::thread_rng();
for hash in hashes {
if let Some(peer) = peers.choose(&mut rng) {
if let Err(e) = peer.send_block_request(hash, chain::Options::SYNC) {
debug!("Skipped request to {}: {:?}", peer.info.addr, e);
peer.stop();
} else {
self.blocks_requested += 1;
}
}
}
}
return Ok(false);
}
fn block_hashes_to_sync(
&self,
fork_point: &BlockHeader,
header_head: &Tip,
count: u64,
) -> Result<Vec<Hash>, chain::Error> {
let mut hashes = vec![];
let max_height = cmp::min(fork_point.height + count, header_head.height);
let mut current = self.chain.get_header_by_height(max_height)?;
while current.height > fork_point.height {
if!self.chain.is_orphan(¤t.hash()) {
hashes.push(current.hash());
}
current = self.chain.get_previous_header(¤t)?;
}
hashes.reverse();
Ok(hashes)
}
// Should we run block body sync and ask for more full blocks?
fn | (&mut self) -> Result<bool, chain::Error> {
let blocks_received = self.blocks_received()?;
// some blocks have been requested
if self.blocks_requested > 0 {
// but none received since timeout, ask again
let timeout = Utc::now() > self.receive_timeout;
if timeout && blocks_received <= self.prev_blocks_received {
debug!(
"body_sync: expecting {} more blocks and none received for a while",
self.blocks_requested,
);
return Ok(true);
}
}
if blocks_received > self.prev_blocks_received {
// some received, update for next check
self.receive_timeout = Utc::now() + Duration::seconds(1);
self.blocks_requested = self
.blocks_requested
.saturating_sub(blocks_received - self.prev_blocks_received);
self.prev_blocks_received = blocks_received;
}
// off by one to account for broadcast adding a couple orphans
if self.blocks_requested < 2 {
// no pending block requests, ask more
debug!("body_sync: no pending block request, asking more");
return Ok(true);
}
Ok(false)
}
// Total numbers received on this chain, including the head and orphans
fn blocks_received(&self) -> Result<u64, chain::Error> {
Ok((self.chain.head()?).height
+ self.chain.orphans_len() as u64
+ self.chain.orphans_evicted_len() as u64)
}
}
| body_sync_due | identifier_name |
local.rs | this `Key` is
// stored as a `static`, and it's not valid for a static to reference the
// address of another thread_local static. For this reason we kinda wonkily
// work around this by generating a shim function which will give us the
// address of the inner TLS key at runtime.
//
// This is trivially devirtualizable by LLVM because we never store anything
// to this field and rustc can declare the `static` as constant as well.
inner: fn() -> &'static __KeyInner<T>,
// initialization routine to invoke to create a value
init: fn() -> T,
}
// Macro pain #4586:
//
// When cross compiling, rustc will load plugins and macros from the *host*
// platform before search for macros from the target platform. This is primarily
// done to detect, for example, plugins. Ideally the macro below would be
// defined once per module below, but unfortunately this means we have the
// following situation:
//
// 1. We compile libstd for x86_64-unknown-linux-gnu, this thread_local!() macro
// will inject #[thread_local] statics.
// 2. We then try to compile a program for arm-linux-androideabi
// 3. The compiler has a host of linux and a target of android, so it loads
// macros from the *linux* libstd.
// 4. The macro generates a #[thread_local] field, but the android libstd does
// not use #[thread_local]
// 5. Compile error about structs with wrong fields.
//
// To get around this, we're forced to inject the #[cfg] logic into the macro
// itself. Woohoo.
/// Declare a new thread local storage key of type `std::thread::LocalKey`.
///
/// See [LocalKey documentation](thread/struct.LocalKey.html) for more
/// information.
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
#[cfg(not(no_elf_tls))]
macro_rules! thread_local {
(static $name:ident: $t:ty = $init:expr) => (
static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init,
#[cfg_attr(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64")),
thread_local)]);
);
(pub static $name:ident: $t:ty = $init:expr) => (
pub static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init,
#[cfg_attr(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64")),
thread_local)]);
);
}
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
#[cfg(no_elf_tls)]
macro_rules! thread_local {
(static $name:ident: $t:ty = $init:expr) => (
static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init, #[]);
);
(pub static $name:ident: $t:ty = $init:expr) => (
pub static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init, #[]);
);
}
#[doc(hidden)]
#[unstable(feature = "thread_local_internals",
reason = "should not be necessary")]
#[macro_export]
#[allow_internal_unstable]
macro_rules! __thread_local_inner {
($t:ty, $init:expr, #[$($attr:meta),*]) => {{
$(#[$attr])*
static __KEY: $crate::thread::__LocalKeyInner<$t> =
$crate::thread::__LocalKeyInner::new();
fn __init() -> $t { $init }
fn __getit() -> &'static $crate::thread::__LocalKeyInner<$t> { &__KEY }
$crate::thread::LocalKey::new(__getit, __init)
}}
}
/// Indicator of the state of a thread local storage key.
#[unstable(feature = "thread_local_state",
reason = "state querying was recently added")]
#[derive(Eq, PartialEq, Copy, Clone)]
pub enum LocalKeyState {
/// All keys are in this state whenever a thread starts. Keys will
/// transition to the `Valid` state once the first call to `with` happens
/// and the initialization expression succeeds.
///
/// Keys in the `Uninitialized` state will yield a reference to the closure
/// passed to `with` so long as the initialization routine does not panic.
Uninitialized,
/// Once a key has been accessed successfully, it will enter the `Valid`
/// state. Keys in the `Valid` state will remain so until the thread exits,
/// at which point the destructor will be run and the key will enter the
/// `Destroyed` state.
///
/// Keys in the `Valid` state will be guaranteed to yield a reference to the
/// closure passed to `with`.
Valid,
/// When a thread exits, the destructors for keys will be run (if
/// necessary). While a destructor is running, and possibly after a
/// destructor has run, a key is in the `Destroyed` state.
///
/// Keys in the `Destroyed` states will trigger a panic when accessed via
/// `with`.
Destroyed,
}
impl<T:'static> LocalKey<T> {
#[doc(hidden)]
#[unstable(feature = "thread_local_internals",
reason = "recently added to create a key")]
pub const fn new(inner: fn() -> &'static __KeyInner<T>,
init: fn() -> T) -> LocalKey<T> {
LocalKey {
inner: inner,
init: init
}
}
/// Acquires a reference to the value in this TLS key.
///
/// This will lazily initialize the value if this thread has not referenced
/// this key yet.
///
/// # Panics
///
/// This function will `panic!()` if the key currently has its
/// destructor running, and it **may** panic if the destructor has
/// previously been run for this thread.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn with<F, R>(&'static self, f: F) -> R
where F: FnOnce(&T) -> R {
let slot = (self.inner)();
unsafe {
let slot = slot.get().expect("cannot access a TLS value during or \
after it is destroyed");
f(match *slot.get() {
Some(ref inner) => inner,
None => self.init(slot),
})
}
}
unsafe fn init(&self, slot: &UnsafeCell<Option<T>>) -> &T {
// Execute the initialization up front, *then* move it into our slot,
// just in case initialization fails.
let value = (self.init)();
let ptr = slot.get();
*ptr = Some(value);
(*ptr).as_ref().unwrap()
}
/// Query the current state of this key.
///
/// A key is initially in the `Uninitialized` state whenever a thread
/// starts. It will remain in this state up until the first call to `with`
/// within a thread has run the initialization expression successfully.
///
/// Once the initialization expression succeeds, the key transitions to the
/// `Valid` state which will guarantee that future calls to `with` will
/// succeed within the thread.
///
/// When a thread exits, each key will be destroyed in turn, and as keys are
/// destroyed they will enter the `Destroyed` state just before the
/// destructor starts to run. Keys may remain in the `Destroyed` state after
/// destruction has completed. Keys without destructors (e.g. with types
/// that are `Copy`), may never enter the `Destroyed` state.
///
/// Keys in the `Uninitialized` can be accessed so long as the
/// initialization does not panic. Keys in the `Valid` state are guaranteed
/// to be able to be accessed. Keys in the `Destroyed` state will panic on
/// any call to `with`.
#[unstable(feature = "thread_local_state",
reason = "state querying was recently added")]
pub fn state(&'static self) -> LocalKeyState {
unsafe {
match (self.inner)().get() {
Some(cell) => {
match *cell.get() {
Some(..) => LocalKeyState::Valid,
None => LocalKeyState::Uninitialized,
}
}
None => LocalKeyState::Destroyed,
}
}
}
}
#[cfg(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64"),
not(no_elf_tls)))]
#[doc(hidden)]
mod imp {
#[cfg(stage0)]
use prelude::v1::*;
use cell::{Cell, UnsafeCell};
use intrinsics;
use ptr;
pub struct Key<T> {
inner: UnsafeCell<Option<T>>,
// Metadata to keep track of the state of the destructor. Remember that
// these variables are thread-local, not global.
dtor_registered: Cell<bool>,
dtor_running: Cell<bool>,
}
unsafe impl<T> ::marker::Sync for Key<T> { }
impl<T> Key<T> {
pub const fn new() -> Key<T> {
Key {
inner: UnsafeCell::new(None),
dtor_registered: Cell::new(false),
dtor_running: Cell::new(false)
}
}
pub unsafe fn get(&'static self) -> Option<&'static UnsafeCell<Option<T>>> {
if intrinsics::needs_drop::<T>() && self.dtor_running.get() {
return None
}
self.register_dtor();
Some(&self.inner)
}
unsafe fn register_dtor(&self) {
if!intrinsics::needs_drop::<T>() || self.dtor_registered.get() {
return
}
register_dtor(self as *const _ as *mut u8,
destroy_value::<T>);
self.dtor_registered.set(true);
}
}
// Since what appears to be glibc 2.18 this symbol has been shipped which
// GCC and clang both use to invoke destructors in thread_local globals, so
// let's do the same!
//
// Note, however, that we run on lots older linuxes, as well as cross
// compiling from a newer linux to an older linux, so we also have a
// fallback implementation to use as well.
//
// Due to rust-lang/rust#18804, make sure this is not generic!
#[cfg(target_os = "linux")]
unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) {
use prelude::v1::*;
use mem;
use libc;
use sys_common::thread_local as os;
extern {
#[linkage = "extern_weak"]
static __dso_handle: *mut u8;
#[linkage = "extern_weak"]
static __cxa_thread_atexit_impl: *const libc::c_void;
}
if!__cxa_thread_atexit_impl.is_null() {
type F = unsafe extern fn(dtor: unsafe extern fn(*mut u8),
arg: *mut u8,
dso_handle: *mut u8) -> libc::c_int;
mem::transmute::<*const libc::c_void, F>(__cxa_thread_atexit_impl)
(dtor, t, &__dso_handle as *const _ as *mut _);
return
}
// The fallback implementation uses a vanilla OS-based TLS key to track
// the list of destructors that need to be run for this thread. The key
// then has its own destructor which runs all the other destructors.
//
// The destructor for DTORS is a little special in that it has a `while`
// loop to continuously drain the list of registered destructors. It
// *should* be the case that this loop always terminates because we
// provide the guarantee that a TLS key cannot be set after it is
// flagged for destruction.
static DTORS: os::StaticKey = os::StaticKey::new(Some(run_dtors));
type List = Vec<(*mut u8, unsafe extern fn(*mut u8))>;
if DTORS.get().is_null() {
let v: Box<List> = box Vec::new();
DTORS.set(Box::into_raw(v) as *mut u8);
}
let list: &mut List = &mut *(DTORS.get() as *mut List);
list.push((t, dtor));
unsafe extern fn run_dtors(mut ptr: *mut u8) {
while!ptr.is_null() {
let list: Box<List> = Box::from_raw(ptr as *mut List);
for &(ptr, dtor) in list.iter() {
dtor(ptr);
}
ptr = DTORS.get();
DTORS.set(ptr::null_mut());
}
}
}
// OSX's analog of the above linux function is this _tlv_atexit function.
// The disassembly of thread_local globals in C++ (at least produced by
// clang) will have this show up in the output.
#[cfg(target_os = "macos")]
unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) {
extern {
fn _tlv_atexit(dtor: unsafe extern fn(*mut u8),
arg: *mut u8);
}
_tlv_atexit(dtor, t);
}
pub unsafe extern fn destroy_value<T>(ptr: *mut u8) {
let ptr = ptr as *mut Key<T>;
// Right before we run the user destructor be sure to flag the
// destructor as running for this thread so calls to `get` will return
// `None`.
(*ptr).dtor_running.set(true);
// The OSX implementation of TLS apparently had an odd aspect to it
// where the pointer we have may be overwritten while this destructor
// is running. Specifically if a TLS destructor re-accesses TLS it may
// trigger a re-initialization of all TLS variables, paving over at
// least some destroyed ones with initial values.
//
// This means that if we drop a TLS value in place on OSX that we could
// revert the value to its original state halfway through the
// destructor, which would be bad!
//
// Hence, we use `ptr::read` on OSX (to move to a "safe" location)
// instead of drop_in_place.
if cfg!(target_os = "macos") {
ptr::read((*ptr).inner.get());
} else {
intrinsics::drop_in_place((*ptr).inner.get());
}
}
}
#[cfg(any(not(any(target_os = "macos", target_os = "linux")),
target_arch = "aarch64",
no_elf_tls))]
#[doc(hidden)]
mod imp {
use prelude::v1::*;
use cell::{Cell, UnsafeCell};
use marker;
use ptr;
use sys_common::thread_local::StaticKey as OsStaticKey;
pub struct Key<T> {
// OS-TLS key that we'll use to key off.
os: OsStaticKey,
marker: marker::PhantomData<Cell<T>>,
}
unsafe impl<T> ::marker::Sync for Key<T> { }
struct Value<T:'static> {
key: &'static Key<T>,
value: UnsafeCell<Option<T>>,
}
impl<T:'static> Key<T> {
pub const fn new() -> Key<T> {
Key {
os: OsStaticKey::new(Some(destroy_value::<T>)),
marker: marker::PhantomData
}
}
pub unsafe fn get(&'static self) -> Option<&'static UnsafeCell<Option<T>>> {
let ptr = self.os.get() as *mut Value<T>;
if!ptr.is_null() {
if ptr as usize == 1 {
return None
}
return Some(&(*ptr).value);
}
// If the lookup returned null, we haven't initialized our own local
// copy, so do that now.
let ptr: Box<Value<T>> = box Value {
key: self,
value: UnsafeCell::new(None),
};
let ptr = Box::into_raw(ptr);
self.os.set(ptr as *mut u8);
Some(&(*ptr).value)
}
}
pub unsafe extern fn destroy_value<T:'static>(ptr: *mut u8) {
// The OS TLS ensures that this key contains a NULL value when this
// destructor starts to run. We set it back to a sentinel value of 1 to
// ensure that any future calls to `get` for this thread will return
// `None`.
//
// Note that to prevent an infinite loop we reset it back to null right
// before we return from the destructor ourselves.
let ptr = Box::from_raw(ptr as *mut Value<T>);
let key = ptr.key;
key.os.set(1 as *mut u8);
drop(ptr);
key.os.set(ptr::null_mut());
}
}
#[cfg(test)]
mod tests {
use prelude::v1::*;
use sync::mpsc::{channel, Sender};
use cell::{Cell, UnsafeCell};
use super::LocalKeyState;
use thread;
struct Foo(Sender<()>);
impl Drop for Foo {
fn drop(&mut self) {
let Foo(ref s) = *self;
s.send(()).unwrap();
}
}
#[test]
fn smoke_no_dtor() {
thread_local!(static FOO: Cell<i32> = Cell::new(1));
FOO.with(|f| {
assert_eq!(f.get(), 1);
f.set(2);
});
let (tx, rx) = channel();
let _t = thread::spawn(move|| {
FOO.with(|f| {
assert_eq!(f.get(), 1);
});
tx.send(()).unwrap();
});
rx.recv().unwrap();
FOO.with(|f| {
assert_eq!(f.get(), 2);
});
}
#[test]
fn states() {
struct Foo;
impl Drop for Foo {
fn drop(&mut self) {
assert!(FOO.state() == LocalKeyState::Destroyed);
}
}
fn foo() -> Foo {
assert!(FOO.state() == LocalKeyState::Uninitialized);
Foo
}
thread_local!(static FOO: Foo = foo());
thread::spawn(|| {
assert!(FOO.state() == LocalKeyState::Uninitialized);
FOO.with(|_| {
assert!(FOO.state() == LocalKeyState::Valid);
});
assert!(FOO.state() == LocalKeyState::Valid);
}).join().ok().unwrap();
}
#[test]
fn smoke_dtor() {
thread_local!(static FOO: UnsafeCell<Option<Foo>> = UnsafeCell::new(None));
let (tx, rx) = channel();
let _t = thread::spawn(move|| unsafe {
let mut tx = Some(tx);
FOO.with(|f| {
*f.get() = Some(Foo(tx.take().unwrap()));
});
});
rx.recv().unwrap(); | }
#[test]
fn circular() {
struct S1;
struct S2;
thread_local!(static K1: UnsafeCell<Option<S1>> = UnsafeCell::new(None));
thread_local!(static K2: UnsafeCell<Option<S2>> = UnsafeCell::new(None));
static mut HITS: u32 = 0;
impl Drop for S1 {
fn drop(&mut self) {
unsafe {
HITS += 1;
if K2.state() == LocalKeyState::Destroyed {
assert_eq!(HITS, 3);
} else {
if HITS == 1 {
K2.with(|s| *s.get() = Some(S2));
} else {
assert_eq!(HITS, 3);
}
}
}
}
}
impl Drop for S2 {
| random_line_split |
|
local.rs | `Key` is
// stored as a `static`, and it's not valid for a static to reference the
// address of another thread_local static. For this reason we kinda wonkily
// work around this by generating a shim function which will give us the
// address of the inner TLS key at runtime.
//
// This is trivially devirtualizable by LLVM because we never store anything
// to this field and rustc can declare the `static` as constant as well.
inner: fn() -> &'static __KeyInner<T>,
// initialization routine to invoke to create a value
init: fn() -> T,
}
// Macro pain #4586:
//
// When cross compiling, rustc will load plugins and macros from the *host*
// platform before search for macros from the target platform. This is primarily
// done to detect, for example, plugins. Ideally the macro below would be
// defined once per module below, but unfortunately this means we have the
// following situation:
//
// 1. We compile libstd for x86_64-unknown-linux-gnu, this thread_local!() macro
// will inject #[thread_local] statics.
// 2. We then try to compile a program for arm-linux-androideabi
// 3. The compiler has a host of linux and a target of android, so it loads
// macros from the *linux* libstd.
// 4. The macro generates a #[thread_local] field, but the android libstd does
// not use #[thread_local]
// 5. Compile error about structs with wrong fields.
//
// To get around this, we're forced to inject the #[cfg] logic into the macro
// itself. Woohoo.
/// Declare a new thread local storage key of type `std::thread::LocalKey`.
///
/// See [LocalKey documentation](thread/struct.LocalKey.html) for more
/// information.
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
#[cfg(not(no_elf_tls))]
macro_rules! thread_local {
(static $name:ident: $t:ty = $init:expr) => (
static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init,
#[cfg_attr(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64")),
thread_local)]);
);
(pub static $name:ident: $t:ty = $init:expr) => (
pub static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init,
#[cfg_attr(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64")),
thread_local)]);
);
}
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
#[cfg(no_elf_tls)]
macro_rules! thread_local {
(static $name:ident: $t:ty = $init:expr) => (
static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init, #[]);
);
(pub static $name:ident: $t:ty = $init:expr) => (
pub static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init, #[]);
);
}
#[doc(hidden)]
#[unstable(feature = "thread_local_internals",
reason = "should not be necessary")]
#[macro_export]
#[allow_internal_unstable]
macro_rules! __thread_local_inner {
($t:ty, $init:expr, #[$($attr:meta),*]) => {{
$(#[$attr])*
static __KEY: $crate::thread::__LocalKeyInner<$t> =
$crate::thread::__LocalKeyInner::new();
fn __init() -> $t { $init }
fn __getit() -> &'static $crate::thread::__LocalKeyInner<$t> { &__KEY }
$crate::thread::LocalKey::new(__getit, __init)
}}
}
/// Indicator of the state of a thread local storage key.
#[unstable(feature = "thread_local_state",
reason = "state querying was recently added")]
#[derive(Eq, PartialEq, Copy, Clone)]
pub enum LocalKeyState {
/// All keys are in this state whenever a thread starts. Keys will
/// transition to the `Valid` state once the first call to `with` happens
/// and the initialization expression succeeds.
///
/// Keys in the `Uninitialized` state will yield a reference to the closure
/// passed to `with` so long as the initialization routine does not panic.
Uninitialized,
/// Once a key has been accessed successfully, it will enter the `Valid`
/// state. Keys in the `Valid` state will remain so until the thread exits,
/// at which point the destructor will be run and the key will enter the
/// `Destroyed` state.
///
/// Keys in the `Valid` state will be guaranteed to yield a reference to the
/// closure passed to `with`.
Valid,
/// When a thread exits, the destructors for keys will be run (if
/// necessary). While a destructor is running, and possibly after a
/// destructor has run, a key is in the `Destroyed` state.
///
/// Keys in the `Destroyed` states will trigger a panic when accessed via
/// `with`.
Destroyed,
}
impl<T:'static> LocalKey<T> {
#[doc(hidden)]
#[unstable(feature = "thread_local_internals",
reason = "recently added to create a key")]
pub const fn new(inner: fn() -> &'static __KeyInner<T>,
init: fn() -> T) -> LocalKey<T> {
LocalKey {
inner: inner,
init: init
}
}
/// Acquires a reference to the value in this TLS key.
///
/// This will lazily initialize the value if this thread has not referenced
/// this key yet.
///
/// # Panics
///
/// This function will `panic!()` if the key currently has its
/// destructor running, and it **may** panic if the destructor has
/// previously been run for this thread.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn with<F, R>(&'static self, f: F) -> R
where F: FnOnce(&T) -> R {
let slot = (self.inner)();
unsafe {
let slot = slot.get().expect("cannot access a TLS value during or \
after it is destroyed");
f(match *slot.get() {
Some(ref inner) => inner,
None => self.init(slot),
})
}
}
unsafe fn init(&self, slot: &UnsafeCell<Option<T>>) -> &T {
// Execute the initialization up front, *then* move it into our slot,
// just in case initialization fails.
let value = (self.init)();
let ptr = slot.get();
*ptr = Some(value);
(*ptr).as_ref().unwrap()
}
/// Query the current state of this key.
///
/// A key is initially in the `Uninitialized` state whenever a thread
/// starts. It will remain in this state up until the first call to `with`
/// within a thread has run the initialization expression successfully.
///
/// Once the initialization expression succeeds, the key transitions to the
/// `Valid` state which will guarantee that future calls to `with` will
/// succeed within the thread.
///
/// When a thread exits, each key will be destroyed in turn, and as keys are
/// destroyed they will enter the `Destroyed` state just before the
/// destructor starts to run. Keys may remain in the `Destroyed` state after
/// destruction has completed. Keys without destructors (e.g. with types
/// that are `Copy`), may never enter the `Destroyed` state.
///
/// Keys in the `Uninitialized` can be accessed so long as the
/// initialization does not panic. Keys in the `Valid` state are guaranteed
/// to be able to be accessed. Keys in the `Destroyed` state will panic on
/// any call to `with`.
#[unstable(feature = "thread_local_state",
reason = "state querying was recently added")]
pub fn state(&'static self) -> LocalKeyState {
unsafe {
match (self.inner)().get() {
Some(cell) => {
match *cell.get() {
Some(..) => LocalKeyState::Valid,
None => LocalKeyState::Uninitialized,
}
}
None => LocalKeyState::Destroyed,
}
}
}
}
#[cfg(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64"),
not(no_elf_tls)))]
#[doc(hidden)]
mod imp {
#[cfg(stage0)]
use prelude::v1::*;
use cell::{Cell, UnsafeCell};
use intrinsics;
use ptr;
pub struct Key<T> {
inner: UnsafeCell<Option<T>>,
// Metadata to keep track of the state of the destructor. Remember that
// these variables are thread-local, not global.
dtor_registered: Cell<bool>,
dtor_running: Cell<bool>,
}
unsafe impl<T> ::marker::Sync for Key<T> { }
impl<T> Key<T> {
pub const fn new() -> Key<T> {
Key {
inner: UnsafeCell::new(None),
dtor_registered: Cell::new(false),
dtor_running: Cell::new(false)
}
}
pub unsafe fn | (&'static self) -> Option<&'static UnsafeCell<Option<T>>> {
if intrinsics::needs_drop::<T>() && self.dtor_running.get() {
return None
}
self.register_dtor();
Some(&self.inner)
}
unsafe fn register_dtor(&self) {
if!intrinsics::needs_drop::<T>() || self.dtor_registered.get() {
return
}
register_dtor(self as *const _ as *mut u8,
destroy_value::<T>);
self.dtor_registered.set(true);
}
}
// Since what appears to be glibc 2.18 this symbol has been shipped which
// GCC and clang both use to invoke destructors in thread_local globals, so
// let's do the same!
//
// Note, however, that we run on lots older linuxes, as well as cross
// compiling from a newer linux to an older linux, so we also have a
// fallback implementation to use as well.
//
// Due to rust-lang/rust#18804, make sure this is not generic!
#[cfg(target_os = "linux")]
unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) {
use prelude::v1::*;
use mem;
use libc;
use sys_common::thread_local as os;
extern {
#[linkage = "extern_weak"]
static __dso_handle: *mut u8;
#[linkage = "extern_weak"]
static __cxa_thread_atexit_impl: *const libc::c_void;
}
if!__cxa_thread_atexit_impl.is_null() {
type F = unsafe extern fn(dtor: unsafe extern fn(*mut u8),
arg: *mut u8,
dso_handle: *mut u8) -> libc::c_int;
mem::transmute::<*const libc::c_void, F>(__cxa_thread_atexit_impl)
(dtor, t, &__dso_handle as *const _ as *mut _);
return
}
// The fallback implementation uses a vanilla OS-based TLS key to track
// the list of destructors that need to be run for this thread. The key
// then has its own destructor which runs all the other destructors.
//
// The destructor for DTORS is a little special in that it has a `while`
// loop to continuously drain the list of registered destructors. It
// *should* be the case that this loop always terminates because we
// provide the guarantee that a TLS key cannot be set after it is
// flagged for destruction.
static DTORS: os::StaticKey = os::StaticKey::new(Some(run_dtors));
type List = Vec<(*mut u8, unsafe extern fn(*mut u8))>;
if DTORS.get().is_null() {
let v: Box<List> = box Vec::new();
DTORS.set(Box::into_raw(v) as *mut u8);
}
let list: &mut List = &mut *(DTORS.get() as *mut List);
list.push((t, dtor));
unsafe extern fn run_dtors(mut ptr: *mut u8) {
while!ptr.is_null() {
let list: Box<List> = Box::from_raw(ptr as *mut List);
for &(ptr, dtor) in list.iter() {
dtor(ptr);
}
ptr = DTORS.get();
DTORS.set(ptr::null_mut());
}
}
}
// OSX's analog of the above linux function is this _tlv_atexit function.
// The disassembly of thread_local globals in C++ (at least produced by
// clang) will have this show up in the output.
#[cfg(target_os = "macos")]
unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) {
extern {
fn _tlv_atexit(dtor: unsafe extern fn(*mut u8),
arg: *mut u8);
}
_tlv_atexit(dtor, t);
}
pub unsafe extern fn destroy_value<T>(ptr: *mut u8) {
let ptr = ptr as *mut Key<T>;
// Right before we run the user destructor be sure to flag the
// destructor as running for this thread so calls to `get` will return
// `None`.
(*ptr).dtor_running.set(true);
// The OSX implementation of TLS apparently had an odd aspect to it
// where the pointer we have may be overwritten while this destructor
// is running. Specifically if a TLS destructor re-accesses TLS it may
// trigger a re-initialization of all TLS variables, paving over at
// least some destroyed ones with initial values.
//
// This means that if we drop a TLS value in place on OSX that we could
// revert the value to its original state halfway through the
// destructor, which would be bad!
//
// Hence, we use `ptr::read` on OSX (to move to a "safe" location)
// instead of drop_in_place.
if cfg!(target_os = "macos") {
ptr::read((*ptr).inner.get());
} else {
intrinsics::drop_in_place((*ptr).inner.get());
}
}
}
#[cfg(any(not(any(target_os = "macos", target_os = "linux")),
target_arch = "aarch64",
no_elf_tls))]
#[doc(hidden)]
mod imp {
use prelude::v1::*;
use cell::{Cell, UnsafeCell};
use marker;
use ptr;
use sys_common::thread_local::StaticKey as OsStaticKey;
pub struct Key<T> {
// OS-TLS key that we'll use to key off.
os: OsStaticKey,
marker: marker::PhantomData<Cell<T>>,
}
unsafe impl<T> ::marker::Sync for Key<T> { }
struct Value<T:'static> {
key: &'static Key<T>,
value: UnsafeCell<Option<T>>,
}
impl<T:'static> Key<T> {
pub const fn new() -> Key<T> {
Key {
os: OsStaticKey::new(Some(destroy_value::<T>)),
marker: marker::PhantomData
}
}
pub unsafe fn get(&'static self) -> Option<&'static UnsafeCell<Option<T>>> {
let ptr = self.os.get() as *mut Value<T>;
if!ptr.is_null() {
if ptr as usize == 1 {
return None
}
return Some(&(*ptr).value);
}
// If the lookup returned null, we haven't initialized our own local
// copy, so do that now.
let ptr: Box<Value<T>> = box Value {
key: self,
value: UnsafeCell::new(None),
};
let ptr = Box::into_raw(ptr);
self.os.set(ptr as *mut u8);
Some(&(*ptr).value)
}
}
pub unsafe extern fn destroy_value<T:'static>(ptr: *mut u8) {
// The OS TLS ensures that this key contains a NULL value when this
// destructor starts to run. We set it back to a sentinel value of 1 to
// ensure that any future calls to `get` for this thread will return
// `None`.
//
// Note that to prevent an infinite loop we reset it back to null right
// before we return from the destructor ourselves.
let ptr = Box::from_raw(ptr as *mut Value<T>);
let key = ptr.key;
key.os.set(1 as *mut u8);
drop(ptr);
key.os.set(ptr::null_mut());
}
}
#[cfg(test)]
mod tests {
use prelude::v1::*;
use sync::mpsc::{channel, Sender};
use cell::{Cell, UnsafeCell};
use super::LocalKeyState;
use thread;
struct Foo(Sender<()>);
impl Drop for Foo {
fn drop(&mut self) {
let Foo(ref s) = *self;
s.send(()).unwrap();
}
}
#[test]
fn smoke_no_dtor() {
thread_local!(static FOO: Cell<i32> = Cell::new(1));
FOO.with(|f| {
assert_eq!(f.get(), 1);
f.set(2);
});
let (tx, rx) = channel();
let _t = thread::spawn(move|| {
FOO.with(|f| {
assert_eq!(f.get(), 1);
});
tx.send(()).unwrap();
});
rx.recv().unwrap();
FOO.with(|f| {
assert_eq!(f.get(), 2);
});
}
#[test]
fn states() {
struct Foo;
impl Drop for Foo {
fn drop(&mut self) {
assert!(FOO.state() == LocalKeyState::Destroyed);
}
}
fn foo() -> Foo {
assert!(FOO.state() == LocalKeyState::Uninitialized);
Foo
}
thread_local!(static FOO: Foo = foo());
thread::spawn(|| {
assert!(FOO.state() == LocalKeyState::Uninitialized);
FOO.with(|_| {
assert!(FOO.state() == LocalKeyState::Valid);
});
assert!(FOO.state() == LocalKeyState::Valid);
}).join().ok().unwrap();
}
#[test]
fn smoke_dtor() {
thread_local!(static FOO: UnsafeCell<Option<Foo>> = UnsafeCell::new(None));
let (tx, rx) = channel();
let _t = thread::spawn(move|| unsafe {
let mut tx = Some(tx);
FOO.with(|f| {
*f.get() = Some(Foo(tx.take().unwrap()));
});
});
rx.recv().unwrap();
}
#[test]
fn circular() {
struct S1;
struct S2;
thread_local!(static K1: UnsafeCell<Option<S1>> = UnsafeCell::new(None));
thread_local!(static K2: UnsafeCell<Option<S2>> = UnsafeCell::new(None));
static mut HITS: u32 = 0;
impl Drop for S1 {
fn drop(&mut self) {
unsafe {
HITS += 1;
if K2.state() == LocalKeyState::Destroyed {
assert_eq!(HITS, 3);
} else {
if HITS == 1 {
K2.with(|s| *s.get() = Some(S2));
} else {
assert_eq!(HITS, 3);
}
}
}
}
}
impl Drop for S2 {
| get | identifier_name |
local.rs | __thread_local_inner!($t, $init,
#[cfg_attr(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64")),
thread_local)]);
);
}
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
#[cfg(no_elf_tls)]
macro_rules! thread_local {
(static $name:ident: $t:ty = $init:expr) => (
static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init, #[]);
);
(pub static $name:ident: $t:ty = $init:expr) => (
pub static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init, #[]);
);
}
#[doc(hidden)]
#[unstable(feature = "thread_local_internals",
reason = "should not be necessary")]
#[macro_export]
#[allow_internal_unstable]
macro_rules! __thread_local_inner {
($t:ty, $init:expr, #[$($attr:meta),*]) => {{
$(#[$attr])*
static __KEY: $crate::thread::__LocalKeyInner<$t> =
$crate::thread::__LocalKeyInner::new();
fn __init() -> $t { $init }
fn __getit() -> &'static $crate::thread::__LocalKeyInner<$t> { &__KEY }
$crate::thread::LocalKey::new(__getit, __init)
}}
}
/// Indicator of the state of a thread local storage key.
#[unstable(feature = "thread_local_state",
reason = "state querying was recently added")]
#[derive(Eq, PartialEq, Copy, Clone)]
pub enum LocalKeyState {
/// All keys are in this state whenever a thread starts. Keys will
/// transition to the `Valid` state once the first call to `with` happens
/// and the initialization expression succeeds.
///
/// Keys in the `Uninitialized` state will yield a reference to the closure
/// passed to `with` so long as the initialization routine does not panic.
Uninitialized,
/// Once a key has been accessed successfully, it will enter the `Valid`
/// state. Keys in the `Valid` state will remain so until the thread exits,
/// at which point the destructor will be run and the key will enter the
/// `Destroyed` state.
///
/// Keys in the `Valid` state will be guaranteed to yield a reference to the
/// closure passed to `with`.
Valid,
/// When a thread exits, the destructors for keys will be run (if
/// necessary). While a destructor is running, and possibly after a
/// destructor has run, a key is in the `Destroyed` state.
///
/// Keys in the `Destroyed` states will trigger a panic when accessed via
/// `with`.
Destroyed,
}
impl<T:'static> LocalKey<T> {
#[doc(hidden)]
#[unstable(feature = "thread_local_internals",
reason = "recently added to create a key")]
pub const fn new(inner: fn() -> &'static __KeyInner<T>,
init: fn() -> T) -> LocalKey<T> {
LocalKey {
inner: inner,
init: init
}
}
/// Acquires a reference to the value in this TLS key.
///
/// This will lazily initialize the value if this thread has not referenced
/// this key yet.
///
/// # Panics
///
/// This function will `panic!()` if the key currently has its
/// destructor running, and it **may** panic if the destructor has
/// previously been run for this thread.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn with<F, R>(&'static self, f: F) -> R
where F: FnOnce(&T) -> R {
let slot = (self.inner)();
unsafe {
let slot = slot.get().expect("cannot access a TLS value during or \
after it is destroyed");
f(match *slot.get() {
Some(ref inner) => inner,
None => self.init(slot),
})
}
}
unsafe fn init(&self, slot: &UnsafeCell<Option<T>>) -> &T {
// Execute the initialization up front, *then* move it into our slot,
// just in case initialization fails.
let value = (self.init)();
let ptr = slot.get();
*ptr = Some(value);
(*ptr).as_ref().unwrap()
}
/// Query the current state of this key.
///
/// A key is initially in the `Uninitialized` state whenever a thread
/// starts. It will remain in this state up until the first call to `with`
/// within a thread has run the initialization expression successfully.
///
/// Once the initialization expression succeeds, the key transitions to the
/// `Valid` state which will guarantee that future calls to `with` will
/// succeed within the thread.
///
/// When a thread exits, each key will be destroyed in turn, and as keys are
/// destroyed they will enter the `Destroyed` state just before the
/// destructor starts to run. Keys may remain in the `Destroyed` state after
/// destruction has completed. Keys without destructors (e.g. with types
/// that are `Copy`), may never enter the `Destroyed` state.
///
/// Keys in the `Uninitialized` can be accessed so long as the
/// initialization does not panic. Keys in the `Valid` state are guaranteed
/// to be able to be accessed. Keys in the `Destroyed` state will panic on
/// any call to `with`.
#[unstable(feature = "thread_local_state",
reason = "state querying was recently added")]
pub fn state(&'static self) -> LocalKeyState {
unsafe {
match (self.inner)().get() {
Some(cell) => {
match *cell.get() {
Some(..) => LocalKeyState::Valid,
None => LocalKeyState::Uninitialized,
}
}
None => LocalKeyState::Destroyed,
}
}
}
}
#[cfg(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64"),
not(no_elf_tls)))]
#[doc(hidden)]
mod imp {
#[cfg(stage0)]
use prelude::v1::*;
use cell::{Cell, UnsafeCell};
use intrinsics;
use ptr;
pub struct Key<T> {
inner: UnsafeCell<Option<T>>,
// Metadata to keep track of the state of the destructor. Remember that
// these variables are thread-local, not global.
dtor_registered: Cell<bool>,
dtor_running: Cell<bool>,
}
unsafe impl<T> ::marker::Sync for Key<T> { }
impl<T> Key<T> {
pub const fn new() -> Key<T> {
Key {
inner: UnsafeCell::new(None),
dtor_registered: Cell::new(false),
dtor_running: Cell::new(false)
}
}
pub unsafe fn get(&'static self) -> Option<&'static UnsafeCell<Option<T>>> {
if intrinsics::needs_drop::<T>() && self.dtor_running.get() {
return None
}
self.register_dtor();
Some(&self.inner)
}
unsafe fn register_dtor(&self) {
if!intrinsics::needs_drop::<T>() || self.dtor_registered.get() {
return
}
register_dtor(self as *const _ as *mut u8,
destroy_value::<T>);
self.dtor_registered.set(true);
}
}
// Since what appears to be glibc 2.18 this symbol has been shipped which
// GCC and clang both use to invoke destructors in thread_local globals, so
// let's do the same!
//
// Note, however, that we run on lots older linuxes, as well as cross
// compiling from a newer linux to an older linux, so we also have a
// fallback implementation to use as well.
//
// Due to rust-lang/rust#18804, make sure this is not generic!
#[cfg(target_os = "linux")]
unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) {
use prelude::v1::*;
use mem;
use libc;
use sys_common::thread_local as os;
extern {
#[linkage = "extern_weak"]
static __dso_handle: *mut u8;
#[linkage = "extern_weak"]
static __cxa_thread_atexit_impl: *const libc::c_void;
}
if!__cxa_thread_atexit_impl.is_null() {
type F = unsafe extern fn(dtor: unsafe extern fn(*mut u8),
arg: *mut u8,
dso_handle: *mut u8) -> libc::c_int;
mem::transmute::<*const libc::c_void, F>(__cxa_thread_atexit_impl)
(dtor, t, &__dso_handle as *const _ as *mut _);
return
}
// The fallback implementation uses a vanilla OS-based TLS key to track
// the list of destructors that need to be run for this thread. The key
// then has its own destructor which runs all the other destructors.
//
// The destructor for DTORS is a little special in that it has a `while`
// loop to continuously drain the list of registered destructors. It
// *should* be the case that this loop always terminates because we
// provide the guarantee that a TLS key cannot be set after it is
// flagged for destruction.
static DTORS: os::StaticKey = os::StaticKey::new(Some(run_dtors));
type List = Vec<(*mut u8, unsafe extern fn(*mut u8))>;
if DTORS.get().is_null() {
let v: Box<List> = box Vec::new();
DTORS.set(Box::into_raw(v) as *mut u8);
}
let list: &mut List = &mut *(DTORS.get() as *mut List);
list.push((t, dtor));
unsafe extern fn run_dtors(mut ptr: *mut u8) {
while!ptr.is_null() {
let list: Box<List> = Box::from_raw(ptr as *mut List);
for &(ptr, dtor) in list.iter() {
dtor(ptr);
}
ptr = DTORS.get();
DTORS.set(ptr::null_mut());
}
}
}
// OSX's analog of the above linux function is this _tlv_atexit function.
// The disassembly of thread_local globals in C++ (at least produced by
// clang) will have this show up in the output.
#[cfg(target_os = "macos")]
unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) {
extern {
fn _tlv_atexit(dtor: unsafe extern fn(*mut u8),
arg: *mut u8);
}
_tlv_atexit(dtor, t);
}
pub unsafe extern fn destroy_value<T>(ptr: *mut u8) {
let ptr = ptr as *mut Key<T>;
// Right before we run the user destructor be sure to flag the
// destructor as running for this thread so calls to `get` will return
// `None`.
(*ptr).dtor_running.set(true);
// The OSX implementation of TLS apparently had an odd aspect to it
// where the pointer we have may be overwritten while this destructor
// is running. Specifically if a TLS destructor re-accesses TLS it may
// trigger a re-initialization of all TLS variables, paving over at
// least some destroyed ones with initial values.
//
// This means that if we drop a TLS value in place on OSX that we could
// revert the value to its original state halfway through the
// destructor, which would be bad!
//
// Hence, we use `ptr::read` on OSX (to move to a "safe" location)
// instead of drop_in_place.
if cfg!(target_os = "macos") {
ptr::read((*ptr).inner.get());
} else {
intrinsics::drop_in_place((*ptr).inner.get());
}
}
}
#[cfg(any(not(any(target_os = "macos", target_os = "linux")),
target_arch = "aarch64",
no_elf_tls))]
#[doc(hidden)]
mod imp {
use prelude::v1::*;
use cell::{Cell, UnsafeCell};
use marker;
use ptr;
use sys_common::thread_local::StaticKey as OsStaticKey;
pub struct Key<T> {
// OS-TLS key that we'll use to key off.
os: OsStaticKey,
marker: marker::PhantomData<Cell<T>>,
}
unsafe impl<T> ::marker::Sync for Key<T> { }
struct Value<T:'static> {
key: &'static Key<T>,
value: UnsafeCell<Option<T>>,
}
impl<T:'static> Key<T> {
pub const fn new() -> Key<T> {
Key {
os: OsStaticKey::new(Some(destroy_value::<T>)),
marker: marker::PhantomData
}
}
pub unsafe fn get(&'static self) -> Option<&'static UnsafeCell<Option<T>>> {
let ptr = self.os.get() as *mut Value<T>;
if!ptr.is_null() {
if ptr as usize == 1 {
return None
}
return Some(&(*ptr).value);
}
// If the lookup returned null, we haven't initialized our own local
// copy, so do that now.
let ptr: Box<Value<T>> = box Value {
key: self,
value: UnsafeCell::new(None),
};
let ptr = Box::into_raw(ptr);
self.os.set(ptr as *mut u8);
Some(&(*ptr).value)
}
}
pub unsafe extern fn destroy_value<T:'static>(ptr: *mut u8) {
// The OS TLS ensures that this key contains a NULL value when this
// destructor starts to run. We set it back to a sentinel value of 1 to
// ensure that any future calls to `get` for this thread will return
// `None`.
//
// Note that to prevent an infinite loop we reset it back to null right
// before we return from the destructor ourselves.
let ptr = Box::from_raw(ptr as *mut Value<T>);
let key = ptr.key;
key.os.set(1 as *mut u8);
drop(ptr);
key.os.set(ptr::null_mut());
}
}
#[cfg(test)]
mod tests {
use prelude::v1::*;
use sync::mpsc::{channel, Sender};
use cell::{Cell, UnsafeCell};
use super::LocalKeyState;
use thread;
struct Foo(Sender<()>);
impl Drop for Foo {
fn drop(&mut self) {
let Foo(ref s) = *self;
s.send(()).unwrap();
}
}
#[test]
fn smoke_no_dtor() {
thread_local!(static FOO: Cell<i32> = Cell::new(1));
FOO.with(|f| {
assert_eq!(f.get(), 1);
f.set(2);
});
let (tx, rx) = channel();
let _t = thread::spawn(move|| {
FOO.with(|f| {
assert_eq!(f.get(), 1);
});
tx.send(()).unwrap();
});
rx.recv().unwrap();
FOO.with(|f| {
assert_eq!(f.get(), 2);
});
}
#[test]
fn states() {
struct Foo;
impl Drop for Foo {
fn drop(&mut self) {
assert!(FOO.state() == LocalKeyState::Destroyed);
}
}
fn foo() -> Foo {
assert!(FOO.state() == LocalKeyState::Uninitialized);
Foo
}
thread_local!(static FOO: Foo = foo());
thread::spawn(|| {
assert!(FOO.state() == LocalKeyState::Uninitialized);
FOO.with(|_| {
assert!(FOO.state() == LocalKeyState::Valid);
});
assert!(FOO.state() == LocalKeyState::Valid);
}).join().ok().unwrap();
}
#[test]
fn smoke_dtor() {
thread_local!(static FOO: UnsafeCell<Option<Foo>> = UnsafeCell::new(None));
let (tx, rx) = channel();
let _t = thread::spawn(move|| unsafe {
let mut tx = Some(tx);
FOO.with(|f| {
*f.get() = Some(Foo(tx.take().unwrap()));
});
});
rx.recv().unwrap();
}
#[test]
fn circular() {
struct S1;
struct S2;
thread_local!(static K1: UnsafeCell<Option<S1>> = UnsafeCell::new(None));
thread_local!(static K2: UnsafeCell<Option<S2>> = UnsafeCell::new(None));
static mut HITS: u32 = 0;
impl Drop for S1 {
fn drop(&mut self) {
unsafe {
HITS += 1;
if K2.state() == LocalKeyState::Destroyed {
assert_eq!(HITS, 3);
} else {
if HITS == 1 {
K2.with(|s| *s.get() = Some(S2));
} else {
assert_eq!(HITS, 3);
}
}
}
}
}
impl Drop for S2 {
fn drop(&mut self) {
unsafe {
HITS += 1;
assert!(K1.state()!= LocalKeyState::Destroyed);
assert_eq!(HITS, 2);
K1.with(|s| *s.get() = Some(S1));
}
}
}
thread::spawn(move|| {
drop(S1);
}).join().ok().unwrap();
}
#[test]
fn self_referential() {
struct S1;
thread_local!(static K1: UnsafeCell<Option<S1>> = UnsafeCell::new(None));
impl Drop for S1 {
fn drop(&mut self) {
assert!(K1.state() == LocalKeyState::Destroyed);
}
}
thread::spawn(move|| unsafe {
K1.with(|s| *s.get() = Some(S1));
}).join().ok().unwrap();
}
#[test]
fn dtors_in_dtors_in_dtors() {
struct S1(Sender<()>);
thread_local!(static K1: UnsafeCell<Option<S1>> = UnsafeCell::new(None));
thread_local!(static K2: UnsafeCell<Option<Foo>> = UnsafeCell::new(None));
impl Drop for S1 {
fn drop(&mut self) {
let S1(ref tx) = *self;
unsafe {
if K2.state()!= LocalKeyState::Destroyed | {
K2.with(|s| *s.get() = Some(Foo(tx.clone())));
} | conditional_block |
|
local.rs | `Key` is
// stored as a `static`, and it's not valid for a static to reference the
// address of another thread_local static. For this reason we kinda wonkily
// work around this by generating a shim function which will give us the
// address of the inner TLS key at runtime.
//
// This is trivially devirtualizable by LLVM because we never store anything
// to this field and rustc can declare the `static` as constant as well.
inner: fn() -> &'static __KeyInner<T>,
// initialization routine to invoke to create a value
init: fn() -> T,
}
// Macro pain #4586:
//
// When cross compiling, rustc will load plugins and macros from the *host*
// platform before search for macros from the target platform. This is primarily
// done to detect, for example, plugins. Ideally the macro below would be
// defined once per module below, but unfortunately this means we have the
// following situation:
//
// 1. We compile libstd for x86_64-unknown-linux-gnu, this thread_local!() macro
// will inject #[thread_local] statics.
// 2. We then try to compile a program for arm-linux-androideabi
// 3. The compiler has a host of linux and a target of android, so it loads
// macros from the *linux* libstd.
// 4. The macro generates a #[thread_local] field, but the android libstd does
// not use #[thread_local]
// 5. Compile error about structs with wrong fields.
//
// To get around this, we're forced to inject the #[cfg] logic into the macro
// itself. Woohoo.
/// Declare a new thread local storage key of type `std::thread::LocalKey`.
///
/// See [LocalKey documentation](thread/struct.LocalKey.html) for more
/// information.
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
#[cfg(not(no_elf_tls))]
macro_rules! thread_local {
(static $name:ident: $t:ty = $init:expr) => (
static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init,
#[cfg_attr(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64")),
thread_local)]);
);
(pub static $name:ident: $t:ty = $init:expr) => (
pub static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init,
#[cfg_attr(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64")),
thread_local)]);
);
}
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
#[cfg(no_elf_tls)]
macro_rules! thread_local {
(static $name:ident: $t:ty = $init:expr) => (
static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init, #[]);
);
(pub static $name:ident: $t:ty = $init:expr) => (
pub static $name: $crate::thread::LocalKey<$t> =
__thread_local_inner!($t, $init, #[]);
);
}
#[doc(hidden)]
#[unstable(feature = "thread_local_internals",
reason = "should not be necessary")]
#[macro_export]
#[allow_internal_unstable]
macro_rules! __thread_local_inner {
($t:ty, $init:expr, #[$($attr:meta),*]) => {{
$(#[$attr])*
static __KEY: $crate::thread::__LocalKeyInner<$t> =
$crate::thread::__LocalKeyInner::new();
fn __init() -> $t { $init }
fn __getit() -> &'static $crate::thread::__LocalKeyInner<$t> { &__KEY }
$crate::thread::LocalKey::new(__getit, __init)
}}
}
/// Indicator of the state of a thread local storage key.
#[unstable(feature = "thread_local_state",
reason = "state querying was recently added")]
#[derive(Eq, PartialEq, Copy, Clone)]
pub enum LocalKeyState {
/// All keys are in this state whenever a thread starts. Keys will
/// transition to the `Valid` state once the first call to `with` happens
/// and the initialization expression succeeds.
///
/// Keys in the `Uninitialized` state will yield a reference to the closure
/// passed to `with` so long as the initialization routine does not panic.
Uninitialized,
/// Once a key has been accessed successfully, it will enter the `Valid`
/// state. Keys in the `Valid` state will remain so until the thread exits,
/// at which point the destructor will be run and the key will enter the
/// `Destroyed` state.
///
/// Keys in the `Valid` state will be guaranteed to yield a reference to the
/// closure passed to `with`.
Valid,
/// When a thread exits, the destructors for keys will be run (if
/// necessary). While a destructor is running, and possibly after a
/// destructor has run, a key is in the `Destroyed` state.
///
/// Keys in the `Destroyed` states will trigger a panic when accessed via
/// `with`.
Destroyed,
}
impl<T:'static> LocalKey<T> {
#[doc(hidden)]
#[unstable(feature = "thread_local_internals",
reason = "recently added to create a key")]
pub const fn new(inner: fn() -> &'static __KeyInner<T>,
init: fn() -> T) -> LocalKey<T> {
LocalKey {
inner: inner,
init: init
}
}
/// Acquires a reference to the value in this TLS key.
///
/// This will lazily initialize the value if this thread has not referenced
/// this key yet.
///
/// # Panics
///
/// This function will `panic!()` if the key currently has its
/// destructor running, and it **may** panic if the destructor has
/// previously been run for this thread.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn with<F, R>(&'static self, f: F) -> R
where F: FnOnce(&T) -> R {
let slot = (self.inner)();
unsafe {
let slot = slot.get().expect("cannot access a TLS value during or \
after it is destroyed");
f(match *slot.get() {
Some(ref inner) => inner,
None => self.init(slot),
})
}
}
unsafe fn init(&self, slot: &UnsafeCell<Option<T>>) -> &T {
// Execute the initialization up front, *then* move it into our slot,
// just in case initialization fails.
let value = (self.init)();
let ptr = slot.get();
*ptr = Some(value);
(*ptr).as_ref().unwrap()
}
/// Query the current state of this key.
///
/// A key is initially in the `Uninitialized` state whenever a thread
/// starts. It will remain in this state up until the first call to `with`
/// within a thread has run the initialization expression successfully.
///
/// Once the initialization expression succeeds, the key transitions to the
/// `Valid` state which will guarantee that future calls to `with` will
/// succeed within the thread.
///
/// When a thread exits, each key will be destroyed in turn, and as keys are
/// destroyed they will enter the `Destroyed` state just before the
/// destructor starts to run. Keys may remain in the `Destroyed` state after
/// destruction has completed. Keys without destructors (e.g. with types
/// that are `Copy`), may never enter the `Destroyed` state.
///
/// Keys in the `Uninitialized` can be accessed so long as the
/// initialization does not panic. Keys in the `Valid` state are guaranteed
/// to be able to be accessed. Keys in the `Destroyed` state will panic on
/// any call to `with`.
#[unstable(feature = "thread_local_state",
reason = "state querying was recently added")]
pub fn state(&'static self) -> LocalKeyState {
unsafe {
match (self.inner)().get() {
Some(cell) => {
match *cell.get() {
Some(..) => LocalKeyState::Valid,
None => LocalKeyState::Uninitialized,
}
}
None => LocalKeyState::Destroyed,
}
}
}
}
#[cfg(all(any(target_os = "macos", target_os = "linux"),
not(target_arch = "aarch64"),
not(no_elf_tls)))]
#[doc(hidden)]
mod imp {
#[cfg(stage0)]
use prelude::v1::*;
use cell::{Cell, UnsafeCell};
use intrinsics;
use ptr;
pub struct Key<T> {
inner: UnsafeCell<Option<T>>,
// Metadata to keep track of the state of the destructor. Remember that
// these variables are thread-local, not global.
dtor_registered: Cell<bool>,
dtor_running: Cell<bool>,
}
unsafe impl<T> ::marker::Sync for Key<T> { }
impl<T> Key<T> {
pub const fn new() -> Key<T> {
Key {
inner: UnsafeCell::new(None),
dtor_registered: Cell::new(false),
dtor_running: Cell::new(false)
}
}
pub unsafe fn get(&'static self) -> Option<&'static UnsafeCell<Option<T>>> {
if intrinsics::needs_drop::<T>() && self.dtor_running.get() {
return None
}
self.register_dtor();
Some(&self.inner)
}
unsafe fn register_dtor(&self) {
if!intrinsics::needs_drop::<T>() || self.dtor_registered.get() {
return
}
register_dtor(self as *const _ as *mut u8,
destroy_value::<T>);
self.dtor_registered.set(true);
}
}
// Since what appears to be glibc 2.18 this symbol has been shipped which
// GCC and clang both use to invoke destructors in thread_local globals, so
// let's do the same!
//
// Note, however, that we run on lots older linuxes, as well as cross
// compiling from a newer linux to an older linux, so we also have a
// fallback implementation to use as well.
//
// Due to rust-lang/rust#18804, make sure this is not generic!
#[cfg(target_os = "linux")]
unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) {
use prelude::v1::*;
use mem;
use libc;
use sys_common::thread_local as os;
extern {
#[linkage = "extern_weak"]
static __dso_handle: *mut u8;
#[linkage = "extern_weak"]
static __cxa_thread_atexit_impl: *const libc::c_void;
}
if!__cxa_thread_atexit_impl.is_null() {
type F = unsafe extern fn(dtor: unsafe extern fn(*mut u8),
arg: *mut u8,
dso_handle: *mut u8) -> libc::c_int;
mem::transmute::<*const libc::c_void, F>(__cxa_thread_atexit_impl)
(dtor, t, &__dso_handle as *const _ as *mut _);
return
}
// The fallback implementation uses a vanilla OS-based TLS key to track
// the list of destructors that need to be run for this thread. The key
// then has its own destructor which runs all the other destructors.
//
// The destructor for DTORS is a little special in that it has a `while`
// loop to continuously drain the list of registered destructors. It
// *should* be the case that this loop always terminates because we
// provide the guarantee that a TLS key cannot be set after it is
// flagged for destruction.
static DTORS: os::StaticKey = os::StaticKey::new(Some(run_dtors));
type List = Vec<(*mut u8, unsafe extern fn(*mut u8))>;
if DTORS.get().is_null() {
let v: Box<List> = box Vec::new();
DTORS.set(Box::into_raw(v) as *mut u8);
}
let list: &mut List = &mut *(DTORS.get() as *mut List);
list.push((t, dtor));
unsafe extern fn run_dtors(mut ptr: *mut u8) {
while!ptr.is_null() {
let list: Box<List> = Box::from_raw(ptr as *mut List);
for &(ptr, dtor) in list.iter() {
dtor(ptr);
}
ptr = DTORS.get();
DTORS.set(ptr::null_mut());
}
}
}
// OSX's analog of the above linux function is this _tlv_atexit function.
// The disassembly of thread_local globals in C++ (at least produced by
// clang) will have this show up in the output.
#[cfg(target_os = "macos")]
unsafe fn register_dtor(t: *mut u8, dtor: unsafe extern fn(*mut u8)) {
extern {
fn _tlv_atexit(dtor: unsafe extern fn(*mut u8),
arg: *mut u8);
}
_tlv_atexit(dtor, t);
}
pub unsafe extern fn destroy_value<T>(ptr: *mut u8) | ptr::read((*ptr).inner.get());
} else {
intrinsics::drop_in_place((*ptr).inner.get());
}
}
}
#[cfg(any(not(any(target_os = "macos", target_os = "linux")),
target_arch = "aarch64",
no_elf_tls))]
#[doc(hidden)]
mod imp {
use prelude::v1::*;
use cell::{Cell, UnsafeCell};
use marker;
use ptr;
use sys_common::thread_local::StaticKey as OsStaticKey;
pub struct Key<T> {
// OS-TLS key that we'll use to key off.
os: OsStaticKey,
marker: marker::PhantomData<Cell<T>>,
}
unsafe impl<T> ::marker::Sync for Key<T> { }
struct Value<T:'static> {
key: &'static Key<T>,
value: UnsafeCell<Option<T>>,
}
impl<T:'static> Key<T> {
pub const fn new() -> Key<T> {
Key {
os: OsStaticKey::new(Some(destroy_value::<T>)),
marker: marker::PhantomData
}
}
pub unsafe fn get(&'static self) -> Option<&'static UnsafeCell<Option<T>>> {
let ptr = self.os.get() as *mut Value<T>;
if!ptr.is_null() {
if ptr as usize == 1 {
return None
}
return Some(&(*ptr).value);
}
// If the lookup returned null, we haven't initialized our own local
// copy, so do that now.
let ptr: Box<Value<T>> = box Value {
key: self,
value: UnsafeCell::new(None),
};
let ptr = Box::into_raw(ptr);
self.os.set(ptr as *mut u8);
Some(&(*ptr).value)
}
}
pub unsafe extern fn destroy_value<T:'static>(ptr: *mut u8) {
// The OS TLS ensures that this key contains a NULL value when this
// destructor starts to run. We set it back to a sentinel value of 1 to
// ensure that any future calls to `get` for this thread will return
// `None`.
//
// Note that to prevent an infinite loop we reset it back to null right
// before we return from the destructor ourselves.
let ptr = Box::from_raw(ptr as *mut Value<T>);
let key = ptr.key;
key.os.set(1 as *mut u8);
drop(ptr);
key.os.set(ptr::null_mut());
}
}
#[cfg(test)]
mod tests {
use prelude::v1::*;
use sync::mpsc::{channel, Sender};
use cell::{Cell, UnsafeCell};
use super::LocalKeyState;
use thread;
struct Foo(Sender<()>);
impl Drop for Foo {
fn drop(&mut self) {
let Foo(ref s) = *self;
s.send(()).unwrap();
}
}
#[test]
fn smoke_no_dtor() {
thread_local!(static FOO: Cell<i32> = Cell::new(1));
FOO.with(|f| {
assert_eq!(f.get(), 1);
f.set(2);
});
let (tx, rx) = channel();
let _t = thread::spawn(move|| {
FOO.with(|f| {
assert_eq!(f.get(), 1);
});
tx.send(()).unwrap();
});
rx.recv().unwrap();
FOO.with(|f| {
assert_eq!(f.get(), 2);
});
}
#[test]
fn states() {
struct Foo;
impl Drop for Foo {
fn drop(&mut self) {
assert!(FOO.state() == LocalKeyState::Destroyed);
}
}
fn foo() -> Foo {
assert!(FOO.state() == LocalKeyState::Uninitialized);
Foo
}
thread_local!(static FOO: Foo = foo());
thread::spawn(|| {
assert!(FOO.state() == LocalKeyState::Uninitialized);
FOO.with(|_| {
assert!(FOO.state() == LocalKeyState::Valid);
});
assert!(FOO.state() == LocalKeyState::Valid);
}).join().ok().unwrap();
}
#[test]
fn smoke_dtor() {
thread_local!(static FOO: UnsafeCell<Option<Foo>> = UnsafeCell::new(None));
let (tx, rx) = channel();
let _t = thread::spawn(move|| unsafe {
let mut tx = Some(tx);
FOO.with(|f| {
*f.get() = Some(Foo(tx.take().unwrap()));
});
});
rx.recv().unwrap();
}
#[test]
fn circular() {
struct S1;
struct S2;
thread_local!(static K1: UnsafeCell<Option<S1>> = UnsafeCell::new(None));
thread_local!(static K2: UnsafeCell<Option<S2>> = UnsafeCell::new(None));
static mut HITS: u32 = 0;
impl Drop for S1 {
fn drop(&mut self) {
unsafe {
HITS += 1;
if K2.state() == LocalKeyState::Destroyed {
assert_eq!(HITS, 3);
} else {
if HITS == 1 {
K2.with(|s| *s.get() = Some(S2));
} else {
assert_eq!(HITS, 3);
}
}
}
}
}
impl Drop for S2 {
| {
let ptr = ptr as *mut Key<T>;
// Right before we run the user destructor be sure to flag the
// destructor as running for this thread so calls to `get` will return
// `None`.
(*ptr).dtor_running.set(true);
// The OSX implementation of TLS apparently had an odd aspect to it
// where the pointer we have may be overwritten while this destructor
// is running. Specifically if a TLS destructor re-accesses TLS it may
// trigger a re-initialization of all TLS variables, paving over at
// least some destroyed ones with initial values.
//
// This means that if we drop a TLS value in place on OSX that we could
// revert the value to its original state halfway through the
// destructor, which would be bad!
//
// Hence, we use `ptr::read` on OSX (to move to a "safe" location)
// instead of drop_in_place.
if cfg!(target_os = "macos") { | identifier_body |
addrinfo.rs | // Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use ai = std::io::net::addrinfo;
use std::cast;
use std::libc;
use std::libc::c_int;
use std::ptr::null;
use std::rt::task::BlockedTask;
use net;
use super::{Loop, UvError, Request, wait_until_woken_after, wakeup};
use uvll;
struct Addrinfo {
handle: *libc::addrinfo,
}
struct Ctx {
slot: Option<BlockedTask>,
status: c_int,
addrinfo: Option<Addrinfo>,
}
pub struct GetAddrInfoRequest;
impl GetAddrInfoRequest {
pub fn run(loop_: &Loop, node: Option<&str>, service: Option<&str>,
hints: Option<ai::Hint>) -> Result<~[ai::Info], UvError> {
assert!(node.is_some() || service.is_some());
let (_c_node, c_node_ptr) = match node {
Some(n) => {
let c_node = n.to_c_str();
let c_node_ptr = c_node.with_ref(|r| r);
(Some(c_node), c_node_ptr)
}
None => (None, null())
};
let (_c_service, c_service_ptr) = match service {
Some(s) => {
let c_service = s.to_c_str();
let c_service_ptr = c_service.with_ref(|r| r);
(Some(c_service), c_service_ptr)
}
None => (None, null())
};
let hint = hints.map(|hint| {
let mut flags = 0;
each_ai_flag(|cval, aival| {
if hint.flags & (aival as uint)!= 0 {
flags |= cval as i32;
}
});
let socktype = 0;
let protocol = 0;
libc::addrinfo {
ai_flags: flags,
ai_family: hint.family as c_int,
ai_socktype: socktype,
ai_protocol: protocol,
ai_addrlen: 0,
ai_canonname: null(),
ai_addr: null(),
ai_next: null(),
}
});
let hint_ptr = hint.as_ref().map_or(null(), |x| x as *libc::addrinfo);
let mut req = Request::new(uvll::UV_GETADDRINFO);
return match unsafe {
uvll::uv_getaddrinfo(loop_.handle, req.handle,
getaddrinfo_cb, c_node_ptr, c_service_ptr,
hint_ptr)
} {
0 => {
req.defuse(); // uv callback now owns this request
let mut cx = Ctx { slot: None, status: 0, addrinfo: None };
wait_until_woken_after(&mut cx.slot, loop_, || {
req.set_data(&cx);
});
match cx.status {
0 => Ok(accum_addrinfo(cx.addrinfo.get_ref())),
n => Err(UvError(n))
}
}
n => Err(UvError(n))
};
extern fn getaddrinfo_cb(req: *uvll::uv_getaddrinfo_t,
status: c_int,
res: *libc::addrinfo) {
let req = Request::wrap(req);
assert!(status!= uvll::ECANCELED);
let cx: &mut Ctx = unsafe { req.get_data() };
cx.status = status;
cx.addrinfo = Some(Addrinfo { handle: res });
wakeup(&mut cx.slot);
}
}
}
| }
fn each_ai_flag(_f: |c_int, ai::Flag|) {
/* XXX: do we really want to support these?
unsafe {
f(uvll::rust_AI_ADDRCONFIG(), ai::AddrConfig);
f(uvll::rust_AI_ALL(), ai::All);
f(uvll::rust_AI_CANONNAME(), ai::CanonName);
f(uvll::rust_AI_NUMERICHOST(), ai::NumericHost);
f(uvll::rust_AI_NUMERICSERV(), ai::NumericServ);
f(uvll::rust_AI_PASSIVE(), ai::Passive);
f(uvll::rust_AI_V4MAPPED(), ai::V4Mapped);
}
*/
}
// Traverse the addrinfo linked list, producing a vector of Rust socket addresses
pub fn accum_addrinfo(addr: &Addrinfo) -> ~[ai::Info] {
unsafe {
let mut addr = addr.handle;
let mut addrs = ~[];
loop {
let rustaddr = net::sockaddr_to_addr(cast::transmute((*addr).ai_addr),
(*addr).ai_addrlen as uint);
let mut flags = 0;
each_ai_flag(|cval, aival| {
if (*addr).ai_flags & cval!= 0 {
flags |= aival as uint;
}
});
/* XXX: do we really want to support these
let protocol = match (*addr).ai_protocol {
p if p == uvll::rust_IPPROTO_UDP() => Some(ai::UDP),
p if p == uvll::rust_IPPROTO_TCP() => Some(ai::TCP),
_ => None,
};
let socktype = match (*addr).ai_socktype {
p if p == uvll::rust_SOCK_STREAM() => Some(ai::Stream),
p if p == uvll::rust_SOCK_DGRAM() => Some(ai::Datagram),
p if p == uvll::rust_SOCK_RAW() => Some(ai::Raw),
_ => None,
};
*/
let protocol = None;
let socktype = None;
addrs.push(ai::Info {
address: rustaddr,
family: (*addr).ai_family as uint,
socktype: socktype,
protocol: protocol,
flags: flags,
});
if (*addr).ai_next.is_not_null() {
addr = (*addr).ai_next;
} else {
break;
}
}
return addrs;
}
} | impl Drop for Addrinfo {
fn drop(&mut self) {
unsafe { uvll::uv_freeaddrinfo(self.handle) }
} | random_line_split |
addrinfo.rs | // Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use ai = std::io::net::addrinfo;
use std::cast;
use std::libc;
use std::libc::c_int;
use std::ptr::null;
use std::rt::task::BlockedTask;
use net;
use super::{Loop, UvError, Request, wait_until_woken_after, wakeup};
use uvll;
struct Addrinfo {
handle: *libc::addrinfo,
}
struct Ctx {
slot: Option<BlockedTask>,
status: c_int,
addrinfo: Option<Addrinfo>,
}
pub struct GetAddrInfoRequest;
impl GetAddrInfoRequest {
pub fn run(loop_: &Loop, node: Option<&str>, service: Option<&str>,
hints: Option<ai::Hint>) -> Result<~[ai::Info], UvError> {
assert!(node.is_some() || service.is_some());
let (_c_node, c_node_ptr) = match node {
Some(n) => {
let c_node = n.to_c_str();
let c_node_ptr = c_node.with_ref(|r| r);
(Some(c_node), c_node_ptr)
}
None => (None, null())
};
let (_c_service, c_service_ptr) = match service {
Some(s) => {
let c_service = s.to_c_str();
let c_service_ptr = c_service.with_ref(|r| r);
(Some(c_service), c_service_ptr)
}
None => (None, null())
};
let hint = hints.map(|hint| {
let mut flags = 0;
each_ai_flag(|cval, aival| {
if hint.flags & (aival as uint)!= 0 {
flags |= cval as i32;
}
});
let socktype = 0;
let protocol = 0;
libc::addrinfo {
ai_flags: flags,
ai_family: hint.family as c_int,
ai_socktype: socktype,
ai_protocol: protocol,
ai_addrlen: 0,
ai_canonname: null(),
ai_addr: null(),
ai_next: null(),
}
});
let hint_ptr = hint.as_ref().map_or(null(), |x| x as *libc::addrinfo);
let mut req = Request::new(uvll::UV_GETADDRINFO);
return match unsafe {
uvll::uv_getaddrinfo(loop_.handle, req.handle,
getaddrinfo_cb, c_node_ptr, c_service_ptr,
hint_ptr)
} {
0 => {
req.defuse(); // uv callback now owns this request
let mut cx = Ctx { slot: None, status: 0, addrinfo: None };
wait_until_woken_after(&mut cx.slot, loop_, || {
req.set_data(&cx);
});
match cx.status {
0 => Ok(accum_addrinfo(cx.addrinfo.get_ref())),
n => Err(UvError(n))
}
}
n => Err(UvError(n))
};
extern fn | (req: *uvll::uv_getaddrinfo_t,
status: c_int,
res: *libc::addrinfo) {
let req = Request::wrap(req);
assert!(status!= uvll::ECANCELED);
let cx: &mut Ctx = unsafe { req.get_data() };
cx.status = status;
cx.addrinfo = Some(Addrinfo { handle: res });
wakeup(&mut cx.slot);
}
}
}
impl Drop for Addrinfo {
fn drop(&mut self) {
unsafe { uvll::uv_freeaddrinfo(self.handle) }
}
}
fn each_ai_flag(_f: |c_int, ai::Flag|) {
/* XXX: do we really want to support these?
unsafe {
f(uvll::rust_AI_ADDRCONFIG(), ai::AddrConfig);
f(uvll::rust_AI_ALL(), ai::All);
f(uvll::rust_AI_CANONNAME(), ai::CanonName);
f(uvll::rust_AI_NUMERICHOST(), ai::NumericHost);
f(uvll::rust_AI_NUMERICSERV(), ai::NumericServ);
f(uvll::rust_AI_PASSIVE(), ai::Passive);
f(uvll::rust_AI_V4MAPPED(), ai::V4Mapped);
}
*/
}
// Traverse the addrinfo linked list, producing a vector of Rust socket addresses
pub fn accum_addrinfo(addr: &Addrinfo) -> ~[ai::Info] {
unsafe {
let mut addr = addr.handle;
let mut addrs = ~[];
loop {
let rustaddr = net::sockaddr_to_addr(cast::transmute((*addr).ai_addr),
(*addr).ai_addrlen as uint);
let mut flags = 0;
each_ai_flag(|cval, aival| {
if (*addr).ai_flags & cval!= 0 {
flags |= aival as uint;
}
});
/* XXX: do we really want to support these
let protocol = match (*addr).ai_protocol {
p if p == uvll::rust_IPPROTO_UDP() => Some(ai::UDP),
p if p == uvll::rust_IPPROTO_TCP() => Some(ai::TCP),
_ => None,
};
let socktype = match (*addr).ai_socktype {
p if p == uvll::rust_SOCK_STREAM() => Some(ai::Stream),
p if p == uvll::rust_SOCK_DGRAM() => Some(ai::Datagram),
p if p == uvll::rust_SOCK_RAW() => Some(ai::Raw),
_ => None,
};
*/
let protocol = None;
let socktype = None;
addrs.push(ai::Info {
address: rustaddr,
family: (*addr).ai_family as uint,
socktype: socktype,
protocol: protocol,
flags: flags,
});
if (*addr).ai_next.is_not_null() {
addr = (*addr).ai_next;
} else {
break;
}
}
return addrs;
}
}
| getaddrinfo_cb | identifier_name |
nfs2_records.rs | /* Copyright (C) 2017 Open Information Security Foundation
*
* You can copy, redistribute or modify this Program under the terms of
* the GNU General Public License version 2 as published by the Free
* Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* version 2 along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
* 02110-1301, USA.
*/
//! Nom parsers for NFSv2 records
use nom::{be_u32, rest};
use nfs::nfs_records::*; | }
named!(pub parse_nfs2_handle<Nfs2Handle>,
do_parse!(
handle: take!(32)
>> (
Nfs2Handle {
value:handle,
}
))
);
#[derive(Debug,PartialEq)]
pub struct Nfs2RequestLookup<'a> {
pub handle: Nfs2Handle<'a>,
pub name_vec: Vec<u8>,
}
named!(pub parse_nfs2_request_lookup<Nfs2RequestLookup>,
do_parse!(
handle: parse_nfs2_handle
>> name_len: be_u32
>> name_contents: take!(name_len)
>> name_padding: rest
>> (
Nfs2RequestLookup {
handle:handle,
name_vec:name_contents.to_vec(),
}
))
);
#[derive(Debug,PartialEq)]
pub struct Nfs2RequestRead<'a> {
pub handle: Nfs2Handle<'a>,
pub offset: u32,
}
named!(pub parse_nfs2_request_read<Nfs2RequestRead>,
do_parse!(
handle: parse_nfs2_handle
>> offset: be_u32
>> count: be_u32
>> (
Nfs2RequestRead {
handle:handle,
offset:offset,
}
))
);
named!(pub parse_nfs2_reply_read<NfsReplyRead>,
do_parse!(
status: be_u32
>> attr_blob: take!(68)
>> data_len: be_u32
>> data_contents: rest
>> (
NfsReplyRead {
status:status,
attr_follows:1,
attr_blob:attr_blob,
count:data_len,
eof:false,
data_len:data_len,
data:data_contents,
}
))
);
#[derive(Debug,PartialEq)]
pub struct Nfs2Attributes<> {
pub atype: u32,
pub asize: u32,
}
named!(pub parse_nfs2_attribs<Nfs2Attributes>,
do_parse!(
atype: be_u32
>> blob1: take!(16)
>> asize: be_u32
>> blob2: take!(44)
>> (
Nfs2Attributes {
atype:atype,
asize:asize,
}
))
); |
#[derive(Debug,PartialEq)]
pub struct Nfs2Handle<'a> {
pub value: &'a[u8], | random_line_split |
nfs2_records.rs | /* Copyright (C) 2017 Open Information Security Foundation
*
* You can copy, redistribute or modify this Program under the terms of
* the GNU General Public License version 2 as published by the Free
* Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* version 2 along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
* 02110-1301, USA.
*/
//! Nom parsers for NFSv2 records
use nom::{be_u32, rest};
use nfs::nfs_records::*;
#[derive(Debug,PartialEq)]
pub struct Nfs2Handle<'a> {
pub value: &'a[u8],
}
named!(pub parse_nfs2_handle<Nfs2Handle>,
do_parse!(
handle: take!(32)
>> (
Nfs2Handle {
value:handle,
}
))
);
#[derive(Debug,PartialEq)]
pub struct Nfs2RequestLookup<'a> {
pub handle: Nfs2Handle<'a>,
pub name_vec: Vec<u8>,
}
named!(pub parse_nfs2_request_lookup<Nfs2RequestLookup>,
do_parse!(
handle: parse_nfs2_handle
>> name_len: be_u32
>> name_contents: take!(name_len)
>> name_padding: rest
>> (
Nfs2RequestLookup {
handle:handle,
name_vec:name_contents.to_vec(),
}
))
);
#[derive(Debug,PartialEq)]
pub struct | <'a> {
pub handle: Nfs2Handle<'a>,
pub offset: u32,
}
named!(pub parse_nfs2_request_read<Nfs2RequestRead>,
do_parse!(
handle: parse_nfs2_handle
>> offset: be_u32
>> count: be_u32
>> (
Nfs2RequestRead {
handle:handle,
offset:offset,
}
))
);
named!(pub parse_nfs2_reply_read<NfsReplyRead>,
do_parse!(
status: be_u32
>> attr_blob: take!(68)
>> data_len: be_u32
>> data_contents: rest
>> (
NfsReplyRead {
status:status,
attr_follows:1,
attr_blob:attr_blob,
count:data_len,
eof:false,
data_len:data_len,
data:data_contents,
}
))
);
#[derive(Debug,PartialEq)]
pub struct Nfs2Attributes<> {
pub atype: u32,
pub asize: u32,
}
named!(pub parse_nfs2_attribs<Nfs2Attributes>,
do_parse!(
atype: be_u32
>> blob1: take!(16)
>> asize: be_u32
>> blob2: take!(44)
>> (
Nfs2Attributes {
atype:atype,
asize:asize,
}
))
);
| Nfs2RequestRead | identifier_name |
lib.rs | extern crate urlencoding;
mod query;
pub use urlencoding::{encode, decode};
pub use query::{Query, QueryValue};
#[cfg(test)]
mod tests {
use super::{encode, decode};
use super::{Query, QueryValue};
const DECODED: &str = "title=Encode some URLs";
const ENCODED: &str = "title%3DEncode%20some%20URLs";
#[test]
fn parse_query_test() {
if let Ok(query) = ENCODED.parse::<Query>() {
assert_eq!(Some(&"Encode some URLs".to_string()), query.get_first("title"));
} else |
}
#[test]
fn parse_query_vector_test() {
let expected = vec![
"apple".to_string(),
"banana".to_string(),
"coconut".to_string(),
];
if let Ok(query) = "fruits=apple,banana,coconut".parse::<Query>() {
assert_eq!(Some(&expected), query.get("fruits"));
} else {
assert!(false);
}
}
#[test]
fn encode_test() {
let encoded = encode(DECODED);
assert_eq!(encoded, ENCODED);
}
#[test]
fn decode_test() {
if let Ok(decoded) = decode(ENCODED) {
assert_eq!(decoded, DECODED);
} else {
assert!(false);
}
}
}
| {
assert!(false);
} | conditional_block |
lib.rs | extern crate urlencoding;
mod query;
pub use urlencoding::{encode, decode};
pub use query::{Query, QueryValue};
#[cfg(test)]
mod tests {
use super::{encode, decode};
use super::{Query, QueryValue};
const DECODED: &str = "title=Encode some URLs";
const ENCODED: &str = "title%3DEncode%20some%20URLs";
#[test]
fn parse_query_test() {
if let Ok(query) = ENCODED.parse::<Query>() {
assert_eq!(Some(&"Encode some URLs".to_string()), query.get_first("title"));
} else {
assert!(false);
}
}
#[test]
fn | () {
let expected = vec![
"apple".to_string(),
"banana".to_string(),
"coconut".to_string(),
];
if let Ok(query) = "fruits=apple,banana,coconut".parse::<Query>() {
assert_eq!(Some(&expected), query.get("fruits"));
} else {
assert!(false);
}
}
#[test]
fn encode_test() {
let encoded = encode(DECODED);
assert_eq!(encoded, ENCODED);
}
#[test]
fn decode_test() {
if let Ok(decoded) = decode(ENCODED) {
assert_eq!(decoded, DECODED);
} else {
assert!(false);
}
}
}
| parse_query_vector_test | identifier_name |
lib.rs | extern crate urlencoding;
mod query;
pub use urlencoding::{encode, decode};
pub use query::{Query, QueryValue};
#[cfg(test)]
mod tests {
use super::{encode, decode};
use super::{Query, QueryValue};
const DECODED: &str = "title=Encode some URLs";
const ENCODED: &str = "title%3DEncode%20some%20URLs";
#[test]
fn parse_query_test() |
#[test]
fn parse_query_vector_test() {
let expected = vec![
"apple".to_string(),
"banana".to_string(),
"coconut".to_string(),
];
if let Ok(query) = "fruits=apple,banana,coconut".parse::<Query>() {
assert_eq!(Some(&expected), query.get("fruits"));
} else {
assert!(false);
}
}
#[test]
fn encode_test() {
let encoded = encode(DECODED);
assert_eq!(encoded, ENCODED);
}
#[test]
fn decode_test() {
if let Ok(decoded) = decode(ENCODED) {
assert_eq!(decoded, DECODED);
} else {
assert!(false);
}
}
}
| {
if let Ok(query) = ENCODED.parse::<Query>() {
assert_eq!(Some(&"Encode some URLs".to_string()), query.get_first("title"));
} else {
assert!(false);
}
} | identifier_body |
lib.rs | extern crate urlencoding;
mod query;
pub use urlencoding::{encode, decode};
pub use query::{Query, QueryValue};
#[cfg(test)]
mod tests {
use super::{encode, decode};
use super::{Query, QueryValue};
const DECODED: &str = "title=Encode some URLs";
const ENCODED: &str = "title%3DEncode%20some%20URLs"; |
#[test]
fn parse_query_test() {
if let Ok(query) = ENCODED.parse::<Query>() {
assert_eq!(Some(&"Encode some URLs".to_string()), query.get_first("title"));
} else {
assert!(false);
}
}
#[test]
fn parse_query_vector_test() {
let expected = vec![
"apple".to_string(),
"banana".to_string(),
"coconut".to_string(),
];
if let Ok(query) = "fruits=apple,banana,coconut".parse::<Query>() {
assert_eq!(Some(&expected), query.get("fruits"));
} else {
assert!(false);
}
}
#[test]
fn encode_test() {
let encoded = encode(DECODED);
assert_eq!(encoded, ENCODED);
}
#[test]
fn decode_test() {
if let Ok(decoded) = decode(ENCODED) {
assert_eq!(decoded, DECODED);
} else {
assert!(false);
}
}
} | random_line_split |
|
tictactoe.rs | use crate::{game, statistics};
use r4::iterate;
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
pub enum Player {
X,
O,
}
impl statistics::two_player::PlayerMapping for Player {
fn player_one() -> Self {
Player::X
}
fn player_two() -> Self {
Player::O
}
fn resolve_player(&self) -> statistics::two_player::Player {
match *self {
Player::X => statistics::two_player::Player::One,
Player::O => statistics::two_player::Player::Two,
}
}
}
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
pub struct Board {
cells: [[Option<Player>; 3]; 3],
}
#[derive(Copy, Clone, Debug, Eq, Hash, PartialEq)]
pub struct Action {
pub row: usize,
pub column: usize,
pub player: Player,
}
fn winning_player(a: Option<Player>, b: Option<Player>, c: Option<Player>) -> Option<Player> {
if a.is_some() && a == b && b == c {
a
} else {
None
}
}
pub enum Outcome { | Tie,
}
impl Board {
pub fn new() -> Self {
Board {
cells: [[None, None, None], [None, None, None], [None, None, None]],
}
}
pub fn set(&mut self, row: usize, column: usize, value: Player) {
assert!(row < 3);
assert!(column < 3);
assert!(self.cells[row][column].is_none());
self.cells[row][column] = Some(value);
}
pub fn get(&self, row: usize, column: usize) -> Option<Player> {
assert!(row < 3);
assert!(column < 3);
self.cells[row][column]
}
pub fn outcome(&self) -> Option<Outcome> {
for n in 0..3 {
let mut p = winning_player(self.cells[n][0], self.cells[n][1], self.cells[n][2]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
p = winning_player(self.cells[0][n], self.cells[1][n], self.cells[2][n]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
}
let mut p = winning_player(self.cells[0][0], self.cells[1][1], self.cells[2][2]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
p = winning_player(self.cells[0][2], self.cells[1][1], self.cells[2][0]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
for row in 0..3 {
for column in 0..3 {
if self.cells[row][column].is_none() {
return None;
}
}
}
Some(Outcome::Tie)
}
}
#[derive(Clone, Debug, Hash, Eq, PartialEq)]
pub struct State {
pub active_player: Player,
pub board: Board,
}
impl Default for State {
fn default() -> Self {
State {
active_player: Player::X,
board: Board::new(),
}
}
}
impl game::State for State {
type Action = Action;
type PlayerId = Player;
fn active_player(&self) -> &Player {
&self.active_player
}
fn actions<'s>(&'s self) -> Box<dyn Iterator<Item = Action> +'s> {
Box::new(iterate![for row in 0..3;
for column in 0..3;
if self.board.get(row, column).is_none();
yield Action { row, column, player: self.active_player, }])
}
fn do_action(&mut self, action: &Action) {
self.board.set(action.row, action.column, action.player);
self.active_player = match self.active_player {
Player::X => Player::O,
Player::O => Player::X,
};
}
}
#[derive(Debug)]
pub struct ScoredGame {}
impl game::Game for ScoredGame {
type Action = Action;
type PlayerId = Player;
type Payoff = statistics::two_player::ScoredPayoff;
type State = State;
type Statistics = statistics::two_player::ScoredStatistics<Player>;
fn payoff_of(state: &State) -> Option<statistics::two_player::ScoredPayoff> {
state.board.outcome().map(|p| match p {
Outcome::Winner(Player::X) => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 1,
score_two: 0,
},
Outcome::Winner(Player::O) => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 0,
score_two: 1,
},
Outcome::Tie => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 0,
score_two: 0,
},
})
}
} | Winner(Player), | random_line_split |
tictactoe.rs | use crate::{game, statistics};
use r4::iterate;
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
pub enum | {
X,
O,
}
impl statistics::two_player::PlayerMapping for Player {
fn player_one() -> Self {
Player::X
}
fn player_two() -> Self {
Player::O
}
fn resolve_player(&self) -> statistics::two_player::Player {
match *self {
Player::X => statistics::two_player::Player::One,
Player::O => statistics::two_player::Player::Two,
}
}
}
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
pub struct Board {
cells: [[Option<Player>; 3]; 3],
}
#[derive(Copy, Clone, Debug, Eq, Hash, PartialEq)]
pub struct Action {
pub row: usize,
pub column: usize,
pub player: Player,
}
fn winning_player(a: Option<Player>, b: Option<Player>, c: Option<Player>) -> Option<Player> {
if a.is_some() && a == b && b == c {
a
} else {
None
}
}
pub enum Outcome {
Winner(Player),
Tie,
}
impl Board {
pub fn new() -> Self {
Board {
cells: [[None, None, None], [None, None, None], [None, None, None]],
}
}
pub fn set(&mut self, row: usize, column: usize, value: Player) {
assert!(row < 3);
assert!(column < 3);
assert!(self.cells[row][column].is_none());
self.cells[row][column] = Some(value);
}
pub fn get(&self, row: usize, column: usize) -> Option<Player> {
assert!(row < 3);
assert!(column < 3);
self.cells[row][column]
}
pub fn outcome(&self) -> Option<Outcome> {
for n in 0..3 {
let mut p = winning_player(self.cells[n][0], self.cells[n][1], self.cells[n][2]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
p = winning_player(self.cells[0][n], self.cells[1][n], self.cells[2][n]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
}
let mut p = winning_player(self.cells[0][0], self.cells[1][1], self.cells[2][2]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
p = winning_player(self.cells[0][2], self.cells[1][1], self.cells[2][0]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
for row in 0..3 {
for column in 0..3 {
if self.cells[row][column].is_none() {
return None;
}
}
}
Some(Outcome::Tie)
}
}
#[derive(Clone, Debug, Hash, Eq, PartialEq)]
pub struct State {
pub active_player: Player,
pub board: Board,
}
impl Default for State {
fn default() -> Self {
State {
active_player: Player::X,
board: Board::new(),
}
}
}
impl game::State for State {
type Action = Action;
type PlayerId = Player;
fn active_player(&self) -> &Player {
&self.active_player
}
fn actions<'s>(&'s self) -> Box<dyn Iterator<Item = Action> +'s> {
Box::new(iterate![for row in 0..3;
for column in 0..3;
if self.board.get(row, column).is_none();
yield Action { row, column, player: self.active_player, }])
}
fn do_action(&mut self, action: &Action) {
self.board.set(action.row, action.column, action.player);
self.active_player = match self.active_player {
Player::X => Player::O,
Player::O => Player::X,
};
}
}
#[derive(Debug)]
pub struct ScoredGame {}
impl game::Game for ScoredGame {
type Action = Action;
type PlayerId = Player;
type Payoff = statistics::two_player::ScoredPayoff;
type State = State;
type Statistics = statistics::two_player::ScoredStatistics<Player>;
fn payoff_of(state: &State) -> Option<statistics::two_player::ScoredPayoff> {
state.board.outcome().map(|p| match p {
Outcome::Winner(Player::X) => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 1,
score_two: 0,
},
Outcome::Winner(Player::O) => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 0,
score_two: 1,
},
Outcome::Tie => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 0,
score_two: 0,
},
})
}
}
| Player | identifier_name |
tictactoe.rs | use crate::{game, statistics};
use r4::iterate;
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
pub enum Player {
X,
O,
}
impl statistics::two_player::PlayerMapping for Player {
fn player_one() -> Self {
Player::X
}
fn player_two() -> Self {
Player::O
}
fn resolve_player(&self) -> statistics::two_player::Player {
match *self {
Player::X => statistics::two_player::Player::One,
Player::O => statistics::two_player::Player::Two,
}
}
}
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
pub struct Board {
cells: [[Option<Player>; 3]; 3],
}
#[derive(Copy, Clone, Debug, Eq, Hash, PartialEq)]
pub struct Action {
pub row: usize,
pub column: usize,
pub player: Player,
}
fn winning_player(a: Option<Player>, b: Option<Player>, c: Option<Player>) -> Option<Player> {
if a.is_some() && a == b && b == c {
a
} else {
None
}
}
pub enum Outcome {
Winner(Player),
Tie,
}
impl Board {
pub fn new() -> Self {
Board {
cells: [[None, None, None], [None, None, None], [None, None, None]],
}
}
pub fn set(&mut self, row: usize, column: usize, value: Player) |
pub fn get(&self, row: usize, column: usize) -> Option<Player> {
assert!(row < 3);
assert!(column < 3);
self.cells[row][column]
}
pub fn outcome(&self) -> Option<Outcome> {
for n in 0..3 {
let mut p = winning_player(self.cells[n][0], self.cells[n][1], self.cells[n][2]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
p = winning_player(self.cells[0][n], self.cells[1][n], self.cells[2][n]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
}
let mut p = winning_player(self.cells[0][0], self.cells[1][1], self.cells[2][2]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
p = winning_player(self.cells[0][2], self.cells[1][1], self.cells[2][0]);
if p.is_some() {
return p.map(|p| Outcome::Winner(p));
}
for row in 0..3 {
for column in 0..3 {
if self.cells[row][column].is_none() {
return None;
}
}
}
Some(Outcome::Tie)
}
}
#[derive(Clone, Debug, Hash, Eq, PartialEq)]
pub struct State {
pub active_player: Player,
pub board: Board,
}
impl Default for State {
fn default() -> Self {
State {
active_player: Player::X,
board: Board::new(),
}
}
}
impl game::State for State {
type Action = Action;
type PlayerId = Player;
fn active_player(&self) -> &Player {
&self.active_player
}
fn actions<'s>(&'s self) -> Box<dyn Iterator<Item = Action> +'s> {
Box::new(iterate![for row in 0..3;
for column in 0..3;
if self.board.get(row, column).is_none();
yield Action { row, column, player: self.active_player, }])
}
fn do_action(&mut self, action: &Action) {
self.board.set(action.row, action.column, action.player);
self.active_player = match self.active_player {
Player::X => Player::O,
Player::O => Player::X,
};
}
}
#[derive(Debug)]
pub struct ScoredGame {}
impl game::Game for ScoredGame {
type Action = Action;
type PlayerId = Player;
type Payoff = statistics::two_player::ScoredPayoff;
type State = State;
type Statistics = statistics::two_player::ScoredStatistics<Player>;
fn payoff_of(state: &State) -> Option<statistics::two_player::ScoredPayoff> {
state.board.outcome().map(|p| match p {
Outcome::Winner(Player::X) => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 1,
score_two: 0,
},
Outcome::Winner(Player::O) => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 0,
score_two: 1,
},
Outcome::Tie => statistics::two_player::ScoredPayoff {
visits: 1,
score_one: 0,
score_two: 0,
},
})
}
}
| {
assert!(row < 3);
assert!(column < 3);
assert!(self.cells[row][column].is_none());
self.cells[row][column] = Some(value);
} | identifier_body |
p_4_3_1_01.rs | // P_4_3_1_01
//
// Generative Gestaltung – Creative Coding im Web
// ISBN: 978-3-87439-902-9, First Edition, Hermann Schmidt, Mainz, 2018
// Benedikt Groß, Hartmut Bohnacker, Julia Laub, Claudius Lazzeroni
// with contributions by Joey Lee and Niels Poldervaart
// Copyright 2018
//
// http://www.generative-gestaltung.de
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/**
* pixel mapping. each pixel is translated into a new element
*
* MOUSE
* position x/y : various parameters (depending on draw mode)
*
* KEYS
* 1-9 : switch draw mode
* s : save png
*/
use nannou::prelude::*;
use nannou::image;
use nannou::image::GenericImageView;
fn main() {
nannou::app(model).run();
}
struct Model {
image: image::DynamicImage,
draw_mode: u8,
}
fn model(app: &App) -> Model {
// Create a new window! Store the ID so we can refer to it later.
app.new_window()
.size(603, 873)
.view(view)
.key_released(key_released)
.build()
.unwrap();
let assets = app.assets_path().unwrap();
let img_path = assets
.join("images")
.join("generative_examples")
.join("p_4_3_1_01.png");
let image = image::open(img_path).unwrap();
Model {
image,
draw_mode: 1,
}
}
// Draw the state of your `Model` into the given `Frame` here.
fn view(app: &App, model: &Model, frame: Frame) {
frame.clear(WHITE);
let draw = app.draw();
let win = app.window_rect();
let mouse_x_factor = map_range(app.mouse.x, win.left(), win.right(), 0.01, 1.0);
let mouse_y_factor = map_range(app.mouse.y, win.bottom(), win.top(), 0.01, 1.0);
let (w, h) = model.image.dimensions();
for grid_x in 0..w {
for grid_y in 0..h {
// get current color
let c = model.image.get_pixel(grid_x, grid_y);
// greyscale conversion
let red = c[0] as f32 / 255.0;
let green = c[1] as f32 / 255.0;
let blue = c[2] as f32 / 255.0;
let greyscale = red * 0.222 + green * 0.707 + blue * 0.071;
// Grid position + tile size
let tile_width = win.w() / w as f32;
let tile_height = win.h() / h as f32;
let pos_x = win.left() + tile_width * grid_x as f32 + (tile_width / 2.0);
let pos_y = win.top() - tile_height * grid_y as f32 - (tile_height / 2.0);
| draw.line()
.start(pt2(pos_x, pos_y))
.end(pt2(pos_x + 5.0, pos_y + 5.0))
.weight(w1 * mouse_x_factor)
.caps_round()
.color(BLACK);
}
2 => {
// greyscale to ellise area
let mut r2 = 1.1284 * (tile_width * tile_width * (1.0 - greyscale)).sqrt();
r2 *= mouse_x_factor * 3.0;
draw.ellipse()
.x_y(pos_x, pos_y)
.radius(r2 / 2.0)
.color(BLACK);
}
3 => {
// greyscale to line length
let mut l3 = map_range(greyscale, 0.0, 1.0, 30.0, 0.1);
l3 *= mouse_x_factor;
draw.line()
.start(pt2(pos_x, pos_y))
.end(pt2(pos_x + l3, pos_y - l3))
.weight(10.0 * mouse_y_factor)
.caps_round()
.color(BLACK);
}
4 => {
// greyscale to rotation, line length and stroke weight
let w4 = map_range(greyscale, 0.0, 1.0, 10.0, 0.0);
let mut l4 = map_range(greyscale, 0.0, 1.0, 35.0, 0.0);
l4 *= mouse_x_factor;
let draw = draw.x_y(pos_x, pos_y).rotate(greyscale * PI);
draw.line()
.start(pt2(0.0, 0.0))
.end(pt2(l4, -l4))
.weight(w4 * mouse_x_factor + 0.1)
.caps_round()
.color(BLACK);
}
5 => {
// greyscale to line relief
let w5 = map_range(greyscale, 0.0, 1.0, 5.0, 0.2);
// get neighbour pixel, limit it to image width
let c2 = model.image.get_pixel((grid_x + 1).min(w - 1), grid_y);
// greyscale conversion
let red = c2[0] as f32 / 255.0;
let green = c2[1] as f32 / 255.0;
let blue = c2[2] as f32 / 255.0;
let greyscale2 = red * 0.222 + green * 0.707 + blue * 0.071;
let h5 = 50.0 * mouse_x_factor;
let d1 = map_range(greyscale, 0.0, 1.0, h5, 0.0);
let d2 = map_range(greyscale2, 0.0, 1.0, h5, 0.0);
draw.line()
.start(pt2(pos_x - d1, pos_y - d1))
.end(pt2(pos_x + tile_width - d2, pos_y - d2))
.weight(w5 * mouse_y_factor + 0.1)
.rgb(red, green, blue);
}
6 => {
// pixel color to fill, greyscale to ellipse size
let w6 = map_range(greyscale, 0.0, 1.0, 25.0, 0.0);
draw.ellipse()
.x_y(pos_x, pos_y)
.w_h(w6 * mouse_x_factor, w6 * mouse_x_factor)
.rgb(red, green, blue);
}
7 => {
let w7 = map_range(greyscale, 0.0, 1.0, 5.0, 0.1);
let draw = draw
.x_y(pos_x, pos_y)
.rotate(greyscale * PI * mouse_y_factor);
draw.rect()
.x_y(0.0, 0.0)
.w_h(15.0, 15.0)
.stroke_weight(w7)
.stroke(rgb(red, green, blue))
.rgba(1.0, 1.0, 1.0, mouse_x_factor);
}
8 => {
let col = rgb(greyscale, greyscale * mouse_x_factor, mouse_y_factor);
draw.rect().x_y(pos_x, pos_y).w_h(3.5, 3.5).color(col);
draw.rect().x_y(pos_x + 4.0, pos_y).w_h(3.5, 3.5).color(col);
draw.rect().x_y(pos_x, pos_y - 4.0).w_h(3.5, 3.5).color(col);
draw.rect()
.x_y(pos_x + 4.0, pos_y - 4.0)
.w_h(3.5, 3.5)
.color(col);
}
9 => {
let draw = draw.x_y(pos_x, pos_y).rotate(greyscale * PI);
draw.rect()
.x_y(0.0, 0.0)
.w_h(15.0 * mouse_x_factor, 15.0 * mouse_y_factor)
.stroke_weight(1.0)
.stroke(rgb(1.0, greyscale, 0.0))
.no_fill();
let w9 = map_range(greyscale, 0.0, 1.0, 15.0, 0.1);
draw.ellipse()
.x_y(0.0, 0.0)
.w_h(5.0, 2.5)
.stroke_weight(w9)
.stroke(rgb(0.0, 0.0, 0.27))
.no_fill();
}
_ => (),
}
}
}
draw.to_frame(app, &frame).unwrap();
}
fn key_released(app: &App, model: &mut Model, key: Key) {
match key {
Key::Key1 => {
model.draw_mode = 1;
}
Key::Key2 => {
model.draw_mode = 2;
}
Key::Key3 => {
model.draw_mode = 3;
}
Key::Key4 => {
model.draw_mode = 4;
}
Key::Key5 => {
model.draw_mode = 5;
}
Key::Key6 => {
model.draw_mode = 6;
}
Key::Key7 => {
model.draw_mode = 7;
}
Key::Key8 => {
model.draw_mode = 8;
}
Key::Key9 => {
model.draw_mode = 9;
}
Key::S => {
app.main_window()
.capture_frame(app.exe_name().unwrap() + ".png");
}
_otherkey => (),
}
} | match model.draw_mode {
1 => {
// greyscale to stroke weight
let w1 = map_range(greyscale, 0.0, 1.0, 15.0, 0.1); | random_line_split |
p_4_3_1_01.rs | // P_4_3_1_01
//
// Generative Gestaltung – Creative Coding im Web
// ISBN: 978-3-87439-902-9, First Edition, Hermann Schmidt, Mainz, 2018
// Benedikt Groß, Hartmut Bohnacker, Julia Laub, Claudius Lazzeroni
// with contributions by Joey Lee and Niels Poldervaart
// Copyright 2018
//
// http://www.generative-gestaltung.de
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/**
* pixel mapping. each pixel is translated into a new element
*
* MOUSE
* position x/y : various parameters (depending on draw mode)
*
* KEYS
* 1-9 : switch draw mode
* s : save png
*/
use nannou::prelude::*;
use nannou::image;
use nannou::image::GenericImageView;
fn main() {
nannou::app(model).run();
}
struct Model {
image: image::DynamicImage,
draw_mode: u8,
}
fn model(app: &App) -> Model {
// Create a new window! Store the ID so we can refer to it later.
app.new_window()
.size(603, 873)
.view(view)
.key_released(key_released)
.build()
.unwrap();
let assets = app.assets_path().unwrap();
let img_path = assets
.join("images")
.join("generative_examples")
.join("p_4_3_1_01.png");
let image = image::open(img_path).unwrap();
Model {
image,
draw_mode: 1,
}
}
// Draw the state of your `Model` into the given `Frame` here.
fn view(app: &App, model: &Model, frame: Frame) {
| let tile_width = win.w() / w as f32;
let tile_height = win.h() / h as f32;
let pos_x = win.left() + tile_width * grid_x as f32 + (tile_width / 2.0);
let pos_y = win.top() - tile_height * grid_y as f32 - (tile_height / 2.0);
match model.draw_mode {
1 => {
// greyscale to stroke weight
let w1 = map_range(greyscale, 0.0, 1.0, 15.0, 0.1);
draw.line()
.start(pt2(pos_x, pos_y))
.end(pt2(pos_x + 5.0, pos_y + 5.0))
.weight(w1 * mouse_x_factor)
.caps_round()
.color(BLACK);
}
2 => {
// greyscale to ellise area
let mut r2 = 1.1284 * (tile_width * tile_width * (1.0 - greyscale)).sqrt();
r2 *= mouse_x_factor * 3.0;
draw.ellipse()
.x_y(pos_x, pos_y)
.radius(r2 / 2.0)
.color(BLACK);
}
3 => {
// greyscale to line length
let mut l3 = map_range(greyscale, 0.0, 1.0, 30.0, 0.1);
l3 *= mouse_x_factor;
draw.line()
.start(pt2(pos_x, pos_y))
.end(pt2(pos_x + l3, pos_y - l3))
.weight(10.0 * mouse_y_factor)
.caps_round()
.color(BLACK);
}
4 => {
// greyscale to rotation, line length and stroke weight
let w4 = map_range(greyscale, 0.0, 1.0, 10.0, 0.0);
let mut l4 = map_range(greyscale, 0.0, 1.0, 35.0, 0.0);
l4 *= mouse_x_factor;
let draw = draw.x_y(pos_x, pos_y).rotate(greyscale * PI);
draw.line()
.start(pt2(0.0, 0.0))
.end(pt2(l4, -l4))
.weight(w4 * mouse_x_factor + 0.1)
.caps_round()
.color(BLACK);
}
5 => {
// greyscale to line relief
let w5 = map_range(greyscale, 0.0, 1.0, 5.0, 0.2);
// get neighbour pixel, limit it to image width
let c2 = model.image.get_pixel((grid_x + 1).min(w - 1), grid_y);
// greyscale conversion
let red = c2[0] as f32 / 255.0;
let green = c2[1] as f32 / 255.0;
let blue = c2[2] as f32 / 255.0;
let greyscale2 = red * 0.222 + green * 0.707 + blue * 0.071;
let h5 = 50.0 * mouse_x_factor;
let d1 = map_range(greyscale, 0.0, 1.0, h5, 0.0);
let d2 = map_range(greyscale2, 0.0, 1.0, h5, 0.0);
draw.line()
.start(pt2(pos_x - d1, pos_y - d1))
.end(pt2(pos_x + tile_width - d2, pos_y - d2))
.weight(w5 * mouse_y_factor + 0.1)
.rgb(red, green, blue);
}
6 => {
// pixel color to fill, greyscale to ellipse size
let w6 = map_range(greyscale, 0.0, 1.0, 25.0, 0.0);
draw.ellipse()
.x_y(pos_x, pos_y)
.w_h(w6 * mouse_x_factor, w6 * mouse_x_factor)
.rgb(red, green, blue);
}
7 => {
let w7 = map_range(greyscale, 0.0, 1.0, 5.0, 0.1);
let draw = draw
.x_y(pos_x, pos_y)
.rotate(greyscale * PI * mouse_y_factor);
draw.rect()
.x_y(0.0, 0.0)
.w_h(15.0, 15.0)
.stroke_weight(w7)
.stroke(rgb(red, green, blue))
.rgba(1.0, 1.0, 1.0, mouse_x_factor);
}
8 => {
let col = rgb(greyscale, greyscale * mouse_x_factor, mouse_y_factor);
draw.rect().x_y(pos_x, pos_y).w_h(3.5, 3.5).color(col);
draw.rect().x_y(pos_x + 4.0, pos_y).w_h(3.5, 3.5).color(col);
draw.rect().x_y(pos_x, pos_y - 4.0).w_h(3.5, 3.5).color(col);
draw.rect()
.x_y(pos_x + 4.0, pos_y - 4.0)
.w_h(3.5, 3.5)
.color(col);
}
9 => {
let draw = draw.x_y(pos_x, pos_y).rotate(greyscale * PI);
draw.rect()
.x_y(0.0, 0.0)
.w_h(15.0 * mouse_x_factor, 15.0 * mouse_y_factor)
.stroke_weight(1.0)
.stroke(rgb(1.0, greyscale, 0.0))
.no_fill();
let w9 = map_range(greyscale, 0.0, 1.0, 15.0, 0.1);
draw.ellipse()
.x_y(0.0, 0.0)
.w_h(5.0, 2.5)
.stroke_weight(w9)
.stroke(rgb(0.0, 0.0, 0.27))
.no_fill();
}
_ => (),
}
}
}
draw.to_frame(app, &frame).unwrap();
}
f
n key_released(app: &App, model: &mut Model, key: Key) {
match key {
Key::Key1 => {
model.draw_mode = 1;
}
Key::Key2 => {
model.draw_mode = 2;
}
Key::Key3 => {
model.draw_mode = 3;
}
Key::Key4 => {
model.draw_mode = 4;
}
Key::Key5 => {
model.draw_mode = 5;
}
Key::Key6 => {
model.draw_mode = 6;
}
Key::Key7 => {
model.draw_mode = 7;
}
Key::Key8 => {
model.draw_mode = 8;
}
Key::Key9 => {
model.draw_mode = 9;
}
Key::S => {
app.main_window()
.capture_frame(app.exe_name().unwrap() + ".png");
}
_otherkey => (),
}
}
| frame.clear(WHITE);
let draw = app.draw();
let win = app.window_rect();
let mouse_x_factor = map_range(app.mouse.x, win.left(), win.right(), 0.01, 1.0);
let mouse_y_factor = map_range(app.mouse.y, win.bottom(), win.top(), 0.01, 1.0);
let (w, h) = model.image.dimensions();
for grid_x in 0..w {
for grid_y in 0..h {
// get current color
let c = model.image.get_pixel(grid_x, grid_y);
// greyscale conversion
let red = c[0] as f32 / 255.0;
let green = c[1] as f32 / 255.0;
let blue = c[2] as f32 / 255.0;
let greyscale = red * 0.222 + green * 0.707 + blue * 0.071;
// Grid position + tile size | identifier_body |
p_4_3_1_01.rs | // P_4_3_1_01
//
// Generative Gestaltung – Creative Coding im Web
// ISBN: 978-3-87439-902-9, First Edition, Hermann Schmidt, Mainz, 2018
// Benedikt Groß, Hartmut Bohnacker, Julia Laub, Claudius Lazzeroni
// with contributions by Joey Lee and Niels Poldervaart
// Copyright 2018
//
// http://www.generative-gestaltung.de
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/**
* pixel mapping. each pixel is translated into a new element
*
* MOUSE
* position x/y : various parameters (depending on draw mode)
*
* KEYS
* 1-9 : switch draw mode
* s : save png
*/
use nannou::prelude::*;
use nannou::image;
use nannou::image::GenericImageView;
fn main() {
nannou::app(model).run();
}
struct Model {
image: image::DynamicImage,
draw_mode: u8,
}
fn model(app: &App) -> Model {
// Create a new window! Store the ID so we can refer to it later.
app.new_window()
.size(603, 873)
.view(view)
.key_released(key_released)
.build()
.unwrap();
let assets = app.assets_path().unwrap();
let img_path = assets
.join("images")
.join("generative_examples")
.join("p_4_3_1_01.png");
let image = image::open(img_path).unwrap();
Model {
image,
draw_mode: 1,
}
}
// Draw the state of your `Model` into the given `Frame` here.
fn view(app: &App, model: &Model, frame: Frame) {
frame.clear(WHITE);
let draw = app.draw();
let win = app.window_rect();
let mouse_x_factor = map_range(app.mouse.x, win.left(), win.right(), 0.01, 1.0);
let mouse_y_factor = map_range(app.mouse.y, win.bottom(), win.top(), 0.01, 1.0);
let (w, h) = model.image.dimensions();
for grid_x in 0..w {
for grid_y in 0..h {
// get current color
let c = model.image.get_pixel(grid_x, grid_y);
// greyscale conversion
let red = c[0] as f32 / 255.0;
let green = c[1] as f32 / 255.0;
let blue = c[2] as f32 / 255.0;
let greyscale = red * 0.222 + green * 0.707 + blue * 0.071;
// Grid position + tile size
let tile_width = win.w() / w as f32;
let tile_height = win.h() / h as f32;
let pos_x = win.left() + tile_width * grid_x as f32 + (tile_width / 2.0);
let pos_y = win.top() - tile_height * grid_y as f32 - (tile_height / 2.0);
match model.draw_mode {
1 => {
// greyscale to stroke weight
let w1 = map_range(greyscale, 0.0, 1.0, 15.0, 0.1);
draw.line()
.start(pt2(pos_x, pos_y))
.end(pt2(pos_x + 5.0, pos_y + 5.0))
.weight(w1 * mouse_x_factor)
.caps_round()
.color(BLACK);
}
2 => {
// greyscale to ellise area
let mut r2 = 1.1284 * (tile_width * tile_width * (1.0 - greyscale)).sqrt();
r2 *= mouse_x_factor * 3.0;
draw.ellipse()
.x_y(pos_x, pos_y)
.radius(r2 / 2.0)
.color(BLACK);
}
3 => {
// greyscale to line length
let mut l3 = map_range(greyscale, 0.0, 1.0, 30.0, 0.1);
l3 *= mouse_x_factor;
draw.line()
.start(pt2(pos_x, pos_y))
.end(pt2(pos_x + l3, pos_y - l3))
.weight(10.0 * mouse_y_factor)
.caps_round()
.color(BLACK);
}
4 => {
// greyscale to rotation, line length and stroke weight
let w4 = map_range(greyscale, 0.0, 1.0, 10.0, 0.0);
let mut l4 = map_range(greyscale, 0.0, 1.0, 35.0, 0.0);
l4 *= mouse_x_factor;
let draw = draw.x_y(pos_x, pos_y).rotate(greyscale * PI);
draw.line()
.start(pt2(0.0, 0.0))
.end(pt2(l4, -l4))
.weight(w4 * mouse_x_factor + 0.1)
.caps_round()
.color(BLACK);
}
5 => {
// greyscale to line relief
let w5 = map_range(greyscale, 0.0, 1.0, 5.0, 0.2);
// get neighbour pixel, limit it to image width
let c2 = model.image.get_pixel((grid_x + 1).min(w - 1), grid_y);
// greyscale conversion
let red = c2[0] as f32 / 255.0;
let green = c2[1] as f32 / 255.0;
let blue = c2[2] as f32 / 255.0;
let greyscale2 = red * 0.222 + green * 0.707 + blue * 0.071;
let h5 = 50.0 * mouse_x_factor;
let d1 = map_range(greyscale, 0.0, 1.0, h5, 0.0);
let d2 = map_range(greyscale2, 0.0, 1.0, h5, 0.0);
draw.line()
.start(pt2(pos_x - d1, pos_y - d1))
.end(pt2(pos_x + tile_width - d2, pos_y - d2))
.weight(w5 * mouse_y_factor + 0.1)
.rgb(red, green, blue);
}
6 => {
// pixel color to fill, greyscale to ellipse size
let w6 = map_range(greyscale, 0.0, 1.0, 25.0, 0.0);
draw.ellipse()
.x_y(pos_x, pos_y)
.w_h(w6 * mouse_x_factor, w6 * mouse_x_factor)
.rgb(red, green, blue);
}
7 => {
let w7 = map_range(greyscale, 0.0, 1.0, 5.0, 0.1);
let draw = draw
.x_y(pos_x, pos_y)
.rotate(greyscale * PI * mouse_y_factor);
draw.rect()
.x_y(0.0, 0.0)
.w_h(15.0, 15.0)
.stroke_weight(w7)
.stroke(rgb(red, green, blue))
.rgba(1.0, 1.0, 1.0, mouse_x_factor);
}
8 => {
let col = rgb(greyscale, greyscale * mouse_x_factor, mouse_y_factor);
draw.rect().x_y(pos_x, pos_y).w_h(3.5, 3.5).color(col);
draw.rect().x_y(pos_x + 4.0, pos_y).w_h(3.5, 3.5).color(col);
draw.rect().x_y(pos_x, pos_y - 4.0).w_h(3.5, 3.5).color(col);
draw.rect()
.x_y(pos_x + 4.0, pos_y - 4.0)
.w_h(3.5, 3.5)
.color(col);
}
9 => {
| _ => (),
}
}
}
draw.to_frame(app, &frame).unwrap();
}
fn key_released(app: &App, model: &mut Model, key: Key) {
match key {
Key::Key1 => {
model.draw_mode = 1;
}
Key::Key2 => {
model.draw_mode = 2;
}
Key::Key3 => {
model.draw_mode = 3;
}
Key::Key4 => {
model.draw_mode = 4;
}
Key::Key5 => {
model.draw_mode = 5;
}
Key::Key6 => {
model.draw_mode = 6;
}
Key::Key7 => {
model.draw_mode = 7;
}
Key::Key8 => {
model.draw_mode = 8;
}
Key::Key9 => {
model.draw_mode = 9;
}
Key::S => {
app.main_window()
.capture_frame(app.exe_name().unwrap() + ".png");
}
_otherkey => (),
}
}
| let draw = draw.x_y(pos_x, pos_y).rotate(greyscale * PI);
draw.rect()
.x_y(0.0, 0.0)
.w_h(15.0 * mouse_x_factor, 15.0 * mouse_y_factor)
.stroke_weight(1.0)
.stroke(rgb(1.0, greyscale, 0.0))
.no_fill();
let w9 = map_range(greyscale, 0.0, 1.0, 15.0, 0.1);
draw.ellipse()
.x_y(0.0, 0.0)
.w_h(5.0, 2.5)
.stroke_weight(w9)
.stroke(rgb(0.0, 0.0, 0.27))
.no_fill();
}
| conditional_block |
p_4_3_1_01.rs | // P_4_3_1_01
//
// Generative Gestaltung – Creative Coding im Web
// ISBN: 978-3-87439-902-9, First Edition, Hermann Schmidt, Mainz, 2018
// Benedikt Groß, Hartmut Bohnacker, Julia Laub, Claudius Lazzeroni
// with contributions by Joey Lee and Niels Poldervaart
// Copyright 2018
//
// http://www.generative-gestaltung.de
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/**
* pixel mapping. each pixel is translated into a new element
*
* MOUSE
* position x/y : various parameters (depending on draw mode)
*
* KEYS
* 1-9 : switch draw mode
* s : save png
*/
use nannou::prelude::*;
use nannou::image;
use nannou::image::GenericImageView;
fn main() {
nannou::app(model).run();
}
struct Model {
image: image::DynamicImage,
draw_mode: u8,
}
fn model(app: &App) -> Model {
// Create a new window! Store the ID so we can refer to it later.
app.new_window()
.size(603, 873)
.view(view)
.key_released(key_released)
.build()
.unwrap();
let assets = app.assets_path().unwrap();
let img_path = assets
.join("images")
.join("generative_examples")
.join("p_4_3_1_01.png");
let image = image::open(img_path).unwrap();
Model {
image,
draw_mode: 1,
}
}
// Draw the state of your `Model` into the given `Frame` here.
fn view(app: &App, model: &Model, frame: Frame) {
frame.clear(WHITE);
let draw = app.draw();
let win = app.window_rect();
let mouse_x_factor = map_range(app.mouse.x, win.left(), win.right(), 0.01, 1.0);
let mouse_y_factor = map_range(app.mouse.y, win.bottom(), win.top(), 0.01, 1.0);
let (w, h) = model.image.dimensions();
for grid_x in 0..w {
for grid_y in 0..h {
// get current color
let c = model.image.get_pixel(grid_x, grid_y);
// greyscale conversion
let red = c[0] as f32 / 255.0;
let green = c[1] as f32 / 255.0;
let blue = c[2] as f32 / 255.0;
let greyscale = red * 0.222 + green * 0.707 + blue * 0.071;
// Grid position + tile size
let tile_width = win.w() / w as f32;
let tile_height = win.h() / h as f32;
let pos_x = win.left() + tile_width * grid_x as f32 + (tile_width / 2.0);
let pos_y = win.top() - tile_height * grid_y as f32 - (tile_height / 2.0);
match model.draw_mode {
1 => {
// greyscale to stroke weight
let w1 = map_range(greyscale, 0.0, 1.0, 15.0, 0.1);
draw.line()
.start(pt2(pos_x, pos_y))
.end(pt2(pos_x + 5.0, pos_y + 5.0))
.weight(w1 * mouse_x_factor)
.caps_round()
.color(BLACK);
}
2 => {
// greyscale to ellise area
let mut r2 = 1.1284 * (tile_width * tile_width * (1.0 - greyscale)).sqrt();
r2 *= mouse_x_factor * 3.0;
draw.ellipse()
.x_y(pos_x, pos_y)
.radius(r2 / 2.0)
.color(BLACK);
}
3 => {
// greyscale to line length
let mut l3 = map_range(greyscale, 0.0, 1.0, 30.0, 0.1);
l3 *= mouse_x_factor;
draw.line()
.start(pt2(pos_x, pos_y))
.end(pt2(pos_x + l3, pos_y - l3))
.weight(10.0 * mouse_y_factor)
.caps_round()
.color(BLACK);
}
4 => {
// greyscale to rotation, line length and stroke weight
let w4 = map_range(greyscale, 0.0, 1.0, 10.0, 0.0);
let mut l4 = map_range(greyscale, 0.0, 1.0, 35.0, 0.0);
l4 *= mouse_x_factor;
let draw = draw.x_y(pos_x, pos_y).rotate(greyscale * PI);
draw.line()
.start(pt2(0.0, 0.0))
.end(pt2(l4, -l4))
.weight(w4 * mouse_x_factor + 0.1)
.caps_round()
.color(BLACK);
}
5 => {
// greyscale to line relief
let w5 = map_range(greyscale, 0.0, 1.0, 5.0, 0.2);
// get neighbour pixel, limit it to image width
let c2 = model.image.get_pixel((grid_x + 1).min(w - 1), grid_y);
// greyscale conversion
let red = c2[0] as f32 / 255.0;
let green = c2[1] as f32 / 255.0;
let blue = c2[2] as f32 / 255.0;
let greyscale2 = red * 0.222 + green * 0.707 + blue * 0.071;
let h5 = 50.0 * mouse_x_factor;
let d1 = map_range(greyscale, 0.0, 1.0, h5, 0.0);
let d2 = map_range(greyscale2, 0.0, 1.0, h5, 0.0);
draw.line()
.start(pt2(pos_x - d1, pos_y - d1))
.end(pt2(pos_x + tile_width - d2, pos_y - d2))
.weight(w5 * mouse_y_factor + 0.1)
.rgb(red, green, blue);
}
6 => {
// pixel color to fill, greyscale to ellipse size
let w6 = map_range(greyscale, 0.0, 1.0, 25.0, 0.0);
draw.ellipse()
.x_y(pos_x, pos_y)
.w_h(w6 * mouse_x_factor, w6 * mouse_x_factor)
.rgb(red, green, blue);
}
7 => {
let w7 = map_range(greyscale, 0.0, 1.0, 5.0, 0.1);
let draw = draw
.x_y(pos_x, pos_y)
.rotate(greyscale * PI * mouse_y_factor);
draw.rect()
.x_y(0.0, 0.0)
.w_h(15.0, 15.0)
.stroke_weight(w7)
.stroke(rgb(red, green, blue))
.rgba(1.0, 1.0, 1.0, mouse_x_factor);
}
8 => {
let col = rgb(greyscale, greyscale * mouse_x_factor, mouse_y_factor);
draw.rect().x_y(pos_x, pos_y).w_h(3.5, 3.5).color(col);
draw.rect().x_y(pos_x + 4.0, pos_y).w_h(3.5, 3.5).color(col);
draw.rect().x_y(pos_x, pos_y - 4.0).w_h(3.5, 3.5).color(col);
draw.rect()
.x_y(pos_x + 4.0, pos_y - 4.0)
.w_h(3.5, 3.5)
.color(col);
}
9 => {
let draw = draw.x_y(pos_x, pos_y).rotate(greyscale * PI);
draw.rect()
.x_y(0.0, 0.0)
.w_h(15.0 * mouse_x_factor, 15.0 * mouse_y_factor)
.stroke_weight(1.0)
.stroke(rgb(1.0, greyscale, 0.0))
.no_fill();
let w9 = map_range(greyscale, 0.0, 1.0, 15.0, 0.1);
draw.ellipse()
.x_y(0.0, 0.0)
.w_h(5.0, 2.5)
.stroke_weight(w9)
.stroke(rgb(0.0, 0.0, 0.27))
.no_fill();
}
_ => (),
}
}
}
draw.to_frame(app, &frame).unwrap();
}
fn key | p: &App, model: &mut Model, key: Key) {
match key {
Key::Key1 => {
model.draw_mode = 1;
}
Key::Key2 => {
model.draw_mode = 2;
}
Key::Key3 => {
model.draw_mode = 3;
}
Key::Key4 => {
model.draw_mode = 4;
}
Key::Key5 => {
model.draw_mode = 5;
}
Key::Key6 => {
model.draw_mode = 6;
}
Key::Key7 => {
model.draw_mode = 7;
}
Key::Key8 => {
model.draw_mode = 8;
}
Key::Key9 => {
model.draw_mode = 9;
}
Key::S => {
app.main_window()
.capture_frame(app.exe_name().unwrap() + ".png");
}
_otherkey => (),
}
}
| _released(ap | identifier_name |
rotational-cipher.rs | extern crate rotational_cipher as cipher;
#[test]
fn rotate_a_1() {
assert_eq!("b", cipher::rotate("a", 1));
}
#[test]
#[ignore]
fn rotate_a_26() {
assert_eq!("a", cipher::rotate("a", 26));
}
#[test]
#[ignore]
fn rotate_a_0() {
assert_eq!("a", cipher::rotate("a", 0));
}
#[test] | #[test]
#[ignore]
fn rotate_n_13_with_wrap() {
assert_eq!("a", cipher::rotate("n", 13));
}
#[test]
#[ignore]
fn rotate_caps() {
assert_eq!("TRL", cipher::rotate("OMG", 5));
}
#[test]
#[ignore]
fn rotate_spaces() {
assert_eq!("T R L", cipher::rotate("O M G", 5));
}
#[test]
#[ignore]
fn rotate_numbers() {
assert_eq!("Xiwxmrk 1 2 3 xiwxmrk", cipher::rotate("Testing 1 2 3 testing", 4));
}
#[test]
#[ignore]
fn rotate_punctuation() {
assert_eq!("Gzo\'n zvo, Bmviyhv!", cipher::rotate("Let\'s eat, Grandma!", 21));
}
#[test]
#[ignore]
fn rotate_all_the_letters() {
assert_eq!("Gur dhvpx oebja sbk whzcf bire gur ynml qbt.",
cipher::rotate("The quick brown fox jumps over the lazy dog.", 13));
} | #[ignore]
fn rotate_m_13() {
assert_eq!("z", cipher::rotate("m", 13));
}
| random_line_split |
rotational-cipher.rs | extern crate rotational_cipher as cipher;
#[test]
fn rotate_a_1() {
assert_eq!("b", cipher::rotate("a", 1));
}
#[test]
#[ignore]
fn rotate_a_26() {
assert_eq!("a", cipher::rotate("a", 26));
}
#[test]
#[ignore]
fn rotate_a_0() {
assert_eq!("a", cipher::rotate("a", 0));
}
#[test]
#[ignore]
fn rotate_m_13() {
assert_eq!("z", cipher::rotate("m", 13));
}
#[test]
#[ignore]
fn rotate_n_13_with_wrap() |
#[test]
#[ignore]
fn rotate_caps() {
assert_eq!("TRL", cipher::rotate("OMG", 5));
}
#[test]
#[ignore]
fn rotate_spaces() {
assert_eq!("T R L", cipher::rotate("O M G", 5));
}
#[test]
#[ignore]
fn rotate_numbers() {
assert_eq!("Xiwxmrk 1 2 3 xiwxmrk", cipher::rotate("Testing 1 2 3 testing", 4));
}
#[test]
#[ignore]
fn rotate_punctuation() {
assert_eq!("Gzo\'n zvo, Bmviyhv!", cipher::rotate("Let\'s eat, Grandma!", 21));
}
#[test]
#[ignore]
fn rotate_all_the_letters() {
assert_eq!("Gur dhvpx oebja sbk whzcf bire gur ynml qbt.",
cipher::rotate("The quick brown fox jumps over the lazy dog.", 13));
}
| {
assert_eq!("a", cipher::rotate("n", 13));
} | identifier_body |
rotational-cipher.rs | extern crate rotational_cipher as cipher;
#[test]
fn rotate_a_1() {
assert_eq!("b", cipher::rotate("a", 1));
}
#[test]
#[ignore]
fn rotate_a_26() {
assert_eq!("a", cipher::rotate("a", 26));
}
#[test]
#[ignore]
fn rotate_a_0() {
assert_eq!("a", cipher::rotate("a", 0));
}
#[test]
#[ignore]
fn rotate_m_13() {
assert_eq!("z", cipher::rotate("m", 13));
}
#[test]
#[ignore]
fn | () {
assert_eq!("a", cipher::rotate("n", 13));
}
#[test]
#[ignore]
fn rotate_caps() {
assert_eq!("TRL", cipher::rotate("OMG", 5));
}
#[test]
#[ignore]
fn rotate_spaces() {
assert_eq!("T R L", cipher::rotate("O M G", 5));
}
#[test]
#[ignore]
fn rotate_numbers() {
assert_eq!("Xiwxmrk 1 2 3 xiwxmrk", cipher::rotate("Testing 1 2 3 testing", 4));
}
#[test]
#[ignore]
fn rotate_punctuation() {
assert_eq!("Gzo\'n zvo, Bmviyhv!", cipher::rotate("Let\'s eat, Grandma!", 21));
}
#[test]
#[ignore]
fn rotate_all_the_letters() {
assert_eq!("Gur dhvpx oebja sbk whzcf bire gur ynml qbt.",
cipher::rotate("The quick brown fox jumps over the lazy dog.", 13));
}
| rotate_n_13_with_wrap | identifier_name |
trait-generic.rs | // Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
trait to_str {
fn to_string_(&self) -> String;
}
impl to_str for int {
fn to_string_(&self) -> String { self.to_string() }
}
impl to_str for String {
fn to_string_(&self) -> String { self.clone() }
}
impl to_str for () {
fn to_string_(&self) -> String { "()".to_string() }
}
trait map<T> {
fn map<U, F>(&self, f: F) -> Vec<U> where F: FnMut(&T) -> U;
}
impl<T> map<T> for Vec<T> {
fn map<U, F>(&self, mut f: F) -> Vec<U> where F: FnMut(&T) -> U {
let mut r = Vec::new();
for i in self {
r.push(f(i));
}
r
}
}
fn foo<U, T: map<U>>(x: T) -> Vec<String> {
x.map(|_e| "hi".to_string() )
}
fn bar<U:to_str,T:map<U>>(x: T) -> Vec<String> {
x.map(|_e| _e.to_string_() )
}
pub fn main() | {
assert_eq!(foo(vec!(1)), vec!("hi".to_string()));
assert_eq!(bar::<int, Vec<int> >(vec!(4, 5)), vec!("4".to_string(), "5".to_string()));
assert_eq!(bar::<String, Vec<String> >(vec!("x".to_string(), "y".to_string())),
vec!("x".to_string(), "y".to_string()));
assert_eq!(bar::<(), Vec<()>>(vec!(())), vec!("()".to_string()));
} | identifier_body |
|
trait-generic.rs | // Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
trait to_str {
fn to_string_(&self) -> String;
}
impl to_str for int {
fn to_string_(&self) -> String { self.to_string() }
}
impl to_str for String {
fn to_string_(&self) -> String { self.clone() }
}
impl to_str for () {
fn | (&self) -> String { "()".to_string() }
}
trait map<T> {
fn map<U, F>(&self, f: F) -> Vec<U> where F: FnMut(&T) -> U;
}
impl<T> map<T> for Vec<T> {
fn map<U, F>(&self, mut f: F) -> Vec<U> where F: FnMut(&T) -> U {
let mut r = Vec::new();
for i in self {
r.push(f(i));
}
r
}
}
fn foo<U, T: map<U>>(x: T) -> Vec<String> {
x.map(|_e| "hi".to_string() )
}
fn bar<U:to_str,T:map<U>>(x: T) -> Vec<String> {
x.map(|_e| _e.to_string_() )
}
pub fn main() {
assert_eq!(foo(vec!(1)), vec!("hi".to_string()));
assert_eq!(bar::<int, Vec<int> >(vec!(4, 5)), vec!("4".to_string(), "5".to_string()));
assert_eq!(bar::<String, Vec<String> >(vec!("x".to_string(), "y".to_string())),
vec!("x".to_string(), "y".to_string()));
assert_eq!(bar::<(), Vec<()>>(vec!(())), vec!("()".to_string()));
}
| to_string_ | identifier_name |
trait-generic.rs | // Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
trait to_str {
fn to_string_(&self) -> String;
}
impl to_str for int {
fn to_string_(&self) -> String { self.to_string() }
}
impl to_str for String {
fn to_string_(&self) -> String { self.clone() }
}
impl to_str for () {
fn to_string_(&self) -> String { "()".to_string() }
}
trait map<T> {
fn map<U, F>(&self, f: F) -> Vec<U> where F: FnMut(&T) -> U;
} | }
r
}
}
fn foo<U, T: map<U>>(x: T) -> Vec<String> {
x.map(|_e| "hi".to_string() )
}
fn bar<U:to_str,T:map<U>>(x: T) -> Vec<String> {
x.map(|_e| _e.to_string_() )
}
pub fn main() {
assert_eq!(foo(vec!(1)), vec!("hi".to_string()));
assert_eq!(bar::<int, Vec<int> >(vec!(4, 5)), vec!("4".to_string(), "5".to_string()));
assert_eq!(bar::<String, Vec<String> >(vec!("x".to_string(), "y".to_string())),
vec!("x".to_string(), "y".to_string()));
assert_eq!(bar::<(), Vec<()>>(vec!(())), vec!("()".to_string()));
} | impl<T> map<T> for Vec<T> {
fn map<U, F>(&self, mut f: F) -> Vec<U> where F: FnMut(&T) -> U {
let mut r = Vec::new();
for i in self {
r.push(f(i)); | random_line_split |
030.rs | use std::io::{self, BufRead};
use std::str::FromStr;
fn read_line<F>() -> Result<F, F::Err> where F: FromStr {
let mut input = String::new();
io::stdin().read_line(&mut input).expect("Could not read stdin!");
input.trim().parse()
}
fn is_writable(number: u32, power: u32) -> bool {
let mut sum = 0;
let mut n = number;
while n > 0 {
sum += (n % 10).pow(power);
n /= 10;
}
sum == number
}
fn main() { | let power = read_line::<u32>().unwrap_or(0);
let sum = (10.. 10u32.pow(6)).filter(|&k| is_writable(k, power))
.fold(0, |sum, k| sum + k);
println!("{}", sum);
} | random_line_split |
|
030.rs | use std::io::{self, BufRead};
use std::str::FromStr;
fn read_line<F>() -> Result<F, F::Err> where F: FromStr {
let mut input = String::new();
io::stdin().read_line(&mut input).expect("Could not read stdin!");
input.trim().parse()
}
fn | (number: u32, power: u32) -> bool {
let mut sum = 0;
let mut n = number;
while n > 0 {
sum += (n % 10).pow(power);
n /= 10;
}
sum == number
}
fn main() {
let power = read_line::<u32>().unwrap_or(0);
let sum = (10.. 10u32.pow(6)).filter(|&k| is_writable(k, power))
.fold(0, |sum, k| sum + k);
println!("{}", sum);
}
| is_writable | identifier_name |
030.rs | use std::io::{self, BufRead};
use std::str::FromStr;
fn read_line<F>() -> Result<F, F::Err> where F: FromStr {
let mut input = String::new();
io::stdin().read_line(&mut input).expect("Could not read stdin!");
input.trim().parse()
}
fn is_writable(number: u32, power: u32) -> bool {
let mut sum = 0;
let mut n = number;
while n > 0 {
sum += (n % 10).pow(power);
n /= 10;
}
sum == number
}
fn main() | {
let power = read_line::<u32>().unwrap_or(0);
let sum = (10 .. 10u32.pow(6)).filter(|&k| is_writable(k, power))
.fold(0, |sum, k| sum + k);
println!("{}", sum);
} | identifier_body |
|
vector.rs | #[derive(Debug, Copy, Clone)]
pub struct Vector2d {
x: f32,
y: f32,
}
impl Vector2d {
pub fn new(x: f32, y: f32) -> Vector2d {
Vector2d {
x: x,
y: y,
}
}
pub fn get_pos(&self) -> (f32, f32) {
(self.x, self.y)
}
}
#[derive(Debug, Copy, Clone)]
pub struct Vector3d {
x: f32,
y: f32,
z: f32,
}
impl Vector3d {
pub fn new(x: f32, y: f32, z: f32) -> Vector3d {
Vector3d {
x: x,
y: y, | (self.x, self.y, self.z)
}
} | z: z,
}
}
pub fn get_pos(&self) -> (f32, f32, f32) { | random_line_split |
vector.rs | #[derive(Debug, Copy, Clone)]
pub struct Vector2d {
x: f32,
y: f32,
}
impl Vector2d {
pub fn new(x: f32, y: f32) -> Vector2d {
Vector2d {
x: x,
y: y,
}
}
pub fn get_pos(&self) -> (f32, f32) {
(self.x, self.y)
}
}
#[derive(Debug, Copy, Clone)]
pub struct Vector3d {
x: f32,
y: f32,
z: f32,
}
impl Vector3d {
pub fn | (x: f32, y: f32, z: f32) -> Vector3d {
Vector3d {
x: x,
y: y,
z: z,
}
}
pub fn get_pos(&self) -> (f32, f32, f32) {
(self.x, self.y, self.z)
}
}
| new | identifier_name |
read_method.rs | use super::branchify::generate_branchified_method;
use super::get_writer;
use std::io::IoResult;
pub fn generate(output_dir: Path) -> IoResult<()> {
let mut writer = get_writer(output_dir, "read_method.rs");
try!(writer.write(b"\
// This automatically generated file is included in request.rs.
pub mod dummy {
use std::io::{Stream, IoResult};
use method::Method; | use server::request::MAX_METHOD_LEN;
use rfc2616::{SP, is_token_item};
use buffer::BufferedStream;
#[inline]
pub fn read_method<S: Stream>(stream: &mut BufferedStream<S>) -> IoResult<Method> {
"));
try!(generate_branchified_method(
&mut *writer,
branchify!(case sensitive,
"CONNECT" => Connect,
"DELETE" => Delete,
"GET" => Get,
"HEAD" => Head,
"OPTIONS" => Options,
"PATCH" => Patch,
"POST" => Post,
"PUT" => Put,
"TRACE" => Trace
),
1,
"stream.read_byte()",
"SP",
"MAX_METHOD_LEN",
"is_token_item(b)",
"ExtensionMethod({})"));
writer.write(b"}\n}\n")
} | use method::Method::{Connect, Delete, Get, Head, Options, Patch, Post, Put, Trace, ExtensionMethod}; | random_line_split |
read_method.rs | use super::branchify::generate_branchified_method;
use super::get_writer;
use std::io::IoResult;
pub fn | (output_dir: Path) -> IoResult<()> {
let mut writer = get_writer(output_dir, "read_method.rs");
try!(writer.write(b"\
// This automatically generated file is included in request.rs.
pub mod dummy {
use std::io::{Stream, IoResult};
use method::Method;
use method::Method::{Connect, Delete, Get, Head, Options, Patch, Post, Put, Trace, ExtensionMethod};
use server::request::MAX_METHOD_LEN;
use rfc2616::{SP, is_token_item};
use buffer::BufferedStream;
#[inline]
pub fn read_method<S: Stream>(stream: &mut BufferedStream<S>) -> IoResult<Method> {
"));
try!(generate_branchified_method(
&mut *writer,
branchify!(case sensitive,
"CONNECT" => Connect,
"DELETE" => Delete,
"GET" => Get,
"HEAD" => Head,
"OPTIONS" => Options,
"PATCH" => Patch,
"POST" => Post,
"PUT" => Put,
"TRACE" => Trace
),
1,
"stream.read_byte()",
"SP",
"MAX_METHOD_LEN",
"is_token_item(b)",
"ExtensionMethod({})"));
writer.write(b"}\n}\n")
}
| generate | identifier_name |
read_method.rs | use super::branchify::generate_branchified_method;
use super::get_writer;
use std::io::IoResult;
pub fn generate(output_dir: Path) -> IoResult<()> | "DELETE" => Delete,
"GET" => Get,
"HEAD" => Head,
"OPTIONS" => Options,
"PATCH" => Patch,
"POST" => Post,
"PUT" => Put,
"TRACE" => Trace
),
1,
"stream.read_byte()",
"SP",
"MAX_METHOD_LEN",
"is_token_item(b)",
"ExtensionMethod({})"));
writer.write(b"}\n}\n")
}
| {
let mut writer = get_writer(output_dir, "read_method.rs");
try!(writer.write(b"\
// This automatically generated file is included in request.rs.
pub mod dummy {
use std::io::{Stream, IoResult};
use method::Method;
use method::Method::{Connect, Delete, Get, Head, Options, Patch, Post, Put, Trace, ExtensionMethod};
use server::request::MAX_METHOD_LEN;
use rfc2616::{SP, is_token_item};
use buffer::BufferedStream;
#[inline]
pub fn read_method<S: Stream>(stream: &mut BufferedStream<S>) -> IoResult<Method> {
"));
try!(generate_branchified_method(
&mut *writer,
branchify!(case sensitive,
"CONNECT" => Connect, | identifier_body |
dispatcher.rs | use std::collections::HashMap;
use std::collections::hash_map::Entry;
use std::io::{self, Cursor};
use std::net::SocketAddr;
use std::thread;
use bip_handshake::{DiscoveryInfo, InitiateMessage, Protocol};
use bip_util::bt::PeerId;
use chrono::{DateTime, Duration};
use chrono::offset::Utc;
use futures::future::Either;
use futures::sink::{Wait, Sink};
use nom::IResult;
use rand;
use umio::{ELoopBuilder, Dispatcher, Provider};
use umio::external::{self, Timeout};
use announce::{AnnounceRequest, SourceIP, DesiredPeers};
use client::{ClientToken, ClientRequest, RequestLimiter, ClientMetadata, ClientResponse};
use client::error::{ClientResult, ClientError};
use option::AnnounceOptions;
use request::{self, TrackerRequest, RequestType};
use response::{TrackerResponse, ResponseType};
use scrape::ScrapeRequest;
const EXPECTED_PACKET_LENGTH: usize = 1500;
const CONNECTION_ID_VALID_DURATION_MILLIS: i64 = 60000;
const MAXIMUM_REQUEST_RETRANSMIT_ATTEMPTS: u64 = 8;
/// Internal dispatch timeout.
enum DispatchTimeout {
Connect(ClientToken),
CleanUp,
}
/// Internal dispatch message for clients.
pub enum DispatchMessage {
Request(SocketAddr, ClientToken, ClientRequest),
StartTimer,
Shutdown,
}
/// Create a new background dispatcher to execute request and send responses back.
///
/// Assumes msg_capacity is less than usize::max_value().
pub fn create_dispatcher<H>(bind: SocketAddr,
handshaker: H,
msg_capacity: usize,
limiter: RequestLimiter)
-> io::Result<external::Sender<DispatchMessage>>
where H: Sink + DiscoveryInfo +'static + Send,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
// Timer capacity is plus one for the cache cleanup timer
let builder = ELoopBuilder::new()
.channel_capacity(msg_capacity)
.timer_capacity(msg_capacity + 1)
.bind_address(bind)
.buffer_length(EXPECTED_PACKET_LENGTH);
let mut eloop = try!(builder.build());
let channel = eloop.channel();
let dispatch = ClientDispatcher::new(handshaker, bind, limiter);
thread::spawn(move || {
eloop.run(dispatch).expect("bip_utracker: ELoop Shutdown Unexpectedly...");
});
channel.send(DispatchMessage::StartTimer)
.expect("bip_utracker: ELoop Failed To Start Connect ID Timer...");
Ok(channel)
}
// ----------------------------------------------------------------------------//
/// Dispatcher that executes requests asynchronously.
struct ClientDispatcher<H> {
handshaker: Wait<H>,
pid: PeerId,
port: u16,
bound_addr: SocketAddr,
active_requests: HashMap<ClientToken, ConnectTimer>,
id_cache: ConnectIdCache,
limiter: RequestLimiter,
}
impl<H> ClientDispatcher<H>
where H: Sink + DiscoveryInfo,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
/// Create a new ClientDispatcher.
pub fn new(handshaker: H, bind: SocketAddr, limiter: RequestLimiter) -> ClientDispatcher<H> {
let peer_id = handshaker.peer_id();
let port = handshaker.port();
ClientDispatcher {
handshaker: handshaker.wait(),
pid: peer_id,
port: port,
bound_addr: bind,
active_requests: HashMap::new(),
id_cache: ConnectIdCache::new(),
limiter: limiter,
}
}
/// Shutdown the current dispatcher, notifying all pending requests.
pub fn shutdown<'a>(&mut self, provider: &mut Provider<'a, ClientDispatcher<H>>) {
// Notify all active requests with the appropriate error
for token_index in 0..self.active_requests.len() {
let next_token = *self.active_requests.keys().skip(token_index).next().unwrap();
self.notify_client(next_token, Err(ClientError::ClientShutdown));
}
// TODO: Clear active timeouts
self.active_requests.clear();
provider.shutdown();
}
/// Finish a request by sending the result back to the client.
pub fn notify_client(&mut self, token: ClientToken, result: ClientResult<ClientResponse>) {
self.handshaker.send(Either::B(ClientMetadata::new(token, result)).into())
.unwrap_or_else(|_| panic!("NEED TO FIX"));
self.limiter.acknowledge();
}
/// Process a request to be sent to the given address and associated with the given token.
pub fn send_request<'a>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
addr: SocketAddr,
token: ClientToken,
request: ClientRequest) {
// Check for IP version mismatch between source addr and dest addr
match (self.bound_addr, addr) {
(SocketAddr::V4(_), SocketAddr::V6(_)) |
(SocketAddr::V6(_), SocketAddr::V4(_)) => {
self.notify_client(token, Err(ClientError::IPVersionMismatch));
return;
}
_ => (),
};
self.active_requests.insert(token, ConnectTimer::new(addr, request));
self.process_request(provider, token, false);
}
/// Process a response received from some tracker and match it up against our sent requests.
pub fn recv_response<'a, 'b>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
addr: SocketAddr,
response: TrackerResponse<'b>) {
let token = ClientToken(response.transaction_id());
let conn_timer = if let Some(conn_timer) = self.active_requests.remove(&token) {
if conn_timer.message_params().0 == addr {
conn_timer
} else {
return;
} // TODO: Add Logging (Server Receive Addr Different Than Send Addr)
} else {
return;
}; // TODO: Add Logging (Server Gave Us Invalid Transaction Id)
provider.clear_timeout(conn_timer.timeout_id()
.expect("bip_utracker: Failed To Clear Request Timeout"));
// Check if the response requires us to update the connection timer
if let &ResponseType::Connect(id) = response.response_type() {
self.id_cache.put(addr, id);
self.active_requests.insert(token, conn_timer);
self.process_request(provider, token, false);
} else {
// Match the request type against the response type and update our client
match (conn_timer.message_params().1, response.response_type()) {
(&ClientRequest::Announce(hash, _), &ResponseType::Announce(ref res)) => {
// Forward contact information on to the handshaker
for addr in res.peers().iter() {
self.handshaker.send(Either::A(InitiateMessage::new(Protocol::BitTorrent, hash, addr)).into())
.unwrap_or_else(|_| panic!("NEED TO FIX"));
}
self.notify_client(token, Ok(ClientResponse::Announce(res.to_owned())));
}
(&ClientRequest::Scrape(..), &ResponseType::Scrape(ref res)) => {
self.notify_client(token, Ok(ClientResponse::Scrape(res.to_owned())));
}
(_, &ResponseType::Error(ref res)) => {
self.notify_client(token, Err(ClientError::ServerMessage(res.to_owned())));
}
_ => {
self.notify_client(token, Err(ClientError::ServerError));
}
}
}
}
/// Process an existing request, either re requesting a connection id or sending the actual request again.
///
/// If this call is the result of a timeout, that will decide whether to cancel the request or not.
fn process_request<'a>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
token: ClientToken,
timed_out: bool) {
let mut conn_timer = if let Some(conn_timer) = self.active_requests.remove(&token) {
conn_timer
} else {
return;
}; // TODO: Add logging
// Resolve the duration of the current timeout to use
let next_timeout = match conn_timer.current_timeout(timed_out) {
Some(timeout) => timeout,
None => {
self.notify_client(token, Err(ClientError::MaxTimeout));
return;
}
};
let addr = conn_timer.message_params().0;
let opt_conn_id = self.id_cache.get(conn_timer.message_params().0);
// Resolve the type of request we need to make
let (conn_id, request_type) = match (opt_conn_id, conn_timer.message_params().1) {
(Some(id), &ClientRequest::Announce(hash, state)) => {
let source_ip = match addr {
SocketAddr::V4(_) => SourceIP::ImpliedV4,
SocketAddr::V6(_) => SourceIP::ImpliedV6,
};
let key = rand::random::<u32>();
(id,
RequestType::Announce(AnnounceRequest::new(hash,
self.pid,
state,
source_ip,
key,
DesiredPeers::Default,
self.port,
AnnounceOptions::new())))
}
(Some(id), &ClientRequest::Scrape(hash)) => {
let mut scrape_request = ScrapeRequest::new();
scrape_request.insert(hash);
(id, RequestType::Scrape(scrape_request))
}
(None, _) => (request::CONNECT_ID_PROTOCOL_ID, RequestType::Connect),
};
let tracker_request = TrackerRequest::new(conn_id, token.0, request_type);
// Try to write the request out to the server
let mut write_success = false;
provider.outgoing(|bytes| {
let mut writer = Cursor::new(bytes);
write_success = tracker_request.write_bytes(&mut writer).is_ok();
if write_success {
Some((writer.position() as usize, addr))
} else {
None
}
});
// If message was not sent (too long to fit) then end the request
if!write_success {
self.notify_client(token, Err(ClientError::MaxLength));
} else {
conn_timer.set_timeout_id(
provider.set_timeout(DispatchTimeout::Connect(token), next_timeout)
.expect("bip_utracker: Failed To Set Timeout For Request"));
self.active_requests.insert(token, conn_timer);
}
}
}
impl<H> Dispatcher for ClientDispatcher<H>
where H: Sink + DiscoveryInfo,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
type Timeout = DispatchTimeout;
type Message = DispatchMessage;
fn incoming<'a>(&mut self,
mut provider: Provider<'a, Self>,
message: &[u8],
addr: SocketAddr) {
let response = match TrackerResponse::from_bytes(message) {
IResult::Done(_, rsp) => rsp,
_ => return, // TODO: Add Logging
};
self.recv_response(&mut provider, addr, response);
}
fn notify<'a>(&mut self, mut provider: Provider<'a, Self>, message: DispatchMessage) {
match message {
DispatchMessage::Request(addr, token, req_type) => {
self.send_request(&mut provider, addr, token, req_type);
}
DispatchMessage::StartTimer => self.timeout(provider, DispatchTimeout::CleanUp),
DispatchMessage::Shutdown => self.shutdown(&mut provider),
}
}
fn timeout<'a>(&mut self, mut provider: Provider<'a, Self>, timeout: DispatchTimeout) {
match timeout {
DispatchTimeout::Connect(token) => self.process_request(&mut provider, token, true),
DispatchTimeout::CleanUp => {
self.id_cache.clean_expired();
provider.set_timeout(DispatchTimeout::CleanUp,
CONNECTION_ID_VALID_DURATION_MILLIS as u64)
.expect("bip_utracker: Failed To Restart Connect Id Cleanup Timer");
}
};
}
}
// ----------------------------------------------------------------------------//
/// Contains logic for making sure a valid connection id is present
/// and correctly timing out when sending requests to the server.
struct | {
addr: SocketAddr,
attempt: u64,
request: ClientRequest,
timeout_id: Option<Timeout>,
}
impl ConnectTimer {
/// Create a new ConnectTimer.
pub fn new(addr: SocketAddr, request: ClientRequest) -> ConnectTimer {
ConnectTimer {
addr: addr,
attempt: 0,
request: request,
timeout_id: None,
}
}
/// Yields the current timeout value to use or None if the request should time out completely.
pub fn current_timeout(&mut self, timed_out: bool) -> Option<u64> {
if self.attempt == MAXIMUM_REQUEST_RETRANSMIT_ATTEMPTS {
None
} else {
if timed_out {
self.attempt += 1;
}
Some(calculate_message_timeout_millis(self.attempt))
}
}
/// Yields the current timeout id if one is set.
pub fn timeout_id(&self) -> Option<Timeout> {
self.timeout_id
}
/// Sets a new timeout id.
pub fn set_timeout_id(&mut self, id: Timeout) {
self.timeout_id = Some(id);
}
/// Yields the message parameters for the current connection.
pub fn message_params(&self) -> (SocketAddr, &ClientRequest) {
(self.addr, &self.request)
}
}
/// Calculates the timeout for the request given the attempt count.
fn calculate_message_timeout_millis(attempt: u64) -> u64 {
(15 * 2u64.pow(attempt as u32)) * 1000
}
// ----------------------------------------------------------------------------//
/// Cache for storing connection ids associated with a specific server address.
struct ConnectIdCache {
cache: HashMap<SocketAddr, (u64, DateTime<Utc>)>,
}
impl ConnectIdCache {
/// Create a new connect id cache.
fn new() -> ConnectIdCache {
ConnectIdCache { cache: HashMap::new() }
}
/// Get an un expired connection id for the given addr.
fn get(&mut self, addr: SocketAddr) -> Option<u64> {
match self.cache.entry(addr) {
Entry::Vacant(_) => None,
Entry::Occupied(occ) => {
let curr_time = Utc::now();
let prev_time = occ.get().1;
if is_expired(curr_time, prev_time) {
occ.remove();
None
} else {
Some(occ.get().0)
}
}
}
}
/// Put an un expired connection id into cache for the given addr.
fn put(&mut self, addr: SocketAddr, connect_id: u64) {
let curr_time = Utc::now();
self.cache.insert(addr, (connect_id, curr_time));
}
/// Removes all entries that have expired.
fn clean_expired(&mut self) {
let curr_time = Utc::now();
let mut curr_index = 0;
let mut opt_curr_entry = self.cache.iter().skip(curr_index).map(|(&k, &v)| (k, v)).next();
while let Some((addr, (_, prev_time))) = opt_curr_entry.take() {
if is_expired(curr_time, prev_time) {
self.cache.remove(&addr);
}
curr_index += 1;
opt_curr_entry = self.cache.iter().skip(curr_index).map(|(&k, &v)| (k, v)).next();
}
}
}
/// Returns true if the connect id received at prev_time is now expired.
fn is_expired(curr_time: DateTime<Utc>, prev_time: DateTime<Utc>) -> bool {
let valid_duration = Duration::milliseconds(CONNECTION_ID_VALID_DURATION_MILLIS);
let difference = prev_time.signed_duration_since(curr_time);
difference >= valid_duration
}
| ConnectTimer | identifier_name |
dispatcher.rs | use std::collections::HashMap;
use std::collections::hash_map::Entry;
use std::io::{self, Cursor};
use std::net::SocketAddr;
use std::thread;
use bip_handshake::{DiscoveryInfo, InitiateMessage, Protocol};
use bip_util::bt::PeerId;
use chrono::{DateTime, Duration};
use chrono::offset::Utc;
use futures::future::Either;
use futures::sink::{Wait, Sink};
use nom::IResult;
use rand;
use umio::{ELoopBuilder, Dispatcher, Provider};
use umio::external::{self, Timeout};
use announce::{AnnounceRequest, SourceIP, DesiredPeers};
use client::{ClientToken, ClientRequest, RequestLimiter, ClientMetadata, ClientResponse};
use client::error::{ClientResult, ClientError};
use option::AnnounceOptions;
use request::{self, TrackerRequest, RequestType};
use response::{TrackerResponse, ResponseType};
use scrape::ScrapeRequest;
const EXPECTED_PACKET_LENGTH: usize = 1500;
const CONNECTION_ID_VALID_DURATION_MILLIS: i64 = 60000;
const MAXIMUM_REQUEST_RETRANSMIT_ATTEMPTS: u64 = 8;
/// Internal dispatch timeout.
enum DispatchTimeout {
Connect(ClientToken),
CleanUp,
}
/// Internal dispatch message for clients.
pub enum DispatchMessage {
Request(SocketAddr, ClientToken, ClientRequest),
StartTimer,
Shutdown,
}
/// Create a new background dispatcher to execute request and send responses back.
///
/// Assumes msg_capacity is less than usize::max_value().
pub fn create_dispatcher<H>(bind: SocketAddr,
handshaker: H,
msg_capacity: usize,
limiter: RequestLimiter)
-> io::Result<external::Sender<DispatchMessage>>
where H: Sink + DiscoveryInfo +'static + Send,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
// Timer capacity is plus one for the cache cleanup timer
let builder = ELoopBuilder::new()
.channel_capacity(msg_capacity)
.timer_capacity(msg_capacity + 1)
.bind_address(bind)
.buffer_length(EXPECTED_PACKET_LENGTH);
let mut eloop = try!(builder.build());
let channel = eloop.channel();
let dispatch = ClientDispatcher::new(handshaker, bind, limiter);
thread::spawn(move || {
eloop.run(dispatch).expect("bip_utracker: ELoop Shutdown Unexpectedly...");
});
channel.send(DispatchMessage::StartTimer)
.expect("bip_utracker: ELoop Failed To Start Connect ID Timer...");
Ok(channel)
}
// ----------------------------------------------------------------------------//
/// Dispatcher that executes requests asynchronously.
struct ClientDispatcher<H> {
handshaker: Wait<H>,
pid: PeerId,
port: u16,
bound_addr: SocketAddr,
active_requests: HashMap<ClientToken, ConnectTimer>,
id_cache: ConnectIdCache,
limiter: RequestLimiter,
}
impl<H> ClientDispatcher<H>
where H: Sink + DiscoveryInfo,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
/// Create a new ClientDispatcher.
pub fn new(handshaker: H, bind: SocketAddr, limiter: RequestLimiter) -> ClientDispatcher<H> {
let peer_id = handshaker.peer_id();
let port = handshaker.port();
ClientDispatcher {
handshaker: handshaker.wait(),
pid: peer_id,
port: port,
bound_addr: bind,
active_requests: HashMap::new(),
id_cache: ConnectIdCache::new(),
limiter: limiter,
}
}
/// Shutdown the current dispatcher, notifying all pending requests.
pub fn shutdown<'a>(&mut self, provider: &mut Provider<'a, ClientDispatcher<H>>) {
// Notify all active requests with the appropriate error
for token_index in 0..self.active_requests.len() {
let next_token = *self.active_requests.keys().skip(token_index).next().unwrap();
self.notify_client(next_token, Err(ClientError::ClientShutdown));
}
// TODO: Clear active timeouts
self.active_requests.clear();
provider.shutdown();
}
/// Finish a request by sending the result back to the client.
pub fn notify_client(&mut self, token: ClientToken, result: ClientResult<ClientResponse>) |
/// Process a request to be sent to the given address and associated with the given token.
pub fn send_request<'a>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
addr: SocketAddr,
token: ClientToken,
request: ClientRequest) {
// Check for IP version mismatch between source addr and dest addr
match (self.bound_addr, addr) {
(SocketAddr::V4(_), SocketAddr::V6(_)) |
(SocketAddr::V6(_), SocketAddr::V4(_)) => {
self.notify_client(token, Err(ClientError::IPVersionMismatch));
return;
}
_ => (),
};
self.active_requests.insert(token, ConnectTimer::new(addr, request));
self.process_request(provider, token, false);
}
/// Process a response received from some tracker and match it up against our sent requests.
pub fn recv_response<'a, 'b>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
addr: SocketAddr,
response: TrackerResponse<'b>) {
let token = ClientToken(response.transaction_id());
let conn_timer = if let Some(conn_timer) = self.active_requests.remove(&token) {
if conn_timer.message_params().0 == addr {
conn_timer
} else {
return;
} // TODO: Add Logging (Server Receive Addr Different Than Send Addr)
} else {
return;
}; // TODO: Add Logging (Server Gave Us Invalid Transaction Id)
provider.clear_timeout(conn_timer.timeout_id()
.expect("bip_utracker: Failed To Clear Request Timeout"));
// Check if the response requires us to update the connection timer
if let &ResponseType::Connect(id) = response.response_type() {
self.id_cache.put(addr, id);
self.active_requests.insert(token, conn_timer);
self.process_request(provider, token, false);
} else {
// Match the request type against the response type and update our client
match (conn_timer.message_params().1, response.response_type()) {
(&ClientRequest::Announce(hash, _), &ResponseType::Announce(ref res)) => {
// Forward contact information on to the handshaker
for addr in res.peers().iter() {
self.handshaker.send(Either::A(InitiateMessage::new(Protocol::BitTorrent, hash, addr)).into())
.unwrap_or_else(|_| panic!("NEED TO FIX"));
}
self.notify_client(token, Ok(ClientResponse::Announce(res.to_owned())));
}
(&ClientRequest::Scrape(..), &ResponseType::Scrape(ref res)) => {
self.notify_client(token, Ok(ClientResponse::Scrape(res.to_owned())));
}
(_, &ResponseType::Error(ref res)) => {
self.notify_client(token, Err(ClientError::ServerMessage(res.to_owned())));
}
_ => {
self.notify_client(token, Err(ClientError::ServerError));
}
}
}
}
/// Process an existing request, either re requesting a connection id or sending the actual request again.
///
/// If this call is the result of a timeout, that will decide whether to cancel the request or not.
fn process_request<'a>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
token: ClientToken,
timed_out: bool) {
let mut conn_timer = if let Some(conn_timer) = self.active_requests.remove(&token) {
conn_timer
} else {
return;
}; // TODO: Add logging
// Resolve the duration of the current timeout to use
let next_timeout = match conn_timer.current_timeout(timed_out) {
Some(timeout) => timeout,
None => {
self.notify_client(token, Err(ClientError::MaxTimeout));
return;
}
};
let addr = conn_timer.message_params().0;
let opt_conn_id = self.id_cache.get(conn_timer.message_params().0);
// Resolve the type of request we need to make
let (conn_id, request_type) = match (opt_conn_id, conn_timer.message_params().1) {
(Some(id), &ClientRequest::Announce(hash, state)) => {
let source_ip = match addr {
SocketAddr::V4(_) => SourceIP::ImpliedV4,
SocketAddr::V6(_) => SourceIP::ImpliedV6,
};
let key = rand::random::<u32>();
(id,
RequestType::Announce(AnnounceRequest::new(hash,
self.pid,
state,
source_ip,
key,
DesiredPeers::Default,
self.port,
AnnounceOptions::new())))
}
(Some(id), &ClientRequest::Scrape(hash)) => {
let mut scrape_request = ScrapeRequest::new();
scrape_request.insert(hash);
(id, RequestType::Scrape(scrape_request))
}
(None, _) => (request::CONNECT_ID_PROTOCOL_ID, RequestType::Connect),
};
let tracker_request = TrackerRequest::new(conn_id, token.0, request_type);
// Try to write the request out to the server
let mut write_success = false;
provider.outgoing(|bytes| {
let mut writer = Cursor::new(bytes);
write_success = tracker_request.write_bytes(&mut writer).is_ok();
if write_success {
Some((writer.position() as usize, addr))
} else {
None
}
});
// If message was not sent (too long to fit) then end the request
if!write_success {
self.notify_client(token, Err(ClientError::MaxLength));
} else {
conn_timer.set_timeout_id(
provider.set_timeout(DispatchTimeout::Connect(token), next_timeout)
.expect("bip_utracker: Failed To Set Timeout For Request"));
self.active_requests.insert(token, conn_timer);
}
}
}
impl<H> Dispatcher for ClientDispatcher<H>
where H: Sink + DiscoveryInfo,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
type Timeout = DispatchTimeout;
type Message = DispatchMessage;
fn incoming<'a>(&mut self,
mut provider: Provider<'a, Self>,
message: &[u8],
addr: SocketAddr) {
let response = match TrackerResponse::from_bytes(message) {
IResult::Done(_, rsp) => rsp,
_ => return, // TODO: Add Logging
};
self.recv_response(&mut provider, addr, response);
}
fn notify<'a>(&mut self, mut provider: Provider<'a, Self>, message: DispatchMessage) {
match message {
DispatchMessage::Request(addr, token, req_type) => {
self.send_request(&mut provider, addr, token, req_type);
}
DispatchMessage::StartTimer => self.timeout(provider, DispatchTimeout::CleanUp),
DispatchMessage::Shutdown => self.shutdown(&mut provider),
}
}
fn timeout<'a>(&mut self, mut provider: Provider<'a, Self>, timeout: DispatchTimeout) {
match timeout {
DispatchTimeout::Connect(token) => self.process_request(&mut provider, token, true),
DispatchTimeout::CleanUp => {
self.id_cache.clean_expired();
provider.set_timeout(DispatchTimeout::CleanUp,
CONNECTION_ID_VALID_DURATION_MILLIS as u64)
.expect("bip_utracker: Failed To Restart Connect Id Cleanup Timer");
}
};
}
}
// ----------------------------------------------------------------------------//
/// Contains logic for making sure a valid connection id is present
/// and correctly timing out when sending requests to the server.
struct ConnectTimer {
addr: SocketAddr,
attempt: u64,
request: ClientRequest,
timeout_id: Option<Timeout>,
}
impl ConnectTimer {
/// Create a new ConnectTimer.
pub fn new(addr: SocketAddr, request: ClientRequest) -> ConnectTimer {
ConnectTimer {
addr: addr,
attempt: 0,
request: request,
timeout_id: None,
}
}
/// Yields the current timeout value to use or None if the request should time out completely.
pub fn current_timeout(&mut self, timed_out: bool) -> Option<u64> {
if self.attempt == MAXIMUM_REQUEST_RETRANSMIT_ATTEMPTS {
None
} else {
if timed_out {
self.attempt += 1;
}
Some(calculate_message_timeout_millis(self.attempt))
}
}
/// Yields the current timeout id if one is set.
pub fn timeout_id(&self) -> Option<Timeout> {
self.timeout_id
}
/// Sets a new timeout id.
pub fn set_timeout_id(&mut self, id: Timeout) {
self.timeout_id = Some(id);
}
/// Yields the message parameters for the current connection.
pub fn message_params(&self) -> (SocketAddr, &ClientRequest) {
(self.addr, &self.request)
}
}
/// Calculates the timeout for the request given the attempt count.
fn calculate_message_timeout_millis(attempt: u64) -> u64 {
(15 * 2u64.pow(attempt as u32)) * 1000
}
// ----------------------------------------------------------------------------//
/// Cache for storing connection ids associated with a specific server address.
struct ConnectIdCache {
cache: HashMap<SocketAddr, (u64, DateTime<Utc>)>,
}
impl ConnectIdCache {
/// Create a new connect id cache.
fn new() -> ConnectIdCache {
ConnectIdCache { cache: HashMap::new() }
}
/// Get an un expired connection id for the given addr.
fn get(&mut self, addr: SocketAddr) -> Option<u64> {
match self.cache.entry(addr) {
Entry::Vacant(_) => None,
Entry::Occupied(occ) => {
let curr_time = Utc::now();
let prev_time = occ.get().1;
if is_expired(curr_time, prev_time) {
occ.remove();
None
} else {
Some(occ.get().0)
}
}
}
}
/// Put an un expired connection id into cache for the given addr.
fn put(&mut self, addr: SocketAddr, connect_id: u64) {
let curr_time = Utc::now();
self.cache.insert(addr, (connect_id, curr_time));
}
/// Removes all entries that have expired.
fn clean_expired(&mut self) {
let curr_time = Utc::now();
let mut curr_index = 0;
let mut opt_curr_entry = self.cache.iter().skip(curr_index).map(|(&k, &v)| (k, v)).next();
while let Some((addr, (_, prev_time))) = opt_curr_entry.take() {
if is_expired(curr_time, prev_time) {
self.cache.remove(&addr);
}
curr_index += 1;
opt_curr_entry = self.cache.iter().skip(curr_index).map(|(&k, &v)| (k, v)).next();
}
}
}
/// Returns true if the connect id received at prev_time is now expired.
fn is_expired(curr_time: DateTime<Utc>, prev_time: DateTime<Utc>) -> bool {
let valid_duration = Duration::milliseconds(CONNECTION_ID_VALID_DURATION_MILLIS);
let difference = prev_time.signed_duration_since(curr_time);
difference >= valid_duration
}
| {
self.handshaker.send(Either::B(ClientMetadata::new(token, result)).into())
.unwrap_or_else(|_| panic!("NEED TO FIX"));
self.limiter.acknowledge();
} | identifier_body |
dispatcher.rs | use std::collections::HashMap;
use std::collections::hash_map::Entry;
use std::io::{self, Cursor};
use std::net::SocketAddr;
use std::thread;
use bip_handshake::{DiscoveryInfo, InitiateMessage, Protocol};
use bip_util::bt::PeerId;
use chrono::{DateTime, Duration};
use chrono::offset::Utc;
use futures::future::Either;
use futures::sink::{Wait, Sink};
use nom::IResult;
use rand;
use umio::{ELoopBuilder, Dispatcher, Provider};
use umio::external::{self, Timeout};
use announce::{AnnounceRequest, SourceIP, DesiredPeers};
use client::{ClientToken, ClientRequest, RequestLimiter, ClientMetadata, ClientResponse};
use client::error::{ClientResult, ClientError};
use option::AnnounceOptions;
use request::{self, TrackerRequest, RequestType};
use response::{TrackerResponse, ResponseType};
use scrape::ScrapeRequest;
const EXPECTED_PACKET_LENGTH: usize = 1500;
const CONNECTION_ID_VALID_DURATION_MILLIS: i64 = 60000;
const MAXIMUM_REQUEST_RETRANSMIT_ATTEMPTS: u64 = 8;
/// Internal dispatch timeout.
enum DispatchTimeout {
Connect(ClientToken),
CleanUp,
}
/// Internal dispatch message for clients.
pub enum DispatchMessage {
Request(SocketAddr, ClientToken, ClientRequest),
StartTimer,
Shutdown,
}
/// Create a new background dispatcher to execute request and send responses back.
///
/// Assumes msg_capacity is less than usize::max_value().
pub fn create_dispatcher<H>(bind: SocketAddr,
handshaker: H,
msg_capacity: usize,
limiter: RequestLimiter)
-> io::Result<external::Sender<DispatchMessage>>
where H: Sink + DiscoveryInfo +'static + Send,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
// Timer capacity is plus one for the cache cleanup timer
let builder = ELoopBuilder::new()
.channel_capacity(msg_capacity)
.timer_capacity(msg_capacity + 1)
.bind_address(bind)
.buffer_length(EXPECTED_PACKET_LENGTH);
let mut eloop = try!(builder.build());
let channel = eloop.channel();
let dispatch = ClientDispatcher::new(handshaker, bind, limiter);
thread::spawn(move || {
eloop.run(dispatch).expect("bip_utracker: ELoop Shutdown Unexpectedly...");
});
channel.send(DispatchMessage::StartTimer)
.expect("bip_utracker: ELoop Failed To Start Connect ID Timer...");
Ok(channel)
}
// ----------------------------------------------------------------------------//
/// Dispatcher that executes requests asynchronously.
struct ClientDispatcher<H> {
handshaker: Wait<H>,
pid: PeerId,
port: u16,
bound_addr: SocketAddr,
active_requests: HashMap<ClientToken, ConnectTimer>,
id_cache: ConnectIdCache,
limiter: RequestLimiter,
}
impl<H> ClientDispatcher<H>
where H: Sink + DiscoveryInfo,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
/// Create a new ClientDispatcher.
pub fn new(handshaker: H, bind: SocketAddr, limiter: RequestLimiter) -> ClientDispatcher<H> {
let peer_id = handshaker.peer_id();
let port = handshaker.port();
ClientDispatcher {
handshaker: handshaker.wait(),
pid: peer_id,
port: port,
bound_addr: bind,
active_requests: HashMap::new(),
id_cache: ConnectIdCache::new(),
limiter: limiter,
}
}
/// Shutdown the current dispatcher, notifying all pending requests.
pub fn shutdown<'a>(&mut self, provider: &mut Provider<'a, ClientDispatcher<H>>) {
// Notify all active requests with the appropriate error
for token_index in 0..self.active_requests.len() {
let next_token = *self.active_requests.keys().skip(token_index).next().unwrap();
self.notify_client(next_token, Err(ClientError::ClientShutdown));
}
// TODO: Clear active timeouts
self.active_requests.clear();
provider.shutdown();
}
/// Finish a request by sending the result back to the client.
pub fn notify_client(&mut self, token: ClientToken, result: ClientResult<ClientResponse>) {
self.handshaker.send(Either::B(ClientMetadata::new(token, result)).into())
.unwrap_or_else(|_| panic!("NEED TO FIX"));
self.limiter.acknowledge();
}
/// Process a request to be sent to the given address and associated with the given token.
pub fn send_request<'a>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
addr: SocketAddr,
token: ClientToken,
request: ClientRequest) {
// Check for IP version mismatch between source addr and dest addr
match (self.bound_addr, addr) {
(SocketAddr::V4(_), SocketAddr::V6(_)) |
(SocketAddr::V6(_), SocketAddr::V4(_)) => {
self.notify_client(token, Err(ClientError::IPVersionMismatch));
return;
}
_ => (),
};
self.active_requests.insert(token, ConnectTimer::new(addr, request));
self.process_request(provider, token, false);
}
/// Process a response received from some tracker and match it up against our sent requests.
pub fn recv_response<'a, 'b>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
addr: SocketAddr,
response: TrackerResponse<'b>) {
let token = ClientToken(response.transaction_id());
let conn_timer = if let Some(conn_timer) = self.active_requests.remove(&token) {
if conn_timer.message_params().0 == addr {
conn_timer
} else {
return;
} // TODO: Add Logging (Server Receive Addr Different Than Send Addr)
} else {
return;
}; // TODO: Add Logging (Server Gave Us Invalid Transaction Id)
provider.clear_timeout(conn_timer.timeout_id()
.expect("bip_utracker: Failed To Clear Request Timeout"));
// Check if the response requires us to update the connection timer
if let &ResponseType::Connect(id) = response.response_type() {
self.id_cache.put(addr, id);
self.active_requests.insert(token, conn_timer);
self.process_request(provider, token, false);
} else {
// Match the request type against the response type and update our client
match (conn_timer.message_params().1, response.response_type()) {
(&ClientRequest::Announce(hash, _), &ResponseType::Announce(ref res)) => {
// Forward contact information on to the handshaker
for addr in res.peers().iter() {
self.handshaker.send(Either::A(InitiateMessage::new(Protocol::BitTorrent, hash, addr)).into())
.unwrap_or_else(|_| panic!("NEED TO FIX"));
}
self.notify_client(token, Ok(ClientResponse::Announce(res.to_owned())));
}
(&ClientRequest::Scrape(..), &ResponseType::Scrape(ref res)) => {
self.notify_client(token, Ok(ClientResponse::Scrape(res.to_owned())));
}
(_, &ResponseType::Error(ref res)) => {
self.notify_client(token, Err(ClientError::ServerMessage(res.to_owned())));
}
_ => {
self.notify_client(token, Err(ClientError::ServerError));
}
}
}
}
/// Process an existing request, either re requesting a connection id or sending the actual request again.
///
/// If this call is the result of a timeout, that will decide whether to cancel the request or not.
fn process_request<'a>(&mut self,
provider: &mut Provider<'a, ClientDispatcher<H>>,
token: ClientToken,
timed_out: bool) {
let mut conn_timer = if let Some(conn_timer) = self.active_requests.remove(&token) {
conn_timer
} else {
return;
}; // TODO: Add logging
// Resolve the duration of the current timeout to use
let next_timeout = match conn_timer.current_timeout(timed_out) {
Some(timeout) => timeout,
None => {
self.notify_client(token, Err(ClientError::MaxTimeout));
return;
}
};
let addr = conn_timer.message_params().0;
let opt_conn_id = self.id_cache.get(conn_timer.message_params().0);
// Resolve the type of request we need to make
let (conn_id, request_type) = match (opt_conn_id, conn_timer.message_params().1) {
(Some(id), &ClientRequest::Announce(hash, state)) => {
let source_ip = match addr {
SocketAddr::V4(_) => SourceIP::ImpliedV4,
SocketAddr::V6(_) => SourceIP::ImpliedV6,
};
let key = rand::random::<u32>();
(id,
RequestType::Announce(AnnounceRequest::new(hash,
self.pid,
state,
source_ip,
key,
DesiredPeers::Default,
self.port,
AnnounceOptions::new())))
}
(Some(id), &ClientRequest::Scrape(hash)) => {
let mut scrape_request = ScrapeRequest::new();
scrape_request.insert(hash);
(id, RequestType::Scrape(scrape_request))
}
(None, _) => (request::CONNECT_ID_PROTOCOL_ID, RequestType::Connect),
};
let tracker_request = TrackerRequest::new(conn_id, token.0, request_type);
// Try to write the request out to the server
let mut write_success = false;
provider.outgoing(|bytes| {
let mut writer = Cursor::new(bytes);
write_success = tracker_request.write_bytes(&mut writer).is_ok();
if write_success {
Some((writer.position() as usize, addr))
} else {
None
}
});
// If message was not sent (too long to fit) then end the request
if!write_success {
self.notify_client(token, Err(ClientError::MaxLength));
} else {
conn_timer.set_timeout_id(
provider.set_timeout(DispatchTimeout::Connect(token), next_timeout)
.expect("bip_utracker: Failed To Set Timeout For Request"));
self.active_requests.insert(token, conn_timer);
}
}
}
impl<H> Dispatcher for ClientDispatcher<H>
where H: Sink + DiscoveryInfo,
H::SinkItem: From<Either<InitiateMessage, ClientMetadata>>
{
type Timeout = DispatchTimeout;
type Message = DispatchMessage;
fn incoming<'a>(&mut self,
mut provider: Provider<'a, Self>,
message: &[u8],
addr: SocketAddr) {
let response = match TrackerResponse::from_bytes(message) {
IResult::Done(_, rsp) => rsp,
_ => return, // TODO: Add Logging
};
self.recv_response(&mut provider, addr, response);
}
fn notify<'a>(&mut self, mut provider: Provider<'a, Self>, message: DispatchMessage) {
match message {
DispatchMessage::Request(addr, token, req_type) => {
self.send_request(&mut provider, addr, token, req_type);
}
DispatchMessage::StartTimer => self.timeout(provider, DispatchTimeout::CleanUp),
DispatchMessage::Shutdown => self.shutdown(&mut provider),
}
}
fn timeout<'a>(&mut self, mut provider: Provider<'a, Self>, timeout: DispatchTimeout) {
match timeout {
DispatchTimeout::Connect(token) => self.process_request(&mut provider, token, true),
DispatchTimeout::CleanUp => {
self.id_cache.clean_expired();
provider.set_timeout(DispatchTimeout::CleanUp,
CONNECTION_ID_VALID_DURATION_MILLIS as u64)
.expect("bip_utracker: Failed To Restart Connect Id Cleanup Timer");
}
};
}
}
// ----------------------------------------------------------------------------//
/// Contains logic for making sure a valid connection id is present
/// and correctly timing out when sending requests to the server.
struct ConnectTimer {
addr: SocketAddr,
attempt: u64,
request: ClientRequest,
timeout_id: Option<Timeout>,
}
impl ConnectTimer {
/// Create a new ConnectTimer.
pub fn new(addr: SocketAddr, request: ClientRequest) -> ConnectTimer {
ConnectTimer {
addr: addr,
attempt: 0,
request: request,
timeout_id: None,
}
}
/// Yields the current timeout value to use or None if the request should time out completely.
pub fn current_timeout(&mut self, timed_out: bool) -> Option<u64> {
if self.attempt == MAXIMUM_REQUEST_RETRANSMIT_ATTEMPTS {
None |
Some(calculate_message_timeout_millis(self.attempt))
}
}
/// Yields the current timeout id if one is set.
pub fn timeout_id(&self) -> Option<Timeout> {
self.timeout_id
}
/// Sets a new timeout id.
pub fn set_timeout_id(&mut self, id: Timeout) {
self.timeout_id = Some(id);
}
/// Yields the message parameters for the current connection.
pub fn message_params(&self) -> (SocketAddr, &ClientRequest) {
(self.addr, &self.request)
}
}
/// Calculates the timeout for the request given the attempt count.
fn calculate_message_timeout_millis(attempt: u64) -> u64 {
(15 * 2u64.pow(attempt as u32)) * 1000
}
// ----------------------------------------------------------------------------//
/// Cache for storing connection ids associated with a specific server address.
struct ConnectIdCache {
cache: HashMap<SocketAddr, (u64, DateTime<Utc>)>,
}
impl ConnectIdCache {
/// Create a new connect id cache.
fn new() -> ConnectIdCache {
ConnectIdCache { cache: HashMap::new() }
}
/// Get an un expired connection id for the given addr.
fn get(&mut self, addr: SocketAddr) -> Option<u64> {
match self.cache.entry(addr) {
Entry::Vacant(_) => None,
Entry::Occupied(occ) => {
let curr_time = Utc::now();
let prev_time = occ.get().1;
if is_expired(curr_time, prev_time) {
occ.remove();
None
} else {
Some(occ.get().0)
}
}
}
}
/// Put an un expired connection id into cache for the given addr.
fn put(&mut self, addr: SocketAddr, connect_id: u64) {
let curr_time = Utc::now();
self.cache.insert(addr, (connect_id, curr_time));
}
/// Removes all entries that have expired.
fn clean_expired(&mut self) {
let curr_time = Utc::now();
let mut curr_index = 0;
let mut opt_curr_entry = self.cache.iter().skip(curr_index).map(|(&k, &v)| (k, v)).next();
while let Some((addr, (_, prev_time))) = opt_curr_entry.take() {
if is_expired(curr_time, prev_time) {
self.cache.remove(&addr);
}
curr_index += 1;
opt_curr_entry = self.cache.iter().skip(curr_index).map(|(&k, &v)| (k, v)).next();
}
}
}
/// Returns true if the connect id received at prev_time is now expired.
fn is_expired(curr_time: DateTime<Utc>, prev_time: DateTime<Utc>) -> bool {
let valid_duration = Duration::milliseconds(CONNECTION_ID_VALID_DURATION_MILLIS);
let difference = prev_time.signed_duration_since(curr_time);
difference >= valid_duration
} | } else {
if timed_out {
self.attempt += 1;
} | random_line_split |
os.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Implementation of `std::os` functionality for Windows
#![allow(bad_style)]
use prelude::v1::*;
use os::windows::*;
use error::Error as StdError;
use ffi::{OsString, OsStr, AsOsStr};
use fmt;
use iter::Range;
use libc::types::os::arch::extra::LPWCH;
use libc::{self, c_int, c_void};
use mem;
use old_io::{IoError, IoResult};
use ptr;
use slice;
use sys::c;
use sys::fs::FileDesc;
use sys::handle::Handle as RawHandle;
use libc::funcs::extra::kernel32::{
GetEnvironmentStringsW,
FreeEnvironmentStringsW
};
pub fn errno() -> i32 {
unsafe { libc::GetLastError() as i32 }
}
/// Get a detailed string description for the given error number
pub fn error_string(errnum: i32) -> String {
use libc::types::os::arch::extra::DWORD;
use libc::types::os::arch::extra::LPWSTR;
use libc::types::os::arch::extra::LPVOID;
use libc::types::os::arch::extra::WCHAR;
#[link_name = "kernel32"]
extern "system" {
fn FormatMessageW(flags: DWORD,
lpSrc: LPVOID,
msgId: DWORD,
langId: DWORD,
buf: LPWSTR,
nsize: DWORD,
args: *const c_void)
-> DWORD;
}
static FORMAT_MESSAGE_FROM_SYSTEM: DWORD = 0x00001000;
static FORMAT_MESSAGE_IGNORE_INSERTS: DWORD = 0x00000200;
// This value is calculated from the macro
// MAKELANGID(LANG_SYSTEM_DEFAULT, SUBLANG_SYS_DEFAULT)
let langId = 0x0800 as DWORD;
let mut buf = [0 as WCHAR; 2048];
unsafe {
let res = FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM |
FORMAT_MESSAGE_IGNORE_INSERTS,
ptr::null_mut(),
errnum as DWORD,
langId,
buf.as_mut_ptr(),
buf.len() as DWORD,
ptr::null());
if res == 0 {
// Sometimes FormatMessageW can fail e.g. system doesn't like langId,
let fm_err = errno();
return format!("OS Error {} (FormatMessageW() returned error {})",
errnum, fm_err);
}
let b = buf.iter().position(|&b| b == 0).unwrap_or(buf.len());
let msg = String::from_utf16(&buf[..b]);
match msg {
Ok(msg) => msg,
Err(..) => format!("OS Error {} (FormatMessageW() returned \
invalid UTF-16)", errnum),
}
}
}
pub struct Env {
base: LPWCH,
cur: LPWCH,
}
impl Iterator for Env {
type Item = (OsString, OsString);
fn next(&mut self) -> Option<(OsString, OsString)> {
unsafe {
if *self.cur == 0 { return None }
let p = &*self.cur;
let mut len = 0;
while *(p as *const _).offset(len)!= 0 {
len += 1;
}
let p = p as *const u16;
let s = slice::from_raw_parts(p, len as usize);
self.cur = self.cur.offset(len + 1);
let (k, v) = match s.iter().position(|&b| b == '=' as u16) {
Some(n) => (&s[..n], &s[n+1..]),
None => (s, &[][]),
};
Some((OsStringExt::from_wide(k), OsStringExt::from_wide(v)))
} | }
impl Drop for Env {
fn drop(&mut self) {
unsafe { FreeEnvironmentStringsW(self.base); }
}
}
pub fn env() -> Env {
unsafe {
let ch = GetEnvironmentStringsW();
if ch as usize == 0 {
panic!("failure getting env string from OS: {}",
IoError::last_error());
}
Env { base: ch, cur: ch }
}
}
pub struct SplitPaths<'a> {
data: EncodeWide<'a>,
must_yield: bool,
}
pub fn split_paths(unparsed: &OsStr) -> SplitPaths {
SplitPaths {
data: unparsed.encode_wide(),
must_yield: true,
}
}
impl<'a> Iterator for SplitPaths<'a> {
type Item = Path;
fn next(&mut self) -> Option<Path> {
// On Windows, the PATH environment variable is semicolon separated.
// Double quotes are used as a way of introducing literal semicolons
// (since c:\some;dir is a valid Windows path). Double quotes are not
// themselves permitted in path names, so there is no way to escape a
// double quote. Quoted regions can appear in arbitrary locations, so
//
// c:\foo;c:\som"e;di"r;c:\bar
//
// Should parse as [c:\foo, c:\some;dir, c:\bar].
//
// (The above is based on testing; there is no clear reference available
// for the grammar.)
let must_yield = self.must_yield;
self.must_yield = false;
let mut in_progress = Vec::new();
let mut in_quote = false;
for b in self.data.by_ref() {
if b == '"' as u16 {
in_quote =!in_quote;
} else if b == ';' as u16 &&!in_quote {
self.must_yield = true;
break
} else {
in_progress.push(b)
}
}
if!must_yield && in_progress.is_empty() {
None
} else {
Some(super::os2path(&in_progress[]))
}
}
}
#[derive(Debug)]
pub struct JoinPathsError;
pub fn join_paths<I, T>(paths: I) -> Result<OsString, JoinPathsError>
where I: Iterator<Item=T>, T: AsOsStr
{
let mut joined = Vec::new();
let sep = b';' as u16;
for (i, path) in paths.enumerate() {
let path = path.as_os_str();
if i > 0 { joined.push(sep) }
let v = path.encode_wide().collect::<Vec<u16>>();
if v.contains(&(b'"' as u16)) {
return Err(JoinPathsError)
} else if v.contains(&sep) {
joined.push(b'"' as u16);
joined.push_all(&v[]);
joined.push(b'"' as u16);
} else {
joined.push_all(&v[]);
}
}
Ok(OsStringExt::from_wide(&joined[]))
}
impl fmt::Display for JoinPathsError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
"path segment contains `\"`".fmt(f)
}
}
impl StdError for JoinPathsError {
fn description(&self) -> &str { "failed to join paths" }
}
pub fn current_exe() -> IoResult<Path> {
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetModuleFileNameW(ptr::null_mut(), buf, sz)
}, super::os2path)
}
pub fn getcwd() -> IoResult<Path> {
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetCurrentDirectoryW(sz, buf)
}, super::os2path)
}
pub fn chdir(p: &Path) -> IoResult<()> {
let mut p = p.as_os_str().encode_wide().collect::<Vec<_>>();
p.push(0);
unsafe {
match libc::SetCurrentDirectoryW(p.as_ptr())!= (0 as libc::BOOL) {
true => Ok(()),
false => Err(IoError::last_error()),
}
}
}
pub fn getenv(k: &OsStr) -> Option<OsString> {
let k = super::to_utf16_os(k);
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetEnvironmentVariableW(k.as_ptr(), buf, sz)
}, |buf| {
OsStringExt::from_wide(buf)
}).ok()
}
pub fn setenv(k: &OsStr, v: &OsStr) {
let k = super::to_utf16_os(k);
let v = super::to_utf16_os(v);
unsafe {
if libc::SetEnvironmentVariableW(k.as_ptr(), v.as_ptr()) == 0 {
panic!("failed to set env: {}", IoError::last_error());
}
}
}
pub fn unsetenv(n: &OsStr) {
let v = super::to_utf16_os(n);
unsafe {
if libc::SetEnvironmentVariableW(v.as_ptr(), ptr::null()) == 0 {
panic!("failed to unset env: {}", IoError::last_error());
}
}
}
pub struct Args {
range: Range<isize>,
cur: *mut *mut u16,
}
impl Iterator for Args {
type Item = OsString;
fn next(&mut self) -> Option<OsString> {
self.range.next().map(|i| unsafe {
let ptr = *self.cur.offset(i);
let mut len = 0;
while *ptr.offset(len)!= 0 { len += 1; }
// Push it onto the list.
let ptr = ptr as *const u16;
let buf = slice::from_raw_parts(ptr, len as usize);
OsStringExt::from_wide(buf)
})
}
fn size_hint(&self) -> (usize, Option<usize>) { self.range.size_hint() }
}
impl Drop for Args {
fn drop(&mut self) {
unsafe { c::LocalFree(self.cur as *mut c_void); }
}
}
pub fn args() -> Args {
unsafe {
let mut nArgs: c_int = 0;
let lpCmdLine = c::GetCommandLineW();
let szArgList = c::CommandLineToArgvW(lpCmdLine, &mut nArgs);
Args { cur: szArgList, range: range(0, nArgs as isize) }
}
}
pub fn page_size() -> usize {
unsafe {
let mut info = mem::zeroed();
libc::GetSystemInfo(&mut info);
return info.dwPageSize as usize;
}
}
pub unsafe fn pipe() -> IoResult<(FileDesc, FileDesc)> {
// Windows pipes work subtly differently than unix pipes, and their
// inheritance has to be handled in a different way that I do not
// fully understand. Here we explicitly make the pipe non-inheritable,
// which means to pass it to a subprocess they need to be duplicated
// first, as in std::run.
let mut fds = [0; 2];
match libc::pipe(fds.as_mut_ptr(), 1024 as ::libc::c_uint,
(libc::O_BINARY | libc::O_NOINHERIT) as c_int) {
0 => {
assert!(fds[0]!= -1 && fds[0]!= 0);
assert!(fds[1]!= -1 && fds[1]!= 0);
Ok((FileDesc::new(fds[0], true), FileDesc::new(fds[1], true)))
}
_ => Err(IoError::last_error()),
}
}
pub fn temp_dir() -> Path {
super::fill_utf16_buf(|buf, sz| unsafe {
c::GetTempPathW(sz, buf)
}, super::os2path).unwrap()
}
pub fn home_dir() -> Option<Path> {
getenv("HOME".as_os_str()).or_else(|| {
getenv("USERPROFILE".as_os_str())
}).map(|os| {
// FIXME: OsString => Path
Path::new(os.to_str().unwrap())
}).or_else(|| unsafe {
let me = c::GetCurrentProcess();
let mut token = ptr::null_mut();
if c::OpenProcessToken(me, c::TOKEN_READ, &mut token) == 0 {
return None
}
let _handle = RawHandle::new(token);
super::fill_utf16_buf(|buf, mut sz| {
match c::GetUserProfileDirectoryW(token, buf, &mut sz) {
0 if libc::GetLastError()!= 0 => 0,
0 => sz,
n => n as libc::DWORD,
}
}, super::os2path).ok()
})
} | } | random_line_split |
os.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Implementation of `std::os` functionality for Windows
#![allow(bad_style)]
use prelude::v1::*;
use os::windows::*;
use error::Error as StdError;
use ffi::{OsString, OsStr, AsOsStr};
use fmt;
use iter::Range;
use libc::types::os::arch::extra::LPWCH;
use libc::{self, c_int, c_void};
use mem;
use old_io::{IoError, IoResult};
use ptr;
use slice;
use sys::c;
use sys::fs::FileDesc;
use sys::handle::Handle as RawHandle;
use libc::funcs::extra::kernel32::{
GetEnvironmentStringsW,
FreeEnvironmentStringsW
};
pub fn errno() -> i32 {
unsafe { libc::GetLastError() as i32 }
}
/// Get a detailed string description for the given error number
pub fn error_string(errnum: i32) -> String {
use libc::types::os::arch::extra::DWORD;
use libc::types::os::arch::extra::LPWSTR;
use libc::types::os::arch::extra::LPVOID;
use libc::types::os::arch::extra::WCHAR;
#[link_name = "kernel32"]
extern "system" {
fn FormatMessageW(flags: DWORD,
lpSrc: LPVOID,
msgId: DWORD,
langId: DWORD,
buf: LPWSTR,
nsize: DWORD,
args: *const c_void)
-> DWORD;
}
static FORMAT_MESSAGE_FROM_SYSTEM: DWORD = 0x00001000;
static FORMAT_MESSAGE_IGNORE_INSERTS: DWORD = 0x00000200;
// This value is calculated from the macro
// MAKELANGID(LANG_SYSTEM_DEFAULT, SUBLANG_SYS_DEFAULT)
let langId = 0x0800 as DWORD;
let mut buf = [0 as WCHAR; 2048];
unsafe {
let res = FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM |
FORMAT_MESSAGE_IGNORE_INSERTS,
ptr::null_mut(),
errnum as DWORD,
langId,
buf.as_mut_ptr(),
buf.len() as DWORD,
ptr::null());
if res == 0 {
// Sometimes FormatMessageW can fail e.g. system doesn't like langId,
let fm_err = errno();
return format!("OS Error {} (FormatMessageW() returned error {})",
errnum, fm_err);
}
let b = buf.iter().position(|&b| b == 0).unwrap_or(buf.len());
let msg = String::from_utf16(&buf[..b]);
match msg {
Ok(msg) => msg,
Err(..) => format!("OS Error {} (FormatMessageW() returned \
invalid UTF-16)", errnum),
}
}
}
pub struct Env {
base: LPWCH,
cur: LPWCH,
}
impl Iterator for Env {
type Item = (OsString, OsString);
fn next(&mut self) -> Option<(OsString, OsString)> {
unsafe {
if *self.cur == 0 { return None }
let p = &*self.cur;
let mut len = 0;
while *(p as *const _).offset(len)!= 0 {
len += 1;
}
let p = p as *const u16;
let s = slice::from_raw_parts(p, len as usize);
self.cur = self.cur.offset(len + 1);
let (k, v) = match s.iter().position(|&b| b == '=' as u16) {
Some(n) => (&s[..n], &s[n+1..]),
None => (s, &[][]),
};
Some((OsStringExt::from_wide(k), OsStringExt::from_wide(v)))
}
}
}
impl Drop for Env {
fn | (&mut self) {
unsafe { FreeEnvironmentStringsW(self.base); }
}
}
pub fn env() -> Env {
unsafe {
let ch = GetEnvironmentStringsW();
if ch as usize == 0 {
panic!("failure getting env string from OS: {}",
IoError::last_error());
}
Env { base: ch, cur: ch }
}
}
pub struct SplitPaths<'a> {
data: EncodeWide<'a>,
must_yield: bool,
}
pub fn split_paths(unparsed: &OsStr) -> SplitPaths {
SplitPaths {
data: unparsed.encode_wide(),
must_yield: true,
}
}
impl<'a> Iterator for SplitPaths<'a> {
type Item = Path;
fn next(&mut self) -> Option<Path> {
// On Windows, the PATH environment variable is semicolon separated.
// Double quotes are used as a way of introducing literal semicolons
// (since c:\some;dir is a valid Windows path). Double quotes are not
// themselves permitted in path names, so there is no way to escape a
// double quote. Quoted regions can appear in arbitrary locations, so
//
// c:\foo;c:\som"e;di"r;c:\bar
//
// Should parse as [c:\foo, c:\some;dir, c:\bar].
//
// (The above is based on testing; there is no clear reference available
// for the grammar.)
let must_yield = self.must_yield;
self.must_yield = false;
let mut in_progress = Vec::new();
let mut in_quote = false;
for b in self.data.by_ref() {
if b == '"' as u16 {
in_quote =!in_quote;
} else if b == ';' as u16 &&!in_quote {
self.must_yield = true;
break
} else {
in_progress.push(b)
}
}
if!must_yield && in_progress.is_empty() {
None
} else {
Some(super::os2path(&in_progress[]))
}
}
}
#[derive(Debug)]
pub struct JoinPathsError;
pub fn join_paths<I, T>(paths: I) -> Result<OsString, JoinPathsError>
where I: Iterator<Item=T>, T: AsOsStr
{
let mut joined = Vec::new();
let sep = b';' as u16;
for (i, path) in paths.enumerate() {
let path = path.as_os_str();
if i > 0 { joined.push(sep) }
let v = path.encode_wide().collect::<Vec<u16>>();
if v.contains(&(b'"' as u16)) {
return Err(JoinPathsError)
} else if v.contains(&sep) {
joined.push(b'"' as u16);
joined.push_all(&v[]);
joined.push(b'"' as u16);
} else {
joined.push_all(&v[]);
}
}
Ok(OsStringExt::from_wide(&joined[]))
}
impl fmt::Display for JoinPathsError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
"path segment contains `\"`".fmt(f)
}
}
impl StdError for JoinPathsError {
fn description(&self) -> &str { "failed to join paths" }
}
pub fn current_exe() -> IoResult<Path> {
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetModuleFileNameW(ptr::null_mut(), buf, sz)
}, super::os2path)
}
pub fn getcwd() -> IoResult<Path> {
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetCurrentDirectoryW(sz, buf)
}, super::os2path)
}
pub fn chdir(p: &Path) -> IoResult<()> {
let mut p = p.as_os_str().encode_wide().collect::<Vec<_>>();
p.push(0);
unsafe {
match libc::SetCurrentDirectoryW(p.as_ptr())!= (0 as libc::BOOL) {
true => Ok(()),
false => Err(IoError::last_error()),
}
}
}
pub fn getenv(k: &OsStr) -> Option<OsString> {
let k = super::to_utf16_os(k);
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetEnvironmentVariableW(k.as_ptr(), buf, sz)
}, |buf| {
OsStringExt::from_wide(buf)
}).ok()
}
pub fn setenv(k: &OsStr, v: &OsStr) {
let k = super::to_utf16_os(k);
let v = super::to_utf16_os(v);
unsafe {
if libc::SetEnvironmentVariableW(k.as_ptr(), v.as_ptr()) == 0 {
panic!("failed to set env: {}", IoError::last_error());
}
}
}
pub fn unsetenv(n: &OsStr) {
let v = super::to_utf16_os(n);
unsafe {
if libc::SetEnvironmentVariableW(v.as_ptr(), ptr::null()) == 0 {
panic!("failed to unset env: {}", IoError::last_error());
}
}
}
pub struct Args {
range: Range<isize>,
cur: *mut *mut u16,
}
impl Iterator for Args {
type Item = OsString;
fn next(&mut self) -> Option<OsString> {
self.range.next().map(|i| unsafe {
let ptr = *self.cur.offset(i);
let mut len = 0;
while *ptr.offset(len)!= 0 { len += 1; }
// Push it onto the list.
let ptr = ptr as *const u16;
let buf = slice::from_raw_parts(ptr, len as usize);
OsStringExt::from_wide(buf)
})
}
fn size_hint(&self) -> (usize, Option<usize>) { self.range.size_hint() }
}
impl Drop for Args {
fn drop(&mut self) {
unsafe { c::LocalFree(self.cur as *mut c_void); }
}
}
pub fn args() -> Args {
unsafe {
let mut nArgs: c_int = 0;
let lpCmdLine = c::GetCommandLineW();
let szArgList = c::CommandLineToArgvW(lpCmdLine, &mut nArgs);
Args { cur: szArgList, range: range(0, nArgs as isize) }
}
}
pub fn page_size() -> usize {
unsafe {
let mut info = mem::zeroed();
libc::GetSystemInfo(&mut info);
return info.dwPageSize as usize;
}
}
pub unsafe fn pipe() -> IoResult<(FileDesc, FileDesc)> {
// Windows pipes work subtly differently than unix pipes, and their
// inheritance has to be handled in a different way that I do not
// fully understand. Here we explicitly make the pipe non-inheritable,
// which means to pass it to a subprocess they need to be duplicated
// first, as in std::run.
let mut fds = [0; 2];
match libc::pipe(fds.as_mut_ptr(), 1024 as ::libc::c_uint,
(libc::O_BINARY | libc::O_NOINHERIT) as c_int) {
0 => {
assert!(fds[0]!= -1 && fds[0]!= 0);
assert!(fds[1]!= -1 && fds[1]!= 0);
Ok((FileDesc::new(fds[0], true), FileDesc::new(fds[1], true)))
}
_ => Err(IoError::last_error()),
}
}
pub fn temp_dir() -> Path {
super::fill_utf16_buf(|buf, sz| unsafe {
c::GetTempPathW(sz, buf)
}, super::os2path).unwrap()
}
pub fn home_dir() -> Option<Path> {
getenv("HOME".as_os_str()).or_else(|| {
getenv("USERPROFILE".as_os_str())
}).map(|os| {
// FIXME: OsString => Path
Path::new(os.to_str().unwrap())
}).or_else(|| unsafe {
let me = c::GetCurrentProcess();
let mut token = ptr::null_mut();
if c::OpenProcessToken(me, c::TOKEN_READ, &mut token) == 0 {
return None
}
let _handle = RawHandle::new(token);
super::fill_utf16_buf(|buf, mut sz| {
match c::GetUserProfileDirectoryW(token, buf, &mut sz) {
0 if libc::GetLastError()!= 0 => 0,
0 => sz,
n => n as libc::DWORD,
}
}, super::os2path).ok()
})
}
| drop | identifier_name |
os.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Implementation of `std::os` functionality for Windows
#![allow(bad_style)]
use prelude::v1::*;
use os::windows::*;
use error::Error as StdError;
use ffi::{OsString, OsStr, AsOsStr};
use fmt;
use iter::Range;
use libc::types::os::arch::extra::LPWCH;
use libc::{self, c_int, c_void};
use mem;
use old_io::{IoError, IoResult};
use ptr;
use slice;
use sys::c;
use sys::fs::FileDesc;
use sys::handle::Handle as RawHandle;
use libc::funcs::extra::kernel32::{
GetEnvironmentStringsW,
FreeEnvironmentStringsW
};
pub fn errno() -> i32 {
unsafe { libc::GetLastError() as i32 }
}
/// Get a detailed string description for the given error number
pub fn error_string(errnum: i32) -> String {
use libc::types::os::arch::extra::DWORD;
use libc::types::os::arch::extra::LPWSTR;
use libc::types::os::arch::extra::LPVOID;
use libc::types::os::arch::extra::WCHAR;
#[link_name = "kernel32"]
extern "system" {
fn FormatMessageW(flags: DWORD,
lpSrc: LPVOID,
msgId: DWORD,
langId: DWORD,
buf: LPWSTR,
nsize: DWORD,
args: *const c_void)
-> DWORD;
}
static FORMAT_MESSAGE_FROM_SYSTEM: DWORD = 0x00001000;
static FORMAT_MESSAGE_IGNORE_INSERTS: DWORD = 0x00000200;
// This value is calculated from the macro
// MAKELANGID(LANG_SYSTEM_DEFAULT, SUBLANG_SYS_DEFAULT)
let langId = 0x0800 as DWORD;
let mut buf = [0 as WCHAR; 2048];
unsafe {
let res = FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM |
FORMAT_MESSAGE_IGNORE_INSERTS,
ptr::null_mut(),
errnum as DWORD,
langId,
buf.as_mut_ptr(),
buf.len() as DWORD,
ptr::null());
if res == 0 {
// Sometimes FormatMessageW can fail e.g. system doesn't like langId,
let fm_err = errno();
return format!("OS Error {} (FormatMessageW() returned error {})",
errnum, fm_err);
}
let b = buf.iter().position(|&b| b == 0).unwrap_or(buf.len());
let msg = String::from_utf16(&buf[..b]);
match msg {
Ok(msg) => msg,
Err(..) => format!("OS Error {} (FormatMessageW() returned \
invalid UTF-16)", errnum),
}
}
}
pub struct Env {
base: LPWCH,
cur: LPWCH,
}
impl Iterator for Env {
type Item = (OsString, OsString);
fn next(&mut self) -> Option<(OsString, OsString)> {
unsafe {
if *self.cur == 0 { return None }
let p = &*self.cur;
let mut len = 0;
while *(p as *const _).offset(len)!= 0 {
len += 1;
}
let p = p as *const u16;
let s = slice::from_raw_parts(p, len as usize);
self.cur = self.cur.offset(len + 1);
let (k, v) = match s.iter().position(|&b| b == '=' as u16) {
Some(n) => (&s[..n], &s[n+1..]),
None => (s, &[][]),
};
Some((OsStringExt::from_wide(k), OsStringExt::from_wide(v)))
}
}
}
impl Drop for Env {
fn drop(&mut self) {
unsafe { FreeEnvironmentStringsW(self.base); }
}
}
pub fn env() -> Env {
unsafe {
let ch = GetEnvironmentStringsW();
if ch as usize == 0 {
panic!("failure getting env string from OS: {}",
IoError::last_error());
}
Env { base: ch, cur: ch }
}
}
pub struct SplitPaths<'a> {
data: EncodeWide<'a>,
must_yield: bool,
}
pub fn split_paths(unparsed: &OsStr) -> SplitPaths {
SplitPaths {
data: unparsed.encode_wide(),
must_yield: true,
}
}
impl<'a> Iterator for SplitPaths<'a> {
type Item = Path;
fn next(&mut self) -> Option<Path> {
// On Windows, the PATH environment variable is semicolon separated.
// Double quotes are used as a way of introducing literal semicolons
// (since c:\some;dir is a valid Windows path). Double quotes are not
// themselves permitted in path names, so there is no way to escape a
// double quote. Quoted regions can appear in arbitrary locations, so
//
// c:\foo;c:\som"e;di"r;c:\bar
//
// Should parse as [c:\foo, c:\some;dir, c:\bar].
//
// (The above is based on testing; there is no clear reference available
// for the grammar.)
let must_yield = self.must_yield;
self.must_yield = false;
let mut in_progress = Vec::new();
let mut in_quote = false;
for b in self.data.by_ref() {
if b == '"' as u16 {
in_quote =!in_quote;
} else if b == ';' as u16 &&!in_quote {
self.must_yield = true;
break
} else {
in_progress.push(b)
}
}
if!must_yield && in_progress.is_empty() {
None
} else {
Some(super::os2path(&in_progress[]))
}
}
}
#[derive(Debug)]
pub struct JoinPathsError;
pub fn join_paths<I, T>(paths: I) -> Result<OsString, JoinPathsError>
where I: Iterator<Item=T>, T: AsOsStr
{
let mut joined = Vec::new();
let sep = b';' as u16;
for (i, path) in paths.enumerate() {
let path = path.as_os_str();
if i > 0 |
let v = path.encode_wide().collect::<Vec<u16>>();
if v.contains(&(b'"' as u16)) {
return Err(JoinPathsError)
} else if v.contains(&sep) {
joined.push(b'"' as u16);
joined.push_all(&v[]);
joined.push(b'"' as u16);
} else {
joined.push_all(&v[]);
}
}
Ok(OsStringExt::from_wide(&joined[]))
}
impl fmt::Display for JoinPathsError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
"path segment contains `\"`".fmt(f)
}
}
impl StdError for JoinPathsError {
fn description(&self) -> &str { "failed to join paths" }
}
pub fn current_exe() -> IoResult<Path> {
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetModuleFileNameW(ptr::null_mut(), buf, sz)
}, super::os2path)
}
pub fn getcwd() -> IoResult<Path> {
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetCurrentDirectoryW(sz, buf)
}, super::os2path)
}
pub fn chdir(p: &Path) -> IoResult<()> {
let mut p = p.as_os_str().encode_wide().collect::<Vec<_>>();
p.push(0);
unsafe {
match libc::SetCurrentDirectoryW(p.as_ptr())!= (0 as libc::BOOL) {
true => Ok(()),
false => Err(IoError::last_error()),
}
}
}
pub fn getenv(k: &OsStr) -> Option<OsString> {
let k = super::to_utf16_os(k);
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetEnvironmentVariableW(k.as_ptr(), buf, sz)
}, |buf| {
OsStringExt::from_wide(buf)
}).ok()
}
pub fn setenv(k: &OsStr, v: &OsStr) {
let k = super::to_utf16_os(k);
let v = super::to_utf16_os(v);
unsafe {
if libc::SetEnvironmentVariableW(k.as_ptr(), v.as_ptr()) == 0 {
panic!("failed to set env: {}", IoError::last_error());
}
}
}
pub fn unsetenv(n: &OsStr) {
let v = super::to_utf16_os(n);
unsafe {
if libc::SetEnvironmentVariableW(v.as_ptr(), ptr::null()) == 0 {
panic!("failed to unset env: {}", IoError::last_error());
}
}
}
pub struct Args {
range: Range<isize>,
cur: *mut *mut u16,
}
impl Iterator for Args {
type Item = OsString;
fn next(&mut self) -> Option<OsString> {
self.range.next().map(|i| unsafe {
let ptr = *self.cur.offset(i);
let mut len = 0;
while *ptr.offset(len)!= 0 { len += 1; }
// Push it onto the list.
let ptr = ptr as *const u16;
let buf = slice::from_raw_parts(ptr, len as usize);
OsStringExt::from_wide(buf)
})
}
fn size_hint(&self) -> (usize, Option<usize>) { self.range.size_hint() }
}
impl Drop for Args {
fn drop(&mut self) {
unsafe { c::LocalFree(self.cur as *mut c_void); }
}
}
pub fn args() -> Args {
unsafe {
let mut nArgs: c_int = 0;
let lpCmdLine = c::GetCommandLineW();
let szArgList = c::CommandLineToArgvW(lpCmdLine, &mut nArgs);
Args { cur: szArgList, range: range(0, nArgs as isize) }
}
}
pub fn page_size() -> usize {
unsafe {
let mut info = mem::zeroed();
libc::GetSystemInfo(&mut info);
return info.dwPageSize as usize;
}
}
pub unsafe fn pipe() -> IoResult<(FileDesc, FileDesc)> {
// Windows pipes work subtly differently than unix pipes, and their
// inheritance has to be handled in a different way that I do not
// fully understand. Here we explicitly make the pipe non-inheritable,
// which means to pass it to a subprocess they need to be duplicated
// first, as in std::run.
let mut fds = [0; 2];
match libc::pipe(fds.as_mut_ptr(), 1024 as ::libc::c_uint,
(libc::O_BINARY | libc::O_NOINHERIT) as c_int) {
0 => {
assert!(fds[0]!= -1 && fds[0]!= 0);
assert!(fds[1]!= -1 && fds[1]!= 0);
Ok((FileDesc::new(fds[0], true), FileDesc::new(fds[1], true)))
}
_ => Err(IoError::last_error()),
}
}
pub fn temp_dir() -> Path {
super::fill_utf16_buf(|buf, sz| unsafe {
c::GetTempPathW(sz, buf)
}, super::os2path).unwrap()
}
pub fn home_dir() -> Option<Path> {
getenv("HOME".as_os_str()).or_else(|| {
getenv("USERPROFILE".as_os_str())
}).map(|os| {
// FIXME: OsString => Path
Path::new(os.to_str().unwrap())
}).or_else(|| unsafe {
let me = c::GetCurrentProcess();
let mut token = ptr::null_mut();
if c::OpenProcessToken(me, c::TOKEN_READ, &mut token) == 0 {
return None
}
let _handle = RawHandle::new(token);
super::fill_utf16_buf(|buf, mut sz| {
match c::GetUserProfileDirectoryW(token, buf, &mut sz) {
0 if libc::GetLastError()!= 0 => 0,
0 => sz,
n => n as libc::DWORD,
}
}, super::os2path).ok()
})
}
| { joined.push(sep) } | conditional_block |
os.rs | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Implementation of `std::os` functionality for Windows
#![allow(bad_style)]
use prelude::v1::*;
use os::windows::*;
use error::Error as StdError;
use ffi::{OsString, OsStr, AsOsStr};
use fmt;
use iter::Range;
use libc::types::os::arch::extra::LPWCH;
use libc::{self, c_int, c_void};
use mem;
use old_io::{IoError, IoResult};
use ptr;
use slice;
use sys::c;
use sys::fs::FileDesc;
use sys::handle::Handle as RawHandle;
use libc::funcs::extra::kernel32::{
GetEnvironmentStringsW,
FreeEnvironmentStringsW
};
pub fn errno() -> i32 {
unsafe { libc::GetLastError() as i32 }
}
/// Get a detailed string description for the given error number
pub fn error_string(errnum: i32) -> String {
use libc::types::os::arch::extra::DWORD;
use libc::types::os::arch::extra::LPWSTR;
use libc::types::os::arch::extra::LPVOID;
use libc::types::os::arch::extra::WCHAR;
#[link_name = "kernel32"]
extern "system" {
fn FormatMessageW(flags: DWORD,
lpSrc: LPVOID,
msgId: DWORD,
langId: DWORD,
buf: LPWSTR,
nsize: DWORD,
args: *const c_void)
-> DWORD;
}
static FORMAT_MESSAGE_FROM_SYSTEM: DWORD = 0x00001000;
static FORMAT_MESSAGE_IGNORE_INSERTS: DWORD = 0x00000200;
// This value is calculated from the macro
// MAKELANGID(LANG_SYSTEM_DEFAULT, SUBLANG_SYS_DEFAULT)
let langId = 0x0800 as DWORD;
let mut buf = [0 as WCHAR; 2048];
unsafe {
let res = FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM |
FORMAT_MESSAGE_IGNORE_INSERTS,
ptr::null_mut(),
errnum as DWORD,
langId,
buf.as_mut_ptr(),
buf.len() as DWORD,
ptr::null());
if res == 0 {
// Sometimes FormatMessageW can fail e.g. system doesn't like langId,
let fm_err = errno();
return format!("OS Error {} (FormatMessageW() returned error {})",
errnum, fm_err);
}
let b = buf.iter().position(|&b| b == 0).unwrap_or(buf.len());
let msg = String::from_utf16(&buf[..b]);
match msg {
Ok(msg) => msg,
Err(..) => format!("OS Error {} (FormatMessageW() returned \
invalid UTF-16)", errnum),
}
}
}
pub struct Env {
base: LPWCH,
cur: LPWCH,
}
impl Iterator for Env {
type Item = (OsString, OsString);
fn next(&mut self) -> Option<(OsString, OsString)> {
unsafe {
if *self.cur == 0 { return None }
let p = &*self.cur;
let mut len = 0;
while *(p as *const _).offset(len)!= 0 {
len += 1;
}
let p = p as *const u16;
let s = slice::from_raw_parts(p, len as usize);
self.cur = self.cur.offset(len + 1);
let (k, v) = match s.iter().position(|&b| b == '=' as u16) {
Some(n) => (&s[..n], &s[n+1..]),
None => (s, &[][]),
};
Some((OsStringExt::from_wide(k), OsStringExt::from_wide(v)))
}
}
}
impl Drop for Env {
fn drop(&mut self) {
unsafe { FreeEnvironmentStringsW(self.base); }
}
}
pub fn env() -> Env |
pub struct SplitPaths<'a> {
data: EncodeWide<'a>,
must_yield: bool,
}
pub fn split_paths(unparsed: &OsStr) -> SplitPaths {
SplitPaths {
data: unparsed.encode_wide(),
must_yield: true,
}
}
impl<'a> Iterator for SplitPaths<'a> {
type Item = Path;
fn next(&mut self) -> Option<Path> {
// On Windows, the PATH environment variable is semicolon separated.
// Double quotes are used as a way of introducing literal semicolons
// (since c:\some;dir is a valid Windows path). Double quotes are not
// themselves permitted in path names, so there is no way to escape a
// double quote. Quoted regions can appear in arbitrary locations, so
//
// c:\foo;c:\som"e;di"r;c:\bar
//
// Should parse as [c:\foo, c:\some;dir, c:\bar].
//
// (The above is based on testing; there is no clear reference available
// for the grammar.)
let must_yield = self.must_yield;
self.must_yield = false;
let mut in_progress = Vec::new();
let mut in_quote = false;
for b in self.data.by_ref() {
if b == '"' as u16 {
in_quote =!in_quote;
} else if b == ';' as u16 &&!in_quote {
self.must_yield = true;
break
} else {
in_progress.push(b)
}
}
if!must_yield && in_progress.is_empty() {
None
} else {
Some(super::os2path(&in_progress[]))
}
}
}
#[derive(Debug)]
pub struct JoinPathsError;
pub fn join_paths<I, T>(paths: I) -> Result<OsString, JoinPathsError>
where I: Iterator<Item=T>, T: AsOsStr
{
let mut joined = Vec::new();
let sep = b';' as u16;
for (i, path) in paths.enumerate() {
let path = path.as_os_str();
if i > 0 { joined.push(sep) }
let v = path.encode_wide().collect::<Vec<u16>>();
if v.contains(&(b'"' as u16)) {
return Err(JoinPathsError)
} else if v.contains(&sep) {
joined.push(b'"' as u16);
joined.push_all(&v[]);
joined.push(b'"' as u16);
} else {
joined.push_all(&v[]);
}
}
Ok(OsStringExt::from_wide(&joined[]))
}
impl fmt::Display for JoinPathsError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
"path segment contains `\"`".fmt(f)
}
}
impl StdError for JoinPathsError {
fn description(&self) -> &str { "failed to join paths" }
}
pub fn current_exe() -> IoResult<Path> {
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetModuleFileNameW(ptr::null_mut(), buf, sz)
}, super::os2path)
}
pub fn getcwd() -> IoResult<Path> {
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetCurrentDirectoryW(sz, buf)
}, super::os2path)
}
pub fn chdir(p: &Path) -> IoResult<()> {
let mut p = p.as_os_str().encode_wide().collect::<Vec<_>>();
p.push(0);
unsafe {
match libc::SetCurrentDirectoryW(p.as_ptr())!= (0 as libc::BOOL) {
true => Ok(()),
false => Err(IoError::last_error()),
}
}
}
pub fn getenv(k: &OsStr) -> Option<OsString> {
let k = super::to_utf16_os(k);
super::fill_utf16_buf(|buf, sz| unsafe {
libc::GetEnvironmentVariableW(k.as_ptr(), buf, sz)
}, |buf| {
OsStringExt::from_wide(buf)
}).ok()
}
pub fn setenv(k: &OsStr, v: &OsStr) {
let k = super::to_utf16_os(k);
let v = super::to_utf16_os(v);
unsafe {
if libc::SetEnvironmentVariableW(k.as_ptr(), v.as_ptr()) == 0 {
panic!("failed to set env: {}", IoError::last_error());
}
}
}
pub fn unsetenv(n: &OsStr) {
let v = super::to_utf16_os(n);
unsafe {
if libc::SetEnvironmentVariableW(v.as_ptr(), ptr::null()) == 0 {
panic!("failed to unset env: {}", IoError::last_error());
}
}
}
pub struct Args {
range: Range<isize>,
cur: *mut *mut u16,
}
impl Iterator for Args {
type Item = OsString;
fn next(&mut self) -> Option<OsString> {
self.range.next().map(|i| unsafe {
let ptr = *self.cur.offset(i);
let mut len = 0;
while *ptr.offset(len)!= 0 { len += 1; }
// Push it onto the list.
let ptr = ptr as *const u16;
let buf = slice::from_raw_parts(ptr, len as usize);
OsStringExt::from_wide(buf)
})
}
fn size_hint(&self) -> (usize, Option<usize>) { self.range.size_hint() }
}
impl Drop for Args {
fn drop(&mut self) {
unsafe { c::LocalFree(self.cur as *mut c_void); }
}
}
pub fn args() -> Args {
unsafe {
let mut nArgs: c_int = 0;
let lpCmdLine = c::GetCommandLineW();
let szArgList = c::CommandLineToArgvW(lpCmdLine, &mut nArgs);
Args { cur: szArgList, range: range(0, nArgs as isize) }
}
}
pub fn page_size() -> usize {
unsafe {
let mut info = mem::zeroed();
libc::GetSystemInfo(&mut info);
return info.dwPageSize as usize;
}
}
pub unsafe fn pipe() -> IoResult<(FileDesc, FileDesc)> {
// Windows pipes work subtly differently than unix pipes, and their
// inheritance has to be handled in a different way that I do not
// fully understand. Here we explicitly make the pipe non-inheritable,
// which means to pass it to a subprocess they need to be duplicated
// first, as in std::run.
let mut fds = [0; 2];
match libc::pipe(fds.as_mut_ptr(), 1024 as ::libc::c_uint,
(libc::O_BINARY | libc::O_NOINHERIT) as c_int) {
0 => {
assert!(fds[0]!= -1 && fds[0]!= 0);
assert!(fds[1]!= -1 && fds[1]!= 0);
Ok((FileDesc::new(fds[0], true), FileDesc::new(fds[1], true)))
}
_ => Err(IoError::last_error()),
}
}
pub fn temp_dir() -> Path {
super::fill_utf16_buf(|buf, sz| unsafe {
c::GetTempPathW(sz, buf)
}, super::os2path).unwrap()
}
pub fn home_dir() -> Option<Path> {
getenv("HOME".as_os_str()).or_else(|| {
getenv("USERPROFILE".as_os_str())
}).map(|os| {
// FIXME: OsString => Path
Path::new(os.to_str().unwrap())
}).or_else(|| unsafe {
let me = c::GetCurrentProcess();
let mut token = ptr::null_mut();
if c::OpenProcessToken(me, c::TOKEN_READ, &mut token) == 0 {
return None
}
let _handle = RawHandle::new(token);
super::fill_utf16_buf(|buf, mut sz| {
match c::GetUserProfileDirectoryW(token, buf, &mut sz) {
0 if libc::GetLastError()!= 0 => 0,
0 => sz,
n => n as libc::DWORD,
}
}, super::os2path).ok()
})
}
| {
unsafe {
let ch = GetEnvironmentStringsW();
if ch as usize == 0 {
panic!("failure getting env string from OS: {}",
IoError::last_error());
}
Env { base: ch, cur: ch }
}
} | identifier_body |
entry.rs | //
// imag - the personal information management suite for the commandline
// Copyright (C) 2015-2020 Matthias Beyer <[email protected]> and contributors
//
// This library is free software; you can redistribute it and/or
// modify it under the terms of the GNU Lesser General Public
// License as published by the Free Software Foundation; version
// 2.1 of the License.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public
// License along with this library; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
//
use libimagstore::store::Store;
use libimagstore::store::Entry;
use libimagentrymarkdown::processor::LinkProcessor;
use anyhow::Result;
use anyhow::Context;
use anyhow::Error;
pub trait WikiEntry {
fn autolink(&mut self, store: &Store) -> Result<()>;
fn autolink_with_processor(&mut self, store: &Store, processor: LinkProcessor) -> Result<()>;
}
impl WikiEntry for Entry {
/// Autolink entry to entries linked in content
///
/// Uses `libimagentrymarkdown::processor::LinkProcessor` for this, with the following settings:
///
/// * Interal link processing = true
/// * Internal targets creating = true
/// * External link processing = true
/// * Processing of Refs = true
///
/// This is a convenience function for `WikiEntry::autolink_with_processor()`.
///
/// # Warning
///
/// With this function, the `LinkProcessor` automatically creates entries in the store if they
/// are linked from the current entry but do not exists yet.
///
/// # See also
///
/// * The documentation of `WikiEntry::autolink_with_processor()`.
/// * The documentation of `::libimagentrymarkdown::processor::LinkProcessor`.
///
fn autolink(&mut self, store: &Store) -> Result<()> |
/// Autolink entry to entries linked in content with the passed `LinkProcessor` instance.
///
/// See the documentation of `::libimagentrymarkdown::processor::LinkProcessor`.
fn autolink_with_processor(&mut self, store: &Store, processor: LinkProcessor) -> Result<()> {
processor.process(self, store)
.context(anyhow!("Auto Link error: {}", self.get_location()))
.map_err(Error::from)
}
}
| {
let processor = LinkProcessor::default()
.process_links(true)
.create_targets(true)
.process_urls(true)
.process_refs(true);
self.autolink_with_processor(store, processor)
} | identifier_body |
entry.rs | //
// imag - the personal information management suite for the commandline
// Copyright (C) 2015-2020 Matthias Beyer <[email protected]> and contributors
//
// This library is free software; you can redistribute it and/or
// modify it under the terms of the GNU Lesser General Public
// License as published by the Free Software Foundation; version
// 2.1 of the License.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public
// License along with this library; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
//
use libimagstore::store::Store;
use libimagstore::store::Entry;
use libimagentrymarkdown::processor::LinkProcessor;
use anyhow::Result;
use anyhow::Context;
use anyhow::Error;
pub trait WikiEntry {
fn autolink(&mut self, store: &Store) -> Result<()>;
fn autolink_with_processor(&mut self, store: &Store, processor: LinkProcessor) -> Result<()>;
}
impl WikiEntry for Entry {
/// Autolink entry to entries linked in content
/// | /// * Processing of Refs = true
///
/// This is a convenience function for `WikiEntry::autolink_with_processor()`.
///
/// # Warning
///
/// With this function, the `LinkProcessor` automatically creates entries in the store if they
/// are linked from the current entry but do not exists yet.
///
/// # See also
///
/// * The documentation of `WikiEntry::autolink_with_processor()`.
/// * The documentation of `::libimagentrymarkdown::processor::LinkProcessor`.
///
fn autolink(&mut self, store: &Store) -> Result<()> {
let processor = LinkProcessor::default()
.process_links(true)
.create_targets(true)
.process_urls(true)
.process_refs(true);
self.autolink_with_processor(store, processor)
}
/// Autolink entry to entries linked in content with the passed `LinkProcessor` instance.
///
/// See the documentation of `::libimagentrymarkdown::processor::LinkProcessor`.
fn autolink_with_processor(&mut self, store: &Store, processor: LinkProcessor) -> Result<()> {
processor.process(self, store)
.context(anyhow!("Auto Link error: {}", self.get_location()))
.map_err(Error::from)
}
} | /// Uses `libimagentrymarkdown::processor::LinkProcessor` for this, with the following settings:
///
/// * Interal link processing = true
/// * Internal targets creating = true
/// * External link processing = true | random_line_split |
entry.rs | //
// imag - the personal information management suite for the commandline
// Copyright (C) 2015-2020 Matthias Beyer <[email protected]> and contributors
//
// This library is free software; you can redistribute it and/or
// modify it under the terms of the GNU Lesser General Public
// License as published by the Free Software Foundation; version
// 2.1 of the License.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public
// License along with this library; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
//
use libimagstore::store::Store;
use libimagstore::store::Entry;
use libimagentrymarkdown::processor::LinkProcessor;
use anyhow::Result;
use anyhow::Context;
use anyhow::Error;
pub trait WikiEntry {
fn autolink(&mut self, store: &Store) -> Result<()>;
fn autolink_with_processor(&mut self, store: &Store, processor: LinkProcessor) -> Result<()>;
}
impl WikiEntry for Entry {
/// Autolink entry to entries linked in content
///
/// Uses `libimagentrymarkdown::processor::LinkProcessor` for this, with the following settings:
///
/// * Interal link processing = true
/// * Internal targets creating = true
/// * External link processing = true
/// * Processing of Refs = true
///
/// This is a convenience function for `WikiEntry::autolink_with_processor()`.
///
/// # Warning
///
/// With this function, the `LinkProcessor` automatically creates entries in the store if they
/// are linked from the current entry but do not exists yet.
///
/// # See also
///
/// * The documentation of `WikiEntry::autolink_with_processor()`.
/// * The documentation of `::libimagentrymarkdown::processor::LinkProcessor`.
///
fn | (&mut self, store: &Store) -> Result<()> {
let processor = LinkProcessor::default()
.process_links(true)
.create_targets(true)
.process_urls(true)
.process_refs(true);
self.autolink_with_processor(store, processor)
}
/// Autolink entry to entries linked in content with the passed `LinkProcessor` instance.
///
/// See the documentation of `::libimagentrymarkdown::processor::LinkProcessor`.
fn autolink_with_processor(&mut self, store: &Store, processor: LinkProcessor) -> Result<()> {
processor.process(self, store)
.context(anyhow!("Auto Link error: {}", self.get_location()))
.map_err(Error::from)
}
}
| autolink | identifier_name |
xul.mako.rs | /* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/. */
<%namespace name="helpers" file="/helpers.mako.rs" />
<% from data import Method %>
// Non-standard properties that Gecko uses for XUL elements.
<% data.new_style_struct("XUL", inherited=False) %>
${helpers.single_keyword(
"-moz-box-align",
"stretch start center baseline end",
products="gecko",
gecko_ffi_name="mBoxAlign",
gecko_enum_prefix="StyleBoxAlign",
animation_value_type="discrete",
alias="-webkit-box-align",
spec="Nonstandard (https://developer.mozilla.org/en-US/docs/Web/CSS/box-align)",
)}
${helpers.single_keyword(
"-moz-box-direction",
"normal reverse",
products="gecko",
gecko_ffi_name="mBoxDirection",
gecko_enum_prefix="StyleBoxDirection",
animation_value_type="discrete",
alias="-webkit-box-direction",
spec="Nonstandard (https://developer.mozilla.org/en-US/docs/Web/CSS/box-direction)",
)}
${helpers.predefined_type(
"-moz-box-flex",
"NonNegativeNumber",
"From::from(0.)",
products="gecko",
gecko_ffi_name="mBoxFlex",
animation_value_type="NonNegativeNumber",
alias="-webkit-box-flex",
spec="Nonstandard (https://developer.mozilla.org/en-US/docs/Web/CSS/box-flex)",
)}
${helpers.single_keyword(
"-moz-box-orient",
"horizontal vertical",
products="gecko",
gecko_ffi_name="mBoxOrient",
extra_gecko_aliases="inline-axis=horizontal block-axis=vertical",
gecko_enum_prefix="StyleBoxOrient",
animation_value_type="discrete",
alias="-webkit-box-orient",
spec="Nonstandard (https://developer.mozilla.org/en-US/docs/Web/CSS/box-orient)",
)}
${helpers.single_keyword(
"-moz-box-pack",
"start center end justify",
products="gecko", gecko_ffi_name="mBoxPack",
gecko_enum_prefix="StyleBoxPack",
animation_value_type="discrete",
alias="-webkit-box-pack",
spec="Nonstandard (https://developer.mozilla.org/en-US/docs/Web/CSS/box-pack)",
)}
${helpers.single_keyword(
"-moz-stack-sizing",
"stretch-to-fit ignore ignore-horizontal ignore-vertical",
products="gecko",
gecko_ffi_name="mStackSizing", |
${helpers.predefined_type(
"-moz-box-ordinal-group",
"Integer",
"0",
parse_method="parse_non_negative",
products="gecko",
alias="-webkit-box-ordinal-group",
gecko_ffi_name="mBoxOrdinal",
animation_value_type="discrete",
spec="Nonstandard (https://developer.mozilla.org/en-US/docs/Web/CSS/-moz-box-ordinal-group)",
)} | gecko_enum_prefix="StyleStackSizing",
animation_value_type="discrete",
spec="Nonstandard (https://developer.mozilla.org/en-US/docs/Web/CSS/-moz-stack-sizing)",
)} | random_line_split |
interface.rs | // Copyright 2014 The html5ever Project Developers. See the
// COPYRIGHT file at the top-level directory of this distribution.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! The interface for consumers of the tree builder (and thus the
//! parser overall).
use core::prelude::*;
use tokenizer::Attribute;
use collections::vec::Vec;
use collections::string::String;
use std::string::CowString;
use string_cache::QualName;
pub use self::QuirksMode::{Quirks, LimitedQuirks, NoQuirks};
pub use self::NodeOrText::{AppendNode, AppendText};
/// A document's quirks mode.
#[derive(PartialEq, Eq, Copy, Clone, Hash, Debug)]
pub enum QuirksMode {
Quirks,
LimitedQuirks,
NoQuirks,
}
/// Something which can be inserted into the DOM.
///
/// Adjacent sibling text nodes are merged into a single node, so
/// the sink may not want to allocate a `Handle` for each.
pub enum | <Handle> {
AppendNode(Handle),
AppendText(String),
}
/// Types which can process tree modifications from the tree builder.
pub trait TreeSink {
/// `Handle` is a reference to a DOM node. The tree builder requires
/// that a `Handle` implements `Clone` to get another reference to
/// the same node.
type Handle: Clone;
/// Signal a parse error.
fn parse_error(&mut self, msg: CowString<'static>);
/// Get a handle to the `Document` node.
fn get_document(&mut self) -> Self::Handle;
/// Do two handles refer to the same node?
fn same_node(&self, x: Self::Handle, y: Self::Handle) -> bool;
/// What is the name of this element?
///
/// Should never be called on a non-element node;
/// feel free to `panic!`.
fn elem_name(&self, target: Self::Handle) -> QualName;
/// Set the document's quirks mode.
fn set_quirks_mode(&mut self, mode: QuirksMode);
/// Create an element.
fn create_element(&mut self, name: QualName, attrs: Vec<Attribute>) -> Self::Handle;
/// Create a comment node.
fn create_comment(&mut self, text: String) -> Self::Handle;
/// Append a node as the last child of the given node. If this would
/// produce adjacent sibling text nodes, it should concatenate the text
/// instead.
///
/// The child node will not already have a parent.
fn append(&mut self, parent: Self::Handle, child: NodeOrText<Self::Handle>);
/// Append a node as the sibling immediately before the given node. If that node
/// has no parent, do nothing and return Err(new_node).
///
/// The tree builder promises that `sibling` is not a text node. However its
/// old previous sibling, which would become the new node's previous sibling,
/// could be a text node. If the new node is also a text node, the two should
/// be merged, as in the behavior of `append`.
///
/// NB: `new_node` may have an old parent, from which it should be removed.
fn append_before_sibling(&mut self,
sibling: Self::Handle,
new_node: NodeOrText<Self::Handle>) -> Result<(), NodeOrText<Self::Handle>>;
/// Append a `DOCTYPE` element to the `Document` node.
fn append_doctype_to_document(&mut self, name: String, public_id: String, system_id: String);
/// Add each attribute to the given element, if no attribute
/// with that name already exists.
fn add_attrs_if_missing(&mut self, target: Self::Handle, attrs: Vec<Attribute>);
/// Detach the given node from its parent.
fn remove_from_parent(&mut self, target: Self::Handle);
/// Remove all the children from node and append them to new_parent.
fn reparent_children(&mut self, node: Self::Handle, new_parent: Self::Handle);
/// Mark a HTML `<script>` element as "already started".
fn mark_script_already_started(&mut self, node: Self::Handle);
/// Indicate that a `<script>` element is complete.
fn complete_script(&mut self, _node: Self::Handle) { }
}
/// Trace hooks for a garbage-collected DOM.
pub trait Tracer {
type Handle;
/// Upon a call to `trace_handles`, the tree builder will call this method
/// for each handle in its internal state.
fn trace_handle(&self, node: Self::Handle);
}
| NodeOrText | identifier_name |
interface.rs | // Copyright 2014 The html5ever Project Developers. See the
// COPYRIGHT file at the top-level directory of this distribution.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! The interface for consumers of the tree builder (and thus the
//! parser overall).
use core::prelude::*;
use tokenizer::Attribute;
use collections::vec::Vec;
use collections::string::String;
use std::string::CowString;
use string_cache::QualName;
pub use self::QuirksMode::{Quirks, LimitedQuirks, NoQuirks};
pub use self::NodeOrText::{AppendNode, AppendText};
/// A document's quirks mode.
#[derive(PartialEq, Eq, Copy, Clone, Hash, Debug)]
pub enum QuirksMode {
Quirks,
LimitedQuirks,
NoQuirks,
}
/// Something which can be inserted into the DOM.
///
/// Adjacent sibling text nodes are merged into a single node, so
/// the sink may not want to allocate a `Handle` for each.
pub enum NodeOrText<Handle> {
AppendNode(Handle),
AppendText(String),
}
/// Types which can process tree modifications from the tree builder.
pub trait TreeSink {
/// `Handle` is a reference to a DOM node. The tree builder requires
/// that a `Handle` implements `Clone` to get another reference to
/// the same node.
type Handle: Clone;
/// Signal a parse error.
fn parse_error(&mut self, msg: CowString<'static>);
/// Get a handle to the `Document` node.
fn get_document(&mut self) -> Self::Handle;
/// Do two handles refer to the same node?
fn same_node(&self, x: Self::Handle, y: Self::Handle) -> bool;
/// What is the name of this element?
///
/// Should never be called on a non-element node;
/// feel free to `panic!`.
fn elem_name(&self, target: Self::Handle) -> QualName;
/// Set the document's quirks mode.
fn set_quirks_mode(&mut self, mode: QuirksMode);
/// Create an element.
fn create_element(&mut self, name: QualName, attrs: Vec<Attribute>) -> Self::Handle;
/// Create a comment node.
fn create_comment(&mut self, text: String) -> Self::Handle;
/// Append a node as the last child of the given node. If this would
/// produce adjacent sibling text nodes, it should concatenate the text
/// instead.
///
/// The child node will not already have a parent.
fn append(&mut self, parent: Self::Handle, child: NodeOrText<Self::Handle>);
/// Append a node as the sibling immediately before the given node. If that node
/// has no parent, do nothing and return Err(new_node).
///
/// The tree builder promises that `sibling` is not a text node. However its
/// old previous sibling, which would become the new node's previous sibling,
/// could be a text node. If the new node is also a text node, the two should
/// be merged, as in the behavior of `append`.
///
/// NB: `new_node` may have an old parent, from which it should be removed.
fn append_before_sibling(&mut self,
sibling: Self::Handle,
new_node: NodeOrText<Self::Handle>) -> Result<(), NodeOrText<Self::Handle>>;
/// Append a `DOCTYPE` element to the `Document` node.
fn append_doctype_to_document(&mut self, name: String, public_id: String, system_id: String);
/// Add each attribute to the given element, if no attribute
/// with that name already exists.
fn add_attrs_if_missing(&mut self, target: Self::Handle, attrs: Vec<Attribute>);
/// Detach the given node from its parent.
fn remove_from_parent(&mut self, target: Self::Handle);
/// Remove all the children from node and append them to new_parent.
fn reparent_children(&mut self, node: Self::Handle, new_parent: Self::Handle);
/// Mark a HTML `<script>` element as "already started".
fn mark_script_already_started(&mut self, node: Self::Handle);
/// Indicate that a `<script>` element is complete.
fn complete_script(&mut self, _node: Self::Handle) |
}
/// Trace hooks for a garbage-collected DOM.
pub trait Tracer {
type Handle;
/// Upon a call to `trace_handles`, the tree builder will call this method
/// for each handle in its internal state.
fn trace_handle(&self, node: Self::Handle);
}
| { } | identifier_body |
interface.rs | // Copyright 2014 The html5ever Project Developers. See the
// COPYRIGHT file at the top-level directory of this distribution.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! The interface for consumers of the tree builder (and thus the
//! parser overall).
use core::prelude::*;
use tokenizer::Attribute;
use collections::vec::Vec;
use collections::string::String;
use std::string::CowString;
use string_cache::QualName;
pub use self::QuirksMode::{Quirks, LimitedQuirks, NoQuirks};
pub use self::NodeOrText::{AppendNode, AppendText};
/// A document's quirks mode.
#[derive(PartialEq, Eq, Copy, Clone, Hash, Debug)]
pub enum QuirksMode {
Quirks,
LimitedQuirks,
NoQuirks,
}
/// Something which can be inserted into the DOM.
///
/// Adjacent sibling text nodes are merged into a single node, so
/// the sink may not want to allocate a `Handle` for each.
pub enum NodeOrText<Handle> {
AppendNode(Handle),
AppendText(String),
}
/// Types which can process tree modifications from the tree builder.
pub trait TreeSink {
/// `Handle` is a reference to a DOM node. The tree builder requires
/// that a `Handle` implements `Clone` to get another reference to
/// the same node.
type Handle: Clone;
/// Signal a parse error.
fn parse_error(&mut self, msg: CowString<'static>);
/// Get a handle to the `Document` node.
fn get_document(&mut self) -> Self::Handle; |
/// Do two handles refer to the same node?
fn same_node(&self, x: Self::Handle, y: Self::Handle) -> bool;
/// What is the name of this element?
///
/// Should never be called on a non-element node;
/// feel free to `panic!`.
fn elem_name(&self, target: Self::Handle) -> QualName;
/// Set the document's quirks mode.
fn set_quirks_mode(&mut self, mode: QuirksMode);
/// Create an element.
fn create_element(&mut self, name: QualName, attrs: Vec<Attribute>) -> Self::Handle;
/// Create a comment node.
fn create_comment(&mut self, text: String) -> Self::Handle;
/// Append a node as the last child of the given node. If this would
/// produce adjacent sibling text nodes, it should concatenate the text
/// instead.
///
/// The child node will not already have a parent.
fn append(&mut self, parent: Self::Handle, child: NodeOrText<Self::Handle>);
/// Append a node as the sibling immediately before the given node. If that node
/// has no parent, do nothing and return Err(new_node).
///
/// The tree builder promises that `sibling` is not a text node. However its
/// old previous sibling, which would become the new node's previous sibling,
/// could be a text node. If the new node is also a text node, the two should
/// be merged, as in the behavior of `append`.
///
/// NB: `new_node` may have an old parent, from which it should be removed.
fn append_before_sibling(&mut self,
sibling: Self::Handle,
new_node: NodeOrText<Self::Handle>) -> Result<(), NodeOrText<Self::Handle>>;
/// Append a `DOCTYPE` element to the `Document` node.
fn append_doctype_to_document(&mut self, name: String, public_id: String, system_id: String);
/// Add each attribute to the given element, if no attribute
/// with that name already exists.
fn add_attrs_if_missing(&mut self, target: Self::Handle, attrs: Vec<Attribute>);
/// Detach the given node from its parent.
fn remove_from_parent(&mut self, target: Self::Handle);
/// Remove all the children from node and append them to new_parent.
fn reparent_children(&mut self, node: Self::Handle, new_parent: Self::Handle);
/// Mark a HTML `<script>` element as "already started".
fn mark_script_already_started(&mut self, node: Self::Handle);
/// Indicate that a `<script>` element is complete.
fn complete_script(&mut self, _node: Self::Handle) { }
}
/// Trace hooks for a garbage-collected DOM.
pub trait Tracer {
type Handle;
/// Upon a call to `trace_handles`, the tree builder will call this method
/// for each handle in its internal state.
fn trace_handle(&self, node: Self::Handle);
} | random_line_split |
|
command.rs | use std::ffi::OsStr;
use std::fs::File;
use std::io::{self, Write, BufRead, BufReader, Seek, SeekFrom};
use std::process::{self, Command, Stdio};
use std::time::Instant;
use regex::Regex;
use tempfile::tempfile;
use Cfg;
use errors::*;
use notifications::*;
use rustup_utils;
use telemetry::{Telemetry, TelemetryEvent};
pub fn run_command_for_dir<S: AsRef<OsStr>>(cmd: Command,
arg0: &str,
args: &[S],
cfg: &Cfg) -> Result<()> {
if (arg0 == "rustc" || arg0 == "rustc.exe") && try!(cfg.telemetry_enabled()) {
return telemetry_rustc(cmd, arg0, args, cfg);
}
run_command_for_dir_without_telemetry(cmd, arg0, args)
}
fn telemetry_rustc<S: AsRef<OsStr>>(mut cmd: Command,
arg0: &str,
args: &[S], cfg: &Cfg) -> Result<()> {
#[cfg(unix)]
fn file_as_stdio(file: &File) -> Stdio {
use std::os::unix::io::{AsRawFd, FromRawFd};
unsafe { Stdio::from_raw_fd(file.as_raw_fd()) }
}
#[cfg(windows)]
fn file_as_stdio(file: &File) -> Stdio {
use std::os::windows::io::{AsRawHandle, FromRawHandle};
unsafe { Stdio::from_raw_handle(file.as_raw_handle()) }
}
let now = Instant::now();
cmd.args(args);
let has_color_args = args.iter().any(|e| {
let e = e.as_ref().to_str().unwrap_or("");
e.starts_with("--color")
});
if stderr_isatty() &&!has_color_args
{
cmd.arg("--color");
cmd.arg("always");
}
let mut cmd_err_file = tempfile().unwrap();
let cmd_err_stdio = file_as_stdio(&cmd_err_file);
// FIXME rust-lang/rust#32254. It's not clear to me
// when and why this is needed.
let mut cmd = cmd.stdin(Stdio::inherit())
.stdout(Stdio::inherit())
.stderr(cmd_err_stdio)
.spawn()
.unwrap();
let status = cmd.wait();
let duration = now.elapsed();
let ms = (duration.as_secs() as u64 * 1000) + (duration.subsec_nanos() as u64 / 1000 / 1000);
let t = Telemetry::new(cfg.rustup_dir.join("telemetry"));
match status {
Ok(status) => {
let exit_code = status.code().unwrap_or(1);
let re = Regex::new(r"\[(?P<error>E.{4})\]").unwrap();
let mut buffer = String::new();
// Chose a HashSet instead of a Vec to avoid calls to sort() and dedup().
// The HashSet should be faster if there are a lot of errors, too.
let mut errors: Vec<String> = Vec::new();
let stderr = io::stderr();
let mut handle = stderr.lock();
cmd_err_file.seek(SeekFrom::Start(0)).unwrap();
let mut buffered_stderr = BufReader::new(cmd_err_file);
while buffered_stderr.read_line(&mut buffer).unwrap() > 0 {
let b = buffer.to_owned();
buffer.clear();
let _ = handle.write(b.as_bytes());
if let Some(caps) = re.captures(&b) {
if!caps.is_empty() {
errors.push(caps.name("error").unwrap_or("").to_owned());
}
};
}
let e = if errors.is_empty() | else { Some(errors) };
let te = TelemetryEvent::RustcRun { duration_ms: ms,
exit_code: exit_code,
errors: e };
let _ = t.log_telemetry(te).map_err(|xe| {
(cfg.notify_handler)(Notification::TelemetryCleanupError(&xe));
});
process::exit(exit_code);
},
Err(e) => {
let exit_code = e.raw_os_error().unwrap_or(1);
let te = TelemetryEvent::RustcRun { duration_ms: ms,
exit_code: exit_code,
errors: None };
let _ = t.log_telemetry(te).map_err(|xe| {
(cfg.notify_handler)(Notification::TelemetryCleanupError(&xe));
});
Err(e).chain_err(|| rustup_utils::ErrorKind::RunningCommand {
name: OsStr::new(arg0).to_owned(),
})
},
}
}
fn run_command_for_dir_without_telemetry<S: AsRef<OsStr>>(
mut cmd: Command, arg0: &str, args: &[S]) -> Result<()>
{
cmd.args(args);
// FIXME rust-lang/rust#32254. It's not clear to me
// when and why this is needed.
cmd.stdin(process::Stdio::inherit());
match cmd.status() {
Ok(status) => {
// Ensure correct exit code is returned
let code = status.code().unwrap_or(1);
process::exit(code);
}
Err(e) => {
Err(e).chain_err(|| rustup_utils::ErrorKind::RunningCommand {
name: OsStr::new(arg0).to_owned(),
})
}
}
}
#[cfg(unix)]
fn stderr_isatty() -> bool {
use libc;
unsafe { libc::isatty(libc::STDERR_FILENO)!= 0 }
}
#[cfg(windows)]
fn stderr_isatty() -> bool {
type DWORD = u32;
type BOOL = i32;
type HANDLE = *mut u8;
const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD;
extern "system" {
fn GetStdHandle(which: DWORD) -> HANDLE;
fn GetConsoleMode(hConsoleHandle: HANDLE,
lpMode: *mut DWORD) -> BOOL;
}
unsafe {
let handle = GetStdHandle(STD_ERROR_HANDLE);
let mut out = 0;
GetConsoleMode(handle, &mut out)!= 0
}
}
| { None } | conditional_block |
command.rs | use std::ffi::OsStr;
use std::fs::File;
use std::io::{self, Write, BufRead, BufReader, Seek, SeekFrom};
use std::process::{self, Command, Stdio};
use std::time::Instant;
use regex::Regex;
use tempfile::tempfile;
use Cfg;
use errors::*;
use notifications::*;
use rustup_utils;
use telemetry::{Telemetry, TelemetryEvent};
pub fn run_command_for_dir<S: AsRef<OsStr>>(cmd: Command,
arg0: &str,
args: &[S],
cfg: &Cfg) -> Result<()> {
if (arg0 == "rustc" || arg0 == "rustc.exe") && try!(cfg.telemetry_enabled()) {
return telemetry_rustc(cmd, arg0, args, cfg);
}
run_command_for_dir_without_telemetry(cmd, arg0, args)
}
fn telemetry_rustc<S: AsRef<OsStr>>(mut cmd: Command,
arg0: &str,
args: &[S], cfg: &Cfg) -> Result<()> {
#[cfg(unix)]
fn | (file: &File) -> Stdio {
use std::os::unix::io::{AsRawFd, FromRawFd};
unsafe { Stdio::from_raw_fd(file.as_raw_fd()) }
}
#[cfg(windows)]
fn file_as_stdio(file: &File) -> Stdio {
use std::os::windows::io::{AsRawHandle, FromRawHandle};
unsafe { Stdio::from_raw_handle(file.as_raw_handle()) }
}
let now = Instant::now();
cmd.args(args);
let has_color_args = args.iter().any(|e| {
let e = e.as_ref().to_str().unwrap_or("");
e.starts_with("--color")
});
if stderr_isatty() &&!has_color_args
{
cmd.arg("--color");
cmd.arg("always");
}
let mut cmd_err_file = tempfile().unwrap();
let cmd_err_stdio = file_as_stdio(&cmd_err_file);
// FIXME rust-lang/rust#32254. It's not clear to me
// when and why this is needed.
let mut cmd = cmd.stdin(Stdio::inherit())
.stdout(Stdio::inherit())
.stderr(cmd_err_stdio)
.spawn()
.unwrap();
let status = cmd.wait();
let duration = now.elapsed();
let ms = (duration.as_secs() as u64 * 1000) + (duration.subsec_nanos() as u64 / 1000 / 1000);
let t = Telemetry::new(cfg.rustup_dir.join("telemetry"));
match status {
Ok(status) => {
let exit_code = status.code().unwrap_or(1);
let re = Regex::new(r"\[(?P<error>E.{4})\]").unwrap();
let mut buffer = String::new();
// Chose a HashSet instead of a Vec to avoid calls to sort() and dedup().
// The HashSet should be faster if there are a lot of errors, too.
let mut errors: Vec<String> = Vec::new();
let stderr = io::stderr();
let mut handle = stderr.lock();
cmd_err_file.seek(SeekFrom::Start(0)).unwrap();
let mut buffered_stderr = BufReader::new(cmd_err_file);
while buffered_stderr.read_line(&mut buffer).unwrap() > 0 {
let b = buffer.to_owned();
buffer.clear();
let _ = handle.write(b.as_bytes());
if let Some(caps) = re.captures(&b) {
if!caps.is_empty() {
errors.push(caps.name("error").unwrap_or("").to_owned());
}
};
}
let e = if errors.is_empty() { None } else { Some(errors) };
let te = TelemetryEvent::RustcRun { duration_ms: ms,
exit_code: exit_code,
errors: e };
let _ = t.log_telemetry(te).map_err(|xe| {
(cfg.notify_handler)(Notification::TelemetryCleanupError(&xe));
});
process::exit(exit_code);
},
Err(e) => {
let exit_code = e.raw_os_error().unwrap_or(1);
let te = TelemetryEvent::RustcRun { duration_ms: ms,
exit_code: exit_code,
errors: None };
let _ = t.log_telemetry(te).map_err(|xe| {
(cfg.notify_handler)(Notification::TelemetryCleanupError(&xe));
});
Err(e).chain_err(|| rustup_utils::ErrorKind::RunningCommand {
name: OsStr::new(arg0).to_owned(),
})
},
}
}
fn run_command_for_dir_without_telemetry<S: AsRef<OsStr>>(
mut cmd: Command, arg0: &str, args: &[S]) -> Result<()>
{
cmd.args(args);
// FIXME rust-lang/rust#32254. It's not clear to me
// when and why this is needed.
cmd.stdin(process::Stdio::inherit());
match cmd.status() {
Ok(status) => {
// Ensure correct exit code is returned
let code = status.code().unwrap_or(1);
process::exit(code);
}
Err(e) => {
Err(e).chain_err(|| rustup_utils::ErrorKind::RunningCommand {
name: OsStr::new(arg0).to_owned(),
})
}
}
}
#[cfg(unix)]
fn stderr_isatty() -> bool {
use libc;
unsafe { libc::isatty(libc::STDERR_FILENO)!= 0 }
}
#[cfg(windows)]
fn stderr_isatty() -> bool {
type DWORD = u32;
type BOOL = i32;
type HANDLE = *mut u8;
const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD;
extern "system" {
fn GetStdHandle(which: DWORD) -> HANDLE;
fn GetConsoleMode(hConsoleHandle: HANDLE,
lpMode: *mut DWORD) -> BOOL;
}
unsafe {
let handle = GetStdHandle(STD_ERROR_HANDLE);
let mut out = 0;
GetConsoleMode(handle, &mut out)!= 0
}
}
| file_as_stdio | identifier_name |
command.rs | use std::ffi::OsStr;
use std::fs::File;
use std::io::{self, Write, BufRead, BufReader, Seek, SeekFrom};
use std::process::{self, Command, Stdio};
use std::time::Instant;
use regex::Regex;
use tempfile::tempfile;
use Cfg;
use errors::*;
use notifications::*;
use rustup_utils;
use telemetry::{Telemetry, TelemetryEvent};
pub fn run_command_for_dir<S: AsRef<OsStr>>(cmd: Command,
arg0: &str,
args: &[S],
cfg: &Cfg) -> Result<()> {
if (arg0 == "rustc" || arg0 == "rustc.exe") && try!(cfg.telemetry_enabled()) {
return telemetry_rustc(cmd, arg0, args, cfg);
}
run_command_for_dir_without_telemetry(cmd, arg0, args)
}
fn telemetry_rustc<S: AsRef<OsStr>>(mut cmd: Command,
arg0: &str,
args: &[S], cfg: &Cfg) -> Result<()> {
#[cfg(unix)]
fn file_as_stdio(file: &File) -> Stdio {
use std::os::unix::io::{AsRawFd, FromRawFd};
unsafe { Stdio::from_raw_fd(file.as_raw_fd()) }
}
#[cfg(windows)]
fn file_as_stdio(file: &File) -> Stdio {
use std::os::windows::io::{AsRawHandle, FromRawHandle};
unsafe { Stdio::from_raw_handle(file.as_raw_handle()) }
}
let now = Instant::now();
cmd.args(args);
let has_color_args = args.iter().any(|e| {
let e = e.as_ref().to_str().unwrap_or("");
e.starts_with("--color")
});
if stderr_isatty() &&!has_color_args
{
cmd.arg("--color");
cmd.arg("always");
}
let mut cmd_err_file = tempfile().unwrap();
let cmd_err_stdio = file_as_stdio(&cmd_err_file);
// FIXME rust-lang/rust#32254. It's not clear to me
// when and why this is needed.
let mut cmd = cmd.stdin(Stdio::inherit())
.stdout(Stdio::inherit())
.stderr(cmd_err_stdio)
.spawn()
.unwrap();
let status = cmd.wait();
let duration = now.elapsed();
let ms = (duration.as_secs() as u64 * 1000) + (duration.subsec_nanos() as u64 / 1000 / 1000);
let t = Telemetry::new(cfg.rustup_dir.join("telemetry"));
match status {
Ok(status) => {
let exit_code = status.code().unwrap_or(1);
let re = Regex::new(r"\[(?P<error>E.{4})\]").unwrap();
let mut buffer = String::new();
// Chose a HashSet instead of a Vec to avoid calls to sort() and dedup().
// The HashSet should be faster if there are a lot of errors, too.
let mut errors: Vec<String> = Vec::new();
let stderr = io::stderr();
let mut handle = stderr.lock();
cmd_err_file.seek(SeekFrom::Start(0)).unwrap();
let mut buffered_stderr = BufReader::new(cmd_err_file);
while buffered_stderr.read_line(&mut buffer).unwrap() > 0 {
let b = buffer.to_owned();
buffer.clear();
let _ = handle.write(b.as_bytes()); | };
}
let e = if errors.is_empty() { None } else { Some(errors) };
let te = TelemetryEvent::RustcRun { duration_ms: ms,
exit_code: exit_code,
errors: e };
let _ = t.log_telemetry(te).map_err(|xe| {
(cfg.notify_handler)(Notification::TelemetryCleanupError(&xe));
});
process::exit(exit_code);
},
Err(e) => {
let exit_code = e.raw_os_error().unwrap_or(1);
let te = TelemetryEvent::RustcRun { duration_ms: ms,
exit_code: exit_code,
errors: None };
let _ = t.log_telemetry(te).map_err(|xe| {
(cfg.notify_handler)(Notification::TelemetryCleanupError(&xe));
});
Err(e).chain_err(|| rustup_utils::ErrorKind::RunningCommand {
name: OsStr::new(arg0).to_owned(),
})
},
}
}
fn run_command_for_dir_without_telemetry<S: AsRef<OsStr>>(
mut cmd: Command, arg0: &str, args: &[S]) -> Result<()>
{
cmd.args(args);
// FIXME rust-lang/rust#32254. It's not clear to me
// when and why this is needed.
cmd.stdin(process::Stdio::inherit());
match cmd.status() {
Ok(status) => {
// Ensure correct exit code is returned
let code = status.code().unwrap_or(1);
process::exit(code);
}
Err(e) => {
Err(e).chain_err(|| rustup_utils::ErrorKind::RunningCommand {
name: OsStr::new(arg0).to_owned(),
})
}
}
}
#[cfg(unix)]
fn stderr_isatty() -> bool {
use libc;
unsafe { libc::isatty(libc::STDERR_FILENO)!= 0 }
}
#[cfg(windows)]
fn stderr_isatty() -> bool {
type DWORD = u32;
type BOOL = i32;
type HANDLE = *mut u8;
const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD;
extern "system" {
fn GetStdHandle(which: DWORD) -> HANDLE;
fn GetConsoleMode(hConsoleHandle: HANDLE,
lpMode: *mut DWORD) -> BOOL;
}
unsafe {
let handle = GetStdHandle(STD_ERROR_HANDLE);
let mut out = 0;
GetConsoleMode(handle, &mut out)!= 0
}
} |
if let Some(caps) = re.captures(&b) {
if !caps.is_empty() {
errors.push(caps.name("error").unwrap_or("").to_owned());
} | random_line_split |
command.rs | use std::ffi::OsStr;
use std::fs::File;
use std::io::{self, Write, BufRead, BufReader, Seek, SeekFrom};
use std::process::{self, Command, Stdio};
use std::time::Instant;
use regex::Regex;
use tempfile::tempfile;
use Cfg;
use errors::*;
use notifications::*;
use rustup_utils;
use telemetry::{Telemetry, TelemetryEvent};
pub fn run_command_for_dir<S: AsRef<OsStr>>(cmd: Command,
arg0: &str,
args: &[S],
cfg: &Cfg) -> Result<()> {
if (arg0 == "rustc" || arg0 == "rustc.exe") && try!(cfg.telemetry_enabled()) {
return telemetry_rustc(cmd, arg0, args, cfg);
}
run_command_for_dir_without_telemetry(cmd, arg0, args)
}
fn telemetry_rustc<S: AsRef<OsStr>>(mut cmd: Command,
arg0: &str,
args: &[S], cfg: &Cfg) -> Result<()> {
#[cfg(unix)]
fn file_as_stdio(file: &File) -> Stdio {
use std::os::unix::io::{AsRawFd, FromRawFd};
unsafe { Stdio::from_raw_fd(file.as_raw_fd()) }
}
#[cfg(windows)]
fn file_as_stdio(file: &File) -> Stdio {
use std::os::windows::io::{AsRawHandle, FromRawHandle};
unsafe { Stdio::from_raw_handle(file.as_raw_handle()) }
}
let now = Instant::now();
cmd.args(args);
let has_color_args = args.iter().any(|e| {
let e = e.as_ref().to_str().unwrap_or("");
e.starts_with("--color")
});
if stderr_isatty() &&!has_color_args
{
cmd.arg("--color");
cmd.arg("always");
}
let mut cmd_err_file = tempfile().unwrap();
let cmd_err_stdio = file_as_stdio(&cmd_err_file);
// FIXME rust-lang/rust#32254. It's not clear to me
// when and why this is needed.
let mut cmd = cmd.stdin(Stdio::inherit())
.stdout(Stdio::inherit())
.stderr(cmd_err_stdio)
.spawn()
.unwrap();
let status = cmd.wait();
let duration = now.elapsed();
let ms = (duration.as_secs() as u64 * 1000) + (duration.subsec_nanos() as u64 / 1000 / 1000);
let t = Telemetry::new(cfg.rustup_dir.join("telemetry"));
match status {
Ok(status) => {
let exit_code = status.code().unwrap_or(1);
let re = Regex::new(r"\[(?P<error>E.{4})\]").unwrap();
let mut buffer = String::new();
// Chose a HashSet instead of a Vec to avoid calls to sort() and dedup().
// The HashSet should be faster if there are a lot of errors, too.
let mut errors: Vec<String> = Vec::new();
let stderr = io::stderr();
let mut handle = stderr.lock();
cmd_err_file.seek(SeekFrom::Start(0)).unwrap();
let mut buffered_stderr = BufReader::new(cmd_err_file);
while buffered_stderr.read_line(&mut buffer).unwrap() > 0 {
let b = buffer.to_owned();
buffer.clear();
let _ = handle.write(b.as_bytes());
if let Some(caps) = re.captures(&b) {
if!caps.is_empty() {
errors.push(caps.name("error").unwrap_or("").to_owned());
}
};
}
let e = if errors.is_empty() { None } else { Some(errors) };
let te = TelemetryEvent::RustcRun { duration_ms: ms,
exit_code: exit_code,
errors: e };
let _ = t.log_telemetry(te).map_err(|xe| {
(cfg.notify_handler)(Notification::TelemetryCleanupError(&xe));
});
process::exit(exit_code);
},
Err(e) => {
let exit_code = e.raw_os_error().unwrap_or(1);
let te = TelemetryEvent::RustcRun { duration_ms: ms,
exit_code: exit_code,
errors: None };
let _ = t.log_telemetry(te).map_err(|xe| {
(cfg.notify_handler)(Notification::TelemetryCleanupError(&xe));
});
Err(e).chain_err(|| rustup_utils::ErrorKind::RunningCommand {
name: OsStr::new(arg0).to_owned(),
})
},
}
}
fn run_command_for_dir_without_telemetry<S: AsRef<OsStr>>(
mut cmd: Command, arg0: &str, args: &[S]) -> Result<()>
{
cmd.args(args);
// FIXME rust-lang/rust#32254. It's not clear to me
// when and why this is needed.
cmd.stdin(process::Stdio::inherit());
match cmd.status() {
Ok(status) => {
// Ensure correct exit code is returned
let code = status.code().unwrap_or(1);
process::exit(code);
}
Err(e) => {
Err(e).chain_err(|| rustup_utils::ErrorKind::RunningCommand {
name: OsStr::new(arg0).to_owned(),
})
}
}
}
#[cfg(unix)]
fn stderr_isatty() -> bool {
use libc;
unsafe { libc::isatty(libc::STDERR_FILENO)!= 0 }
}
#[cfg(windows)]
fn stderr_isatty() -> bool | {
type DWORD = u32;
type BOOL = i32;
type HANDLE = *mut u8;
const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD;
extern "system" {
fn GetStdHandle(which: DWORD) -> HANDLE;
fn GetConsoleMode(hConsoleHandle: HANDLE,
lpMode: *mut DWORD) -> BOOL;
}
unsafe {
let handle = GetStdHandle(STD_ERROR_HANDLE);
let mut out = 0;
GetConsoleMode(handle, &mut out) != 0
}
} | identifier_body |
|
method-self-arg.rs | // run-pass
// Test method calls with self as an argument
static mut COUNT: usize = 1;
#[derive(Copy, Clone)]
struct Foo;
impl Foo {
fn foo(self, x: &Foo) {
unsafe { COUNT *= 2; }
// Test internal call.
Foo::bar(&self);
Foo::bar(x);
Foo::baz(self);
Foo::baz(*x);
Foo::qux(Box::new(self));
Foo::qux(Box::new(*x));
}
fn bar(&self) {
unsafe { COUNT *= 3; }
}
fn baz(self) {
unsafe { COUNT *= 5; }
}
fn qux(self: Box<Foo>) {
unsafe { COUNT *= 7; }
}
}
fn main() { |
x.foo(&x);
unsafe { assert_eq!(COUNT, 2*3*3*3*5*5*5*7*7*7); }
} | let x = Foo;
// Test external call.
Foo::bar(&x);
Foo::baz(x);
Foo::qux(Box::new(x)); | random_line_split |
method-self-arg.rs | // run-pass
// Test method calls with self as an argument
static mut COUNT: usize = 1;
#[derive(Copy, Clone)]
struct Foo;
impl Foo {
fn foo(self, x: &Foo) |
fn bar(&self) {
unsafe { COUNT *= 3; }
}
fn baz(self) {
unsafe { COUNT *= 5; }
}
fn qux(self: Box<Foo>) {
unsafe { COUNT *= 7; }
}
}
fn main() {
let x = Foo;
// Test external call.
Foo::bar(&x);
Foo::baz(x);
Foo::qux(Box::new(x));
x.foo(&x);
unsafe { assert_eq!(COUNT, 2*3*3*3*5*5*5*7*7*7); }
}
| {
unsafe { COUNT *= 2; }
// Test internal call.
Foo::bar(&self);
Foo::bar(x);
Foo::baz(self);
Foo::baz(*x);
Foo::qux(Box::new(self));
Foo::qux(Box::new(*x));
} | identifier_body |
method-self-arg.rs | // run-pass
// Test method calls with self as an argument
static mut COUNT: usize = 1;
#[derive(Copy, Clone)]
struct Foo;
impl Foo {
fn foo(self, x: &Foo) {
unsafe { COUNT *= 2; }
// Test internal call.
Foo::bar(&self);
Foo::bar(x);
Foo::baz(self);
Foo::baz(*x);
Foo::qux(Box::new(self));
Foo::qux(Box::new(*x));
}
fn bar(&self) {
unsafe { COUNT *= 3; }
}
fn baz(self) {
unsafe { COUNT *= 5; }
}
fn qux(self: Box<Foo>) {
unsafe { COUNT *= 7; }
}
}
fn | () {
let x = Foo;
// Test external call.
Foo::bar(&x);
Foo::baz(x);
Foo::qux(Box::new(x));
x.foo(&x);
unsafe { assert_eq!(COUNT, 2*3*3*3*5*5*5*7*7*7); }
}
| main | identifier_name |
adt-brace-structs.rs | // Unit test for the "user substitutions" that are annotated on each
// node.
struct SomeStruct<T> { t: T }
fn no_annot() {
let c = 66;
SomeStruct { t: &c };
}
fn annot_underscore() {
let c = 66;
SomeStruct::<_> { t: &c };
}
fn annot_reference_any_lifetime() {
let c = 66;
SomeStruct::<&u32> { t: &c };
}
fn annot_reference_static_lifetime() {
let c = 66;
SomeStruct::<&'static u32> { t: &c }; //~ ERROR
}
fn annot_reference_named_lifetime<'a>(_d: &'a u32) {
let c = 66;
SomeStruct::<&'a u32> { t: &c }; //~ ERROR
}
fn annot_reference_named_lifetime_ok<'a>(c: &'a u32) {
SomeStruct::<&'a u32> { t: c };
}
fn | <'a>(_: &'a u32) {
let _closure = || {
let c = 66;
SomeStruct::<&'a u32> { t: &c }; //~ ERROR
};
}
fn annot_reference_named_lifetime_in_closure_ok<'a>(c: &'a u32) {
let _closure = || {
SomeStruct::<&'a u32> { t: c };
};
}
fn main() { }
| annot_reference_named_lifetime_in_closure | identifier_name |
adt-brace-structs.rs | fn no_annot() {
let c = 66;
SomeStruct { t: &c };
}
fn annot_underscore() {
let c = 66;
SomeStruct::<_> { t: &c };
}
fn annot_reference_any_lifetime() {
let c = 66;
SomeStruct::<&u32> { t: &c };
}
fn annot_reference_static_lifetime() {
let c = 66;
SomeStruct::<&'static u32> { t: &c }; //~ ERROR
}
fn annot_reference_named_lifetime<'a>(_d: &'a u32) {
let c = 66;
SomeStruct::<&'a u32> { t: &c }; //~ ERROR
}
fn annot_reference_named_lifetime_ok<'a>(c: &'a u32) {
SomeStruct::<&'a u32> { t: c };
}
fn annot_reference_named_lifetime_in_closure<'a>(_: &'a u32) {
let _closure = || {
let c = 66;
SomeStruct::<&'a u32> { t: &c }; //~ ERROR
};
}
fn annot_reference_named_lifetime_in_closure_ok<'a>(c: &'a u32) {
let _closure = || {
SomeStruct::<&'a u32> { t: c };
};
}
fn main() { } | // Unit test for the "user substitutions" that are annotated on each
// node.
struct SomeStruct<T> { t: T }
| random_line_split |
|
adt-brace-structs.rs | // Unit test for the "user substitutions" that are annotated on each
// node.
struct SomeStruct<T> { t: T }
fn no_annot() {
let c = 66;
SomeStruct { t: &c };
}
fn annot_underscore() {
let c = 66;
SomeStruct::<_> { t: &c };
}
fn annot_reference_any_lifetime() {
let c = 66;
SomeStruct::<&u32> { t: &c };
}
fn annot_reference_static_lifetime() {
let c = 66;
SomeStruct::<&'static u32> { t: &c }; //~ ERROR
}
fn annot_reference_named_lifetime<'a>(_d: &'a u32) {
let c = 66;
SomeStruct::<&'a u32> { t: &c }; //~ ERROR
}
fn annot_reference_named_lifetime_ok<'a>(c: &'a u32) {
SomeStruct::<&'a u32> { t: c };
}
fn annot_reference_named_lifetime_in_closure<'a>(_: &'a u32) {
let _closure = || {
let c = 66;
SomeStruct::<&'a u32> { t: &c }; //~ ERROR
};
}
fn annot_reference_named_lifetime_in_closure_ok<'a>(c: &'a u32) |
fn main() { }
| {
let _closure = || {
SomeStruct::<&'a u32> { t: c };
};
} | identifier_body |
transaction.rs | // Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! State test transaction deserialization.
use uint::Uint; | use bytes::Bytes;
use hash::{Address, H256};
use maybe::MaybeEmpty;
/// State test transaction deserialization.
#[derive(Debug, PartialEq, Deserialize)]
pub struct Transaction {
/// Transaction data.
pub data: Bytes,
/// Gas limit.
#[serde(rename="gasLimit")]
pub gas_limit: Uint,
/// Gas price.
#[serde(rename="gasPrice")]
pub gas_price: Uint,
/// Nonce.
pub nonce: Uint,
/// Secret key.
#[serde(rename="secretKey")]
pub secret: Option<H256>,
/// To.
pub to: MaybeEmpty<Address>,
/// Value.
pub value: Uint,
}
#[cfg(test)]
mod tests {
use serde_json;
use state::Transaction;
#[test]
fn transaction_deserialization() {
let s = r#"{
"data" : "",
"gasLimit" : "0x2dc6c0",
"gasPrice" : "0x01",
"nonce" : "0x00",
"secretKey" : "45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8",
"to" : "1000000000000000000000000000000000000000",
"value" : "0x00"
}"#;
let _deserialized: Transaction = serde_json::from_str(s).unwrap();
// TODO: validate all fields
}
} | random_line_split |
|
transaction.rs | // Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! State test transaction deserialization.
use uint::Uint;
use bytes::Bytes;
use hash::{Address, H256};
use maybe::MaybeEmpty;
/// State test transaction deserialization.
#[derive(Debug, PartialEq, Deserialize)]
pub struct Transaction {
/// Transaction data.
pub data: Bytes,
/// Gas limit.
#[serde(rename="gasLimit")]
pub gas_limit: Uint,
/// Gas price.
#[serde(rename="gasPrice")]
pub gas_price: Uint,
/// Nonce.
pub nonce: Uint,
/// Secret key.
#[serde(rename="secretKey")]
pub secret: Option<H256>,
/// To.
pub to: MaybeEmpty<Address>,
/// Value.
pub value: Uint,
}
#[cfg(test)]
mod tests {
use serde_json;
use state::Transaction;
#[test]
fn transaction_deserialization() |
}
| {
let s = r#"{
"data" : "",
"gasLimit" : "0x2dc6c0",
"gasPrice" : "0x01",
"nonce" : "0x00",
"secretKey" : "45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8",
"to" : "1000000000000000000000000000000000000000",
"value" : "0x00"
}"#;
let _deserialized: Transaction = serde_json::from_str(s).unwrap();
// TODO: validate all fields
} | identifier_body |
transaction.rs | // Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
//! State test transaction deserialization.
use uint::Uint;
use bytes::Bytes;
use hash::{Address, H256};
use maybe::MaybeEmpty;
/// State test transaction deserialization.
#[derive(Debug, PartialEq, Deserialize)]
pub struct Transaction {
/// Transaction data.
pub data: Bytes,
/// Gas limit.
#[serde(rename="gasLimit")]
pub gas_limit: Uint,
/// Gas price.
#[serde(rename="gasPrice")]
pub gas_price: Uint,
/// Nonce.
pub nonce: Uint,
/// Secret key.
#[serde(rename="secretKey")]
pub secret: Option<H256>,
/// To.
pub to: MaybeEmpty<Address>,
/// Value.
pub value: Uint,
}
#[cfg(test)]
mod tests {
use serde_json;
use state::Transaction;
#[test]
fn | () {
let s = r#"{
"data" : "",
"gasLimit" : "0x2dc6c0",
"gasPrice" : "0x01",
"nonce" : "0x00",
"secretKey" : "45a915e4d060149eb4365960e6a7a45f334393093061116b197e3240065ff2d8",
"to" : "1000000000000000000000000000000000000000",
"value" : "0x00"
}"#;
let _deserialized: Transaction = serde_json::from_str(s).unwrap();
// TODO: validate all fields
}
}
| transaction_deserialization | identifier_name |
overflowing_pow.rs | use num::arithmetic::traits::{OverflowingPow, OverflowingPowAssign};
use num::conversion::traits::ExactFrom;
macro_rules! impl_overflowing_pow {
($t:ident) => {
impl OverflowingPow<u64> for $t {
type Output = $t;
#[inline]
fn overflowing_pow(self, exp: u64) -> ($t, bool) {
$t::overflowing_pow(self, u32::exact_from(exp))
} | impl OverflowingPowAssign<u64> for $t {
/// Replaces `self` with `self ^ exp`.
///
/// Returns a boolean indicating whether an arithmetic overflow would occur. If an
/// overflow would have occurred, then the wrapped value is assigned.
///
/// # Worst-case complexity
/// Constant time and additional memory.
///
/// # Examples
/// See the documentation of the `num::arithmetic::overflowing_pow` module.
#[inline]
fn overflowing_pow_assign(&mut self, exp: u64) -> bool {
let (pow, overflow) = OverflowingPow::overflowing_pow(*self, exp);
*self = pow;
overflow
}
}
};
}
apply_to_primitive_ints!(impl_overflowing_pow); | }
| random_line_split |
instant_seal.rs | // Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use engines::{Engine, Seal};
use parity_machine::{Machine, Transactions};
/// An engine which does not provide any consensus mechanism, just seals blocks internally.
/// Only seals blocks which have transactions.
pub struct InstantSeal<M> {
machine: M,
}
impl<M> InstantSeal<M> {
/// Returns new instance of InstantSeal over the given state machine.
pub fn new(machine: M) -> Self {
InstantSeal {
machine: machine,
}
}
}
impl<M: Machine> Engine<M> for InstantSeal<M>
where M::LiveBlock: Transactions
{
fn name(&self) -> &str {
"InstantSeal"
}
fn | (&self) -> &M { &self.machine }
fn seals_internally(&self) -> Option<bool> { Some(true) }
fn generate_seal(&self, block: &M::LiveBlock) -> Seal {
if block.transactions().is_empty() { Seal::None } else { Seal::Regular(Vec::new()) }
}
fn verify_local_seal(&self, _header: &M::Header) -> Result<(), M::Error> {
Ok(())
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use bigint::hash::H520;
use util::*;
use tests::helpers::*;
use spec::Spec;
use header::Header;
use block::*;
use engines::Seal;
#[test]
fn instant_can_seal() {
let spec = Spec::new_instant();
let engine = &*spec.engine;
let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let genesis_header = spec.genesis_header();
let last_hashes = Arc::new(vec![genesis_header.hash()]);
let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::default(), (3141562.into(), 31415620.into()), vec![], false).unwrap();
let b = b.close_and_lock();
if let Seal::Regular(seal) = engine.generate_seal(b.block()) {
assert!(b.try_seal(engine, seal).is_ok());
}
}
#[test]
fn instant_cant_verify() {
let engine = Spec::new_instant().engine;
let mut header: Header = Header::default();
assert!(engine.verify_block_basic(&header).is_ok());
header.set_seal(vec![::rlp::encode(&H520::default()).into_vec()]);
assert!(engine.verify_block_unordered(&header).is_ok());
}
}
| machine | identifier_name |
instant_seal.rs | // Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use engines::{Engine, Seal};
use parity_machine::{Machine, Transactions};
/// An engine which does not provide any consensus mechanism, just seals blocks internally.
/// Only seals blocks which have transactions.
pub struct InstantSeal<M> {
machine: M,
}
impl<M> InstantSeal<M> {
/// Returns new instance of InstantSeal over the given state machine.
pub fn new(machine: M) -> Self {
InstantSeal {
machine: machine,
}
}
}
impl<M: Machine> Engine<M> for InstantSeal<M>
where M::LiveBlock: Transactions
{
fn name(&self) -> &str |
fn machine(&self) -> &M { &self.machine }
fn seals_internally(&self) -> Option<bool> { Some(true) }
fn generate_seal(&self, block: &M::LiveBlock) -> Seal {
if block.transactions().is_empty() { Seal::None } else { Seal::Regular(Vec::new()) }
}
fn verify_local_seal(&self, _header: &M::Header) -> Result<(), M::Error> {
Ok(())
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use bigint::hash::H520;
use util::*;
use tests::helpers::*;
use spec::Spec;
use header::Header;
use block::*;
use engines::Seal;
#[test]
fn instant_can_seal() {
let spec = Spec::new_instant();
let engine = &*spec.engine;
let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let genesis_header = spec.genesis_header();
let last_hashes = Arc::new(vec![genesis_header.hash()]);
let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::default(), (3141562.into(), 31415620.into()), vec![], false).unwrap();
let b = b.close_and_lock();
if let Seal::Regular(seal) = engine.generate_seal(b.block()) {
assert!(b.try_seal(engine, seal).is_ok());
}
}
#[test]
fn instant_cant_verify() {
let engine = Spec::new_instant().engine;
let mut header: Header = Header::default();
assert!(engine.verify_block_basic(&header).is_ok());
header.set_seal(vec![::rlp::encode(&H520::default()).into_vec()]);
assert!(engine.verify_block_unordered(&header).is_ok());
}
}
| {
"InstantSeal"
} | identifier_body |
instant_seal.rs | // Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use engines::{Engine, Seal};
use parity_machine::{Machine, Transactions};
/// An engine which does not provide any consensus mechanism, just seals blocks internally.
/// Only seals blocks which have transactions.
pub struct InstantSeal<M> {
machine: M,
}
impl<M> InstantSeal<M> {
/// Returns new instance of InstantSeal over the given state machine.
pub fn new(machine: M) -> Self {
InstantSeal {
machine: machine,
}
}
}
impl<M: Machine> Engine<M> for InstantSeal<M>
where M::LiveBlock: Transactions
{
fn name(&self) -> &str {
"InstantSeal"
}
fn machine(&self) -> &M { &self.machine }
fn seals_internally(&self) -> Option<bool> { Some(true) }
fn generate_seal(&self, block: &M::LiveBlock) -> Seal {
if block.transactions().is_empty() | else { Seal::Regular(Vec::new()) }
}
fn verify_local_seal(&self, _header: &M::Header) -> Result<(), M::Error> {
Ok(())
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use bigint::hash::H520;
use util::*;
use tests::helpers::*;
use spec::Spec;
use header::Header;
use block::*;
use engines::Seal;
#[test]
fn instant_can_seal() {
let spec = Spec::new_instant();
let engine = &*spec.engine;
let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let genesis_header = spec.genesis_header();
let last_hashes = Arc::new(vec![genesis_header.hash()]);
let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::default(), (3141562.into(), 31415620.into()), vec![], false).unwrap();
let b = b.close_and_lock();
if let Seal::Regular(seal) = engine.generate_seal(b.block()) {
assert!(b.try_seal(engine, seal).is_ok());
}
}
#[test]
fn instant_cant_verify() {
let engine = Spec::new_instant().engine;
let mut header: Header = Header::default();
assert!(engine.verify_block_basic(&header).is_ok());
header.set_seal(vec![::rlp::encode(&H520::default()).into_vec()]);
assert!(engine.verify_block_unordered(&header).is_ok());
}
}
| { Seal::None } | conditional_block |
instant_seal.rs | // Copyright 2015-2017 Parity Technologies (UK) Ltd.
// This file is part of Parity.
// Parity is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Parity is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.
use engines::{Engine, Seal};
use parity_machine::{Machine, Transactions};
/// An engine which does not provide any consensus mechanism, just seals blocks internally.
/// Only seals blocks which have transactions.
pub struct InstantSeal<M> {
machine: M,
}
impl<M> InstantSeal<M> {
/// Returns new instance of InstantSeal over the given state machine.
pub fn new(machine: M) -> Self {
InstantSeal {
machine: machine,
}
}
}
impl<M: Machine> Engine<M> for InstantSeal<M>
where M::LiveBlock: Transactions
{
fn name(&self) -> &str {
"InstantSeal"
}
fn machine(&self) -> &M { &self.machine }
fn seals_internally(&self) -> Option<bool> { Some(true) }
fn generate_seal(&self, block: &M::LiveBlock) -> Seal {
if block.transactions().is_empty() { Seal::None } else { Seal::Regular(Vec::new()) }
}
fn verify_local_seal(&self, _header: &M::Header) -> Result<(), M::Error> {
Ok(())
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use bigint::hash::H520;
use util::*;
use tests::helpers::*;
use spec::Spec;
use header::Header;
use block::*;
use engines::Seal;
#[test]
fn instant_can_seal() {
let spec = Spec::new_instant();
let engine = &*spec.engine;
let db = spec.ensure_db_good(get_temp_state_db(), &Default::default()).unwrap();
let genesis_header = spec.genesis_header(); | let last_hashes = Arc::new(vec![genesis_header.hash()]);
let b = OpenBlock::new(engine, Default::default(), false, db, &genesis_header, last_hashes, Address::default(), (3141562.into(), 31415620.into()), vec![], false).unwrap();
let b = b.close_and_lock();
if let Seal::Regular(seal) = engine.generate_seal(b.block()) {
assert!(b.try_seal(engine, seal).is_ok());
}
}
#[test]
fn instant_cant_verify() {
let engine = Spec::new_instant().engine;
let mut header: Header = Header::default();
assert!(engine.verify_block_basic(&header).is_ok());
header.set_seal(vec![::rlp::encode(&H520::default()).into_vec()]);
assert!(engine.verify_block_unordered(&header).is_ok());
}
} | random_line_split |
|
main.rs | use std::fs::File;
use std::io::prelude::*;
use std::collections::HashMap;
fn get_input() -> i32 {
let mut file = File::open("input.txt").unwrap();
let mut content = String::new();
file.read_to_string(&mut content).unwrap();
content.parse().unwrap()
}
#[derive(Copy, Clone, Debug)]
enum Direction {
XPos,
XNeg,
YPos,
YNeg
}
#[derive(Copy, Clone, Debug)]
struct | {
stride: i32,
left: i32,
x: i32,
y: i32,
dir: Direction
}
impl State {
fn move_next(self: &mut State) {
if self.left > 0 {
match self.dir {
Direction::XPos => self.x += 1,
Direction::XNeg => self.x -= 1,
Direction::YPos => self.y += 1,
Direction::YNeg => self.y -= 1
}
self.left -= 1;
} else {
match self.dir {
Direction::XPos => {
self.dir = Direction::YPos;
self.y += 1;
},
Direction::YPos => {
self.stride += 1;
self.dir = Direction::XNeg;
self.x -= 1;
},
Direction::XNeg => {
self.dir = Direction::YNeg;
self.y -= 1;
},
Direction::YNeg => {
self.stride += 1;
self.dir = Direction::XPos;
self.x += 1;
}
}
self.left = self.stride - 1;
}
}
fn move_next_value(self: &mut State, grid: &mut HashMap<(i32, i32), i32>) -> i32 {
self.move_next();
let mut v = 0;
v += grid.get(&(self.x + 1, self.y)).unwrap_or(&0);
v += grid.get(&(self.x - 1, self.y)).unwrap_or(&0);
v += grid.get(&(self.x, self.y + 1)).unwrap_or(&0);
v += grid.get(&(self.x, self.y - 1)).unwrap_or(&0);
v += grid.get(&(self.x + 1, self.y + 1)).unwrap_or(&0);
v += grid.get(&(self.x + 1, self.y - 1)).unwrap_or(&0);
v += grid.get(&(self.x - 1, self.y + 1)).unwrap_or(&0);
v += grid.get(&(self.x - 1, self.y - 1)).unwrap_or(&0);
grid.insert((self.x, self.y), v);
v
}
}
fn part_1(input: i32) -> i32 {
let mut state = State { stride: 1, left: 1, x: 0, y: 0, dir: Direction::XPos };
for _ in 1.. input {
state.move_next();
}
state.x.abs() + state.y.abs()
}
fn part_2(input: i32) -> i32 {
let mut state = State { stride: 1, left: 1, x: 0, y: 0, dir: Direction::XPos };
let mut grid = HashMap::new();
grid.insert((0, 0), 1);
let mut v = 1;
while v <= input {
v = state.move_next_value(&mut grid);
}
v
}
fn main() {
let input = get_input();
println!("Part 1 distance: {}", part_1(input));
println!("Part 2 value: {}", part_2(input));
}
| State | identifier_name |
main.rs | use std::fs::File;
use std::io::prelude::*;
use std::collections::HashMap;
fn get_input() -> i32 {
let mut file = File::open("input.txt").unwrap();
let mut content = String::new();
file.read_to_string(&mut content).unwrap();
content.parse().unwrap()
}
#[derive(Copy, Clone, Debug)]
enum Direction {
XPos,
XNeg,
YPos,
YNeg
}
#[derive(Copy, Clone, Debug)]
struct State {
stride: i32,
left: i32,
x: i32,
y: i32,
dir: Direction
}
impl State {
fn move_next(self: &mut State) {
if self.left > 0 { | Direction::XNeg => self.x -= 1,
Direction::YPos => self.y += 1,
Direction::YNeg => self.y -= 1
}
self.left -= 1;
} else {
match self.dir {
Direction::XPos => {
self.dir = Direction::YPos;
self.y += 1;
},
Direction::YPos => {
self.stride += 1;
self.dir = Direction::XNeg;
self.x -= 1;
},
Direction::XNeg => {
self.dir = Direction::YNeg;
self.y -= 1;
},
Direction::YNeg => {
self.stride += 1;
self.dir = Direction::XPos;
self.x += 1;
}
}
self.left = self.stride - 1;
}
}
fn move_next_value(self: &mut State, grid: &mut HashMap<(i32, i32), i32>) -> i32 {
self.move_next();
let mut v = 0;
v += grid.get(&(self.x + 1, self.y)).unwrap_or(&0);
v += grid.get(&(self.x - 1, self.y)).unwrap_or(&0);
v += grid.get(&(self.x, self.y + 1)).unwrap_or(&0);
v += grid.get(&(self.x, self.y - 1)).unwrap_or(&0);
v += grid.get(&(self.x + 1, self.y + 1)).unwrap_or(&0);
v += grid.get(&(self.x + 1, self.y - 1)).unwrap_or(&0);
v += grid.get(&(self.x - 1, self.y + 1)).unwrap_or(&0);
v += grid.get(&(self.x - 1, self.y - 1)).unwrap_or(&0);
grid.insert((self.x, self.y), v);
v
}
}
fn part_1(input: i32) -> i32 {
let mut state = State { stride: 1, left: 1, x: 0, y: 0, dir: Direction::XPos };
for _ in 1.. input {
state.move_next();
}
state.x.abs() + state.y.abs()
}
fn part_2(input: i32) -> i32 {
let mut state = State { stride: 1, left: 1, x: 0, y: 0, dir: Direction::XPos };
let mut grid = HashMap::new();
grid.insert((0, 0), 1);
let mut v = 1;
while v <= input {
v = state.move_next_value(&mut grid);
}
v
}
fn main() {
let input = get_input();
println!("Part 1 distance: {}", part_1(input));
println!("Part 2 value: {}", part_2(input));
} | match self.dir {
Direction::XPos => self.x += 1, | random_line_split |
main.rs | #[cfg(debug_assertions)]
#[macro_use] extern crate log;
extern crate crypto;
#[cfg(debug_assertions)]
extern crate env_logger;
extern crate futures;
extern crate getopts;
extern crate hyper;
extern crate librespot;
extern crate rpassword;
#[macro_use]
extern crate serde_json;
extern crate tokio_core;
extern crate tokio_io;
extern crate tokio_signal;
use futures::{Future, Async, Poll, Stream};
use futures::sync::mpsc::UnboundedReceiver;
#[cfg(debug_assertions)]
use std::env;
use std::fs::File;
use std::io::{self, stderr, Write};
use std::path::PathBuf;
use std::process::exit;
use std::str::FromStr;
use tokio_core::reactor::{Handle, Core};
use tokio_io::IoStream;
use std::mem;
use crypto::digest::Digest;
use crypto::sha1::Sha1;
use librespot::core::authentication::{get_credentials, Credentials};
use librespot::core::cache::Cache;
use librespot::core::config::{DeviceType, SessionConfig, ConnectConfig};
use librespot::core::session::Session;
use librespot::playback::audio_backend::{self};
use librespot::playback::config::{Bitrate, PlayerConfig};
use librespot::connect::discovery::{discovery, DiscoveryStream};
use librespot::playback::mixer::{self, MixerConfig};
use librespot::playback::player::{Player, PlayerEvent};
use librespot::connect::spirc::{Spirc, SpircTask};
use librespot::core::spotify_id::SpotifyId;
mod lms;
use lms::LMS;
const VERSION: &'static str = concat!(env!("CARGO_PKG_NAME"), " v", env!("CARGO_PKG_VERSION"));
#[cfg(debug_assertions)]
const DEBUGMODE: bool = true;
#[cfg(not(debug_assertions))]
const DEBUGMODE: bool = false;
#[cfg(target_os="windows")]
const NULLDEVICE: &'static str = "NUL";
#[cfg(not(target_os="windows"))]
const NULLDEVICE: &'static str = "/dev/null";
fn device_id(name: &str) -> String {
let mut h = Sha1::new();
h.input_str(name);
h.result_str()
}
fn usage(program: &str, opts: &getopts::Options) -> String {
println!("{}", VERSION.to_string());
let brief = format!("Usage: {} [options]", program);
opts.usage(&brief)
}
#[cfg(debug_assertions)]
fn setup_logging(verbose: bool) {
let mut builder = env_logger::Builder::new();
match env::var("RUST_LOG") {
Ok(config) => {
builder.parse_filters(&config);
builder.init();
if verbose {
warn!("`--verbose` flag overidden by `RUST_LOG` environment variable");
}
}
Err(_) => {
if verbose {
builder.parse_filters("mdns=info,librespot=debug,spotty=info");
} else {
builder.parse_filters("mdns=error,librespot=warn,spotty=error");
}
builder.init();
}
}
}
#[derive(Clone)]
struct Setup {
cache: Option<Cache>,
player_config: PlayerConfig,
session_config: SessionConfig,
connect_config: ConnectConfig,
credentials: Option<Credentials>,
enable_discovery: bool,
authenticate: bool,
get_token: bool,
save_token: Option<String>,
client_id: Option<String>,
scope: Option<String>,
single_track: Option<String>,
start_position: u32,
lms: LMS
}
fn setup(args: &[String]) -> Setup | .optopt("T", "save-token", "Get oauth token to be used with the web API etc. and store it in the given file.", "TOKENFILE")
.optopt("i", "client-id", "A Spotify client_id to be used to get the oauth token. Required with the --get-token request.", "CLIENT_ID")
.optopt("", "scope", "The scopes you want to have access to with the oauth token.", "SCOPE")
.optflag("x", "check", "Run quick internal check")
.optflag("v", "verbose", "Enable verbose output");
let matches = match opts.parse(&args[1..]) {
Ok(m) => m,
Err(f) => {
writeln!(stderr(), "error: {}\n{}", f.to_string(), usage(&args[0], &opts)).unwrap();
exit(1);
}
};
if matches.opt_present("check") {
println!("ok {}", VERSION.to_string());
let capabilities = json!({
"version": env!("CARGO_PKG_VERSION").to_string(),
"lms-auth": true,
"volume-normalisation": true,
"debug": DEBUGMODE,
"ogg-direct": true,
"save-token": true,
"podcasts": true
});
println!("{}", capabilities.to_string());
exit(1);
}
#[cfg(debug_assertions)]
{
let verbose = matches.opt_present("verbose");
setup_logging(verbose);
}
let name = matches.opt_str("name").unwrap();
let use_audio_cache = matches.opt_present("enable-audio-cache") &&!matches.opt_present("disable-audio-cache");
let cache = matches.opt_str("c").map(|cache_location| {
Cache::new(PathBuf::from(cache_location), use_audio_cache)
});
let credentials = {
let cached_credentials = cache.as_ref().and_then(Cache::credentials);
let password = |username: &String| -> String {
write!(stderr(), "Password for {}: ", username).unwrap();
stderr().flush().unwrap();
rpassword::read_password().unwrap()
};
get_credentials(
matches.opt_str("username"),
matches.opt_str("password"),
cached_credentials,
password
)
};
let authenticate = matches.opt_present("authenticate");
let enable_discovery =!matches.opt_present("disable-discovery");
let start_position = matches.opt_str("start-position")
.unwrap_or("0".to_string())
.parse::<f32>().unwrap_or(0.0);
let session_config = {
let device_id = device_id(&name);
SessionConfig {
user_agent: VERSION.to_string(),
device_id: device_id,
proxy: None,
ap_port: matches
.opt_str("ap-port")
.map(|port| port.parse::<u16>().expect("Invalid port")),
}
};
let pass_through = matches.opt_present("pass-through");
let player_config = {
let bitrate = matches.opt_str("b").as_ref()
.map(|bitrate| Bitrate::from_str(bitrate).expect("Invalid bitrate"))
.unwrap_or(Bitrate::Bitrate320);
PlayerConfig {
bitrate: bitrate,
normalisation: matches.opt_present("enable-volume-normalisation"),
normalisation_pregain: PlayerConfig::default().normalisation_pregain,
pass_through: pass_through,
lms_connect_mode:!matches.opt_present("single-track")
}
};
let connect_config = {
ConnectConfig {
name: name,
device_type: DeviceType::Speaker,
volume: 0x8000 as u16,
linear_volume: true
}
};
let client_id = matches.opt_str("client-id")
.unwrap_or(format!("{}", include_str!("client_id.txt")));
let save_token = matches.opt_str("save-token").unwrap_or("".to_string());
let lms = LMS::new(matches.opt_str("lms"), matches.opt_str("player-mac"), matches.opt_str("lms-auth"));
Setup {
cache: cache,
session_config: session_config,
player_config: player_config,
connect_config: connect_config,
credentials: credentials,
authenticate: authenticate,
enable_discovery: enable_discovery,
get_token: matches.opt_present("get-token") || save_token.as_str().len()!= 0,
save_token: if save_token.as_str().len() == 0 { None } else { Some(save_token) },
client_id: if client_id.as_str().len() == 0 { None } else { Some(client_id) },
scope: matches.opt_str("scope"),
single_track: matches.opt_str("single-track"),
start_position: (start_position * 1000.0) as u32,
lms: lms
}
}
struct Main {
cache: Option<Cache>,
player_config: PlayerConfig,
session_config: SessionConfig,
connect_config: ConnectConfig,
handle: Handle,
discovery: Option<DiscoveryStream>,
signal: IoStream<()>,
spirc: Option<Spirc>,
spirc_task: Option<SpircTask>,
connect: Box<Future<Item=Session, Error=io::Error>>,
shutdown: bool,
authenticate: bool,
event_channel: Option<UnboundedReceiver<PlayerEvent>>,
lms: LMS
}
impl Main {
fn new(handle: Handle, setup: Setup) -> Main {
let mut task = Main {
handle: handle.clone(),
cache: setup.cache,
session_config: setup.session_config,
player_config: setup.player_config,
connect_config: setup.connect_config,
connect: Box::new(futures::future::empty()),
discovery: None,
spirc: None,
spirc_task: None,
shutdown: false,
authenticate: setup.authenticate,
signal: Box::new(tokio_signal::ctrl_c().flatten_stream()),
event_channel: None,
lms: setup.lms
};
if setup.enable_discovery {
let config = task.connect_config.clone();
let device_id = task.session_config.device_id.clone();
task.discovery = Some(discovery(&handle, config, device_id, 0).unwrap());
}
if let Some(credentials) = setup.credentials {
task.credentials(credentials);
}
task
}
fn credentials(&mut self, credentials: Credentials) {
let config = self.session_config.clone();
let handle = self.handle.clone();
let connection = Session::connect(config, credentials, self.cache.clone(), handle);
self.connect = connection;
self.spirc = None;
let task = mem::replace(&mut self.spirc_task, None);
if let Some(task) = task {
self.handle.spawn(task);
}
}
}
impl Future for Main {
type Item = ();
type Error = ();
fn poll(&mut self) -> Poll<(), ()> {
loop {
let mut progress = false;
if let Some(Async::Ready(Some(creds))) = self.discovery.as_mut().map(|d| d.poll().unwrap()) {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.credentials(creds);
progress = true;
}
if let Async::Ready(ref mut session) = self.connect.poll().unwrap() {
if self.authenticate {
if!self.shutdown {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.shutdown = true;
return Ok(Async::Ready(()));
}
}
else {
self.connect = Box::new(futures::future::empty());
let player_config = self.player_config.clone();
let connect_config = self.connect_config.clone();
let mixer_config = MixerConfig {
card: String::from("default"),
mixer: String::from("PCM"),
index: 0,
};
let mixer = (mixer::find(Some("softvol")).unwrap())(Some(mixer_config));
let audio_filter = mixer.get_audio_filter();
let backend = audio_backend::find(None).unwrap();
let (player, event_channel) = Player::new(player_config, session.clone(), audio_filter, move || {
(backend)(Some(NULLDEVICE.to_string()))
});
let (spirc, spirc_task) = Spirc::new(connect_config, session.clone(), player, mixer);
self.spirc = Some(spirc);
self.spirc_task = Some(spirc_task);
self.event_channel = Some(event_channel);
}
progress = true;
}
if let Async::Ready(Some(())) = self.signal.poll().unwrap() {
if!self.shutdown {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.shutdown = true;
} else {
return Ok(Async::Ready(()));
}
progress = true;
}
if let Some(ref mut spirc_task) = self.spirc_task {
if let Async::Ready(()) = spirc_task.poll().unwrap() {
if self.shutdown {
return Ok(Async::Ready(()));
} else {
panic!("Spirc shut down unexpectedly");
}
}
}
if let Some(ref mut event_channel) = self.event_channel {
if let Async::Ready(Some(event)) = event_channel.poll().unwrap() {
self.lms.signal_event(event, self.handle.clone());
}
}
if!progress {
return Ok(Async::NotReady);
}
}
}
}
fn main() {
let mut core = Core::new().unwrap();
let handle = core.handle();
let args: Vec<String> = std::env::args().collect();
let Setup {
cache,
session_config,
player_config,
connect_config,
credentials,
authenticate,
enable_discovery,
get_token,
save_token,
client_id,
scope,
single_track,
start_position,
lms
} = setup(&args.clone());
if let Some(ref track_id) = single_track {
match credentials {
Some(credentials) => {
let backend = audio_backend::find(None).unwrap();
let track = SpotifyId::from_uri(
track_id.replace("spotty://", "spotify:")
.replace("://", ":")
.as_str());
let session = core.run(Session::connect(session_config.clone(), credentials, cache.clone(), handle)).unwrap();
let (player, _) = Player::new(player_config, session.clone(), None, move || (backend)(None));
core.run(player.load(track.unwrap(), true, start_position)).unwrap();
}
None => {
println!("Missing credentials");
}
}
}
else if authenticate &&!enable_discovery {
core.run(Session::connect(session_config.clone(), credentials.unwrap(), cache.clone(), handle)).unwrap();
println!("authorized");
}
else if get_token {
if let Some(client_id) = client_id {
let session = core.run(Session::connect(session_config, credentials.unwrap(), cache.clone(), handle)).unwrap();
let scope = scope.unwrap_or("user-read-private,playlist-read-private,playlist-read-collaborative,playlist-modify-public,playlist-modify-private,user-follow-modify,user-follow-read,user-library-read,user-library-modify,user-top-read,user-read-recently-played".to_string());
let url = format!("hm://keymaster/token/authenticated?client_id={}&scope={}", client_id, scope);
let result = core.run(Box::new(session.mercury().get(url).map(move |response| {
let data = response.payload.first().expect("Empty payload");
let token = String::from_utf8(data.clone()).unwrap();
if let Some(save_token) = save_token {
let mut file = File::create(save_token.to_string()).expect("Can't create token file");
file.write(&token.clone().into_bytes()).expect("Can't write token file");
}
else {
println!("{}", token);
}
})));
match result {
Ok(_) => (),
Err(e) => println!("error getting token {:?}", e),
}
}
else {
println!("Use --client-id to provide a CLIENT_ID");
}
}
else {
core.run(Main::new(handle, Setup {
cache,
session_config,
player_config,
connect_config,
credentials,
authenticate,
enable_discovery,
get_token,
save_token,
client_id,
scope,
single_track,
start_position,
lms
})).unwrap()
}
}
| {
let mut opts = getopts::Options::new();
opts.optopt("c", "cache", "Path to a directory where files will be cached.", "CACHE")
.optflag("", "enable-audio-cache", "Enable caching of the audio data.")
.optflag("", "disable-audio-cache", "(Only here fore compatibility with librespot - audio cache is disabled by default).")
.reqopt("n", "name", "Device name", "NAME")
.optopt("b", "bitrate", "Bitrate (96, 160 or 320). Defaults to 320.", "BITRATE")
.optflag("", "pass-through", "Pass raw OGG stream to output")
.optopt("", "player-mac", "MAC address of the Squeezebox to be controlled", "MAC")
.optopt("", "lms", "hostname and port of Logitech Media Server instance (eg. localhost:9000)", "LMS")
.optopt("", "lms-auth", "Authentication data to access Logitech Media Server", "LMSAUTH")
.optopt("", "single-track", "Play a single track ID and exit.", "ID")
.optopt("", "start-position", "Position (in seconds) where playback should be started. Only valid with the --single-track option.", "STARTPOSITION")
.optflag("", "enable-volume-normalisation", "Play all tracks at the same volume")
.optopt("u", "username", "Username to sign in with", "USERNAME")
.optopt("p", "password", "Password", "PASSWORD")
.optflag("a", "authenticate", "Authenticate given username and password. Make sure you define a cache folder to store credentials.")
.optopt("", "ap-port", "Connect to AP with specified port. If no AP with that port are present fallback AP will be used. Available ports are usually 80, 443 and 4070", "AP_PORT")
.optflag("", "disable-discovery", "Disable discovery mode")
.optflag("t", "get-token", "Get oauth token to be used with the web API etc. and print it to the console.") | identifier_body |
main.rs | #[cfg(debug_assertions)]
#[macro_use] extern crate log;
extern crate crypto;
#[cfg(debug_assertions)]
extern crate env_logger;
extern crate futures;
extern crate getopts;
extern crate hyper;
extern crate librespot;
extern crate rpassword;
#[macro_use]
extern crate serde_json;
extern crate tokio_core;
extern crate tokio_io;
extern crate tokio_signal;
use futures::{Future, Async, Poll, Stream};
use futures::sync::mpsc::UnboundedReceiver;
#[cfg(debug_assertions)]
use std::env;
use std::fs::File;
use std::io::{self, stderr, Write};
use std::path::PathBuf;
use std::process::exit;
use std::str::FromStr;
use tokio_core::reactor::{Handle, Core};
use tokio_io::IoStream;
use std::mem;
use crypto::digest::Digest;
use crypto::sha1::Sha1;
use librespot::core::authentication::{get_credentials, Credentials};
use librespot::core::cache::Cache;
use librespot::core::config::{DeviceType, SessionConfig, ConnectConfig};
use librespot::core::session::Session;
use librespot::playback::audio_backend::{self};
use librespot::playback::config::{Bitrate, PlayerConfig};
use librespot::connect::discovery::{discovery, DiscoveryStream};
use librespot::playback::mixer::{self, MixerConfig};
use librespot::playback::player::{Player, PlayerEvent};
use librespot::connect::spirc::{Spirc, SpircTask};
use librespot::core::spotify_id::SpotifyId;
mod lms;
use lms::LMS;
const VERSION: &'static str = concat!(env!("CARGO_PKG_NAME"), " v", env!("CARGO_PKG_VERSION"));
#[cfg(debug_assertions)]
const DEBUGMODE: bool = true;
#[cfg(not(debug_assertions))]
const DEBUGMODE: bool = false;
#[cfg(target_os="windows")]
const NULLDEVICE: &'static str = "NUL";
#[cfg(not(target_os="windows"))]
const NULLDEVICE: &'static str = "/dev/null";
fn device_id(name: &str) -> String {
let mut h = Sha1::new();
h.input_str(name);
h.result_str()
}
fn usage(program: &str, opts: &getopts::Options) -> String {
println!("{}", VERSION.to_string());
let brief = format!("Usage: {} [options]", program);
opts.usage(&brief)
}
#[cfg(debug_assertions)]
fn setup_logging(verbose: bool) {
let mut builder = env_logger::Builder::new();
match env::var("RUST_LOG") {
Ok(config) => {
builder.parse_filters(&config);
builder.init();
if verbose {
warn!("`--verbose` flag overidden by `RUST_LOG` environment variable");
}
}
Err(_) => {
if verbose {
builder.parse_filters("mdns=info,librespot=debug,spotty=info");
} else {
builder.parse_filters("mdns=error,librespot=warn,spotty=error");
}
builder.init();
}
}
}
#[derive(Clone)]
struct Setup {
cache: Option<Cache>,
player_config: PlayerConfig,
session_config: SessionConfig,
connect_config: ConnectConfig,
credentials: Option<Credentials>,
enable_discovery: bool,
authenticate: bool,
get_token: bool,
save_token: Option<String>,
client_id: Option<String>,
scope: Option<String>,
single_track: Option<String>,
start_position: u32,
lms: LMS
}
fn setup(args: &[String]) -> Setup {
let mut opts = getopts::Options::new();
opts.optopt("c", "cache", "Path to a directory where files will be cached.", "CACHE")
.optflag("", "enable-audio-cache", "Enable caching of the audio data.")
.optflag("", "disable-audio-cache", "(Only here fore compatibility with librespot - audio cache is disabled by default).")
.reqopt("n", "name", "Device name", "NAME")
.optopt("b", "bitrate", "Bitrate (96, 160 or 320). Defaults to 320.", "BITRATE")
.optflag("", "pass-through", "Pass raw OGG stream to output")
.optopt("", "player-mac", "MAC address of the Squeezebox to be controlled", "MAC")
.optopt("", "lms", "hostname and port of Logitech Media Server instance (eg. localhost:9000)", "LMS")
.optopt("", "lms-auth", "Authentication data to access Logitech Media Server", "LMSAUTH")
.optopt("", "single-track", "Play a single track ID and exit.", "ID")
.optopt("", "start-position", "Position (in seconds) where playback should be started. Only valid with the --single-track option.", "STARTPOSITION")
.optflag("", "enable-volume-normalisation", "Play all tracks at the same volume")
.optopt("u", "username", "Username to sign in with", "USERNAME")
.optopt("p", "password", "Password", "PASSWORD")
.optflag("a", "authenticate", "Authenticate given username and password. Make sure you define a cache folder to store credentials.")
.optopt("", "ap-port", "Connect to AP with specified port. If no AP with that port are present fallback AP will be used. Available ports are usually 80, 443 and 4070", "AP_PORT")
.optflag("", "disable-discovery", "Disable discovery mode")
.optflag("t", "get-token", "Get oauth token to be used with the web API etc. and print it to the console.")
.optopt("T", "save-token", "Get oauth token to be used with the web API etc. and store it in the given file.", "TOKENFILE")
.optopt("i", "client-id", "A Spotify client_id to be used to get the oauth token. Required with the --get-token request.", "CLIENT_ID")
.optopt("", "scope", "The scopes you want to have access to with the oauth token.", "SCOPE")
.optflag("x", "check", "Run quick internal check")
.optflag("v", "verbose", "Enable verbose output");
let matches = match opts.parse(&args[1..]) {
Ok(m) => m,
Err(f) => {
writeln!(stderr(), "error: {}\n{}", f.to_string(), usage(&args[0], &opts)).unwrap();
exit(1);
}
};
if matches.opt_present("check") {
println!("ok {}", VERSION.to_string());
let capabilities = json!({
"version": env!("CARGO_PKG_VERSION").to_string(),
"lms-auth": true,
"volume-normalisation": true,
"debug": DEBUGMODE,
"ogg-direct": true,
"save-token": true,
"podcasts": true
});
println!("{}", capabilities.to_string());
exit(1);
}
#[cfg(debug_assertions)]
{
let verbose = matches.opt_present("verbose");
setup_logging(verbose);
}
let name = matches.opt_str("name").unwrap();
let use_audio_cache = matches.opt_present("enable-audio-cache") &&!matches.opt_present("disable-audio-cache");
let cache = matches.opt_str("c").map(|cache_location| {
Cache::new(PathBuf::from(cache_location), use_audio_cache)
});
let credentials = {
let cached_credentials = cache.as_ref().and_then(Cache::credentials);
let password = |username: &String| -> String {
write!(stderr(), "Password for {}: ", username).unwrap();
stderr().flush().unwrap();
rpassword::read_password().unwrap()
};
get_credentials(
matches.opt_str("username"),
matches.opt_str("password"),
cached_credentials,
password
)
};
let authenticate = matches.opt_present("authenticate");
let enable_discovery =!matches.opt_present("disable-discovery");
let start_position = matches.opt_str("start-position")
.unwrap_or("0".to_string())
.parse::<f32>().unwrap_or(0.0);
let session_config = {
let device_id = device_id(&name);
SessionConfig {
user_agent: VERSION.to_string(),
device_id: device_id,
proxy: None,
ap_port: matches
.opt_str("ap-port")
.map(|port| port.parse::<u16>().expect("Invalid port")),
}
};
let pass_through = matches.opt_present("pass-through");
let player_config = {
let bitrate = matches.opt_str("b").as_ref()
.map(|bitrate| Bitrate::from_str(bitrate).expect("Invalid bitrate"))
.unwrap_or(Bitrate::Bitrate320);
PlayerConfig {
bitrate: bitrate,
normalisation: matches.opt_present("enable-volume-normalisation"),
normalisation_pregain: PlayerConfig::default().normalisation_pregain,
pass_through: pass_through,
lms_connect_mode:!matches.opt_present("single-track")
}
};
let connect_config = {
ConnectConfig {
name: name,
device_type: DeviceType::Speaker,
volume: 0x8000 as u16,
linear_volume: true
}
};
let client_id = matches.opt_str("client-id")
.unwrap_or(format!("{}", include_str!("client_id.txt")));
let save_token = matches.opt_str("save-token").unwrap_or("".to_string());
let lms = LMS::new(matches.opt_str("lms"), matches.opt_str("player-mac"), matches.opt_str("lms-auth"));
Setup {
cache: cache,
session_config: session_config,
player_config: player_config,
connect_config: connect_config,
credentials: credentials,
authenticate: authenticate,
enable_discovery: enable_discovery,
get_token: matches.opt_present("get-token") || save_token.as_str().len()!= 0,
save_token: if save_token.as_str().len() == 0 { None } else { Some(save_token) },
client_id: if client_id.as_str().len() == 0 { None } else { Some(client_id) },
scope: matches.opt_str("scope"),
single_track: matches.opt_str("single-track"),
start_position: (start_position * 1000.0) as u32,
lms: lms
}
}
struct Main {
cache: Option<Cache>,
player_config: PlayerConfig,
session_config: SessionConfig,
connect_config: ConnectConfig,
handle: Handle,
discovery: Option<DiscoveryStream>,
signal: IoStream<()>,
spirc: Option<Spirc>,
spirc_task: Option<SpircTask>,
connect: Box<Future<Item=Session, Error=io::Error>>,
shutdown: bool,
authenticate: bool,
event_channel: Option<UnboundedReceiver<PlayerEvent>>,
lms: LMS
}
impl Main {
fn new(handle: Handle, setup: Setup) -> Main {
let mut task = Main {
handle: handle.clone(),
cache: setup.cache,
session_config: setup.session_config,
player_config: setup.player_config,
connect_config: setup.connect_config,
connect: Box::new(futures::future::empty()),
discovery: None,
spirc: None,
spirc_task: None,
shutdown: false,
authenticate: setup.authenticate,
signal: Box::new(tokio_signal::ctrl_c().flatten_stream()),
event_channel: None,
lms: setup.lms
};
if setup.enable_discovery {
let config = task.connect_config.clone();
let device_id = task.session_config.device_id.clone();
task.discovery = Some(discovery(&handle, config, device_id, 0).unwrap());
}
if let Some(credentials) = setup.credentials {
task.credentials(credentials);
}
task
}
fn credentials(&mut self, credentials: Credentials) {
let config = self.session_config.clone();
let handle = self.handle.clone();
let connection = Session::connect(config, credentials, self.cache.clone(), handle);
self.connect = connection;
self.spirc = None;
let task = mem::replace(&mut self.spirc_task, None);
if let Some(task) = task {
self.handle.spawn(task);
}
}
}
impl Future for Main {
type Item = ();
type Error = ();
fn poll(&mut self) -> Poll<(), ()> {
loop {
let mut progress = false;
if let Some(Async::Ready(Some(creds))) = self.discovery.as_mut().map(|d| d.poll().unwrap()) {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.credentials(creds);
progress = true;
}
if let Async::Ready(ref mut session) = self.connect.poll().unwrap() {
if self.authenticate {
if!self.shutdown {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.shutdown = true;
return Ok(Async::Ready(()));
}
}
else {
self.connect = Box::new(futures::future::empty());
let player_config = self.player_config.clone();
let connect_config = self.connect_config.clone();
let mixer_config = MixerConfig {
card: String::from("default"),
mixer: String::from("PCM"),
index: 0,
};
let mixer = (mixer::find(Some("softvol")).unwrap())(Some(mixer_config));
let audio_filter = mixer.get_audio_filter();
let backend = audio_backend::find(None).unwrap();
let (player, event_channel) = Player::new(player_config, session.clone(), audio_filter, move || {
(backend)(Some(NULLDEVICE.to_string()))
});
let (spirc, spirc_task) = Spirc::new(connect_config, session.clone(), player, mixer);
self.spirc = Some(spirc);
self.spirc_task = Some(spirc_task);
self.event_channel = Some(event_channel);
}
progress = true;
}
if let Async::Ready(Some(())) = self.signal.poll().unwrap() {
if!self.shutdown {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.shutdown = true;
} else {
return Ok(Async::Ready(()));
}
progress = true;
}
if let Some(ref mut spirc_task) = self.spirc_task {
if let Async::Ready(()) = spirc_task.poll().unwrap() {
if self.shutdown {
return Ok(Async::Ready(()));
} else {
panic!("Spirc shut down unexpectedly");
}
}
}
if let Some(ref mut event_channel) = self.event_channel {
if let Async::Ready(Some(event)) = event_channel.poll().unwrap() {
self.lms.signal_event(event, self.handle.clone());
}
} | if!progress {
return Ok(Async::NotReady);
}
}
}
}
fn main() {
let mut core = Core::new().unwrap();
let handle = core.handle();
let args: Vec<String> = std::env::args().collect();
let Setup {
cache,
session_config,
player_config,
connect_config,
credentials,
authenticate,
enable_discovery,
get_token,
save_token,
client_id,
scope,
single_track,
start_position,
lms
} = setup(&args.clone());
if let Some(ref track_id) = single_track {
match credentials {
Some(credentials) => {
let backend = audio_backend::find(None).unwrap();
let track = SpotifyId::from_uri(
track_id.replace("spotty://", "spotify:")
.replace("://", ":")
.as_str());
let session = core.run(Session::connect(session_config.clone(), credentials, cache.clone(), handle)).unwrap();
let (player, _) = Player::new(player_config, session.clone(), None, move || (backend)(None));
core.run(player.load(track.unwrap(), true, start_position)).unwrap();
}
None => {
println!("Missing credentials");
}
}
}
else if authenticate &&!enable_discovery {
core.run(Session::connect(session_config.clone(), credentials.unwrap(), cache.clone(), handle)).unwrap();
println!("authorized");
}
else if get_token {
if let Some(client_id) = client_id {
let session = core.run(Session::connect(session_config, credentials.unwrap(), cache.clone(), handle)).unwrap();
let scope = scope.unwrap_or("user-read-private,playlist-read-private,playlist-read-collaborative,playlist-modify-public,playlist-modify-private,user-follow-modify,user-follow-read,user-library-read,user-library-modify,user-top-read,user-read-recently-played".to_string());
let url = format!("hm://keymaster/token/authenticated?client_id={}&scope={}", client_id, scope);
let result = core.run(Box::new(session.mercury().get(url).map(move |response| {
let data = response.payload.first().expect("Empty payload");
let token = String::from_utf8(data.clone()).unwrap();
if let Some(save_token) = save_token {
let mut file = File::create(save_token.to_string()).expect("Can't create token file");
file.write(&token.clone().into_bytes()).expect("Can't write token file");
}
else {
println!("{}", token);
}
})));
match result {
Ok(_) => (),
Err(e) => println!("error getting token {:?}", e),
}
}
else {
println!("Use --client-id to provide a CLIENT_ID");
}
}
else {
core.run(Main::new(handle, Setup {
cache,
session_config,
player_config,
connect_config,
credentials,
authenticate,
enable_discovery,
get_token,
save_token,
client_id,
scope,
single_track,
start_position,
lms
})).unwrap()
}
} | random_line_split |
|
main.rs | #[cfg(debug_assertions)]
#[macro_use] extern crate log;
extern crate crypto;
#[cfg(debug_assertions)]
extern crate env_logger;
extern crate futures;
extern crate getopts;
extern crate hyper;
extern crate librespot;
extern crate rpassword;
#[macro_use]
extern crate serde_json;
extern crate tokio_core;
extern crate tokio_io;
extern crate tokio_signal;
use futures::{Future, Async, Poll, Stream};
use futures::sync::mpsc::UnboundedReceiver;
#[cfg(debug_assertions)]
use std::env;
use std::fs::File;
use std::io::{self, stderr, Write};
use std::path::PathBuf;
use std::process::exit;
use std::str::FromStr;
use tokio_core::reactor::{Handle, Core};
use tokio_io::IoStream;
use std::mem;
use crypto::digest::Digest;
use crypto::sha1::Sha1;
use librespot::core::authentication::{get_credentials, Credentials};
use librespot::core::cache::Cache;
use librespot::core::config::{DeviceType, SessionConfig, ConnectConfig};
use librespot::core::session::Session;
use librespot::playback::audio_backend::{self};
use librespot::playback::config::{Bitrate, PlayerConfig};
use librespot::connect::discovery::{discovery, DiscoveryStream};
use librespot::playback::mixer::{self, MixerConfig};
use librespot::playback::player::{Player, PlayerEvent};
use librespot::connect::spirc::{Spirc, SpircTask};
use librespot::core::spotify_id::SpotifyId;
mod lms;
use lms::LMS;
const VERSION: &'static str = concat!(env!("CARGO_PKG_NAME"), " v", env!("CARGO_PKG_VERSION"));
#[cfg(debug_assertions)]
const DEBUGMODE: bool = true;
#[cfg(not(debug_assertions))]
const DEBUGMODE: bool = false;
#[cfg(target_os="windows")]
const NULLDEVICE: &'static str = "NUL";
#[cfg(not(target_os="windows"))]
const NULLDEVICE: &'static str = "/dev/null";
fn device_id(name: &str) -> String {
let mut h = Sha1::new();
h.input_str(name);
h.result_str()
}
fn usage(program: &str, opts: &getopts::Options) -> String {
println!("{}", VERSION.to_string());
let brief = format!("Usage: {} [options]", program);
opts.usage(&brief)
}
#[cfg(debug_assertions)]
fn setup_logging(verbose: bool) {
let mut builder = env_logger::Builder::new();
match env::var("RUST_LOG") {
Ok(config) => {
builder.parse_filters(&config);
builder.init();
if verbose {
warn!("`--verbose` flag overidden by `RUST_LOG` environment variable");
}
}
Err(_) => {
if verbose {
builder.parse_filters("mdns=info,librespot=debug,spotty=info");
} else {
builder.parse_filters("mdns=error,librespot=warn,spotty=error");
}
builder.init();
}
}
}
#[derive(Clone)]
struct Setup {
cache: Option<Cache>,
player_config: PlayerConfig,
session_config: SessionConfig,
connect_config: ConnectConfig,
credentials: Option<Credentials>,
enable_discovery: bool,
authenticate: bool,
get_token: bool,
save_token: Option<String>,
client_id: Option<String>,
scope: Option<String>,
single_track: Option<String>,
start_position: u32,
lms: LMS
}
fn setup(args: &[String]) -> Setup {
let mut opts = getopts::Options::new();
opts.optopt("c", "cache", "Path to a directory where files will be cached.", "CACHE")
.optflag("", "enable-audio-cache", "Enable caching of the audio data.")
.optflag("", "disable-audio-cache", "(Only here fore compatibility with librespot - audio cache is disabled by default).")
.reqopt("n", "name", "Device name", "NAME")
.optopt("b", "bitrate", "Bitrate (96, 160 or 320). Defaults to 320.", "BITRATE")
.optflag("", "pass-through", "Pass raw OGG stream to output")
.optopt("", "player-mac", "MAC address of the Squeezebox to be controlled", "MAC")
.optopt("", "lms", "hostname and port of Logitech Media Server instance (eg. localhost:9000)", "LMS")
.optopt("", "lms-auth", "Authentication data to access Logitech Media Server", "LMSAUTH")
.optopt("", "single-track", "Play a single track ID and exit.", "ID")
.optopt("", "start-position", "Position (in seconds) where playback should be started. Only valid with the --single-track option.", "STARTPOSITION")
.optflag("", "enable-volume-normalisation", "Play all tracks at the same volume")
.optopt("u", "username", "Username to sign in with", "USERNAME")
.optopt("p", "password", "Password", "PASSWORD")
.optflag("a", "authenticate", "Authenticate given username and password. Make sure you define a cache folder to store credentials.")
.optopt("", "ap-port", "Connect to AP with specified port. If no AP with that port are present fallback AP will be used. Available ports are usually 80, 443 and 4070", "AP_PORT")
.optflag("", "disable-discovery", "Disable discovery mode")
.optflag("t", "get-token", "Get oauth token to be used with the web API etc. and print it to the console.")
.optopt("T", "save-token", "Get oauth token to be used with the web API etc. and store it in the given file.", "TOKENFILE")
.optopt("i", "client-id", "A Spotify client_id to be used to get the oauth token. Required with the --get-token request.", "CLIENT_ID")
.optopt("", "scope", "The scopes you want to have access to with the oauth token.", "SCOPE")
.optflag("x", "check", "Run quick internal check")
.optflag("v", "verbose", "Enable verbose output");
let matches = match opts.parse(&args[1..]) {
Ok(m) => m,
Err(f) => {
writeln!(stderr(), "error: {}\n{}", f.to_string(), usage(&args[0], &opts)).unwrap();
exit(1);
}
};
if matches.opt_present("check") {
println!("ok {}", VERSION.to_string());
let capabilities = json!({
"version": env!("CARGO_PKG_VERSION").to_string(),
"lms-auth": true,
"volume-normalisation": true,
"debug": DEBUGMODE,
"ogg-direct": true,
"save-token": true,
"podcasts": true
});
println!("{}", capabilities.to_string());
exit(1);
}
#[cfg(debug_assertions)]
{
let verbose = matches.opt_present("verbose");
setup_logging(verbose);
}
let name = matches.opt_str("name").unwrap();
let use_audio_cache = matches.opt_present("enable-audio-cache") &&!matches.opt_present("disable-audio-cache");
let cache = matches.opt_str("c").map(|cache_location| {
Cache::new(PathBuf::from(cache_location), use_audio_cache)
});
let credentials = {
let cached_credentials = cache.as_ref().and_then(Cache::credentials);
let password = |username: &String| -> String {
write!(stderr(), "Password for {}: ", username).unwrap();
stderr().flush().unwrap();
rpassword::read_password().unwrap()
};
get_credentials(
matches.opt_str("username"),
matches.opt_str("password"),
cached_credentials,
password
)
};
let authenticate = matches.opt_present("authenticate");
let enable_discovery =!matches.opt_present("disable-discovery");
let start_position = matches.opt_str("start-position")
.unwrap_or("0".to_string())
.parse::<f32>().unwrap_or(0.0);
let session_config = {
let device_id = device_id(&name);
SessionConfig {
user_agent: VERSION.to_string(),
device_id: device_id,
proxy: None,
ap_port: matches
.opt_str("ap-port")
.map(|port| port.parse::<u16>().expect("Invalid port")),
}
};
let pass_through = matches.opt_present("pass-through");
let player_config = {
let bitrate = matches.opt_str("b").as_ref()
.map(|bitrate| Bitrate::from_str(bitrate).expect("Invalid bitrate"))
.unwrap_or(Bitrate::Bitrate320);
PlayerConfig {
bitrate: bitrate,
normalisation: matches.opt_present("enable-volume-normalisation"),
normalisation_pregain: PlayerConfig::default().normalisation_pregain,
pass_through: pass_through,
lms_connect_mode:!matches.opt_present("single-track")
}
};
let connect_config = {
ConnectConfig {
name: name,
device_type: DeviceType::Speaker,
volume: 0x8000 as u16,
linear_volume: true
}
};
let client_id = matches.opt_str("client-id")
.unwrap_or(format!("{}", include_str!("client_id.txt")));
let save_token = matches.opt_str("save-token").unwrap_or("".to_string());
let lms = LMS::new(matches.opt_str("lms"), matches.opt_str("player-mac"), matches.opt_str("lms-auth"));
Setup {
cache: cache,
session_config: session_config,
player_config: player_config,
connect_config: connect_config,
credentials: credentials,
authenticate: authenticate,
enable_discovery: enable_discovery,
get_token: matches.opt_present("get-token") || save_token.as_str().len()!= 0,
save_token: if save_token.as_str().len() == 0 { None } else | ,
client_id: if client_id.as_str().len() == 0 { None } else { Some(client_id) },
scope: matches.opt_str("scope"),
single_track: matches.opt_str("single-track"),
start_position: (start_position * 1000.0) as u32,
lms: lms
}
}
struct Main {
cache: Option<Cache>,
player_config: PlayerConfig,
session_config: SessionConfig,
connect_config: ConnectConfig,
handle: Handle,
discovery: Option<DiscoveryStream>,
signal: IoStream<()>,
spirc: Option<Spirc>,
spirc_task: Option<SpircTask>,
connect: Box<Future<Item=Session, Error=io::Error>>,
shutdown: bool,
authenticate: bool,
event_channel: Option<UnboundedReceiver<PlayerEvent>>,
lms: LMS
}
impl Main {
fn new(handle: Handle, setup: Setup) -> Main {
let mut task = Main {
handle: handle.clone(),
cache: setup.cache,
session_config: setup.session_config,
player_config: setup.player_config,
connect_config: setup.connect_config,
connect: Box::new(futures::future::empty()),
discovery: None,
spirc: None,
spirc_task: None,
shutdown: false,
authenticate: setup.authenticate,
signal: Box::new(tokio_signal::ctrl_c().flatten_stream()),
event_channel: None,
lms: setup.lms
};
if setup.enable_discovery {
let config = task.connect_config.clone();
let device_id = task.session_config.device_id.clone();
task.discovery = Some(discovery(&handle, config, device_id, 0).unwrap());
}
if let Some(credentials) = setup.credentials {
task.credentials(credentials);
}
task
}
fn credentials(&mut self, credentials: Credentials) {
let config = self.session_config.clone();
let handle = self.handle.clone();
let connection = Session::connect(config, credentials, self.cache.clone(), handle);
self.connect = connection;
self.spirc = None;
let task = mem::replace(&mut self.spirc_task, None);
if let Some(task) = task {
self.handle.spawn(task);
}
}
}
impl Future for Main {
type Item = ();
type Error = ();
fn poll(&mut self) -> Poll<(), ()> {
loop {
let mut progress = false;
if let Some(Async::Ready(Some(creds))) = self.discovery.as_mut().map(|d| d.poll().unwrap()) {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.credentials(creds);
progress = true;
}
if let Async::Ready(ref mut session) = self.connect.poll().unwrap() {
if self.authenticate {
if!self.shutdown {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.shutdown = true;
return Ok(Async::Ready(()));
}
}
else {
self.connect = Box::new(futures::future::empty());
let player_config = self.player_config.clone();
let connect_config = self.connect_config.clone();
let mixer_config = MixerConfig {
card: String::from("default"),
mixer: String::from("PCM"),
index: 0,
};
let mixer = (mixer::find(Some("softvol")).unwrap())(Some(mixer_config));
let audio_filter = mixer.get_audio_filter();
let backend = audio_backend::find(None).unwrap();
let (player, event_channel) = Player::new(player_config, session.clone(), audio_filter, move || {
(backend)(Some(NULLDEVICE.to_string()))
});
let (spirc, spirc_task) = Spirc::new(connect_config, session.clone(), player, mixer);
self.spirc = Some(spirc);
self.spirc_task = Some(spirc_task);
self.event_channel = Some(event_channel);
}
progress = true;
}
if let Async::Ready(Some(())) = self.signal.poll().unwrap() {
if!self.shutdown {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.shutdown = true;
} else {
return Ok(Async::Ready(()));
}
progress = true;
}
if let Some(ref mut spirc_task) = self.spirc_task {
if let Async::Ready(()) = spirc_task.poll().unwrap() {
if self.shutdown {
return Ok(Async::Ready(()));
} else {
panic!("Spirc shut down unexpectedly");
}
}
}
if let Some(ref mut event_channel) = self.event_channel {
if let Async::Ready(Some(event)) = event_channel.poll().unwrap() {
self.lms.signal_event(event, self.handle.clone());
}
}
if!progress {
return Ok(Async::NotReady);
}
}
}
}
fn main() {
let mut core = Core::new().unwrap();
let handle = core.handle();
let args: Vec<String> = std::env::args().collect();
let Setup {
cache,
session_config,
player_config,
connect_config,
credentials,
authenticate,
enable_discovery,
get_token,
save_token,
client_id,
scope,
single_track,
start_position,
lms
} = setup(&args.clone());
if let Some(ref track_id) = single_track {
match credentials {
Some(credentials) => {
let backend = audio_backend::find(None).unwrap();
let track = SpotifyId::from_uri(
track_id.replace("spotty://", "spotify:")
.replace("://", ":")
.as_str());
let session = core.run(Session::connect(session_config.clone(), credentials, cache.clone(), handle)).unwrap();
let (player, _) = Player::new(player_config, session.clone(), None, move || (backend)(None));
core.run(player.load(track.unwrap(), true, start_position)).unwrap();
}
None => {
println!("Missing credentials");
}
}
}
else if authenticate &&!enable_discovery {
core.run(Session::connect(session_config.clone(), credentials.unwrap(), cache.clone(), handle)).unwrap();
println!("authorized");
}
else if get_token {
if let Some(client_id) = client_id {
let session = core.run(Session::connect(session_config, credentials.unwrap(), cache.clone(), handle)).unwrap();
let scope = scope.unwrap_or("user-read-private,playlist-read-private,playlist-read-collaborative,playlist-modify-public,playlist-modify-private,user-follow-modify,user-follow-read,user-library-read,user-library-modify,user-top-read,user-read-recently-played".to_string());
let url = format!("hm://keymaster/token/authenticated?client_id={}&scope={}", client_id, scope);
let result = core.run(Box::new(session.mercury().get(url).map(move |response| {
let data = response.payload.first().expect("Empty payload");
let token = String::from_utf8(data.clone()).unwrap();
if let Some(save_token) = save_token {
let mut file = File::create(save_token.to_string()).expect("Can't create token file");
file.write(&token.clone().into_bytes()).expect("Can't write token file");
}
else {
println!("{}", token);
}
})));
match result {
Ok(_) => (),
Err(e) => println!("error getting token {:?}", e),
}
}
else {
println!("Use --client-id to provide a CLIENT_ID");
}
}
else {
core.run(Main::new(handle, Setup {
cache,
session_config,
player_config,
connect_config,
credentials,
authenticate,
enable_discovery,
get_token,
save_token,
client_id,
scope,
single_track,
start_position,
lms
})).unwrap()
}
}
| { Some(save_token) } | conditional_block |
main.rs | #[cfg(debug_assertions)]
#[macro_use] extern crate log;
extern crate crypto;
#[cfg(debug_assertions)]
extern crate env_logger;
extern crate futures;
extern crate getopts;
extern crate hyper;
extern crate librespot;
extern crate rpassword;
#[macro_use]
extern crate serde_json;
extern crate tokio_core;
extern crate tokio_io;
extern crate tokio_signal;
use futures::{Future, Async, Poll, Stream};
use futures::sync::mpsc::UnboundedReceiver;
#[cfg(debug_assertions)]
use std::env;
use std::fs::File;
use std::io::{self, stderr, Write};
use std::path::PathBuf;
use std::process::exit;
use std::str::FromStr;
use tokio_core::reactor::{Handle, Core};
use tokio_io::IoStream;
use std::mem;
use crypto::digest::Digest;
use crypto::sha1::Sha1;
use librespot::core::authentication::{get_credentials, Credentials};
use librespot::core::cache::Cache;
use librespot::core::config::{DeviceType, SessionConfig, ConnectConfig};
use librespot::core::session::Session;
use librespot::playback::audio_backend::{self};
use librespot::playback::config::{Bitrate, PlayerConfig};
use librespot::connect::discovery::{discovery, DiscoveryStream};
use librespot::playback::mixer::{self, MixerConfig};
use librespot::playback::player::{Player, PlayerEvent};
use librespot::connect::spirc::{Spirc, SpircTask};
use librespot::core::spotify_id::SpotifyId;
mod lms;
use lms::LMS;
const VERSION: &'static str = concat!(env!("CARGO_PKG_NAME"), " v", env!("CARGO_PKG_VERSION"));
#[cfg(debug_assertions)]
const DEBUGMODE: bool = true;
#[cfg(not(debug_assertions))]
const DEBUGMODE: bool = false;
#[cfg(target_os="windows")]
const NULLDEVICE: &'static str = "NUL";
#[cfg(not(target_os="windows"))]
const NULLDEVICE: &'static str = "/dev/null";
fn device_id(name: &str) -> String {
let mut h = Sha1::new();
h.input_str(name);
h.result_str()
}
fn usage(program: &str, opts: &getopts::Options) -> String {
println!("{}", VERSION.to_string());
let brief = format!("Usage: {} [options]", program);
opts.usage(&brief)
}
#[cfg(debug_assertions)]
fn setup_logging(verbose: bool) {
let mut builder = env_logger::Builder::new();
match env::var("RUST_LOG") {
Ok(config) => {
builder.parse_filters(&config);
builder.init();
if verbose {
warn!("`--verbose` flag overidden by `RUST_LOG` environment variable");
}
}
Err(_) => {
if verbose {
builder.parse_filters("mdns=info,librespot=debug,spotty=info");
} else {
builder.parse_filters("mdns=error,librespot=warn,spotty=error");
}
builder.init();
}
}
}
#[derive(Clone)]
struct Setup {
cache: Option<Cache>,
player_config: PlayerConfig,
session_config: SessionConfig,
connect_config: ConnectConfig,
credentials: Option<Credentials>,
enable_discovery: bool,
authenticate: bool,
get_token: bool,
save_token: Option<String>,
client_id: Option<String>,
scope: Option<String>,
single_track: Option<String>,
start_position: u32,
lms: LMS
}
fn setup(args: &[String]) -> Setup {
let mut opts = getopts::Options::new();
opts.optopt("c", "cache", "Path to a directory where files will be cached.", "CACHE")
.optflag("", "enable-audio-cache", "Enable caching of the audio data.")
.optflag("", "disable-audio-cache", "(Only here fore compatibility with librespot - audio cache is disabled by default).")
.reqopt("n", "name", "Device name", "NAME")
.optopt("b", "bitrate", "Bitrate (96, 160 or 320). Defaults to 320.", "BITRATE")
.optflag("", "pass-through", "Pass raw OGG stream to output")
.optopt("", "player-mac", "MAC address of the Squeezebox to be controlled", "MAC")
.optopt("", "lms", "hostname and port of Logitech Media Server instance (eg. localhost:9000)", "LMS")
.optopt("", "lms-auth", "Authentication data to access Logitech Media Server", "LMSAUTH")
.optopt("", "single-track", "Play a single track ID and exit.", "ID")
.optopt("", "start-position", "Position (in seconds) where playback should be started. Only valid with the --single-track option.", "STARTPOSITION")
.optflag("", "enable-volume-normalisation", "Play all tracks at the same volume")
.optopt("u", "username", "Username to sign in with", "USERNAME")
.optopt("p", "password", "Password", "PASSWORD")
.optflag("a", "authenticate", "Authenticate given username and password. Make sure you define a cache folder to store credentials.")
.optopt("", "ap-port", "Connect to AP with specified port. If no AP with that port are present fallback AP will be used. Available ports are usually 80, 443 and 4070", "AP_PORT")
.optflag("", "disable-discovery", "Disable discovery mode")
.optflag("t", "get-token", "Get oauth token to be used with the web API etc. and print it to the console.")
.optopt("T", "save-token", "Get oauth token to be used with the web API etc. and store it in the given file.", "TOKENFILE")
.optopt("i", "client-id", "A Spotify client_id to be used to get the oauth token. Required with the --get-token request.", "CLIENT_ID")
.optopt("", "scope", "The scopes you want to have access to with the oauth token.", "SCOPE")
.optflag("x", "check", "Run quick internal check")
.optflag("v", "verbose", "Enable verbose output");
let matches = match opts.parse(&args[1..]) {
Ok(m) => m,
Err(f) => {
writeln!(stderr(), "error: {}\n{}", f.to_string(), usage(&args[0], &opts)).unwrap();
exit(1);
}
};
if matches.opt_present("check") {
println!("ok {}", VERSION.to_string());
let capabilities = json!({
"version": env!("CARGO_PKG_VERSION").to_string(),
"lms-auth": true,
"volume-normalisation": true,
"debug": DEBUGMODE,
"ogg-direct": true,
"save-token": true,
"podcasts": true
});
println!("{}", capabilities.to_string());
exit(1);
}
#[cfg(debug_assertions)]
{
let verbose = matches.opt_present("verbose");
setup_logging(verbose);
}
let name = matches.opt_str("name").unwrap();
let use_audio_cache = matches.opt_present("enable-audio-cache") &&!matches.opt_present("disable-audio-cache");
let cache = matches.opt_str("c").map(|cache_location| {
Cache::new(PathBuf::from(cache_location), use_audio_cache)
});
let credentials = {
let cached_credentials = cache.as_ref().and_then(Cache::credentials);
let password = |username: &String| -> String {
write!(stderr(), "Password for {}: ", username).unwrap();
stderr().flush().unwrap();
rpassword::read_password().unwrap()
};
get_credentials(
matches.opt_str("username"),
matches.opt_str("password"),
cached_credentials,
password
)
};
let authenticate = matches.opt_present("authenticate");
let enable_discovery =!matches.opt_present("disable-discovery");
let start_position = matches.opt_str("start-position")
.unwrap_or("0".to_string())
.parse::<f32>().unwrap_or(0.0);
let session_config = {
let device_id = device_id(&name);
SessionConfig {
user_agent: VERSION.to_string(),
device_id: device_id,
proxy: None,
ap_port: matches
.opt_str("ap-port")
.map(|port| port.parse::<u16>().expect("Invalid port")),
}
};
let pass_through = matches.opt_present("pass-through");
let player_config = {
let bitrate = matches.opt_str("b").as_ref()
.map(|bitrate| Bitrate::from_str(bitrate).expect("Invalid bitrate"))
.unwrap_or(Bitrate::Bitrate320);
PlayerConfig {
bitrate: bitrate,
normalisation: matches.opt_present("enable-volume-normalisation"),
normalisation_pregain: PlayerConfig::default().normalisation_pregain,
pass_through: pass_through,
lms_connect_mode:!matches.opt_present("single-track")
}
};
let connect_config = {
ConnectConfig {
name: name,
device_type: DeviceType::Speaker,
volume: 0x8000 as u16,
linear_volume: true
}
};
let client_id = matches.opt_str("client-id")
.unwrap_or(format!("{}", include_str!("client_id.txt")));
let save_token = matches.opt_str("save-token").unwrap_or("".to_string());
let lms = LMS::new(matches.opt_str("lms"), matches.opt_str("player-mac"), matches.opt_str("lms-auth"));
Setup {
cache: cache,
session_config: session_config,
player_config: player_config,
connect_config: connect_config,
credentials: credentials,
authenticate: authenticate,
enable_discovery: enable_discovery,
get_token: matches.opt_present("get-token") || save_token.as_str().len()!= 0,
save_token: if save_token.as_str().len() == 0 { None } else { Some(save_token) },
client_id: if client_id.as_str().len() == 0 { None } else { Some(client_id) },
scope: matches.opt_str("scope"),
single_track: matches.opt_str("single-track"),
start_position: (start_position * 1000.0) as u32,
lms: lms
}
}
struct Main {
cache: Option<Cache>,
player_config: PlayerConfig,
session_config: SessionConfig,
connect_config: ConnectConfig,
handle: Handle,
discovery: Option<DiscoveryStream>,
signal: IoStream<()>,
spirc: Option<Spirc>,
spirc_task: Option<SpircTask>,
connect: Box<Future<Item=Session, Error=io::Error>>,
shutdown: bool,
authenticate: bool,
event_channel: Option<UnboundedReceiver<PlayerEvent>>,
lms: LMS
}
impl Main {
fn | (handle: Handle, setup: Setup) -> Main {
let mut task = Main {
handle: handle.clone(),
cache: setup.cache,
session_config: setup.session_config,
player_config: setup.player_config,
connect_config: setup.connect_config,
connect: Box::new(futures::future::empty()),
discovery: None,
spirc: None,
spirc_task: None,
shutdown: false,
authenticate: setup.authenticate,
signal: Box::new(tokio_signal::ctrl_c().flatten_stream()),
event_channel: None,
lms: setup.lms
};
if setup.enable_discovery {
let config = task.connect_config.clone();
let device_id = task.session_config.device_id.clone();
task.discovery = Some(discovery(&handle, config, device_id, 0).unwrap());
}
if let Some(credentials) = setup.credentials {
task.credentials(credentials);
}
task
}
fn credentials(&mut self, credentials: Credentials) {
let config = self.session_config.clone();
let handle = self.handle.clone();
let connection = Session::connect(config, credentials, self.cache.clone(), handle);
self.connect = connection;
self.spirc = None;
let task = mem::replace(&mut self.spirc_task, None);
if let Some(task) = task {
self.handle.spawn(task);
}
}
}
impl Future for Main {
type Item = ();
type Error = ();
fn poll(&mut self) -> Poll<(), ()> {
loop {
let mut progress = false;
if let Some(Async::Ready(Some(creds))) = self.discovery.as_mut().map(|d| d.poll().unwrap()) {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.credentials(creds);
progress = true;
}
if let Async::Ready(ref mut session) = self.connect.poll().unwrap() {
if self.authenticate {
if!self.shutdown {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.shutdown = true;
return Ok(Async::Ready(()));
}
}
else {
self.connect = Box::new(futures::future::empty());
let player_config = self.player_config.clone();
let connect_config = self.connect_config.clone();
let mixer_config = MixerConfig {
card: String::from("default"),
mixer: String::from("PCM"),
index: 0,
};
let mixer = (mixer::find(Some("softvol")).unwrap())(Some(mixer_config));
let audio_filter = mixer.get_audio_filter();
let backend = audio_backend::find(None).unwrap();
let (player, event_channel) = Player::new(player_config, session.clone(), audio_filter, move || {
(backend)(Some(NULLDEVICE.to_string()))
});
let (spirc, spirc_task) = Spirc::new(connect_config, session.clone(), player, mixer);
self.spirc = Some(spirc);
self.spirc_task = Some(spirc_task);
self.event_channel = Some(event_channel);
}
progress = true;
}
if let Async::Ready(Some(())) = self.signal.poll().unwrap() {
if!self.shutdown {
if let Some(ref spirc) = self.spirc {
spirc.shutdown();
}
self.shutdown = true;
} else {
return Ok(Async::Ready(()));
}
progress = true;
}
if let Some(ref mut spirc_task) = self.spirc_task {
if let Async::Ready(()) = spirc_task.poll().unwrap() {
if self.shutdown {
return Ok(Async::Ready(()));
} else {
panic!("Spirc shut down unexpectedly");
}
}
}
if let Some(ref mut event_channel) = self.event_channel {
if let Async::Ready(Some(event)) = event_channel.poll().unwrap() {
self.lms.signal_event(event, self.handle.clone());
}
}
if!progress {
return Ok(Async::NotReady);
}
}
}
}
fn main() {
let mut core = Core::new().unwrap();
let handle = core.handle();
let args: Vec<String> = std::env::args().collect();
let Setup {
cache,
session_config,
player_config,
connect_config,
credentials,
authenticate,
enable_discovery,
get_token,
save_token,
client_id,
scope,
single_track,
start_position,
lms
} = setup(&args.clone());
if let Some(ref track_id) = single_track {
match credentials {
Some(credentials) => {
let backend = audio_backend::find(None).unwrap();
let track = SpotifyId::from_uri(
track_id.replace("spotty://", "spotify:")
.replace("://", ":")
.as_str());
let session = core.run(Session::connect(session_config.clone(), credentials, cache.clone(), handle)).unwrap();
let (player, _) = Player::new(player_config, session.clone(), None, move || (backend)(None));
core.run(player.load(track.unwrap(), true, start_position)).unwrap();
}
None => {
println!("Missing credentials");
}
}
}
else if authenticate &&!enable_discovery {
core.run(Session::connect(session_config.clone(), credentials.unwrap(), cache.clone(), handle)).unwrap();
println!("authorized");
}
else if get_token {
if let Some(client_id) = client_id {
let session = core.run(Session::connect(session_config, credentials.unwrap(), cache.clone(), handle)).unwrap();
let scope = scope.unwrap_or("user-read-private,playlist-read-private,playlist-read-collaborative,playlist-modify-public,playlist-modify-private,user-follow-modify,user-follow-read,user-library-read,user-library-modify,user-top-read,user-read-recently-played".to_string());
let url = format!("hm://keymaster/token/authenticated?client_id={}&scope={}", client_id, scope);
let result = core.run(Box::new(session.mercury().get(url).map(move |response| {
let data = response.payload.first().expect("Empty payload");
let token = String::from_utf8(data.clone()).unwrap();
if let Some(save_token) = save_token {
let mut file = File::create(save_token.to_string()).expect("Can't create token file");
file.write(&token.clone().into_bytes()).expect("Can't write token file");
}
else {
println!("{}", token);
}
})));
match result {
Ok(_) => (),
Err(e) => println!("error getting token {:?}", e),
}
}
else {
println!("Use --client-id to provide a CLIENT_ID");
}
}
else {
core.run(Main::new(handle, Setup {
cache,
session_config,
player_config,
connect_config,
credentials,
authenticate,
enable_discovery,
get_token,
save_token,
client_id,
scope,
single_track,
start_position,
lms
})).unwrap()
}
}
| new | identifier_name |
color.rs | use crate::rank::Rank;
use std::ops::Not;
/// Represent a color.
#[derive(PartialOrd, PartialEq, Eq, Copy, Clone, Debug, Hash)]
pub enum Color {
White,
Black,
}
/// How many colors are there?
pub const NUM_COLORS: usize = 2;
/// List all colors
pub const ALL_COLORS: [Color; NUM_COLORS] = [Color::White, Color::Black];
impl Color {
/// Convert the `Color` to a `usize` for table lookups.
#[inline]
pub fn to_index(&self) -> usize {
*self as usize
}
/// Convert a `Color` to my backrank, which represents the starting rank
/// for my pieces.
#[inline]
pub fn to_my_backrank(&self) -> Rank |
/// Convert a `Color` to my opponents backrank, which represents the starting rank for the
/// opponents pieces.
#[inline]
pub fn to_their_backrank(&self) -> Rank {
match *self {
Color::White => Rank::Eighth,
Color::Black => Rank::First,
}
}
/// Convert a `Color` to my second rank, which represents the starting rank for my pawns.
#[inline]
pub fn to_second_rank(&self) -> Rank {
match *self {
Color::White => Rank::Second,
Color::Black => Rank::Seventh,
}
}
/// Convert a `Color` to my fourth rank, which represents the rank of my pawns when
/// moving two squares forward.
#[inline]
pub fn to_fourth_rank(&self) -> Rank {
match *self {
Color::White => Rank::Fourth,
Color::Black => Rank::Fifth,
}
}
/// Convert a `Color` to my seventh rank, which represents the rank before pawn promotion.
#[inline]
pub fn to_seventh_rank(&self) -> Rank {
match *self {
Color::White => Rank::Seventh,
Color::Black => Rank::Second,
}
}
}
impl Not for Color {
type Output = Color;
/// Get the other color.
#[inline]
fn not(self) -> Color {
if self == Color::White {
Color::Black
} else {
Color::White
}
}
}
| {
match *self {
Color::White => Rank::First,
Color::Black => Rank::Eighth,
}
} | identifier_body |
color.rs | use crate::rank::Rank;
use std::ops::Not;
/// Represent a color.
#[derive(PartialOrd, PartialEq, Eq, Copy, Clone, Debug, Hash)]
pub enum Color {
White,
Black,
}
/// How many colors are there?
pub const NUM_COLORS: usize = 2;
/// List all colors
pub const ALL_COLORS: [Color; NUM_COLORS] = [Color::White, Color::Black];
impl Color {
/// Convert the `Color` to a `usize` for table lookups.
#[inline]
pub fn to_index(&self) -> usize {
*self as usize
}
/// Convert a `Color` to my backrank, which represents the starting rank
/// for my pieces.
#[inline]
pub fn to_my_backrank(&self) -> Rank {
match *self {
Color::White => Rank::First,
Color::Black => Rank::Eighth,
}
}
/// Convert a `Color` to my opponents backrank, which represents the starting rank for the
/// opponents pieces.
#[inline]
pub fn to_their_backrank(&self) -> Rank {
match *self {
Color::White => Rank::Eighth,
Color::Black => Rank::First,
}
}
/// Convert a `Color` to my second rank, which represents the starting rank for my pawns.
#[inline]
pub fn to_second_rank(&self) -> Rank {
match *self {
Color::White => Rank::Second,
Color::Black => Rank::Seventh,
}
}
/// Convert a `Color` to my fourth rank, which represents the rank of my pawns when
/// moving two squares forward. | match *self {
Color::White => Rank::Fourth,
Color::Black => Rank::Fifth,
}
}
/// Convert a `Color` to my seventh rank, which represents the rank before pawn promotion.
#[inline]
pub fn to_seventh_rank(&self) -> Rank {
match *self {
Color::White => Rank::Seventh,
Color::Black => Rank::Second,
}
}
}
impl Not for Color {
type Output = Color;
/// Get the other color.
#[inline]
fn not(self) -> Color {
if self == Color::White {
Color::Black
} else {
Color::White
}
}
} | #[inline]
pub fn to_fourth_rank(&self) -> Rank { | random_line_split |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.